googlecloudplatform / pgadapter

PostgreSQL wire-protocol proxy for Cloud Spanner

Home Page: https://cloud.google.com/spanner/docs/postgresql-interface#postgresql-client-support

License: Apache License 2.0

Dockerfile 0.19% Java 90.52% Shell 1.54% Go 3.07% Batchfile 0.02% Python 1.69% C# 0.98% PLpgSQL 0.01% TypeScript 1.27% JavaScript 0.25% Ruby 0.30% PHP 0.17%
cloud google google-cloud google-cloud-spanner postgresql spanner sql

pgadapter's Introduction

Google Cloud Spanner PGAdapter

PGAdapter is a proxy that translates the PostgreSQL wire-protocol into the equivalent for Spanner databases that use the PostgreSQL interface. It enables you to use standard PostgreSQL drivers and tools with Cloud Spanner and is designed for the lowest possible latency.

Note: JVM-based applications can add PGAdapter as a compile-time dependency and run the proxy in the same process as the main application. See samples/java/jdbc for a small sample application that shows how to do this.

Drivers and Clients

PGAdapter can be used with the following drivers and clients:

  1. psql: Versions 11, 12, 13 and 14 are supported. See psql support for more details.
  2. IntelliJ, DataGrip and other JetBrains IDEs. See Connect Cloud Spanner PostgreSQL to JetBrains for more details.
  3. JDBC: Versions 42.x and higher are supported. See JDBC support for more details.
  4. pgx: Version 4.15 and higher are supported. See pgx support for more details.
  5. psycopg2: Version 2.9.3 and higher are supported. See psycopg2 for more details.
  6. psycopg3: Version 3.1.x and higher are supported. See psycopg3 support for more details.
  7. node-postgres: Version 8.8.0 and higher are supported. See node-postgres support for more details.
  8. npgsql: Version 6.0.x and higher have experimental support. See npgsql support for more details.
  9. PDO_PGSQL: The PHP PDO driver has experimental support. See PHP PDO for more details.
  10. postgres_fdw: The PostgreSQL foreign data wrapper has experimental support. See Foreign Data Wrapper sample for more details.

ORMs, Frameworks and Tools

PGAdapter can be used with the following frameworks and tools:

  1. Hibernate: Version 5.3.20.Final and higher are supported. See hibernate support for more details.
  2. Spring Data JPA: Spring Data JPA in combination with Hibernate is also supported. See the Spring Data JPA Sample Application for a full example.
  3. Liquibase: Version 4.12.0 and higher are supported. See Liquibase support for more details. See also this directory for a sample application using Liquibase.
  4. gorm: Version 1.23.8 and higher are supported. See gorm support for more details. See also this directory for a sample application using gorm.
  5. SQLAlchemy 2.x: Version 2.0.1 and higher are supported. See also this directory for a sample application using SQLAlchemy 2.x.
  6. SQLAlchemy 1.x: Version 1.4.45 and higher have experimental support. It is recommended to use SQLAlchemy 2.x instead of SQLAlchemy 1.4.x for the best possible performance. See also this directory for a sample application using SQLAlchemy 1.x.
  7. pgbench can be used with PGAdapter, but with some limitations. See pgbench.md for more details.
  8. Ruby ActiveRecord: Version 7.x has experimental support with some limitations. Please read the instructions in PGAdapter - Ruby ActiveRecord Connection Options carefully for how to set up ActiveRecord to work with PGAdapter.
  9. Knex.js query builder can be used with PGAdapter. See Knex.js sample application for a sample application.
  10. Sequelize.js ORM can be used with PGAdapter. See Sequelize.js sample application for a sample application.

FAQ

See Frequently Asked Questions for answers to common questions about PGAdapter.

Performance

See Latency Comparisons for benchmark comparisons between using PostgreSQL drivers with PGAdapter and using native Cloud Spanner drivers and client libraries.

Insights

See OpenTelemetry in PGAdapter for how to use OpenTelemetry to collect and export traces to Google Cloud Trace.

Usage

PGAdapter can be started as a Docker container, as a standalone process, or as an in-process server (the latter is only supported for Java and other JVM-based applications).

Docker

Replace the project, instance and database names and the credentials file in the example below to run PGAdapter from a pre-built Docker image.

docker pull gcr.io/cloud-spanner-pg-adapter/pgadapter
docker run \
  -d -p 5432:5432 \
  -v /path/to/credentials.json:/credentials.json:ro \
  gcr.io/cloud-spanner-pg-adapter/pgadapter \
  -p my-project -i my-instance -d my-database \
  -c /credentials.json -x

The -x argument turns off the requirement that all TCP connections must come from localhost. This is required when running PGAdapter in a Docker container.

See Options for an explanation of all further options.

Distroless Docker Image

We also publish a distroless Docker image for PGAdapter under the tag gcr.io/cloud-spanner-pg-adapter/pgadapter-distroless. This Docker image runs PGAdapter as a non-root user.

docker pull gcr.io/cloud-spanner-pg-adapter/pgadapter-distroless
docker run \
  -d -p 5432:5432 \
  -v /path/to/credentials.json:/credentials.json:ro \
  gcr.io/cloud-spanner-pg-adapter/pgadapter-distroless \
  -p my-project -i my-instance -d my-database \
  -c /credentials.json -x

Standalone with pre-built jar

A pre-built jar and all dependencies can be downloaded from https://storage.googleapis.com/pgadapter-jar-releases/pgadapter.tar.gz

wget https://storage.googleapis.com/pgadapter-jar-releases/pgadapter.tar.gz \
  && tar -xzvf pgadapter.tar.gz
java -jar pgadapter.jar -p my-project -i my-instance -d my-database

Use the -s option to specify a different local port than the default 5432 if you already have PostgreSQL running on your local system.

You can also download a specific version of the jar. Example (replace v0.36.1 with the version you want to download):

VERSION=v0.36.1
wget https://storage.googleapis.com/pgadapter-jar-releases/pgadapter-${VERSION}.tar.gz \
  && tar -xzvf pgadapter-${VERSION}.tar.gz
java -jar pgadapter.jar -p my-project -i my-instance -d my-database

See Options for an explanation of all further options.

Standalone with locally built jar

  1. Build a jar file and assemble all dependencies by running
mvn package -P assembly
  2. Execute (the binaries are in the target/pgadapter folder)
cd target/pgadapter
java -jar pgadapter.jar -p my-project -i my-instance -d my-database

See Options for an explanation of all further options.

In-process

This option is only available for Java/JVM-based applications.

  1. Add google-cloud-spanner-pgadapter as a dependency to your project by adding this to your pom.xml file:
<!-- [START pgadapter_dependency] -->
<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-spanner-pgadapter</artifactId>
  <version>0.36.1</version>
</dependency>
<!-- [END pgadapter_dependency] -->
  2. Build a server using the com.google.cloud.spanner.pgadapter.ProxyServer class:
import com.google.cloud.spanner.pgadapter.ProxyServer;
import com.google.cloud.spanner.pgadapter.metadata.OptionsMetadata;

class PGProxyRunner {
  public static void main(String[] args) {
      OptionsMetadata.Builder builder =
          OptionsMetadata.newBuilder()
              .setProject("my-project")
              .setInstance("my-instance")
              .setDatabase("my-database")
              .setCredentialsFile("/path/to/credentials.json")
              // Start PGAdapter on any available port.
              .setPort(0);
      ProxyServer server = new ProxyServer(builder.build());
      server.startServer();
      server.awaitRunning();
  }
}

See samples/java/jdbc for a small sample application that adds PGAdapter as a compile-time dependency and runs it together with the main application.

Emulator

A pre-built Docker image that contains both PGAdapter and the Spanner Emulator can be started with these commands:

docker pull gcr.io/cloud-spanner-pg-adapter/pgadapter-emulator
docker run \
  -d -p 5432:5432 \
  gcr.io/cloud-spanner-pg-adapter/pgadapter-emulator
sleep 2
psql -h localhost -p 5432 -d test-database

This Docker container configures PGAdapter to connect to a Cloud Spanner Emulator running inside the same container. You do not need to first create a Spanner instance or database on the Emulator before connecting to them. Instead, the instance and database are automatically created on the Emulator when you connect to PGAdapter.

Additional Information

See this document for more information on how to connect PGAdapter to the Cloud Spanner Emulator.

Connecting to the Cloud Spanner Emulator is supported with:

  1. PGAdapter version 0.26.0 and higher.
  2. Cloud Spanner Emulator 1.5.12 and higher.

Options

The following list contains the most frequently used startup options for PGAdapter.

-p <projectname>
  * The project name where the Spanner database(s) is/are running. If omitted, all connection
    requests must use a fully qualified database name in the format
    'projects/my-project/instances/my-instance/databases/my-database'.
    
-i <instanceid>
  * The instance ID where the Spanner database(s) is/are running. If omitted, all connection
    requests must use a fully qualified database name in the format
    'projects/my-project/instances/my-instance/databases/my-database'.

-d <databasename>
  * The default Spanner database name to connect to. This is only required if you want PGAdapter to
    ignore the database that is given in the connection request from the client and to always
    connect to this database.
  * If set, any database given in a connection request will be ignored. \c commands in psql will not
    change the underlying database that PGAdapter is connected to.
  * If not set, the database to connect to must be included in any connection request. \c commands
    in psql will change the underlying database that PGAdapter connects to.

-c <credentialspath>
  * This argument should not be used in combination with -a (authentication mode).
  * This is only required if you have not already set up default credentials on the system where you
    are running PGAdapter. See https://cloud.google.com/spanner/docs/getting-started/set-up#set_up_authentication_and_authorization
    for more information on setting up authentication for Cloud Spanner.
  * The full path for the file containing the service account credentials in JSON format.
  * Remember to grant the service account sufficient permissions to access the database. See
    https://cloud.google.com/docs/authentication/production for more information.

-s <port>
  * The inbound port for the proxy. Defaults to 5432. Choose a different port if you already have
    PostgreSQL running on your local system.

-dir <socket-file-directory>
  * This proxy's domain socket directory. Defaults to '/tmp'.
    Note: Some distributions of PostgreSQL and psql use '/var/run/postgresql' as the
    default directory for Unix domain socket file names. When connecting to PGAdapter
    using psql from one of these distributions, you either need to use 'psql -h /tmp'
    or change the default Unix domain socket directory used by PGAdapter to '/var/run/postgresql'.

-ddl <ddl-transaction-mode>
  * Determines the behavior of the proxy when DDL statements are executed in transactions.
    See DDL options for more information.

-a
  * Turns on authentication for the proxy server. Clients are then requested
    to supply a username and password during a connection request.
  * The username and password must be one of the following combinations:
    1. The password field must contain the JSON payload of a credentials file, for example from a service account key file. The username will be ignored in this case.
    2. The password field must contain the private key from a service account key file. The username must contain the email address of the corresponding service account.
  * Note that SSL is not supported for the connection between the client and
    PGAdapter. The proxy should therefore only be used within a private network.
    The connection between PGAdapter and Cloud Spanner is always secured with SSL.

-x
  * PGAdapter by default only allows TCP connections from localhost or Unix Domain Socket connections.
    Use the -x switch to turn off the localhost check. This is required when running PGAdapter in a
    Docker container, as the connections from the host machine will not be seen as a connection from
    localhost in the container.
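When -p and -i are omitted, every connection request must carry the fully qualified database name described above. As a quick sketch (using placeholder project, instance, and database names), the name is assembled from its three parts:

```shell
# Assemble a fully qualified Spanner database name from its parts.
# 'my-project', 'my-instance', and 'my-database' are placeholders.
PROJECT=my-project
INSTANCE=my-instance
DATABASE=my-database
FQDN="projects/${PROJECT}/instances/${INSTANCE}/databases/${DATABASE}"
echo "${FQDN}"
```

A client would then connect with, for example, psql -d "${FQDN}".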

Example - Connect to a Single Database

This example starts PGAdapter and instructs it to always connect to the same database using a fixed set of credentials:

java -jar pgadapter.jar \
     -p <project-id> -i <instance-id> -d <database-id> \
     -c <path to credentials file> -s 5432
psql -h localhost

The psql -d command line argument will be ignored. The psql \c meta-command will have no effect.

Example - Require Authentication and Fully Qualified Database Name

This example starts PGAdapter and requires the client to supply both credentials and a fully qualified database name. This allows a single instance of PGAdapter to serve connections to any Cloud Spanner database.

java -jar pgadapter.jar -a

# The credentials file must be a valid Google Cloud credentials file, such as a
# service account key file or a user credentials file.
# Note that you must enclose the database name in quotes as it contains slashes.
PGPASSWORD=$(cat /path/to/credentials.json) psql -h /tmp \
  -d "projects/my-project/instances/my-instance/databases/my-database"

The psql \c meta-command can be used to switch to a different database.
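The credential-passing step in the example above can be sketched locally. The file below is a throwaway stand-in, not a real service account key:

```shell
# Write a stand-in credentials file (a real one would be a service account
# key downloaded from Google Cloud).
echo '{"type": "service_account"}' > /tmp/fake-credentials.json
# With -a enabled, PGAdapter expects the JSON payload of the credentials file
# in the password field, which psql reads from the PGPASSWORD variable.
PGPASSWORD=$(cat /tmp/fake-credentials.json)
echo "${PGPASSWORD}"
```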

Details

Google Cloud Spanner PGAdapter is a simple, MITM, forward, non-transparent proxy that translates the PostgreSQL wire-protocol into the Cloud Spanner equivalent. It can only be used with Cloud Spanner databases that use the PostgreSQL interface. By running this proxy locally, any PostgreSQL client (including the command-line client psql) should function seamlessly by simply pointing its outbound port to this proxy's inbound port. The proxy does not support all parts of the PostgreSQL wire-protocol. See Limitations for a list of current limitations.

In addition to translation, this proxy also handles authentication and, to some extent, connection pooling. Translation is for the most part a transformation of the PostgreSQL wire protocol, except for some cases concerning psql, where the query itself is translated.

Simple query mode and extended query mode are supported, and any data type supported by Spanner is also supported. Cloud Spanner databases created with PostgreSQL dialect do not support all pg_catalog tables.

Though the majority of functionality inherent in most PostgreSQL clients is included out of the box, the following items are not supported:

  • SSL
  • COPY <table_name> FROM <filename | PROGRAM program>

See COPY FROM STDIN for more information on the COPY operations that are supported.
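As a minimal sketch, assuming a hypothetical table numbers (id bigint primary key, value varchar), tab-separated rows can be prepared locally and then streamed through COPY FROM STDIN:

```shell
# Prepare two tab-separated rows for a COPY FROM STDIN operation.
printf '1\tOne\n2\tTwo\n' > /tmp/numbers.tsv
# With PGAdapter listening on localhost:5432, the rows could be loaded with:
#   psql -h localhost -c "COPY numbers FROM STDIN" < /tmp/numbers.tsv
cat /tmp/numbers.tsv
```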

Only the following psql meta-commands are supported:

  • \d <table>
  • \dt <table>
  • \dn <table>
  • \di <table>
  • \l

Other psql meta-commands are not supported.

Limitations

PGAdapter has the following known limitations at this moment:

  • Only password authentication using the password method is supported; all other authentication methods are not supported.
  • The COPY protocol only supports COPY TO|FROM STDOUT|STDIN [BINARY]. COPY TO|FROM <FILE|PROGRAM> is not supported. See COPY for more information.
  • DDL transactions are not supported. PGAdapter allows DDL statements in implicit transactions, and executes SQL strings that contain multiple DDL statements as a single DDL batch on Cloud Spanner. See DDL transaction options for more information.

Logging

PGAdapter uses java.util.logging for logging.

Default Logging

PGAdapter by default configures java.util.logging to do the following:

  1. Log messages of level WARNING and higher are logged to stderr.
  2. Log messages of level INFO are logged to stdout.
  3. Log messages of levels below INFO (for example FINE) are not logged.

You can supply your own logging configuration with the -Djava.util.logging.config.file System property. See the next section for an example.

The default log configuration described in this section was introduced in version 0.33.0 of PGAdapter. Prior to that, PGAdapter used the default java.util.logging configuration, which logs everything to stderr.

You can disable the default PGAdapter log configuration and go back to the standard java.util.logging configuration by starting PGAdapter with the command line argument -legacy_logging.

Custom Logging Configuration

Create a logging.properties file to configure logging. The following example enables fine-grained logging:

handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler
com.google.cloud.spanner.pgadapter.level=FINEST
java.util.logging.ConsoleHandler.level=FINEST
java.util.logging.FileHandler.level=INFO
java.util.logging.FileHandler.pattern=%h/log/pgadapter-%u.log
java.util.logging.FileHandler.append=false
io.grpc.internal.level = WARNING

java.util.logging.SimpleFormatter.format=[%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS.%1$tL] [%4$s] (%2$s): %5$s%6$s%n
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter

Start PGAdapter with -Djava.util.logging.config.file=logging.properties when running PGAdapter as a jar.

Logging in Docker

You can configure PGAdapter to log to a file on your local system when running it in Docker by following these steps.

  1. Create a logging.properties file on your local system like this:
handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.level=INFO
java.util.logging.FileHandler.pattern=/home/pgadapter/log/pgadapter.log
java.util.logging.FileHandler.append=true
io.grpc.internal.level = WARNING

java.util.logging.SimpleFormatter.format=[%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS.%1$tL] [%4$s] (%2$s): %5$s%6$s%n
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
  2. Start the PGAdapter Docker container with these options:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json
export LOGGING_PROPERTIES=/full/path/to/logging.properties
docker run \
    -d -p 5432:5432 \
    -v ${GOOGLE_APPLICATION_CREDENTIALS}:${GOOGLE_APPLICATION_CREDENTIALS}:ro \
    -v ${LOGGING_PROPERTIES}:${LOGGING_PROPERTIES}:ro \
    -v /home/my-user-name/log:/home/pgadapter/log \
    -e GOOGLE_APPLICATION_CREDENTIALS \
    gcr.io/cloud-spanner-pg-adapter/pgadapter \
    -Djava.util.logging.config.file=${LOGGING_PROPERTIES} \
    -p my-project -i my-instance -d my-database \
    -x

The directory /home/my-user-name/log will be created automatically and the log file will be placed in this directory.

Support Level

We are not currently accepting external code contributions to this project. Please feel free to file feature requests using GitHub's issue tracker or using the existing Cloud Spanner support channels.

pgadapter's People

Contributors

arunsathiya, bluphy, dependabot[bot], gauravsnj, hengfengli, libingye816, olavloite, pratickchokhani, release-please[bot], renovate-bot, skuruppu, thiagotnunes, vizerai


pgadapter's Issues

Memory leak caused by opening and closing a large number of connections

Opening and closing a large number of connections sequentially will slowly leak memory in PGAdapter. This seems to be caused by the ConnectionHandler#CONNECTION_HANDLERS map, which maps a Connection ID to a ConnectionHandler: connections that are closed are not removed from this map.

feat: RUN BATCH should return the affected rows in the returned command tag

It is possible to execute DML batches using the START BATCH DML and RUN BATCH session management statements in drivers that do not natively support batching. However, the RUN BATCH command only returns the RUN command tag, not the number of rows affected by each statement. The number of affected rows per statement should be included in the returned command tag in the same way as for DML statements (e.g. RUN 1 1 2 1 for a batch that affected 1, 1, 2, and 1 rows respectively).

const sql = "insert into test (id, value) values ($1, $2)";
// This will start a DML batch for this client. All subsequent
// DML statements will be cached locally until RUN BATCH is executed.
await client.query("start batch dml");
await client.query({text: sql, values: [1, 'One']});
await client.query({text: sql, values: [2, 'Two']});
await client.query({text: sql, values: [3, 'Three']});
// This will send the DML statements to Cloud Spanner as one batch.
const res = await client.query("run batch");
// The command tag printed here will only include `RUN` and not the number of affected rows.
console.log(res);

Invalid JDBC connection

Hi,

I keep running into this issue while executing the jar file:

SEVERE: Something went wrong in establishing a Spanner connection: Invalid JDBC connection. 
Make sure credentials are valid.

What does this error mean? Do I need the preview for this to work?

GKE workload identity pod with PGAdapter as sidecar can't create a table with UNIQUE, serial, timestamp

I created the following deployment in GKE Autopilot with GKE Workload Identity configured properly. I'm able to use the psql -h localhost command to connect to the Spanner database with the PostgreSQL interface, psql-test, under the instance hil-test. Here's the deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gcloud-psql
  labels:
    app: pgadapter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadapter
  template:
    metadata:
      labels:
        app: pgadapter
    spec:
      serviceAccountName: spanner
      containers:
      - name: gcloud
        image: gcr.io/google.com/cloudsdktool/cloud-sdk:latest
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
      - name: spanner-pgadapter
        image: gcr.io/cloud-spanner-pg-adapter/pgadapter:latest
        args: [ "-p PROJECT_ID -i hil-test -d psql-test -x" ]

However, I found the PostgreSQL create table syntax compatibility is quite limited. I want to confirm that's indeed the case. I tried an example from the Internet to create a table with the following DDL:

psql-test=> CREATE TABLE accounts (
 user_id serial PRIMARY KEY,
 username VARCHAR ( 50 ) UNIQUE NOT NULL,
 password VARCHAR ( 50 ) NOT NULL,
 email VARCHAR ( 255 ) UNIQUE NOT NULL,
 created_on TIMESTAMP NOT NULL,
        last_login TIMESTAMP 
);
ERROR:  INVALID_ARGUMENT: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Type <serial> is not supported.
psql-test=> CREATE TABLE accounts (
 user_id bigint PRIMARY KEY,
 username VARCHAR ( 50 ) UNIQUE NOT NULL,
 password VARCHAR ( 50 ) NOT NULL,
 email VARCHAR ( 255 ) UNIQUE NOT NULL,
 created_on TIMESTAMP NOT NULL,
        last_login TIMESTAMP 
);
ERROR:  INVALID_ARGUMENT: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: <UNIQUE> constraint is not supported, create a unique index instead.
psql-test=> CREATE TABLE accounts (
 user_id bigint PRIMARY KEY,
 username VARCHAR ( 50 ) NOT NULL,
 password VARCHAR ( 50 ) NOT NULL,
 email VARCHAR ( 255 )  NOT NULL,
 created_on TIMESTAMP NOT NULL,
        last_login TIMESTAMP 
);
ERROR:  INVALID_ARGUMENT: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Type <timestamp> is not supported.
psql-test=> CREATE TABLE accounts (
 user_id bigint PRIMARY KEY,
 username VARCHAR ( 50 ) NOT NULL,
 password VARCHAR ( 50 ) NOT NULL,
 email VARCHAR ( 255 )  NOT NULL);
CREATE

It appears I had to remove UNIQUE, serial, and timestamp to create a table in a database with the PostgreSQL interface. Is this normal?

Unable to run in docker

Attempted to set up an instance using docker compose, but it fails to start.

pgadapter_1   | #
pgadapter_1   | # There is insufficient memory for the Java Runtime Environment to continue.
pgadapter_1   | # Cannot create worker GC thread. Out of system resources.
pgadapter_1   | # An error report file with more information is saved as:
pgadapter_1   | # /home/pgadapter/hs_err_pid7.log

The relevant docker compose config is:

  pgadapter:
    image: gcr.io/cloud-spanner-pg-adapter/pgadapter
    restart: always
    volumes:
      - ./xxx.json:/key.json
    ports:
      - 5433:5432
    command: -p xxx -i xxx -d xxx -c /key.json -x

Similar for directly calling docker run:

$ docker run -p 5433:5432 -v $(realpath ./xxx.json):/key.json gcr.io/cloud-spanner-pg-adapter/pgadapter -p xxx -i xxx -d xxx -c /key.json -x
java -jar pgadapter.jar -p xxx -i xxx -d xxx -c /key.json -x
[0.026s][warning][os,thread] Failed to start thread "GC Thread#0" - pthread_create failed (EPERM) for attributes: stacksize: 1024k, guardsize: 4k, detached.
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create worker GC thread. Out of system resources.
# An error report file with more information is saved as:
# /home/pgadapter/hs_err_pid7.log

ERROR: FAILED_PRECONDITION: Statement tags are not supported for COMMIT or ROLLBACK

If a statement tag has been set in a transaction and an invalid statement is executed, PGAdapter will automatically try to rollback the underlying transaction, as the session is in an 'aborted transaction state'. The statement tag is however still present on the connection, and prevents the rollback from being executed, and instead returns the error ERROR: FAILED_PRECONDITION: Statement tags are not supported for COMMIT or ROLLBACK.

Errors are not propagated correctly in a DDL batch

A DDL batch that contains the following:

  1. The first statement is executed correctly.
  2. The second statement is executed, but contains an error.

The error from the second statement is not propagated correctly to the client by PGAdapter.

Could not resolve dependencies

Hi,
I got an issue trying to build an application by running mvn package -P shade

[ERROR] Failed to execute goal on project google-cloud-spanner-pgadapter: Could not resolve dependencies for project com.google.cloud:google-cloud-spanner-pgadapter:jar:0.1.0-pg-SNAPSHOT: Failed to collect dependencies at com.google.cloud:google-cloud-spanner-jdbc:jar:2.4.4-pg-SNAPSHOT: Failed to read artifact descriptor for com.google.cloud:google-cloud-spanner-jdbc:jar:2.4.4-pg-SNAPSHOT: Could not transfer artifact com.google.cloud:google-cloud-spanner-jdbc:pom:2.4.4-pg-SNAPSHOT from/to artifact-registry (artifactregistry://us-west1-maven.pkg.dev/span-cloud-testing/spangres-artifacts): Permission denied on remote repository (or it may not exist).

feat: Add support for the standard copy() function in psycopg2

COPY operations in psycopg2 are only supported through the copy_expert() function. The reason for this is that the standard copy() function in psycopg2 will generate a COPY statement that is not supported by PGAdapter. We should add support for the type of COPY statement that psycopg2 generates when using the standard copy() function.

Internal errors without a message can cause PGAdapter to return a generic error

Internal unexpected errors that have no error message can cause NullPointerExceptions in PGAdapter when creating a PGException, as PGException requires a message. Instead, PGException should by default be created with the message of the causing error (if available), and otherwise with the name of the error class that caused the error.
