
Postgres.js

Fastest full-featured PostgreSQL client for Node.js


Getting started




Installation

$ npm install postgres

Usage

Create your sql database instance

// db.js
import postgres from 'postgres'

const sql = postgres({ /* options */ }) // will use psql environment variables

export default sql

Simply import for use elsewhere

// users.js
import sql from './db.js'

async function getUsersOver(age) {
  const users = await sql`
    select
      name,
      age
    from users
    where age > ${ age }
  `
  // users = Result [{ name: "Walter", age: 80 }, { name: 'Murray', age: 68 }, ...]
  return users
}


async function insertUser({ name, age }) {
  const users = await sql`
    insert into users
      (name, age)
    values
      (${ name }, ${ age })
    returning name, age
  `
  // users = Result [{ name: "Murray", age: 68 }]
  return users
}

ESM dynamic imports

The library can also be used with ESM dynamic imports, as shown here.

const { default: postgres } = await import('postgres')


Connection

postgres([url], [options])

You can use either a postgres:// url connection string or the options to define your database connection properties. Options in the object will override any present in the url. Options will fall back to the same environment variables as psql.

const sql = postgres('postgres://username:password@host:port/database', {
  host                 : '',            // Postgres ip address[es] or domain name[s]
  port                 : 5432,          // Postgres server port[s]
  database             : '',            // Name of database to connect to
  username             : '',            // Username of database user
  password             : '',            // Password of database user
  ...and more
})

More options can be found in the Connection details section.

Queries

await sql`...` -> Result[]

Postgres.js utilizes Tagged template functions to process query parameters before interpolation. Using tagged template literals benefits developers by:

  1. Enforcing safe query generation
  2. Giving the sql`` function powerful utility and query building features.

Any generic value will be serialized according to an inferred type, and replaced by a PostgreSQL protocol placeholder $1, $2, .... The parameters are then sent separately to the database which handles escaping & casting.

All queries will return a Result array, with objects mapping column names to each row.

const xs = await sql`
  insert into users (
    name, age
  ) values (
    'Murray', 68
  )

  returning *
`

// xs = [{ user_id: 1, name: 'Murray', age: 68 }]

Please note that queries are first executed when awaited, or instantly by using .execute().

Query parameters

Parameters are automatically extracted and handled by the database so that SQL injection isn't possible. No special handling is necessary, simply use tagged template literals as usual.

const name = 'Mur'
    , age = 60

const users = await sql`
  select
    name,
    age
  from users
  where
    name like ${ name + '%' }
    and age > ${ age }
`
// users = [{ name: 'Murray', age: 68 }]

Be careful with quotation marks here. Because Postgres infers column types, you do not need to wrap your interpolated parameters in quotes like '${name}'. This will cause an error because the tagged template replaces ${name} with $1 in the query string, leaving Postgres to do the interpolation. If you wrap that in a string, Postgres will see '$1' and interpret it as a string as opposed to a parameter.
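For example (a minimal sketch):

const name = 'Murray'

// Correct - the value is sent separately as the parameter $1
await sql`select * from users where name = ${ name }`

// Incorrect - Postgres receives the literal string '$1' instead of a parameter
await sql`select * from users where name = '${ name }'`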

Dynamic column selection

const columns = ['name', 'age']

await sql`
  select
    ${ sql(columns) }
  from users
`

// Which results in:
select "name", "age" from users

Dynamic inserts

const user = {
  name: 'Murray',
  age: 68
}

await sql`
  insert into users ${
    sql(user, 'name', 'age')
  }
`

// Which results in:
insert into users ("name", "age") values ($1, $2)

// The columns can also be given with an array
const columns = ['name', 'age']

await sql`
  insert into users ${
    sql(user, columns)
  }
`

You can omit column names and simply execute sql(user) to get all the fields from the object as columns. Be careful not to allow users to supply columns that you do not want to be inserted.

Multiple inserts in one query

If you need to insert multiple rows at the same time it's also much faster to do it with a single insert. Simply pass an array of objects to sql().

const users = [{
  name: 'Murray',
  age: 68,
  garbage: 'ignore'
},
{
  name: 'Walter',
  age: 80
}]

await sql`insert into users ${ sql(users, 'name', 'age') }`

// Is translated to:
insert into users ("name", "age") values ($1, $2), ($3, $4)

// Here you can also omit column names which will use object keys as columns
await sql`insert into users ${ sql(users) }`

// Which results in:
insert into users ("name", "age") values ($1, $2), ($3, $4)

Dynamic columns in updates

This is also useful for update queries

const user = {
  id: 1,
  name: 'Murray',
  age: 68
}

await sql`
  update users set ${
    sql(user, 'name', 'age')
  }
  where user_id = ${ user.id }
`

// Which results in:
update users set "name" = $1, "age" = $2 where user_id = $3

// The columns can also be given with an array
const columns = ['name', 'age']

await sql`
  update users set ${
    sql(user, columns)
  }
  where user_id = ${ user.id }
`

Multiple updates in one query

To create multiple updates in a single query, it is necessary to use arrays instead of objects to ensure that the order of the items corresponds with the column names.

const users = [
  [1, 'John', 34],
  [2, 'Jane', 27],
]

await sql`
  update users set name = update_data.name, age = (update_data.age)::int
  from (values ${sql(users)}) as update_data (id, name, age)
  where users.id = (update_data.id)::int
  returning users.id, users.name, users.age
`

Dynamic values and where in

Value lists can also be created dynamically, making where in queries simple too.

const users = await sql`
  select
    *
  from users
  where age in ${ sql([68, 75, 23]) }
`

or

const [{ a, b, c }] = await sql`
  select
    *
  from (values ${ sql(['a', 'b', 'c']) }) as x(a, b, c)
`

Building queries

Postgres.js features a simple dynamic query builder by conditionally appending/omitting query fragments. It works by nesting sql`` fragments within other sql`` calls or fragments. This allows you to build dynamic queries safely without risking sql injections through usual string concatenation.

Partial queries

const olderThan = x => sql`and age > ${ x }`

const filterAge = true

await sql`
  select
   *
  from users
  where name is not null ${
    filterAge
      ? olderThan(50)
      : sql``
  }
`
// Which results in:
select * from users where name is not null
// Or
select * from users where name is not null and age > 50

Dynamic filters

await sql`
  select
    *
  from users ${
    id
      ? sql`where user_id = ${ id }`
      : sql``
  }
`

// Which results in:
select * from users
// Or
select * from users where user_id = $1

SQL functions

Using keywords or calling functions dynamically is also possible by using sql`` fragments.

const date = null

await sql`
  update users set updated_at = ${ date || sql`now()` }
`

// Which results in:
update users set updated_at = now()

Table names

Dynamic identifiers like table names and column names are also supported like so:

const table = 'users'
    , column = 'id'

await sql`
  select ${ sql(column) } from ${ sql(table) }
`

// Which results in:
select "id" from "users"

Quick primer on interpolation

Here's a quick overview of all the ways to do interpolation in a query template string:

  • ${ sql`` } - for keywords or sql fragments, eg. await sql`SELECT * FROM users ${ sql`order by age desc` }`
  • ${ sql(string) } - for identifiers, eg. await sql`SELECT * FROM ${ sql('table_name') }`
  • ${ sql([] or {}, ...) } - for helpers, eg. await sql`INSERT INTO users ${ sql({ name: 'Peter' }) }`
  • ${ 'somevalue' } - for values, eg. await sql`SELECT * FROM users WHERE age = ${ 42 }`

Advanced query methods

Cursors

await sql``.cursor([rows = 1], [fn])

Use cursors if you need to throttle the amount of rows being returned from a query. You can use a cursor either as an async iterable or with a callback function. For a callback function new results won't be requested until the promise / async callback function has resolved.

callback function

await sql`
  select
    *
  from generate_series(1,4) as x
`.cursor(async ([row]) => {
  // row = { x: 1 }
  await http.request('https://example.com/wat', { row })
})

for await...of

const cursor = sql`select * from generate_series(1,4) as x`.cursor()

for await (const [row] of cursor) {
  // row = { x: 1 }
  await http.request('https://example.com/wat', { row })
}

A single row will be returned by default, but you can also request batches by setting the number of rows desired in each batch as the first argument to .cursor:

await sql`
  select
    *
  from generate_series(1,1000) as x
`.cursor(10, async rows => {
  // rows = [{ x: 1 }, { x: 2 }, ... ]
  await Promise.all(rows.map(row =>
    http.request('https://example.com/wat', { row })
  ))
})

If an error is thrown inside the callback function no more rows will be requested and the outer promise will reject with the thrown error.

You can close the cursor early either by calling break in the for await...of loop, or by returning the token sql.CLOSE from the callback function.

await sql`
  select * from generate_series(1,1000) as x
`.cursor(row => {
  return Math.random() > 0.9 && sql.CLOSE // or sql.END
})

Instant iteration

await sql``.forEach(fn)

If you want to handle rows returned by a query one by one, you can use .forEach which returns a promise that resolves once there are no more rows.

await sql`
  select created_at, name from events
`.forEach(row => {
  // row = { created_at: '2019-11-22T14:22:00Z', name: 'connected' }
})

// No more rows

Query Descriptions

await sql``.describe() -> Result[]

Rather than executing a given query, .describe will return information utilized in the query process. This information can include the query identifier, column types, etc.

This is useful for debugging and analyzing your Postgres queries. Furthermore, .describe will give you access to the final generated query string that would be executed.
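For example, a minimal sketch (the exact shape of the returned description can vary between versions, so log it to see what your version provides):

const description = await sql`
  select name, age from users where age > ${ 50 }
`.describe()

console.log(description) // includes eg. the generated query string and column information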

Rows as Array of Values

sql``.values()

Using .values will return rows as an array of values for each column, instead of objects.

This can be useful to receive identically named columns, or for specific performance/transformation reasons. The column definitions are still included on the result array, plus access to parsers for each column.
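For example (a minimal sketch, assuming the users table from the earlier examples):

const rows = await sql`select name, age from users`.values()
// rows = [['Murray', 68], ['Walter', 80], ...]
// rows.columns still describes each column (name, type oid, parser)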

Rows as Raw Array of Buffers

sql``.raw()

Using .raw will return rows as an array with Buffer values for each column, instead of objects.

This can be useful for specific performance/transformation reasons. The column definitions are still included on the result array, plus access to parsers for each column.
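For example (a minimal sketch):

const rows = await sql`select name, age from users`.raw()

for (const [name, age] of rows) {
  // name and age are Buffer values here; decode them as needed, eg. name.toString()
}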

Queries in Files

await sql.file(path, [args], [options]) -> Result[]

Using a file for a query is also supported, with optional parameters to use if the file includes $1, $2, etc.

const result = await sql.file('query.sql', ['Murray', 68])

Multiple statements in one query

await sql``.simple()

The postgres wire protocol supports "simple" and "extended" queries. "Simple" queries support multiple statements, but do not support any dynamic parameters. "Extended" queries support parameters but only one statement. To send a query using the "simple" protocol, call sql``.simple().

await sql`select 1; select 2;`.simple()

Copy to/from as Streams

Postgres.js supports COPY ... queries, which are exposed as Node.js streams.

await sql`copy ... from stdin`.writable() -> Writable

import { Readable } from 'node:stream'
import { pipeline } from 'node:stream/promises'

// Stream of users with the default tab-delimited cells and newline-delimited rows
const userStream = Readable.from([
  'Murray\t68\n',
  'Walter\t80\n'
])

const query = await sql`copy users (name, age) from stdin`.writable()
await pipeline(userStream, query);

await sql`copy ... to stdout`.readable() -> Readable

Using Stream Pipeline

import { pipeline } from 'node:stream/promises'
import { createWriteStream } from 'node:fs'

const readableStream = await sql`copy users (name, age) to stdout`.readable()
await pipeline(readableStream, createWriteStream('output.tsv'))
// output.tsv content: `Murray\t68\nWalter\t80\n`

Using for await...of

const readableStream = await sql`
  copy (
    select name, age
    from users
    where age = 68
  ) to stdout
`.readable()
for await (const chunk of readableStream) {
  // chunk.toString() === `Murray\t68\n`
}

NOTE This is a low-level API which does not provide any type safety. To make this work, you must match your copy query parameters correctly to your Node.js stream read or write code. Ensure Node.js stream backpressure is handled correctly to avoid memory exhaustion.

Canceling Queries in Progress

Postgres.js supports canceling queries in progress. It works by opening a new connection with a protocol level startup message to cancel the current query running on a specific connection. That means there is no guarantee that the query will be canceled, and due to the possible race conditions it might even result in canceling another query. This is fine for long running queries, but in the case of high load and fast queries it might be better to simply ignore results instead of canceling.

const query = sql`select pg_sleep(100)`.execute()
setTimeout(() => query.cancel(), 100)
const result = await query

Execute

await sql``.execute()

The lazy Promise implementation in Postgres.js is what allows it to distinguish Nested Fragments from the main outer query. This also means that queries are always executed at the earliest in the following tick. If you have a specific need to execute the query in the same tick, you can call .execute().
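For example (a minimal sketch):

// Start executing immediately instead of waiting for the await
const query = sql`select * from users`.execute()

// ... do other work in the same tick ...

const users = await query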

Unsafe raw string queries

Advanced unsafe use cases

await sql.unsafe(query, [args], [options]) -> Result[]

If you know what you're doing, you can use unsafe to pass any string you'd like to postgres. Please note that this can lead to SQL injection if you're not careful.

sql.unsafe('select ' + danger + ' from users where id = ' + dragons)

You can also nest sql.unsafe within a safe sql expression. This is useful if only part of your fragment has unsafe elements.

const triggerName = 'friend_created'
const triggerFnName = 'on_friend_created'
const eventType = 'insert'
const schema_name = 'app'
const table_name = 'friends'

await sql`
  create or replace trigger ${sql(triggerName)}
  after ${sql.unsafe(eventType)} on ${sql.unsafe(`${schema_name}.${table_name}`)}
  for each row
  execute function ${sql(triggerFnName)}()
`

await sql`
  create role friend_service with login password ${sql.unsafe(`'${password}'`)}
`

Transactions

BEGIN / COMMIT await sql.begin([options = ''], fn) -> fn()

Use sql.begin to start a new transaction. Postgres.js will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function.

BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.

const [user, account] = await sql.begin(async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
    returning *
  `

  const [account] = await sql`
    insert into accounts (
      user_id
    ) values (
      ${ user.user_id }
    )
    returning *
  `

  return [user, account]
})

Do note that you can often achieve the same result using WITH queries (Common Table Expressions) instead of using transactions.
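For example, the transaction above could be written as a single query using a CTE (a sketch assuming the same users and accounts tables):

const [account] = await sql`
  with new_user as (
    insert into users (name) values ('Murray') returning *
  )
  insert into accounts (user_id)
  select user_id from new_user
  returning *
`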

It's also possible to pipeline the requests in a transaction if needed by returning an array with queries from the callback function like this:

const result = await sql.begin(sql => [
  sql`update ...`,
  sql`update ...`,
  sql`insert ...`
])

SAVEPOINT await sql.savepoint([name], fn) -> fn()

sql.begin('read write', async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
  `

  const [account] = (await sql.savepoint(sql =>
    sql`
      insert into accounts (
        user_id
      ) values (
        ${ user.user_id }
      )
    `
  ).catch(err => {
    // Account could not be created. ROLLBACK SAVEPOINT is called because we caught the rejection.
  })) || []

  return [user, account]
})
.then(([user, account]) => {
  // great success - COMMIT succeeded
})
.catch(() => {
  // not so good - ROLLBACK was called
})

PREPARE TRANSACTION await sql.prepare([name]) -> fn()

Indicates that the transaction should be prepared using the PREPARE TRANSACTION [NAME] statement instead of being committed.

sql.begin('read write', async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
  `

  await sql.prepare('tx1')
})

Data Transformation

Postgres.js allows for transformation of the data passed to or returned from a query by using the transform option.

Built in transformation functions are:

  • For camelCase - postgres.camel, postgres.toCamel, postgres.fromCamel
  • For PascalCase - postgres.pascal, postgres.toPascal, postgres.fromPascal
  • For Kebab-Case - postgres.kebab, postgres.toKebab, postgres.fromKebab

These built in transformations will only convert to/from snake_case. For example, using { transform: postgres.toCamel } will convert the column names to camelCase only if the column names are in snake_case to begin with. { transform: postgres.fromCamel } will convert camelCase only to snake_case.

By default, using postgres.camel, postgres.pascal and postgres.kebab will perform a two-way transformation - both the data passed to the query and the data returned by the query will be transformed:

// Transform the column names to and from camel case
const sql = postgres({ transform: postgres.camel })

await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER, b_test TEXT)`
await sql`INSERT INTO camel_case ${ sql([{ aTest: 1, bTest: 1 }]) }`
const data = await sql`SELECT ${ sql('aTest', 'bTest') } FROM camel_case`

console.log(data) // [ { aTest: 1, bTest: '1' } ]

To only perform half of the transformation (eg. only the transformation to or from camel case), use the other transformation functions:

// Transform the column names only to camel case
// (for the results that are returned from the query)
const sql = postgres({ transform: postgres.toCamel })

await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER)`
await sql`INSERT INTO camel_case ${ sql([{ a_test: 1 }]) }`
const data = await sql`SELECT a_test FROM camel_case`

console.log(data) // [ { aTest: 1 } ]

// Transform the column names only from camel case
// (for interpolated inserts, updates, and selects)
const sql = postgres({ transform: postgres.fromCamel })

await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER)`
await sql`INSERT INTO camel_case ${ sql([{ aTest: 1 }]) }`
const data = await sql`SELECT ${ sql('aTest') } FROM camel_case`

console.log(data) // [ { a_test: 1 } ]

Note that Postgres.js does not rewrite the static parts of the tagged template strings. So to transform column names in your queries, the sql() helper must be used - eg. ${ sql('columnName') } as in the examples above.

Transform undefined Values

By default, Postgres.js will throw the error UNDEFINED_VALUE: Undefined values are not allowed when an undefined value is passed as a parameter. To transform undefined values to null instead, set undefined in the transform option:

// Transform undefined values to null
const sql = postgres({
  transform: {
    undefined: null
  }
})

await sql`CREATE TABLE IF NOT EXISTS transform_undefined (a_test INTEGER)`
await sql`INSERT INTO transform_undefined ${ sql([{ a_test: undefined }]) }`
const data = await sql`SELECT a_test FROM transform_undefined`

console.log(data) // [ { a_test: null } ]

To combine with the built in transform functions, spread the transform in the transform object:

// Transform the column names to and from camel case, and undefined values to null
const sql = postgres({
  transform: {
    ...postgres.camel,
    undefined: null
  }
})

await sql`CREATE TABLE IF NOT EXISTS transform_undefined (a_test INTEGER)`
await sql`INSERT INTO transform_undefined ${ sql([{ aTest: undefined }]) }`
const data = await sql`SELECT ${ sql('aTest') } FROM transform_undefined`

console.log(data) // [ { aTest: null } ]

Custom Transform Functions

To specify your own transformation functions, you can use the column, value and row options inside of transform, each an object possibly including to and from keys:

  • to: The function to transform the outgoing query column name, eg. SELECT ${ sql('aName') } becomes SELECT a_name when using postgres.fromCamel.
  • from: The function to transform the incoming query result column name to, see example below.

Both parameters are optional; if not provided, the default transformation function will be used.

// Implement your own functions, look at postgres.toCamel, etc
// as a reference:
// https://github.com/porsager/postgres/blob/4241824ffd7aa94ffb482e54ca9f585d9d0a4eea/src/types.js#L310-L328
function transformColumnToDatabase() { /* ... */ }
function transformColumnFromDatabase() { /* ... */ }

const sql = postgres({
  transform: {
    column: {
      to: transformColumnToDatabase,
      from: transformColumnFromDatabase,
    },
    value: { /* ... */ },
    row: { /* ... */ }
  }
})

Listen & notify

When you call .listen, a dedicated connection will be created to ensure that you receive notifications instantly. This connection will be used for any further calls to .listen. The connection will automatically reconnect according to a backoff reconnection pattern to not overload the database server.

Listen await sql.listen(channel, onnotify, [onlisten]) -> { state }

.listen takes the channel name, a function to handle each notify, and an optional function to run every time listen is registered and ready (happens on initial connect and reconnects). It returns a promise which resolves once the LISTEN query to Postgres completes, or if there is already a listener active.

await sql.listen('news', payload => {
  const json = JSON.parse(payload)
  console.log(json.this) // logs 'is'
})

The optional onlisten callback is great for implementing a very simple queue mechanism:

await sql.listen(
  'jobs',
  (x) => run(JSON.parse(x)),
  ( ) => sql`select unfinished_jobs()`.forEach(run)
)

function run(job) {
  // And here you do the work you please
}

Notify await sql.notify(channel, payload) -> Result[]

Notify can be done as usual in SQL, or by using the sql.notify method.

sql.notify('news', JSON.stringify({ no: 'this', is: 'news' }))

Realtime subscribe

Postgres.js implements the logical replication protocol of PostgreSQL to support subscription to real-time updates of insert, update and delete operations.

NOTE To make this work you must create the proper publications in your database, enable logical replication by setting wal_level = logical in postgresql.conf, and connect using either a replication role or a superuser.

Quick start

Create a publication (eg. in migration)

CREATE PUBLICATION alltables FOR ALL TABLES

Subscribe to updates

const sql = postgres({ publications: 'alltables' })

const { unsubscribe } = await sql.subscribe(
  'insert:events',
  (row, { command, relation, key, old }) => {
    // Callback function for each row change
    // tell about new event row over eg. websockets or do something else
  },
  () => {
    // Callback on initial connect and potential reconnects
  }
)

Subscribe pattern

You can subscribe to specific operations, tables, or even rows with primary keys.

operation : schema . table = primary_key

operation is one of * | insert | update | delete and defaults to *

schema defaults to public

table is a specific table name and defaults to *

primary_key can be used to only subscribe to specific rows

Examples

sql.subscribe('*',                () => /* everything */ )
sql.subscribe('insert',           () => /* all inserts */ )
sql.subscribe('*:users',          () => /* all operations on the public.users table */ )
sql.subscribe('delete:users',     () => /* all deletes on the public.users table */ )
sql.subscribe('update:users=1',   () => /* all updates on the users row with a primary key = 1 */ )

Numbers, bigint, numeric

Number in JavaScript can only safely represent integers up to 2^53 - 1, which means that PostgreSQL types like bigint and numeric won't fit into Number.

Since Node.js v10.4 we can use BigInt to match the PostgreSQL type bigint which is returned for eg. count(*). Unfortunately, it doesn't work with JSON.stringify out of the box, so Postgres.js will return it as a string.

If you want to use BigInt you can add this custom type:

const sql = postgres({
  types: {
    bigint: postgres.BigInt
  }
})

There is currently no guaranteed way to handle numeric / decimal types in native JavaScript. These [and similar] types will be returned as a string. The best approach in this case is to use custom types.
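For example, if the precision of a JavaScript number is acceptable for your use case, you could parse numeric into a number with a custom type (a sketch; 1700 is the oid for numeric in pg_catalog.pg_type):

const sql = postgres({
  types: {
    numeric: {
      to        : 1700,
      from      : [1700],
      serialize : x => String(x),  // send numerics as text
      parse     : x => Number(x)   // beware: loses precision beyond what Number can represent
    }
  }
})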

Result Array

The Result Array returned from queries is a custom array allowing for easy destructuring or passing on directly to JSON.stringify or general Array usage. It includes the following properties.

.count

The count property is the number of affected rows returned by the database. This is useful for insert, update and delete operations to know the number of rows since .length will be 0 in these cases if not using RETURNING ....
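For example (a minimal sketch):

const result = await sql`update users set age = age + 1 where age > ${ 80 }`

result.length // 0, since no rows are returned without RETURNING
result.count  // the number of rows that were updated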

.command

The command run by the query - eg. one of SELECT, UPDATE, INSERT, DELETE

.columns

The columns returned by the query, useful for determining types or mapping to the result values when using .values()

{
  name  : String,    // Column name,
  type  : oid,       // PostgreSQL oid column type
  parser: Function   // The function used by Postgres.js for parsing
}

.statement

The statement contains information about the statement implicitly created by Postgres.js.

{
  name    : String,  // The auto generated statement name
  string  : String,  // The actual query string executed
  types   : [oid],   // An array of oid expected as input parameters
  columns : [Column] // Array of columns - same as Result.columns
}

.state

This is the state { pid, secret } of the connection that executed the query.

Connection details

All Postgres options

const sql = postgres('postgres://username:password@host:port/database', {
  host                 : '',            // Postgres ip address[es] or domain name[s]
  port                 : 5432,          // Postgres server port[s]
  path                 : '',            // unix socket path (usually '/tmp')
  database             : '',            // Name of database to connect to
  username             : '',            // Username of database user
  password             : '',            // Password of database user
  ssl                  : false,         // true, prefer, require, tls.connect options
  max                  : 10,            // Max number of connections
  max_lifetime         : null,          // Max lifetime in seconds (more info below)
  idle_timeout         : 0,             // Idle connection timeout in seconds
  connect_timeout      : 30,            // Connect timeout in seconds
  prepare              : true,          // Automatic creation of prepared statements
  types                : [],            // Array of custom types, see more below
  onnotice             : fn,            // Default console.log, set false to silence NOTICE
  onparameter          : fn,            // (key, value) when server param change
  debug                : fn,            // Is called with (connection, query, params, types)
  socket               : fn,            // fn returning custom socket to use
  transform            : {
    undefined          : undefined,     // Transforms undefined values (eg. to null)
    column             : fn,            // Transforms incoming column names
    value              : fn,            // Transforms incoming row values
    row                : fn             // Transforms entire rows
  },
  connection           : {
    application_name   : 'postgres.js', // Default application_name
    ...                                 // Other connection parameters, see https://www.postgresql.org/docs/current/runtime-config-client.html
  },
  target_session_attrs : null,          // Use 'read-write' with multiple hosts to
                                        // ensure only connecting to primary
  fetch_types          : true,          // Automatically fetches types on connect
                                        // on initial connection.
})

Note that max_lifetime = 60 * (30 + Math.random() * 30) by default. This resolves to an interval between 45 and 90 minutes to optimize for the benefits of prepared statements and working nicely with Linux's OOM killer.

Dynamic passwords

When clients need to use alternative authentication schemes such as access tokens or connections to databases with rotating passwords, provide either a synchronous or asynchronous function that will resolve the dynamic password value at connection time.

const sql = postgres(url, {
  // Other connection config
  ...
  // Password function for the database user
  password : async () => await signer.getAuthToken(),
})

SSL

Although vulnerable to MITM attacks, a common configuration for the ssl option for some cloud providers is to set rejectUnauthorized to false (if NODE_ENV is production):

const sql =
  process.env.NODE_ENV === 'production'
    ? // "Unless you're using a Private or Shield Heroku Postgres database, Heroku Postgres does not currently support verifiable certificates"
      // https://help.heroku.com/3DELT3RK/why-can-t-my-third-party-utility-connect-to-heroku-postgres-with-ssl
      postgres({ ssl: { rejectUnauthorized: false } })
    : postgres()

For more information regarding ssl with postgres, check out the Node.js documentation for tls.

Multi-host connections - High Availability (HA)

Multiple connection strings can be passed to postgres() in the form of postgres('postgres://localhost:5432,localhost:5433', ...). This works the same as the native psql command. Read more at multiple host URIs.

Connections will be attempted in order of the specified hosts/ports. On a successful connection, all retries will be reset. This ensures that hosts can come up and down seamlessly.

If you specify target_session_attrs: 'primary' or PGTARGETSESSIONATTRS=primary Postgres.js will only connect to the primary host, allowing for zero downtime failovers.
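For example (a sketch using placeholder hosts and database name):

const sql = postgres('postgres://localhost:5432,localhost:5433/mydb', {
  target_session_attrs: 'primary'
})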

The Connection Pool

Connections are created lazily once a query is created. This means that simply doing const sql = postgres(...) won't have any effect other than instantiating a new sql instance.

No connection will be made until a query is made.

For example:

const sql = postgres() // no connections are opened

await sql`...` // one connection is now opened
await sql`...` // previous opened connection is reused

// two connections are opened now
await Promise.all([
  sql`...`,
  sql`...`
])

When there is a high number of concurrent queries, postgres will open as many connections as needed, up until the max number of connections is reached. By default max is 10. This can be changed by setting max in the postgres() call. Example - postgres('connectionURL', { max: 20 }).

This means that we get a much simpler story for error handling and reconnections. Queries will be sent over the wire immediately on the next available connection in the pool. Connections are automatically taken out of the pool if you start a transaction using sql.begin(), and automatically returned to the pool once your transaction is done.

Any query which was already sent over the wire will be rejected if the connection is lost. It'll automatically defer to the error handling you have for that query, and since connections are lazy it'll automatically try to reconnect the next time a query is made. The benefit of this is no weird generic "onerror" handler that tries to get things back to normal, and also simpler application code since you don't have to handle errors out of context.

There are no guarantees about queries executing in order unless using a transaction with sql.begin() or setting max: 1. Of course doing a series of queries, one awaiting the other will work as expected, but that's just due to the nature of js async/promise handling, so it's not necessary for this library to be concerned with ordering.

Since this library automatically creates prepared statements, it also has a default max lifetime for connections to prevent memory bloat on the database itself. This is a random interval for each connection between 45 and 90 minutes. This allows multiple connections to independently come up and down without affecting the service.

Connection timeout

By default, connections will not close until .end() is called. However, it may be useful to have them close automatically when:

  • re-instantiating multiple sql`` instances
  • using Postgres.js in a Serverless environment (Lambda, etc.)
  • using Postgres.js with a database service that automatically closes connections after some time (see ECONNRESET issue)

This can be done using the idle_timeout or max_lifetime options. These configuration options specify the number of seconds to wait before automatically closing an idle connection and the maximum time a connection can exist, respectively.

For example, to close a connection that has either been idle for 20 seconds or existed for more than 30 minutes:

const sql = postgres({
  idle_timeout: 20,
  max_lifetime: 60 * 30
})

Cloudflare Workers support

Postgres.js has built-in support for the TCP socket API in Cloudflare Workers, which is on-track to be standardized and adopted in Node.js and other JavaScript runtimes, such as Deno.

You can use Postgres.js directly in a Worker, or, to benefit from connection pooling and query caching, via the Hyperdrive service available to Workers. Pass the Hyperdrive connectionString when creating a new postgres client as follows:

// Requires Postgres.js 3.4.0 or later
import postgres from 'postgres'

interface Env {
    HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(req: Request, env: Env, ctx: ExecutionContext) {
    // The Postgres.js library accepts a connection string directly
    const sql = postgres(env.HYPERDRIVE.connectionString)
    const results = await sql`SELECT * FROM users LIMIT 10`
    return Response.json(results)
  }
}

In wrangler.toml you will need to enable the nodejs_compat compatibility flag to allow Postgres.js to operate in the Workers environment:

compatibility_flags = ["nodejs_compat"]

Auto fetching of array types

Postgres.js will automatically fetch table/array-type information when it first connects to a database.

If you have revoked access to pg_catalog this feature will no longer work and will need to be disabled.

You can disable this feature by setting fetch_types to false.
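For example:

const sql = postgres({
  fetch_types: false // don't query type information on connect
})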

Environmental variables

It is also possible to connect to the database without a connection string or any options. Postgres.js will fall back to the common environment variables used by psql as in the table below:

const sql = postgres()
Option            Environment Variables
host              PGHOST
port              PGPORT
database          PGDATABASE
username          PGUSERNAME or PGUSER
password          PGPASSWORD
idle_timeout      PGIDLE_TIMEOUT
connect_timeout   PGCONNECT_TIMEOUT

Prepared statements

Prepared statements will automatically be created for any queries where it can be inferred that the query is static. This can be disabled by using the prepare: false option. This is useful, for instance, when using PGBouncer in transaction mode.
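For example:

const sql = postgres({
  prepare: false // eg. when connecting through PGBouncer in transaction mode
})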

Custom Types

You can add ergonomic support for custom types, or simply use sql.typed(value, type) inline, where type is the PostgreSQL oid for the type and value is the correctly serialized string. (oid values for types can be found in the pg_catalog.pg_type table.)

Adding Query helpers is the cleanest approach which can be done like this:

const sql = postgres({
  types: {
    rect: {
      // The pg_types oid to pass to the db along with the serialized value.
      to        : 1337,

      // An array of pg_types oids to handle when parsing values coming from the db.
      from      : [1337],

      // Function that transforms values before sending them to the db.
      serialize : ({ x, y, width, height }) => [x, y, width, height],

      // Function that transforms values coming from the db.
      parse     : ([x, y, width, height]) => ({ x, y, width, height })
    }
  }
})

// Now you can use sql.typed.rect() as specified above
const [custom] = await sql`
  insert into rectangles (
    name,
    rect
  ) values (
    'wat',
    ${ sql.typed.rect({ x: 13, y: 37, width: 42, height: 80 }) }
  )
  returning *
`

// custom = { name: 'wat', rect: { x: 13, y: 37, width: 42, height: 80 } }

Custom socket

Easily do in-process ssh tunneling to your database by providing a custom socket for Postgres.js to use. The function (optionally async) must return a socket-like duplex stream.

Here's a sample using ssh2

import ssh2 from 'ssh2'

const sql = postgres({
  ...options,
  socket: ({ host: [host], port: [port] }) => new Promise((resolve, reject) => {
    const ssh = new ssh2.Client()
    ssh
    .on('error', reject)
    .on('ready', () =>
      ssh.forwardOut('127.0.0.1', 12345, host, port,
        (err, socket) => err ? reject(err) : resolve(socket)
      )
    )
    .connect(sshOptions)
  })
})

Teardown / Cleanup

To ensure proper teardown and cleanup on server restarts use await sql.end() before process.exit().

Calling sql.end() will reject new queries and return a Promise which resolves when all queries are finished and the underlying connections are closed. If a { timeout } option is provided any pending queries will be rejected once the timeout (in seconds) is reached and the connections will be destroyed.

Sample shutdown using Prexit

import prexit from 'prexit'

prexit(async () => {
  await sql.end({ timeout: 5 })
  await new Promise(r => server.close(r))
})

Reserving connections

await sql.reserve()

The reserve method pulls out a connection from the pool, and returns a client that wraps the single connection. This can be used for running queries on an isolated connection.

const reserved = await sql.reserve()
await reserved`select * from users`
await reserved.release()

reserved.release()

Once you have finished with the reserved connection, call release to add it back to the pool.

Error handling

Errors are all thrown to related queries and never globally. Errors coming from the database itself are always in the native Postgres format, and the same goes for any Node.js errors, eg. those coming from the underlying connection.

Query errors will contain a stored error with the origin of the query to aid in tracing errors.

Query errors will also contain the query string and the parameters. These are not enumerable to avoid accidentally leaking confidential information in logs. To log these it is required to specifically access error.query and error.parameters, or set debug: true in options.
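For example (a minimal sketch):

try {
  await sql`select wrong_column from users`
} catch (error) {
  console.error(error.message)
  console.error(error.query)      // the query string that failed
  console.error(error.parameters) // the parameters sent with it
}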

There are also the following errors specifically for this library.

UNSAFE_TRANSACTION

Only use sql.begin or max: 1

To ensure statements in a transaction run on the same connection (which is required for them to run inside the transaction), you must use sql.begin(...) or only allow a single connection in options (max: 1).

UNDEFINED_VALUE

Undefined values are not allowed

Postgres.js won't accept undefined as values in tagged template queries since it becomes ambiguous what to do with the value. If you want to set something to null, use null explicitly.

MESSAGE_NOT_SUPPORTED

X (X) is not supported

This error is thrown whenever a message is received from Postgres which is not supported by this library. Feel free to file an issue if you think something is missing.

MAX_PARAMETERS_EXCEEDED

Max number of parameters (65534) exceeded

The postgres protocol doesn't allow more than 65534 (16bit) parameters. If you run into this issue there are various workarounds such as using sql([...]) to escape values instead of passing them as parameters.

SASL_SIGNATURE_MISMATCH

Message type X not supported

When using SASL authentication the server responds with a signature at the end of the authentication flow which needs to match the one on the client. This is to avoid man-in-the-middle attacks. If you receive this error the connection was canceled because the server did not reply with the expected signature.

NOT_TAGGED_CALL

Query not called as a tagged template literal

Making queries has to be done using the sql function as a tagged template. This is to ensure parameters are serialized and passed to Postgres as query parameters with correct types and to avoid SQL injection.

AUTH_TYPE_NOT_IMPLEMENTED

Auth type X not implemented

Postgres supports many different authentication types. This one is not supported.

CONNECTION_CLOSED

write CONNECTION_CLOSED host:port

This error is thrown if the connection was closed without an error. This should not happen during normal operations, so please create an issue if this was unexpected.

CONNECTION_ENDED

write CONNECTION_ENDED host:port

This error is thrown if the user has called sql.end() and performed a query afterward.

CONNECTION_DESTROYED

write CONNECTION_DESTROYED host:port

This error is thrown for any queries that were pending when the timeout to sql.end({ timeout: X }) was reached.

CONNECTION_CONNECT_TIMEOUT

write CONNECTION_CONNECT_TIMEOUT host:port

This error is thrown if the startup phase of the connection (tcp, protocol negotiation, and auth) took more than the default 30 seconds or what was specified using connect_timeout or PGCONNECT_TIMEOUT.

TypeScript support

postgres has TypeScript support. You can pass a row list type for your queries in this way:

interface User {
  id: number
  name: string
}

const users = await sql<User[]>`SELECT * FROM users`
users[0].id // ok => number
users[1].name // ok => string
users[0].invalid // fails: `invalid` does not exist on `User`

However, be sure to check the array length to avoid accessing properties of undefined rows:

const users = await sql<User[]>`SELECT * FROM users WHERE id = ${id}`
if (!users.length)
  throw new Error('Not found')
return users[0]

You can also use destructuring when you only care about a fixed number of rows. In this case, we recommend using tuples to handle undefined properly:

const [user]: [User?] = await sql`SELECT * FROM users WHERE id = ${id}`
if (!user) // => User | undefined
  throw new Error('Not found')
return user // => User

// NOTE:
const [first, second]: [User?] = await sql`SELECT * FROM users WHERE id = ${id}` // fails: `second` does not exist on `[User?]`
const [first, second] = await sql<[User?]>`SELECT * FROM users WHERE id = ${id}` // doesn't fail: `second: User | undefined`

We do our best to type all the public API, however types are not always updated when features are added or changed. Feel free to open an issue if you have trouble with types.

Migration tools

Postgres.js doesn't come with any migration solution since it's way out of scope, but several third-party migration tools support Postgres.js.

Thank you

A really big thank you to @JAForbes who introduced me to Postgres and still holds my hand navigating all the great opportunities we have.

Thanks to @ACXgit for initial tests and dogfooding.

Also thanks to Ryan Dahl for letting me have the postgres npm package name.


Issues

Wait for listen

Hi!

Looking at the source for the listen function, it returns the query to listen. Does this mean that it can be awaited to pause execution until the listen has been registered by the database?

return query({ raw: true }, getListener(), 'listen ' + escape(channel)).then(() => channel)

Query building

Is there any plan to add query building functionality to this whereby I could build a query dynamically bit by bit before executing it? Or should I use another library for that sort of thing and call sql.unsafe(queryBuilderResult)?

And a related question, is it possible to build a dynamic query in a manner similar to this?

const query = sql`SELECT * FROM my_table ${order && sql`ORDER BY ${order}`}`;

I know code snippet is nonsense, but hopefully you get the idea.

Speed up running many statements with sql.unsafe

Hello, this library is really promising and it's amazing to see how much it's improving every month!

As part of our testing, we send multiple statements at a time using sql.unsafe(). This is needed both for resetting parts of the DB (i.e. delete all rows in a few tables and insert them from SQL files) and for testing logic inside a transaction (i.e. modify the DB state with a few statements, then run some functions and finally rollback). However, it seems that sending multiple statements is as slow as sending them individually. Is each statement sent over the network individually? If so, would it be possible to send them all in a single network request?

That would speed up our testing substantially. Similarly, I can easily imagine other development/testing use cases, where this library would then be much faster.

sql.end({timeout: null}) hangs

Hi and thanks for making this driver available. I'm evaluating it as a replacement to pg-promise.

Our tests hangs in the teardown phase if we end the connection with sql.end({timeout: null}). This is due to the implementation of socket.end available in version 1.0.2

    end: () => {
      return new Promise(r => socket && socket.end(r))
    }

If there's no socket, r is never called.

This has been fixed in 4156c3f but it hasn't been released yet. Can you please release a new version?

Error response during postgres startup not handled properly

I detected an issue where the client gets an unexpected response when the postgres server is still starting. This resulted in a crash instead of a rejected query promise.
I switched back to pg for now 😅 to get everything working.

To reproduce it, just try to do a query against a postgres database that is starting.
E.g. execute docker restart postgres and execute some queries while it's starting up.

Error: the database system is starting up
    at Object.generic (/app/node_modules/postgres/lib/types.js:200:5)
    at Object.ErrorResponse (/app/node_modules/postgres/lib/backend.js:125:22)
    at Socket.data (/app/node_modules/postgres/lib/connection.js:167:25)
    at Socket.emit (events.js:210:5)
    at addChunk (_stream_readable.js:309:12)
    at readableAddChunk (_stream_readable.js:290:11)
    at Socket.Readable.push (_stream_readable.js:224:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:182:23) {
  severity_local: 'FATAL',
  severity: 'FATAL',
  code: '57P03',
  file: 'postmaster.c',
  line: '2265',
  routine: 'ProcessStartupPacket'
}

TypeError: Cannot read property 'result' of undefined
    at Object.RowDescription (/app/node_modules/postgres/lib/backend.js:175:23)
    at Socket.data (/app/node_modules/postgres/lib/connection.js:167:25)
    at Socket.emit (events.js:210:5)
    at addChunk (_stream_readable.js:309:12)
    at readableAddChunk (_stream_readable.js:290:11)
    at Socket.Readable.push (_stream_readable.js:224:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:182:23)

How can I catch ERR_UNHANDLED_ERROR

I'm using this in a unit test suite and our test harness force closes all connections at the server. This is done by opening a master db connection and running this:

SELECT pg_terminate_backend(pid) FROM pg_stat_activity
    WHERE datname = 'dbname' AND pid <> pg_backend_pid();

When this is run, any open connection throws this error.

Error [ERR_UNHANDLED_ERROR]: Unhandled error. (error: terminating connection due to administrator command
          at Connection.Object.<anonymous>.Connection.parseE (/path-to-app/node_modules/pg/lib/connection.js:614:13)
          at Connection.Object.<anonymous>.Connection.parseMessage (/path-to-app/node_modules/pg/lib/connection.js:413:19)
          at Socket.<anonymous> (/path-to-app/node_modules/pg/lib/connection.js:129:22)

Question

How can I catch this exception so I can prevent from tearing down the whole process? (obv we can do process.on('uncaughtException', but I'd prefer not to do that)

Better documentation for tagged template and sql() usage

I thought I had finished the docs for sql() usage and template strings, but I see they're incomplete.

I think at the very start it would be great with a completely basic description of how the tagged template sql fn are to be used.

Any raw value inside a template string will be replaced with $n and sent as a parameter to postgresql inferring the correct type. If a value needs to be passed directly in the query sql(value) is to be used which ensures escaping happens properly. It also has overloads to expand the most common scenarios in the correct context.

Eg: sql({ a: 'first', b: new Date() })

  • in insert context it will expand to (a, b) values ($1, $2)
  • in update / where context it will expand to a = $1, b = $2
  • in select context it will expand to $1 as a, $2 as b
    and $1 and $2 will be sent as parameters with correct types (string, timestamp) to postgres

As extra help you can supply an array of objects to insert multiple rows in insert context, and also extra arguments to only pluck specific columns from the object. (encouraged for safety).

row mode equivalent?

equivalent options exist?
sorry for poor english.

https://node-postgres.com/features/queries

Row mode
By default node-postgres reads rows and collects them into JavaScript objects with the keys matching the column names and the values matching the corresponding row value for each column. If you do not need or do not want this behavior you can pass rowMode: 'array' to a query object. This will inform the result parser to bypass collecting rows into a JavaScript object, and instead will return each row as an array of values.

JSON Support

Hi Rasmus,

replacing pg with postgres on our dev branch worked like a charm. Testing the library for a bit longer and stress-testing it is now necessary. One thing I noted was that JSON-values must be stringified manually, which pg did behind the scenes. I don't know if this is possible here as well, such as

let ob = {foo: 1, bar: 2}

sql`SELECT ${ob}`

which only works when stringify is used manually atm like this:

let ob = {foo: 1, bar: 2}

sql`SELECT ${JSON.stringify(ob)}`

Thanks for sharing this amazing library!

Robert

Error: could not determine data type of parameter $1

Trying to create a user/role dynamically, if I hard code the values, it does work... but parameterized it doesn't...

  console.log(`Check/Create user (${dbInfo.username}:${dbInfo.password})`);
  await sql`
    DO
    $do$
    BEGIN
      IF NOT EXISTS (
          SELECT
          FROM   pg_catalog.pg_roles
          WHERE  rolname = ${dbInfo.username}
      ) THEN
          CREATE ROLE ${sql(dbInfo.username)} LOGIN PASSWORD ${dbInfo.password};
      END IF;
    END
    $do$;
  `;

Question: Does `sql.listen` reconnect?

Hey, as the title says, I'm wondering how any errors within sql.listen are handled or should be handled. Since there's no Pool and connections are created lazily (except for sql.listen) I'm curious what the approach here should be.

Thanks!

How to update default value in db

Hi there

I have the following table

BEGIN TRANSACTION;

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE users (
  id uuid DEFAULT uuid_generate_v4 (),
  firstname VARCHAR(100) NOT NULL,
  lastname VARCHAR(100) NOT NULL,
  email text UNIQUE NOT NULL,
  password VARCHAR NOT NULL,
  role TEXT NOT NULL DEFAULT 'basic',
  isActivated BOOLEAN NOT NULL DEFAULT FALSE,
  createdAt timestamp WITH time zone DEFAULT CURRENT_TIMESTAMP,
  updatedAt timestamp WITH time zone DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
);

COMMIT;

This is my code to insert values

const [newUser] = await sql`INSERT INTO users ${sql(
    user,
    'firstname',
    'lastname',
    'email',
    'password',
  )} returning *`

This is my req.body values i pass through to server

{
	"firstname": "Test",
	"lastname": "test",
	"email": "[email protected]",
	"password": "password",
	"role": "admin"
}

You'll notice from the table the role property defaults to basic if no role is specified. However, in this example I'm specifying role with value admin.

When I check the inserted user values, I see role is not updated to admin.

Here's the payload of my JWT, role property defaults to basic and is not updated

{
  "id": "10a9ac24-9440-4d80-a099-4b7dbf52046a",
  "firstname": "Test",
  "lastname": "test",
  "email": "[email protected]",
  "role": "basic",
  "isactivated": false,
  "createdat": "2020-04-21T10:48:10.217Z",
  "updatedat": "2020-04-21T10:48:10.217Z",
  "iat": 1587466100,
  "exp": 1587467000,
  "aud": "EMAIL_SERVICE",
  "iss": "AUTH_SERVICE",
  "sub": "10a9ac24-9440-4d80-a099-4b7dbf52046a"
}

How do I refactor this snippet to update role property if specified?

const [newUser] = await sql`INSERT INTO users ${sql(
    user,
    'firstname',
    'lastname',
    'email',
    'password',
  )} returning *`

Better dynamic query building

Hi, First and foremost, I think this is a great start at making something really simple and effective. Thank you!!

However... I have always found template strings a little odd especially in this context.
Let us say we have

const query = `SELECT ${distinct}, ${columns} FROM ${table} ${innerJoins} WHERE ${where}`;
(Where distinct, columns, table etc are their own strings made up of other functions elsehwhere)
which basically outputs...

SELECT DISTINCT(person.uid), person.name, movie.title, movie.year
FROM person 
INNER JOIN movie_ref ON movie_ref.person_id = person.id
INNER JOIN movie ON movie.id = movie_ref.movie_id
WHERE person.name ILIKE '%luke%'

I should then be able to do sql`${query}`.then((data) => console.log(data) or await etc..

But instead I receive a syntax error:
Error: syntax error at or near "$1" at Object.generic (../node_modules/postgres/lib/types.js:200:5)

Gonna take a look at how this tagged template function is actually working as Im aware it takes the values $1, $2 etc.. and so forth to evaluate them and pass them to the statement e.g. select * from table where id = $1 => select * from table where id = somevalue

At the moment, it means I have to directly place all my select statements big or small in this method. Which will start looking very messy especially when queries become more complex and the app grows.

knex integration

It would be nice if it's possible to use this with knex to easily build requests.

It's just a suggestion to improve usage of this library in many projects.

I think, with that, we have the solution for building queries dynamically. I know it's not simple to do that and, maybe it's not compatible with this library to have the integration with knex.

Thanks in advance
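
For what it's worth, nothing prevents using knex purely as a query builder today; a rough sketch, assuming sql.unsafe(query, parameters) is available in your version (sql being your postgres() instance):

import knex from 'knex'

const builder = knex({ client: 'pg' })           // builder only, no connection needed

const { sql: text, bindings } = builder('users')
  .select('name', 'age')
  .where('age', '>', 60)
  .toSQL()
  .toNative()                                    // { sql: 'select ... where "age" > $1', bindings: [60] }

const users = await sql.unsafe(text, bindings)   // execute the generated query with postgres.js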

Multi params query handling

Hello,

First of all, thank you for your great work, I've been playing around with the library for a while and I'm considering using it in production.

Before that I would like to know if it's possible to build queries depending on params.
An example is better than a long explanation, so here it is:

async function getUsers (filters) {

    const conditions = [];
    const params = [];

    if (filters.name) {
        conditions.push('name = ?');
        params.push(filters.name);
    }

    if (filters.age) {
        conditions.push('age = ?');
        params.push(filters.age);
    }

    let sql = `
        select * from users
        where active = 1
    `;
    
    if (conditions.length) {
        sql = `${sql} and ${conditions.join(' and ')}`;
    }

    // getSQL is an util function which converts '?' to numbered params
    return db.query(util.getSQL(sql), params);
}

How would you do that (building a query conditionally) using your library?
Thank you :)
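
One possible answer, assuming nested sql fragments are supported (a pattern later documentation of the library describes), is to express each optional condition as a fragment:

async function getUsers (filters) {
  return sql`
    select * from users
    where active = 1
    ${ filters.name ? sql`and name = ${ filters.name }` : sql`` }
    ${ filters.age  ? sql`and age = ${ filters.age }`   : sql`` }
  `
}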

add BigInt support

Right now any bigint coming from the database is converted with Number() instead of BigInt().

This can already be done with custom types:

postgres({
  types: {
    bigint: {
      to: 20,
      from: [20],
      parse: data => BigInt(data),
      serialize: bigint => bigint.toString(),
    },
  },
})

but it would be great if this were built into the library

sql string?

Hi again porsager,

I like your lightweight library quite a lot, but I'm facing a problem:

As far as I understand, sql`...` builds the SQL String and submits it instantaneously to the DB for execution.

And I can't pass an arbitrary string to sql instead of a template expression, can I?

Is there any way I can store an SQL query as a String or a kind of "prepared statement without parameters", that is just stored and maybe executed later in another function?
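
Since queries only run when awaited (or via .execute()), one way to get a reusable "stored" query is to wrap it in a function and call it when the result is actually needed; a minimal sketch:

// nothing is sent to the database until the returned function is called and awaited
const findUserById = id => () => sql`select * from users where id = ${ id }`

const pending = findUserById(42)
// ... later, possibly in another function:
const [user] = await pending()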

Add benchmarks

Hey, the API looks promising, but can you please back up the claim that it's the fastest library today?

Selecting * dynamically

I have a bit of code where I'd like to select only specified columns, if provided, or else all columns. I can't seem to figure out how to select * dynamically within the template string.

I'd like to do something like one of these, but neither works:

sql`SELECT ${sql(columns || '*')} FROM my_table`
sql`SELECT ${columns ? sql(columns) : '*'} FROM my_table`

Here are a couple of failed attempts to select * dynamically:

sql`SELECT ${sql('*')} from my_table` // becomes: SELECT "*" FROM my_table
sql`SELECT ${'*'} FROM my_table` // becomes: SELECT $1 FROM my_table [{ value: '*' }]
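
One workaround, assuming nested sql fragments are supported, is to fall back to a plain fragment for the * case:

const rows = await sql`
  select ${ columns && columns.length ? sql(columns) : sql`*` } from my_table
`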

Share: Migration Tool

Hey~!

Feel free to insta-close this by the way. Didn't know how else to contact you :)

I discovered postgres a few weeks ago & just started migrating some of my projects over to it. It's working great so far :)

One roadblock I ran into was the lack of a migration tool.
All the ones I've used in the past were made explicitly for a particular client driver, which meant I had to keep pg installed just so that I could continue running migrations.

I didn't like that, so I made ley so that I could use postgres purely!

It relies on ENV vars for connection by default (PGHOST, PGUSER, etc). You can use something like dotenv to load them up:

$ ley -r dotenv/config up
$ ley -r dotenv/config down

You can also add a ley.config.js file to project root to declare a config object.

Note: It's only not 1.0 because I fell into a nice abstraction that can easily accommodate any/all client drivers. Of course, postgres was the first one I added. Once I add another 2-3 drivers I'll consider it done.

That's all from me. Keep up the great work~!

Is there a way to get SELECT field object keys in camelCase

sql`
SELECT usernameCount, emailCount FROM
   (
      SELECT COUNT(username) as usernameCount FROM users WHERE username = ${options.username}
   ) X,
   (
      SELECT COUNT(email) as emailCount FROM users WHERE email = ${options.email}
   ) Y
`

In the above query, even though I specify the column aliases usernameCount and emailCount in camelCase, they still get converted into lowercase column names. Is there a way to get the column names exactly as specified?
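
The lowercasing happens in Postgres itself: unquoted identifiers are folded to lowercase. Quoting the aliases preserves the casing; one way to rewrite the query:

const [counts] = await sql`
  SELECT
    (SELECT COUNT(username) FROM users WHERE username = ${ options.username }) AS "usernameCount",
    (SELECT COUNT(email) FROM users WHERE email = ${ options.email }) AS "emailCount"
`
// counts = { usernameCount: ..., emailCount: ... }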

Stream speed management using async/await (cursor)

The current stream mechanics make it easy to process rows as they are read from the query rather than return them in a block; however in ultra-large result set cases it's possible the rows will be read faster than they can be processed. See this Reddit thread for more comprehensive specifics. An example use case would be executing a query that yields 1 million+ rows then sending them in blocks of 200 to an external web service that takes several seconds to process each request. In this case it could be possible to send hundreds or thousands of concurrent outbound requests overwhelming heap space and the external service.

There are two potential fixes:

  1. Allow iterator functions to return a promise which is then awaited to ensure more data is not sent to the iterator function than it can handle. The following is a hypothetical use case that buffers up to 200 rows at a time then sends it to an external service with limited speed:
	var rowBuffer = [];
	await sql`select created_at, name from events`.stream(async row => {
		if (rowBuffer.push(row) >= 200) await transformAndSend(rowBuffer.splice(0,200));
	});
	if (rowBuffer.length > 0) await transformAndSend(rowBuffer.splice(0,200));

From a syntax standpoint, all that would change from existing functionality is the addition of the optional async. But on the backend, the return value of the iterator would need to be checked to see whether it is a promise, and then awaited. This is an overly simplified example:

	while (moreRows) {
		var row = await getRow();
		var P = StreamIterator(row);
		if (P instanceof Promise) await P;
	}

I'm not sure how the rows are received, whether each is explicitly read or they come in as events which can be paused. If they are purposefully read then this would be pretty easy: you'd just await the callback and get the next row. If they come in as events you'd need to buffer them to be sent to the callback and pause the incoming results if that buffer gets too large.

*Ideally, the promise resolving to true or a specified enum should stop the query from completing. So if, for example, the user executed a query that botches a join and results in 10 billion rows of nonsense, or the destination service won't accept the results in the callback, it would be nice to have a means to gracefully (or forcefully) stop it from reading more rows.

  2. Alternatively (or additionally), making use of Symbol.asyncIterator would allow a standard "for await" loop as a means to asynchronously process data from an async source. This would be (very) easy to add after altering the existing stream functionality to watch for returned promises, and it could be officially exposed on the query result as an .iterate() method that returns a Symbol.asyncIterator object to manage this flow for the end user. That would look something like the following in practice:
	var rowBuffer = [];
	for await (let row of sql`select created_at, name from events`.iterate()) {
		if (rowBuffer.push(row) >= 200) await transformAndSend(rowBuffer.splice(0,200));
	}
	if (rowBuffer.length > 0) await transformAndSend(rowBuffer.splice(0,200));

I'm at your disposal if you need assistance in implementing this feature or need to test it once complete. If you can make efficiently iterating through large queries as easy as a "for await" loop, you'll have introduced a quality-of-life improvement well ahead of the curve in terms of future proofing.
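
For reference, a batched cursor would cover this use case; a minimal sketch, assuming something like sql``.cursor(batchSize) is (or becomes) available, where the next batch is not read until the loop body's promise resolves:

for await (const rows of sql`select created_at, name from events`.cursor(200)) {
  await transformAndSend(rows)   // back-pressure: nothing more is read until this resolves
}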

Error: remaining connection slots are reserved for non-replication superuser connections

Related to #40 (comment)

Finally encountered it live, when I'm doing user-side actions quite fast

Error: remaining connection slots are reserved for non-replication superuser connections
    at Object.generic (/home/nonroot/transac/node_modules/postgres/lib/types.js:200:5)
    at Object.ErrorResponse (/home/nonroot/transac/node_modules/postgres/lib/backend.js:125:22)
    at TLSSocket.data (/home/nonroot/transac/node_modules/postgres/lib/connection.js:167:25)
    at TLSSocket.emit (events.js:321:20)
    at TLSSocket.EventEmitter.emit (domain.js:485:12)
    at addChunk (_stream_readable.js:297:12)
    at readableAddChunk (_stream_readable.js:273:9)
    at TLSSocket.Readable.push (_stream_readable.js:214:10)
    at TLSWrap.onStreamRead (internal/stream_base_commons.js:186:23) {
  severity_local: 'FATAL',
  severity: 'FATAL',
  code: '53300',
  file: 'postinit.c',
  line: '789',
  routine: 'InitPostgres'
}

Better error reporting. Add original query and params to the thrown exception.

Currently a db exception looks like this:

Error: column "appliedat" of relation "some_table" does not exist
    at Object.generic (...backend/node_modules/postgres/lib/types.js:200:5)
    at Object.ErrorResponse (...backend/node_modules/postgres/lib/backend.js:124:33)
    at Socket.data (...backend/node_modules/postgres/lib/connection.js:161:25)
    at Socket.emit (events.js:223:5)
    at addChunk (_stream_readable.js:309:12)
    at readableAddChunk (_stream_readable.js:290:11)
    at Socket.Readable.push (_stream_readable.js:224:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:181:23) {
  severity_local: 'ERROR',
  severity: 'ERROR',
  code: '42703',
  position: '40',
  file: 'parse_target.c',
  line: '1033',
  routine: 'checkInsertTargets'
}

There are two problems here:

  1. The stack trace doesn't tell anything about my own source code, although async stack traces are enabled in Node 12. Maybe Node's socket API doesn't support async stack traces. It seems to be a hard problem to solve and may be out of the scope of the library, but it should be considered.

  2. There is technical information from the postgres server included, but no useful info about the original query and params. It would be great if the query and params were attached to the Error object before it is thrown to the app code.
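
Until something like that lands, a small wrapper can attach your own context before re-throwing; a rough sketch with illustrative names:

async function tryQuery (context, query) {
  try {
    return await query              // the query only executes once awaited here
  } catch (err) {
    err.context = context           // e.g. a description of the statement and its params
    throw err
  }
}

// usage:
const rows = await tryQuery(
  { table: 'some_table', entity },
  sql`INSERT INTO some_table ${ sql(entity) }`
)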

Add quotes around column names/table names

Postgres supports camel cased column and table names - but only when you quote your table/column names. Same for column names which use reserved words - they also need to be quoted.

sql`SELECT * from ${sql('tableName')} where ${sql({firstName: 'John'})}`

This generates the sql query SELECT * from tableName where firstName = $1
Which will fail if you used camelCased names while creating the table
SELECT * from "tableName" where "firstName" = $1 would be the expected query.

Not that important, but also related and unexpected is that dollar quoted names become quoted

sql`SELECT * from ${sql('$$table_name$$')} where ${sql({firstName: 'John'})}`

This generates the sql query SELECT * from "$$table_name$$" where firstName = $1
Which will fail as $$ are already quotes
SELECT * from $$table_name$$ where "firstName" = $1 would be the expected query.

How would you handle migrations / ensuring schema?

So, I have a GraphQL application. What is the recommended way to ensure a certain database schema every time the server starts? I currently do sql.file('schema.sql') and then start the server, but every time I get "table already exists". So I add CREATE TABLE IF NOT EXISTS, but this does not work with Postgres scalars. Also, what if I need to alter a table? I come from MongoDB so I have no idea how to manage this.

Allow adding helper methods

I find the more I use this library the more I'd like for there to be a way that I could augment sql with common helper methods that I use frequently. For example, instead of importing helper functions throughout my code and calling them like checkTableExists(sql, 'my_table') it would be a lot nicer to be able to add that to sql and do something like sql.checkTableExists('my_table'), or perhaps sql.helpers.checkTableExists('my_table'). I know it's trivial to set up the second example by doing sql.helpers = { checkTableExists: table => ... } but then the helpers aren't available within transactions, etc. I've hacked around this by overriding begin and savepoint to attach the helpers object, but I think it would be really cool if a helpers object was a first-class notion in this library that you could set up within options and it would always be available. Thoughts?
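
For reference, a minimal sketch of the workaround described above (names illustrative): a plain helpers object that takes sql explicitly, so the same helpers also work inside transactions:

const helpers = {
  async checkTableExists (sql, table) {
    const [{ exists }] = await sql`
      select exists (
        select 1 from information_schema.tables where table_name = ${ table }
      )
    `
    return exists
  }
}

// works with the root sql as well as the transaction-scoped sql from sql.begin:
await sql.begin(async sql => {
  if (!await helpers.checkTableExists(sql, 'my_table')) {
    // create it, run seeds, etc.
  }
})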

Quote all columns in generated inserts, updates

I use camelCase for table and column names. These need to be quoted. Currently when I do something like:

    await sql`
      INSERT INTO "Address" ${sql(address)}
    `

The fields aren't being quoted.

Typescript support

What about supporting TypeScript?
Just a simple types.d.ts would be enough, no need to fully rewrite the project.

Pro

  • Much easier than reading the docs
  • Direct documentation with JSDoc and parameter validation (IntelliSense)

Con

  • Must be in sync with the implemented code

Front Javascript support

Hi,
I discovered that postgres is not supported in front-end JavaScript, I mean real vanilla JavaScript in the browser (not the Node.js kind).
I'm asking because I would have liked to use it in a Cloudflare Service Worker.
That's too bad, because postgres seems really promising: lightweight (small webpack bundle size), the fastest (interesting for short execution times) and with few dependencies (it looks like only a few libs are causing trouble: fs, etc.).

What do you think?

Fundamental tools for this project

@porsager Greetings!

First of all, congratulations on making good progress with this project! The community does need it a lot, considering just how badly stale things are with the old node-postgres maintenance, which feels like it is going nowhere. Somebody had to shake off the ongoing apathy, rethink everything and get started from scratch.

We had a discussion not so long ago (I am the author of pg-promise library), in relation to the performance considerations, but that's another topic.

I mentioned that I did a similar attempt within pg-core, but unfortunately, never made enough progress there. Your attempt picked up very nicely here!

My only wish is that you had chosen TypeScript from day one for this project.

Finally, as to why I opened this issue, in addition to what I said above...

I've done tons of development over the years, for postgres and in general. And there are two modules in particular that I wrote, which I think you should definitely adopt within your project, either directly, or at least within your documentation and examples:

These are essential, basic tools for just about every app that uses Postgres in NodeJS. Each is fundamental, and without any dependencies.

browserify: Replace net package by net-browserify?

Thanks a lot for solving issue #22 so fast. As you might have read there, I am considering using your library on the client side, that is, in the browser(!)

Browserify does a great job of supplying Node.js built-in libraries such as Buffer, but it lacks the net package (i.e. TCP connections) out of the box.

Fortunately, the npm package net-browserify does a great job at tunnelling TCP over websockets(!)

At this point you might say,
Nice story, but what does that have to do with the porsager/postgres package?

So, to get to the point:

Is it too much to ask you to add the following to your package.json file:

"browser": { "net": "net-browserify" }

That would mean you instruct browserify to replace the net package with net-browserify.

I have already tested it and it would work, but I had to modify your package in the way described above.

Add the option to lazy connect to postgres

Thank you for the awesome client. I was wondering if there is a possibility of adding an option to disable automatic connection to Postgres, and instead add a function that starts the connection manually. It is similar to ioredis' lazyConnect option shown here.

I basically have logic in place at the startup of my server to connect to each service I am using such as the database, cache and message queue, before I start listening on the port, something like this:

async function startServer(): Promise<void> {
  await Promise.all([
    connectToDB(),
    connectToES(),
    connectToCache(),
    connectToEventQueue()
  ]);
  server = http.createServer(app);
  await listen();
  const addr = server.address();
  const bind = typeof addr === 'string' ? `pipe ${addr}` : `port ${addr.port}`;
  loggingService.log(`Listening on ${bind}`);
}

async function closeServer(): Promise<void> {
  await unlisten();
  server = null;
  await Promise.all([
    disconnectFromDB(),
    disconnectFromES(),
    disconnectFromCache(),
    disconnectFromEventQueue()
  ]);
}

So what I need is a function I can call to start the connection to Postgres manually. Is that possible?
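
A possible workaround in the meantime, assuming connections are only opened once a query is actually issued: have connectToDB run a trivial query, and close the pool in disconnectFromDB:

async function connectToDB () {
  await sql`select 1`              // forces the first connection to be established
}

async function disconnectFromDB () {
  await sql.end({ timeout: 5 })    // closes all connections
}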

sql.array error on empty array

I got a "projectIds" field if integer[] with the following query:

await sql`
      UPDATE
        users
      SET
        "projectIds" = ${sql.array(projectIds)}
      WHERE
        "id" = ${userId}
      ;
`

It works when it's not empty, but when empty I'm getting the following error:

column "projectIds" is of type integer[] but expression is of type text[]

Any way to fix this? Thank you.
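
A possible workaround until this is handled in the library: add an explicit cast so an empty array isn't inferred as text[]:

await sql`
  UPDATE
    users
  SET
    "projectIds" = ${ sql.array(projectIds) }::integer[]
  WHERE
    "id" = ${ userId }
`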

os.userInfo is not a function

user : o.user || o.username || auth[0] || env.PGUSERNAME || env.PGUSER || os.userInfo().username,

Node v12.13.0
OS: Windows 10
Usage: inside Electron + React

Anyway, this line is giving me an error: TypeError: os.userInfo is not a function.

As per this SO answer (https://stackoverflow.com/a/56253238), os.userInfo() can't be called from the browser.

One solution is deleting os.userInfo().username and replacing it with an empty string, which I did in my project. The user should be able to put the username inside an env var or options.
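
In the meantime, passing the user explicitly (or via PGUSERNAME / PGUSER) means the os.userInfo() fallback is never reached; the values below are illustrative:

import postgres from 'postgres'

const sql = postgres({
  host     : 'localhost',
  username : 'my_db_user',       // explicit username, so os.userInfo() is never consulted
  password : 'my_db_password',
  database : 'my_db'
})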

Reverse transform of column names when inserting or updating

Transforming column names in both directions would greatly improve the ergonomics of the library. Consider this example:

const sql = postgres({
  transform: {
    column: postgres.toCamel,
  },
}) 
const entity = {
  appliedAt: new Date,  // db column name is "applied_at"
}
// error: column "appliedat" of relation "some_table" does not exist
sql`INSERT INTO some_table ${sql(entity)}`

The transform option should work both ways, I think. It could be configured like this:

{
  transform: {
    column: {
      fromDb: postgres.toCamel,
      toDb: postgres.toSnake,
    },
  },
}
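
Until the transform option works in both directions, a small helper can snake_case the keys before the object reaches sql(); an interim sketch with illustrative names:

const toSnake = s => s.replace(/[A-Z]/g, c => '_' + c.toLowerCase())
const snakeKeys = obj =>
  Object.fromEntries(Object.entries(obj).map(([k, v]) => [toSnake(k), v]))

// "appliedAt" becomes "applied_at" before the insert is generated
await sql`INSERT INTO some_table ${ sql(snakeKeys(entity)) }`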

Omitting pathname on database url generates "Cannot read property 'slice' of null" error

Not likely to be a common problem but easy to fix; omitting a slash at the end of the url if you want to connect to the default database generates an error:

Error generating example:

var sql = postgres("postgres://username:[email protected]");
var res = await sql`SELECT 1 AS one;`;
await sql.end();
// > TypeError: Cannot read property 'slice' of null
// > at parseOptions (...\node_modules\postgres\lib\index.js:474:55)

No Error generating example:

var sql = postgres("postgres://username:[email protected]/");
var res = await sql`SELECT 1 AS one;`;
await sql.end();
// > No error

[Proposed fix @ index.js:457 (cursor branch)]

Update line from:

    database    : o.database || o.db || url.pathname.slice(1) || env.PGDATABASE || 'postgres',

to:

    database    : o.database || o.db || url.pathname && url.pathname.slice(1) || env.PGDATABASE || 'postgres',

methods Buffer.utf8Write and Buffer.utf8slice do not exist

According to

there are no methods Buffer.utf8Write and Buffer.utf8slice .

Please replace your calls to

Thank you! Then your package would not only conform to the Node.js Buffer API, but also work client-side using the browserify and net-browserify packages.

Better documentation for connection pooling

First of all - thank you very much for this great piece of work! This is by far the best approach to a postgres client.

Can we have a bit more information about connection pooling please?

As far as I can see there is no need for releasing a connection, even in case of a failed query? Does postgres handle all those cases?

What happens if the pool is full? Are queries queued and executed in a fifo manner?

Is there a timeout for queries?

What happens if a db becomes unavailable (temporarily or permanently), or if the master node changes in a high availability setup? Does it try to reconnect and to continue executing the queue?

Error: write CONNECTION_CLOSED false

First of all, thank you so much for this amazing library! It is the best postgres driver so far.

We have a service with ~60 req/s to the database. The postgres package is set to defaults; only connection settings (host, port, username, password, database name, ...) are set. Everything works like a charm, but after more or less 2 hours, a small number of errors occur:

Error: write CONNECTION_CLOSED false
    at Object.connection (/node-reverse/node_modules/postgres/lib/types.js:191:5)
    at close (/node-reverse/node_modules/postgres/lib/connection.js:173:18)
    at Socket.onclose (/node-reverse/node_modules/postgres/lib/connection.js:199:5)
    at Object.onceWrapper (events.js:286:20)
    at Socket.emit (events.js:198:13)
    at TCP._handle.close (net.js:607:12)

The number of errors increases with application runtime. It seems like bad pool handling to me. We are testing it right now with the timeout option, so I will let you know whether it helps.

Non prepared execution mode

Hi,

First I want to say that this is a great library, simple but powerful.

Anyway, I just found a case where executing a query with bind parameters results in a much slower execution time (5s) vs running it with hardcoded values (30ms). I believe this is caused by the use of a prepared statement, where the server creates an execution plan without looking at the values of the bind parameters, so some indexes are not used. To fix this we could explicitly cast the bind parameters, e.g. ${value}::INT, but it is a bit of an annoyance to have to cast every bind parameter. So if possible I think it would be great to have a non-prepared execution mode as an alternative.

For comparison, the same query runs fast with python/psycopg2 because, although it supports bind parameters in the client (via the %s format), it sends the query to the server in non-prepared mode, that is, it directly replaces the bind parameter placeholders with their values.

(Connection Pool) Error: prepared statement "p1" already exists

Heya, really enjoying this project.

When I switch to Connection Pool in DigitalOcean PostgreSQL 11, I get this error:

Error: prepared statement "p1" already exists
    at Object.generic (C:\cygwin64\home\1234\transac\node_modules\postgres\lib\types.js:200:5)
    at Object.ErrorResponse (C:\cygwin64\home\1234\transac\node_modules\postgres\lib\backend.js:124:33)
    at TLSSocket.data (C:\cygwin64\home\1234\transac\node_modules\postgres\lib\connection.js:167:25)

Any thoughts?

SSL Connection

Hej Rasmus,

I'm trying to use postgres on Heroku, and they require you to connect over SSL. Setting the ssl flag to true did not help much, e.g.:

var sql = postgres(process.env.DATABASE_URL, { ssl: true })

Gives the following error:


Unhandled Rejection at: Promise {
  <rejected> Error: self signed certificate
      at TLSSocket.onConnectSecure (_tls_wrap.js:1473:34)
      at TLSSocket.emit (events.js:311:20)
      at TLSSocket._finishInit (_tls_wrap.js:916:8)
      at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:686:12) {
    code: 'DEPTH_ZERO_SELF_SIGNED_CERT'
  }
} reason: Error: self signed certificate
    at TLSSocket.onConnectSecure (_tls_wrap.js:1473:34)
    at TLSSocket.emit (events.js:311:20)
    at TLSSocket._finishInit (_tls_wrap.js:916:8)
    at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:686:12) {
  code: 'DEPTH_ZERO_SELF_SIGNED_CERT'
}

I wonder whether this is because Heroku self-signs their postgres SSL certs, or if there is a flag that needs to be set on the connection?
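
If the ssl option is passed through to Node's tls.connect (which the option object suggests), disabling certificate verification is one way to accept Heroku's self-signed certs; treat this as a sketch and weigh the security trade-off:

var sql = postgres(process.env.DATABASE_URL, {
  ssl: { rejectUnauthorized: false }   // accept the self-signed certificate
})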
