
sql-pg's Introduction


Latest Release: v10.4.1, Latest Major Release: v10.0.0, All Releases

SQL-PG

Complex queries can be written in plain SQL; the values that need to be bound are interpolated directly, and the whole query is prefixed with the sql tag.

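A minimal sketch of such a tagged query (the example queries of the original README are screenshots; this one is illustrative and uses the sql.one Selection Method described below):

const id = 1

const user = await sql.one(
  sql`SELECT "name", "email" FROM "users" WHERE "id" = ${id}`
)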

Features

  • Built on top of pg as the database driver
  • Simple data manipulation and selection methods without the need to write SQL
  • SQL Tag and Tag Helpers to write queries that look like native SQL and are secure by design
  • Queries that are easy to unit test
  • Possibility to add your own Tag Helpers to extend the functionality

Installation

npm install --save pg sql-pg

Initialisation

Use it in your project:

const sql = require('sql-pg')()

The connection uses the env var DATABASE_URL by default. A database connection is only established once manipulation/selection methods are used. Alternatively it's possible to provide a sql.js to customise how the database connection and the sql object are initialised.
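
A minimal sketch of such a project-local sql.js, assuming the initialisation function accepts a pg client and the default options discussed in the issues below (the exact option names may differ):

// sql.js — sketch only, option names are assumptions
const { Pool } = require('pg')

module.exports = require('sql-pg')({
  client: new Pool({ connectionString: process.env.DATABASE_URL }),
  defaultSchema: 'public'
})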

Usage

Manipulation Methods

Simple data manipulation can be done without writing any SQL Statements.

E.g. some user data manipulation:

const id = await sql.insert(
  'users',
  { name: 'Sharaal', email: '[email protected]', passwordhash: '...' }
)

await sql.update('users', { validated: 1 }, { id })

await sql.delete('users', { id })

More complex data manipulation can be done with the SQL Tag.

Selection Methods

Convenient methods to check and extract query results, which are needed often, are available as the Selection Methods.

E.g. select all not validated users:

const users = await sql.any('users', ['name', 'email'], { validated: 0 })

The Selection Methods also support an SQL Tag as parameter for more complex selections. Because they are highly inspired by pg-promise, the Selection Methods any/manyOrNone, many, oneOrNone and one are available.
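
A sketch of the stricter variants, assuming they take the same (table, columns, conditions) parameters as sql.any above; .one() throws if there is not exactly one row (see the error code issue below), while .oneOrNone() tolerates a missing row:

const user = await sql.one('users', ['name', 'email'], { id: 1 })
const maybeUser = await sql.oneOrNone('users', ['name', 'email'], { email: 'unknown@example.com' })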

SQL Tag and Tag Helpers

If queries become more complex, the SQL Tag and Tag Helpers are the way to go.

They stay as close as possible to native SQL queries to be readable and easy to write. All variables can be used directly; they are exchanged with placeholders and passed to the database separately as values. For non-native values like lists, and for table/column names and conditions, there are Tag Helpers.

E.g. the list of not yet validated users, filtered by name:

const name = 'raa'

const users = await sql.any(
  sql`
    SELECT "name", "email" FROM "users"
      WHERE
        "validated" = 0
        AND
        "name" LIKE ${`%${name}%`}
  `
)

There are a lot more Tag Helpers available like .identifier, .table, .column(s), .value(s), .valuesList, .assignments, .conditions, .limit, .offset, .pagination, .if, .jsonColumnObject and .jsonColumnText.
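
A rough sketch of how a few of these Tag Helpers can be combined inside the SQL Tag (illustrative only; see the Wiki for the exact signatures):

const rows = [{ name: 'Sharaal', email: 'sharaal@example.com' }]

await sql.query(
  sql`INSERT INTO ${sql.table('users')} (${sql.columns(['name', 'email'])}) VALUES ${sql.valuesList(rows)}`
)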

More

Alternative initialization, available Tag Helpers, Nested Queries, Transaction, Writing Tag Helpers, Migrations, Syntax Highlighting in Atom... All additional documentation can be found in the Wiki.

Contact

Found a bug or missing a feature? -> Create a new Issue

Found a security issue? -> Look at the Security Policy

Having questions, want to give feedback or talk to me? -> E-Mail me [email protected]

sql-pg's People

Contributors

greenkeeper[bot], sharaal


sql-pg's Issues

Integration tests with an actual database

Through #40 I learned that integration tests with an actual database and the native client of pg are highly important to ensure the whole library actually works.
So let's have a docker-compose.yml setting up PostgreSQL for local development and also add the database as a service for the Travis tests.
When splitting npm test into npm test:unit and npm test:integration, only the unit tests should run in the git hook.

Support composable queries

In some query builders like knex.js it's possible to compose SQL queries, e.g.:

function getLatest (query, limit) { 
  return query.orderBy('created_at', 'DESC').limit(limit)
}

With such a function every query can be extended by composing in the orderBy and limit parts.
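
A sketch of what a comparable composition could look like with sql-pg, assuming nested queries (documented in the Wiki) and the .limit Tag Helper behave as described:

// Sketch only — relies on nesting one sql tag inside another
function getLatest (query, limit) {
  return sql`${query} ORDER BY "created_at" DESC ${sql.limit(limit)}`
}

const users = await sql.any(getLatest(sql`SELECT * FROM "users"`, 10))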

Add value formatter support and a default one with JSON

Currently only primitive values are allowed. It would be very convenient if complex values were also accepted and transformed to JSON by default.

e.g.:

const object = { key: 'value' }
sql`INSERT INTO "table" (json) VALUES (${object})`

// text: INSERT INTO "table" (json) VALUES ($1)
// parameters: ['{"key":"value"}']

The value formatter should be overridable.

A possible enhancement could also be transforming Date objects into the PostgreSQL timestamp format. But maybe this is not easily possible, because there are multiple timestamp formats available.

But for JSON it seems to be unproblematic.
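
A sketch of what an overridable formatter could look like; the valueFormatter option name is hypothetical and not part of the current API:

// Hypothetical option name, for illustration only
const sql = require('sql-pg')({
  valueFormatter: value => {
    if (value instanceof Date) return value.toISOString()
    if (value !== null && typeof value === 'object') return JSON.stringify(value)
    return value
  }
})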

Final end to end testing

Far too late, but better now than never, I finally end-to-end tested all functionalities.

With that I found a few errors:

  • Migrations did not work correctly if used multiple times, because of duplicate trigger creation
    => Added a "DROP TRIGGER IF EXISTS" before the creation
  • The lib did not work with the native client instance, because the function had a name which pg interpreted as a transaction name
    => Refactored the design: sql`` returns the function without .text and .values assigned, and sql.query() calls the function before giving it to sql.client.query(). This also optimises the workflow, because the query building is now done only once before the query is executed
  • The native client.query() uses .values for the array of values, not .parameters
    => Changed all the code
  • The * in SELECT "*" FROM table got interpreted as a column with that name instead of selecting all columns
    => Changed it to SELECT * FROM table
  • There are use cases where identifier escaping is needed for things other than tables or columns, e.g. trigger names
    => Changed the identifier escaping to be a publicly available Tag Helper via sql.identifier()

There is also a minor optimisation: a complete initialisation example was added to the README.md.

Tests for the migration

Currently the migration script is the only piece of code without tests.

The difficulty here is the dependencies, which are required directly from the script, and the script itself, which runs directly on require and needs to be re-required multiple times for the testing.

One solution could be mock-require, which seems to solve both requirements.
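
A rough sketch of how mock-require could be used here; the script path ./migrate and the mocked shape of pg are assumptions:

const mock = require('mock-require')

class FakeClient {
  async connect () {}
  async query () { return { rows: [] } }
  async end () {}
}

mock('pg', { Client: FakeClient })

// the script runs on require, so re-require it for every test case
mock.reRequire('./migrate')

// ...assert against the queries the FakeClient received...

mock.stopAll()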

Use error codes for the sql-pg errors and provide them in the sql object

Currently sql-pg throws some errors of its own, e.g. in the .one() selection method. But it's not handy to catch them by the error message. So it would be good to also provide an error code and to expose all error codes as constants on the sql object.

So something like this would be possible:

try {
  const user = await sql.one('users', { email: req.body.email })
} catch (e) {
  if (e.code === sql.NO_ROW_ERROR) {
    res.status(403).send('incorrect email')
    return
  } else {
    res.sendStatus(500)
  }
}

Split key into table and column

Technically they are the same, but it makes sense to split them because they are semantically different. This also enables later features like a defaultSchemaName that all tables get.

Integrate `sql-pg-restrict` and `sql-pg-helper`

As a proposal, integrate both additional packages into the core sql-pg package.

This results in a few advantages:

  • All methods are provided on the sql tag, so sql.query, sql.insert and so on will be available
  • All of these methods are optionally available after assigning the pg client to sql.client
  • sql.query will automatically be restricted to queries created with the sql tag
  • Avoids complex usage if you want to use them together

The disadvantage is of course the higher complexity of the core sql-pg package.

Migration helpers for creating tables with serial id and timestamp columns

By default, every table should have a serial id and timestamp columns.

So all tables should get the following basic schema:

CREATE TABLE "example" (
  "id" SERIAL NOT NULL PRIMARY KEY,
  "created_at" TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  "updated_at" TIMESTAMPTZ NOT NULL DEFAULT NOW()
)

But especially the updated_at column, which should be updated automatically whenever something changes in the row, is not natively provided by PostgreSQL and needs more logic:

  • A trigger function which updates updated_at with a new timestamp every time the row is updated:
CREATE OR REPLACE FUNCTION trigger_set_timestamp()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
  • The trigger needs to be assigned to the table:
CREATE TRIGGER set_timestamp_example
BEFORE UPDATE ON example
FOR EACH ROW
EXECUTE PROCEDURE trigger_set_timestamp();

For all of this there should be helpers for the migration files.

For more details see: https://x-team.com/blog/automatic-timestamps-with-postgresql/

PR: #38

Evaluate splitting the implementation into modular files

Maybe it's time to consider that the implementation is too large for only one file.
So try out some approaches to modularise the implementation.

Requirements:

  • No API changes, the usage should be the same
  • Still no dependencies

Remove unneeded complex object destructuring default syntax

Currently I always use a complex structure to define defaults in object destructuring, e.g.:


module.exports = ({
  client,
  defaultSerialColumn: defaultSerialColumn = 'id',
  defaultSchema,
  defaultFallbackLimit: defaultFallbackLimit = 10,
  defaultMaxLimit: defaultMaxLimit = 100,
  defaultPageSize: defaultPageSize = 10
} = {}) => {
...
}

But I don't know why I always used this complex structure; it can be reduced to the simpler one, e.g.:

module.exports = ({
  client,
  defaultSerialColumn = 'id',
  defaultSchema,
  defaultFallbackLimit = 10,
  defaultMaxLimit = 100,
  defaultPageSize = 10
} = {}) => {
...
}

Support Query Methods to check and extract query results

pg-promise has a lot of convenient query methods to check and extract query results.

E.g. to get the data for a user, you need to check if there is a result in result.rows:

const userId = 1

const results = await sql.query(
  sql`SELECT * FROM users WHERE id = ${userId} AND validated = 1`
)

if (results.rows.length !== 1) {
  throw new Error(`Missing user`)
}

const user = results.rows[0]

This kind of checking and extracting is not complicated, but it is needed often, so it can be worth providing these Query Methods.

E.g. to get the data for a user with the Query Methods:

const userId = 1

const user = await sql.one(
  sql`SELECT * FROM users WHERE id = ${userId} AND validated = 1`
)

The Query Methods pg-promise supports are: any/manyOrNone, many, oneOrNone, one. It would be nice to use these names and provide the same behaviour with support for the SQL Tags.

ToDo's:

  • Implementation of the Query Methods
  • Add a short introduction to the README.md
  • Add Wiki entry for the Query Methods
  • Add unit tests for every Query Method:
    • any
    • manyOrNone
    • many
    • oneOrNone
    • one

Pull request: #12

Find a solution for accidentally missing sql tags

Currently, if the sql tag is accidentally missing and only primitive values are used, the query still works but opens the door wide to SQL injections. That shouldn't be possible, so a good solution is needed to decrease this risk.

Example:

// bad
const result = await client.query(`
  SELECT * FROM users WHERE email = ${email} AND passwordhash = ${passwordhash}
`)

// good
const result = await client.query(sql`
  SELECT * FROM users WHERE email = ${email} AND passwordhash = ${passwordhash}
`)

Ideas how to solve that in better ways:

  • Linting rules could report .query calls with template strings that are missing the sql tag. That's also good because linters offer options to ignore the rule at individual calls if there is a use case for that. Additionally, such a rule can report manual concatenation with strings or template strings (see the sketch below)
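
A rough sketch of such a check using ESLint's built-in no-restricted-syntax rule; the selector is an assumption and may need tuning:

// .eslintrc.js — sketch only
module.exports = {
  rules: {
    'no-restricted-syntax': ['error', {
      selector: "CallExpression[callee.property.name='query'] > TemplateLiteral.arguments",
      message: 'Template literals passed to .query() must be tagged with sql``'
    }]
  }
}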

Combine the concept of CRUD Methods with Query Methods

Currently, with any/manyOrNone, many, oneOrNone and one there are convenient Query Methods to select data with support for the SQL Tag. But there is also the select method, which removes the need to write SQL for the basic use cases.

So it would be nice if the Query Methods also accepted the same parameters as select.

But after some thinking about the topic it makes sense to completely remove .select() in favour of the Query Methods (renaming them to Selection Methods) and to extend the remaining CRUD Methods to support SQL Tags.

ToDo's:

  • Implementation
  • README.md
  • Tests
  • Wiki

Pull request: #14

Support JSON columns

For simple usages of JSON columns there should be Tag Helpers. They can only be used in the SQL Tag and the .column(s) Tag Helper, and therefore also in the columns part of the .select Manipulation Method.

Only the -> chain is supported, and as the last operator both -> and ->>.

The path operators #> and #>> won't be supported by the Tag Helpers.
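
For reference, the native PostgreSQL operators these helpers target (-> keeps JSON, ->> returns text; the column and keys below are illustrative):

SELECT "settings"->'notifications'->>'email' FROM "users"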

Add command to create a new migration file

Acceptance criteria

  • To be similar to knex, the name of the command should be migrate:make
  • The name is supported as a parameter
  • If the name doesn't have a file ending, .sql is used as the default
  • The current unix timestamp is added as a prefix to the name
  • Depending on the file ending, a different template is used as the file content:
    • .js:
    module.exports = async sql => sql.query(
      sql``
    )
    
    • .sql: empty file

Schema for table names and table names for column names

Keys can be combined: a schema with a table name, and a table name with a column name, e.g.:

SELECT * FROM "schema"."table"

It would be nice to support this in all tag helpers handling keys.

Also add a defaultSchemaName attribute which will be used for all keys that don't have an explicit schema name. For this I think it would be necessary to split the key tag helpers into separate table and column tag helpers (separated this into #22).

Needs to be done after: #22.

Refactor library initialization to allow different settings

Currently require('sql-pg') directly returns the sql tag. But with all the default options coming in, it makes sense to be able to have different sql tag objects with different options. So require('sql-pg') should return an initialisation function that sets all default options passed in as params.

`.valuesList()` needs to extract the values by the keys of the first row

Currently .valuesList() can break when objects are given with a different order of attributes. To fix it, the keys of the first row need to be extracted and used to extract the values from the following rows.

So the keys and values of the first row define the order.
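
A sketch of the problematic case, assuming .valuesList() currently iterates each object in its own key order:

// Both rows describe the same columns, but the attribute order differs,
// so the values of the second row could end up in the wrong columns
sql.valuesList([
  { name: 'a', email: 'a@example.com' },
  { email: 'b@example.com', name: 'b' }
])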

Alternatively/additionally it should be possible to define the keys which are used, also for .values() and .insert().

Support Migrations

To be more independent of knex it would be nice to have our own solution for migrations.

The supported formats should be .sql (if only plain SQL is needed) and .js (for more complex migrations).

There shouldn't be support for rollbacks; instead, always go forward with additional migrations.
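
A sketch of what a .js migration could look like, following the template from the migrate:make issue above (the file name, table and column are illustrative):

// 1556668800_add_validated_to_users.js — illustrative example
module.exports = async sql => sql.query(
  sql`ALTER TABLE "users" ADD COLUMN "validated" INT NOT NULL DEFAULT 0`
)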

Refactoring tests to reduce test code duplication

Currently most of the tests have the same behavior:

  • Create the actual sql/tag helper object
  • In case of an sql object, assert the text and parameters against the expected ones and check the symbol
  • In case of a tag helper, assert that text, parameters and symbol are undefined
  • Call the actual object as a function to test the usage as a tag helper with a different parameter position

That is a lot of redundant test code to check these things in every test case.

It would be nice to have some test functions that wrap the test steps, e.g.:

const value1 = 'value1'
const value2 = 'value2'
const value3 = 'value3'
testSql(
  sql`SELECT "*" FROM "table" WHERE "column1" = ${value1} AND "column2" = ${value2} AND "column3" = ${value3}`,
  {
    text: {
      0: 'SELECT "*" FROM "table" WHERE "column1" = $1 AND "column2" = $2 AND "column3" = $3',
      5: 'SELECT "*" FROM "table" WHERE "column1" = $6 AND "column2" = $7 AND "column3" = $8'
    },
    parameters: ['value1', 'value2', 'value3']
  }
)

And similar for the tag helpers, e.g.:

testTagHelper(
  sql.values(['value1', 'value2', 'value3']),
  {
    text: {
      0: '$1, $2, $3',
      5: '$6, $7, $8'
    },
    parameters: ['value1', 'value2', 'value3']
  }
)

It would also increase the readability of the test cases.

Refactor the documentation by using the Wiki

Currently the complete documentation is in the README.md. As the functionality of the library gets bigger and bigger, especially through #4, it lacks overview.
It would be better to have a very short, clean README.md with a getting started / first steps section and to use the Wiki or a GitHub page for all the details.

Allow presence of `$` in hardcoded strings of text fragments

Because every $ is replaced with a numbered one, it breaks if that character is present in hardcoded strings of text fragments. This is also problematic in text fragments of nested queries.

Input

sql`SELECT * FROM table WHERE column = "$"`

Output

Expected:

SELECT * FROM table WHERE column = "$"

Actual:

SELECT * FROM table WHERE column = "$1"

Possible fixes

a) It's maybe necessary to replace the regex with proper parsing to determine whether a $ is inside a string (" or ') and should be ignored by the replacement.
b) First replace all $ with \$ before adding the ones for the binding of variables, ignore them in the replacement and replace them back at the end. But this approach is maybe not possible for nested queries.
c) Use another, more unique character sequence during the creation, e.g. $#, and replace it at the end with $n.

Test the publishing package

In the past it happened that the tests passed, but the package was not published correctly, e.g. because of missing files in the files entry of the package.json.

This issue needs to be covered by the automatic tests before the package is published.

Approach:

  1. Use npm pack to create the tarball exactly as the package will be published
  2. Unpack this tarball locally, install the dependencies and copy over the tests and the files needed by the tests
  3. Run the unit and integration tests on it

Decrease complexity by removing objects as Tag Helpers

Currently it's possible for a Tag Helper to be just an object:

sql.example = { text: 'example', parameters: [] }

This can be made consistent with the Tag Helpers that are functions because they need the parameter position, even if it's not actually needed:

sql.example = () => ({ text: 'example', parameters: [] })

This makes it possible to reduce the internal complexity and makes it clearer how to extend the library with new Tag Helpers, because all of them share the same interface:

// Tag Helper Interface
parameterPosition => ({ text: '', parameters: [] })

This includes changing all provided Tag Helpers which currently return objects: .if, .key, .keys, .limit, .offset, .pagination.

Disallow update/delete without conditions

I ran into hard-to-revert issues when conditions were accidentally missing in update/delete.
So it's a good idea to disallow using them without conditions and to add the manipulation methods updateAll/deleteAll for that functionality.
