
sqlb's Introduction


sqlb is a Golang library designed for efficiently constructing SQL expressions. Instead of hand-constructing strings containing raw SQL, users of the sqlb library instead construct query expressions and the sqlb library does the work of producing the raw strings that get sent to a database.

Building SQL expressions, not strings

It's best to learn by example, so let's walk through a common way a Golang application might work with an underlying SQL database, then transform that application to use the sqlb library, showing the resulting gains in code expressiveness, application speed, and memory efficiency.

Our example will be a simple blogging application.

Imagine we have the following set of tables in our database:

CREATE TABLE users (
  id INT NOT NULL,
  email VARCHAR(100) NOT NULL,
  name VARCHAR(100) NOT NULL,
  is_author CHAR(1) NOT NULL,
  profile TEXT NULL,
  created_on DATETIME NOT NULL,
  updated_on DATETIME NOT NULL,
  PRIMARY KEY (id),
  UNIQUE INDEX (email)
);

CREATE TABLE articles (
  id INT NOT NULL,
  title VARCHAR(200) NOT NULL,
  content TEXT NOT NULL,
  created_by INT NOT NULL,
  published_on DATETIME NULL,
  PRIMARY KEY (id),
  INDEX ix_title (title),
  FOREIGN KEY fk_users (created_by) REFERENCES users (id)
);

Our blogging application's default home page might return information about the last ten articles published. It's reasonable to believe that the following SQL expression might be used to grab this information from the database:

SELECT
  articles.title,
  articles.content,
  articles.published_on,
  users.name
FROM articles
JOIN users
 ON articles.created_by = users.id
ORDER BY articles.published_on DESC
LIMIT 10

Our Golang code for the server side of our application might look something like this:

package main

import (
    "database/sql"
    "fmt"
    "log"
    "net/http"

    // database/sql requires a registered driver; this is the widely-used
    // MySQL driver matching the "mysql" name passed to sql.Open() below.
    _ "github.com/go-sql-driver/mysql"
)

const (
    DSN = "root:password@/blogdb"
)

var db *sql.DB

type Article struct {
    Title string
    AuthorName string
    PublishedOn string
    Content string
}

func getArticles() []*Article {
    qs := `
SELECT
  articles.title,
  articles.content,
  articles.published_on,
  users.name
FROM articles
JOIN users
 ON articles.created_by = users.id
ORDER BY articles.published_on DESC
LIMIT 10
`
    articles := make([]*Article, 0)
    rows, err := db.Query(qs)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        article := &Article{}
        err := rows.Scan(&article.Title, &article.Content,
                         &article.PublishedOn, &article.AuthorName)
        if err != nil {
            log.Fatal(err)
        }
        articles = append(articles, article)
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
    return articles
}

func handler(w http.ResponseWriter, r *http.Request) {
    articleTemplate := `%s
-----------------------------------------------------
by %s on %s

%s
`
    articles := getArticles()
    for _, article := range articles {
        fmt.Fprintf(w, articleTemplate, article.Title, article.AuthorName,
                    article.PublishedOn, article.Content)
    }
}

func main() {
    var err error
    if db, err = sql.Open("mysql", DSN); err != nil {
        log.Fatal(err)
    }
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}

Note: Clearly, this example code does not do proper error handling, and it hard-codes things like the DSN that should be pulled from a configuration system.

The above code works, but it's fragile in the face of inevitable changes to the application. What if we want to make the number of articles returned configurable? What if we want to allow users to list only articles by a particular author? In both cases, we would need to modify the getArticles() function so that it constructs a different SQL query string:

func getArticles(numArticles int, byAuthor string) []*Article {
    // Our collection of query arguments
    qargs := make([]interface{}, 0)
    qs := `
SELECT
  articles.title,
  articles.content,
  articles.published_on,
  users.name
FROM articles
JOIN users
 ON articles.created_by = users.id
`
    if byAuthor != "" {
        qs = qs + "WHERE users.name = ? "
        qargs = append(qargs, byAuthor)
    }
    qs = qs + `ORDER BY articles.published_on DESC
LIMIT ?`
    qargs = append(qargs, numArticles)
    articles := make([]*Article, 0)
    rows, err := db.Query(qs, qargs...)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        article := &Article{}
        err := rows.Scan(&article.Title, &article.Content,
                         &article.PublishedOn, &article.AuthorName)
        if err != nil {
            log.Fatal(err)
        }
        articles = append(articles, article)
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
    return articles
}

As you can see above, the minor enhancements of a configurable number of articles and author filtering have already begun to make the getArticles() function unwieldy. The string being generated for our SQL SELECT statement is both more difficult to read and less efficient to construct, due to the multiple string concatenations and memory allocations being performed. Adding more filtering capability would bring yet more conditionals and more string concatenation, leading to ever-increasing complexity and reduced code readability.

sqlb is designed to solve this problem.

Rewriting our application to use sqlb

Let's rewrite our example application above to use the sqlb library instead of manually constructing SQL strings.

We start by initializing sqlb's reflection system in our application's main() entrypoint:

import (
    "github.com/jaypipes/sqlb"
)

var meta sqlb.Meta

func main() {
    var err error
    if db, err = sql.Open("mysql", DSN); err != nil {
        log.Fatal(err)
    }
    if err := sqlb.Reflect(sqlb.DIALECT_MYSQL, db, &meta); err != nil {
        log.Fatal(err)
    }
}

The sqlb.Meta struct is now populated with information about the database, including metadata about tables, columns, indexes, and relations. You can use this meta information when constructing sqlb Query Expressions.
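For example, once Reflect() has populated meta, you can pull table and column descriptors from it, compose a query expression, and execute it with sqlb.Query(), just as the rewritten getArticles() below does. This is only a sketch: the helper name authorEmails and the "Y" flag value for is_author are illustrative assumptions, while meta.Table(), C(), sqlb.Select(), sqlb.Equal() and sqlb.Query() are the calls used elsewhere in this README.

func authorEmails() ([]string, error) {
    users := meta.Table("users")

    q := sqlb.Select(users.C("email"))
    // "Y" is an assumed flag value for the CHAR(1) is_author column.
    q.Where(sqlb.Equal(users.C("is_author"), "Y"))

    rows, err := sqlb.Query(db, q)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    emails := []string{}
    for rows.Next() {
        var email string
        if err := rows.Scan(&email); err != nil {
            return nil, err
        }
        emails = append(emails, email)
    }
    return emails, rows.Err()
}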

Let's transform our original getArticles() function -- before we added support for a configurable number of articles and filtering by author -- to use sqlb:

func getArticles() []*Article {
    articlesTbl := meta.Table("articles")
    usersTbl := meta.Table("users")

    q := sqlb.Select(articlesTbl.C("title"), articlesTbl.C("content"),
                     articlesTbl.C("published_on"), usersTbl.C("name"))
    q.Join(usersTbl, sqlb.Equal(articlesTbl.C("created_by"), usersTbl.C("id")))
    q.OrderBy(articlesTbl.C("published_on").Desc())
    q.Limit(10)

    articles := make([]*Article, 0)
    rows, err := sqlb.Query(db, q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        article := &Article{}
        err := rows.Scan(&article.Title, &article.Content,
                         &article.PublishedOn, &article.AuthorName)
        if err != nil {
            log.Fatal(err)
        }
        articles = append(articles, article)
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
    return articles
}

The above code produces an identical SQL string to the original code; however, the sqlb version uses only a single memory allocation to construct that string (when q.String() is called).
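If you want to see the SQL that sqlb generates, for logging or debugging, you can render the query without executing it. A small sketch, assuming the StringArgs() method behaves as shown in the issues later in this document (it returns the SQL string plus the slice of bound arguments):

// Render the query expression without executing it.
qs, qargs := q.StringArgs()
log.Printf("running query %q with args %v", qs, qargs)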

Let's add in functionality to have a configurable number of returned articles and optionally filter for a specific author's articles.

func getArticles(numArticles int, byAuthor string) []*Article {
    articlesTbl := meta.Table("articles")
    usersTbl := meta.Table("users")

    q := sqlb.Select(articlesTbl.C("title"), articlesTbl.C("content"),
                     articlesTbl.C("published_on"), usersTbl.C("name"))
    q.Join(usersTbl, sqlb.Equal(articlesTbl.C("created_by"), usersTbl.C("id")))
    if byAuthor != "" {
        q.Where(sqlb.Equal(usersTbl.C("name"), byAuthor))
    }
    q.OrderBy(articlesTbl.C("published_on").Desc())
    q.Limit(numArticles)

    articles := make([]*Article, 0)
    rows, err := sqlb.Query(db, q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        article := &Article{}
        err := rows.Scan(&article.Title, &article.Content,
                         &article.PublishedOn, &article.AuthorName)
        if err != nil {
            log.Fatal(err)
        }
        articles = append(articles, article)
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
    return articles
}

No more manually constructing and reconstructing strings or tracking query arguments. sqlb handles the SQL string construction for you, as well as properly constructing the slice of query arguments, allowing you to write custom query code in a more natural and efficient manner.
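For completeness, the HTTP handler can forward the new parameters from the request's query string. This is plain standard-library Go; the parameter names n and author are just examples, and strconv would need to be added to the imports.

func handler(w http.ResponseWriter, r *http.Request) {
    // Default to the original behaviour: ten articles, no author filter.
    numArticles := 10
    if n, err := strconv.Atoi(r.URL.Query().Get("n")); err == nil && n > 0 {
        numArticles = n
    }
    byAuthor := r.URL.Query().Get("author")

    for _, article := range getArticles(numArticles, byAuthor) {
        fmt.Fprintf(w, "%s\n-----------------------------------------------------\nby %s on %s\n\n%s\n",
            article.Title, article.AuthorName, article.PublishedOn, article.Content)
    }
}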

License

sqlb is licensed under the Apache License, Version 2.0. See the COPYING file for more information.

sqlb's People

Contributors

dependabot[bot], jaypipes


Forkers

etsangsplk, leobcn

sqlb's Issues

HAVING clause

Yeah... we don't yet support the HAVING clause. We should. :)

Joining to multiple tables missing last selection

When trying to produce the subquery for all organizations a user belongs to in Procession, I'm using the following Go code:

func (s *IAMStorage) subqPrivateOrgs(userId int64) *sqlb.SelectQuery {
    m := s.Meta()
    o1 := m.TableDef("organizations").As("o1")
    o2 := m.TableDef("organizations").As("o2")
    ou := m.TableDef("organization_users").As("ou")
    colOUUserId := ou.Column("user_id")
    colOUOrgId := ou.Column("organization_id")
    colOrgId1 := o1.Column("id")
    colOrgId2 := o2.Column("id")
    colOrgRootId1 := o1.Column("root_organization_id")
    colOrgRootId2 := o2.Column("root_organization_id")
    colOrgNestedLeft1 := o1.Column("nested_set_left")
    colOrgNestedLeft2 := o2.Column("nested_set_left")
    colOrgNestedRight2 := o2.Column("nested_set_right")
    nestedJoin := sqlb.And(
        sqlb.Equal(colOrgRootId1, colOrgRootId2),
        sqlb.Between(colOrgNestedLeft1, colOrgNestedLeft2, colOrgNestedRight2),
    )
    ouJoin := sqlb.And(
        sqlb.Equal(colOrgId2, colOUOrgId),
        sqlb.Equal(colOUUserId, userId),
    )
    q := sqlb.Select(
        colOrgId1,
    ).Join(o2, nestedJoin).Join(ou, ouJoin)
    return q
}

The expected SQL should be this:

SELECT o1.id
FROM organizations AS o1
JOIN organizations AS o2
ON o1.root_organization_id = o2.root_organization_id
AND o1.nested_set_left BETWEEN o2.nested_set_left AND o2.nested_set_right
JOIN organization_users AS ou
ON o2.id = ou.organization_id
AND ou.user_id = ?

however the actual SQL in the buffer ends up being this:

SELECT o1.id
FROM organizations AS o1
JOIN organizations AS o2
ON o1.root_organization_id = o2.root_organization_id
AND o1.nested_set_left BETWEEN o2.nested_set_left AND o2.nested_set_right

Allow RDBMS-specific dialect handling

While we certainly want to keep the SQL generated by sqlb as generic as possible (and executable on many platforms), there are certain areas where the underlying RDBMSes simply differ. Examples include differences in the names and availability of certain SQL functions (e.g. TRIM vs. BTRIM) and the use of LIMIT vs. TOP in SQL Server.
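One possible shape for this, purely illustrative (none of these identifiers exist in sqlb today; only the DIALECT_MYSQL constant passed to Reflect() in the README is real), is to key rendering decisions off a dialect value:

// Hypothetical sketch: a Dialect type and a renderLimit helper showing how
// the same logical "give me n rows" request differs per RDBMS.
type Dialect int

const (
    DialectMySQL Dialect = iota
    DialectPostgreSQL
    DialectSQLServer
)

func renderLimit(d Dialect, n int) string {
    switch d {
    case DialectSQLServer:
        // SQL Server has no LIMIT clause; it uses TOP.
        return fmt.Sprintf("SELECT TOP %d ...", n)
    default:
        // MySQL and PostgreSQL both understand LIMIT.
        return fmt.Sprintf("SELECT ... LIMIT %d", n)
    }
}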

When >1 table involved in SELECT, push down fully-qualified column names

By default, when there is more than a single table involved in a SELECT expression, we want to ensure that projected column names use the fully-qualified name (or aliased name). This ensures that the common scenario of multiple tables having like-named columns will not result in the SQL server returning an error about ambiguous column names.
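For example, both tables in the blog schema from the README have an id column. A sketch of the scenario, using the accessors shown elsewhere in this document; the comment shows the behaviour this issue is asking for:

articles := meta.Table("articles")
users := meta.Table("users")

q := sqlb.Select(articles.Column("id"), users.Column("id"))
q.Join(users, sqlb.Equal(articles.Column("created_by"), users.Column("id")))

// With this issue implemented, the projection should render with qualified
// names:
//   SELECT articles.id, users.id FROM articles JOIN users ON articles.created_by = users.id
// rather than the ambiguous "SELECT id, id FROM ...".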

Scannable interface for Column and ColumnList

A sqlb.Column struct should implement the Scanner interface's Size() and Scan() methods. A sqlb.ColumnList struct should also implement this interface and be a concrete implementation for an array of Column pointers.
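The method signatures aren't spelled out here, so the following is only a hypothetical shape of the interface being requested; the real signatures in sqlb may differ.

// Hypothetical sketch of the requested interface; actual signatures may differ.
type Scannable interface {
    // Size reports how many bytes the element will contribute to the SQL buffer.
    Size() int
    // Scan writes the element's SQL representation into b and returns the
    // number of bytes written.
    Scan(b []byte) int
}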

Add expression and operator support

In order to represent JOIN conditions and WHERE conditions, we need a set of operator functions that return a concrete class that implements Scannable.

For example, let's say I want to express WHERE id = "some value", I would like to be able to do something like this:

a := meta.Table("articles")
u := meta.Table("users")

s := sqlb.Select(u.Column("id")).Where(sqlb.Equal(u.Column("email"), "[email protected]"))

Note that the sqlb.Equal() function is necessary and must return some Scannable implementation that will produce the string "users.email = ?".

Use gofmt

Even though I am not a fan of the canonical Go code style, sqlb is a Go library and should follow go code formatting guidelines. Therefore, I need to update my vimrc for .go files and use gofmt for reformatting the existing files.

Join to an aliased SelectQuery results in index out of range

The following:

q.Where(
    sqlb.Or(
        sqlb.Equal(colOrgVisibility, 1),
        sqlb.And(
            sqlb.Equal(colOrgVisibility, 0),
            sqlb.IsNotNull(colPrivOrgId),
        ),
    ),
)

is resulting in a panic:

com/jaypipes/procession/vendor/github.com/jaypipes/sqlb.(*value).scan(0xc4201b4a80, 0xc42025e223, 0x23, 0x23, 0xc4201b
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/github.com/jaypipes/sqlb/value.go:58 +0x234
com/jaypipes/procession/vendor/github.com/jaypipes/sqlb.(*Expression).scan(0xc4201b4ab0, 0xc42025e214, 0x32, 0x32, 0xc
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/github.com/jaypipes/sqlb/expression.go:110 +0x170
com/jaypipes/procession/vendor/github.com/jaypipes/sqlb.(*Expression).scan(0xc4201b4b10, 0xc42025e213, 0x33, 0x33, 0xc
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/github.com/jaypipes/sqlb/expression.go:110 +0x170
com/jaypipes/procession/vendor/github.com/jaypipes/sqlb.(*Expression).scan(0xc4201b4b40, 0xc42025e1fe, 0x48, 0x48, 0xc
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/github.com/jaypipes/sqlb/expression.go:110 +0x170
com/jaypipes/procession/vendor/github.com/jaypipes/sqlb.(*whereClause).scan(0xc4201b8ce0, 0xc42025e1f7, 0x4f, 0x4f, 0x
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/github.com/jaypipes/sqlb/where.go:36 +0x24a
com/jaypipes/procession/vendor/github.com/jaypipes/sqlb.(*selectClause).scan(0xc4201bc230, 0xc42025e000, 0x246, 0x246,
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/github.com/jaypipes/sqlb/select_clause.go:102 +0x752
com/jaypipes/procession/vendor/github.com/jaypipes/sqlb.(*SelectQuery).StringArgs(0xc4201be320, 0xef4d88, 0x0, 0x0, 0x
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/github.com/jaypipes/sqlb/select.go:50 +0xbb
com/jaypipes/procession/pkg/iam/iamstorage.(*IAMStorage).OrganizationList(0xc4201a6fa0, 0xc420200220, 0x15, 0x0, 0x0, 
 /home/jaypipes/src/github.com/jaypipes/procession/pkg/iam/iamstorage/organization.go:180 +0xc99
com/jaypipes/procession/pkg/iam/server.(*Server).OrganizationList(0xc420131f80, 0xc420200220, 0xd1ba80, 0xc4201f61e0, 
 /home/jaypipes/src/github.com/jaypipes/procession/pkg/iam/server/organization.go:26 +0x12e
com/jaypipes/procession/proto._IAM_OrganizationList_Handler(0xa0f520, 0xc420131f80, 0xd1b240, 0xc42022e000, 0xc4201ca5
 /home/jaypipes/src/github.com/jaypipes/procession/proto/service_iam.pb.go:710 +0x113
com/jaypipes/procession/vendor/google.golang.org/grpc.(*Server).processStreamingRPC(0xc420132c60, 0xd1bde0, 0xc420079e
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/google.golang.org/grpc/server.go:865 +0xa61
com/jaypipes/procession/vendor/google.golang.org/grpc.(*Server).handleStream(0xc420132c60, 0xd1bde0, 0xc420079e40, 0xc
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/google.golang.org/grpc/server.go:959 +0x1539
com/jaypipes/procession/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc4201665e0, 0xc420132c60, 0xd1b
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/google.golang.org/grpc/server.go:517 +0xa9
 by github.com/jaypipes/procession/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
 /home/jaypipes/src/github.com/jaypipes/procession/vendor/google.golang.org/grpc/server.go:518 +0xa1

Split docs into tutorials, reference and how-tos

I can't seem to find a link to the great article I read on medium.com about the four different types of documentation, but I do remember that it explained why it is important to keep tutorials, how-tos, and reference documentation separate from each other and to use different writing strategies for each.

Need to be able to refer to a derived table's columns

When creating a derived table by, for example, creating a subselect and joining to it:

subq := sqlb.Select(colOrgId).As("o1")

In order to join the above subquery to another table, we need to do something like this:

joinCond := sqlb.Equal(colOrgId, subq.Column("id"))
q := sqlb.Select(orgs).Join(subq, joinCond)

The problem is that SelectQuery (which is returned from the SelectQuery.As() method) does not have a Column() method and therefore we have no way to refer to the subquery's projections (columns).

Make all interfaces non-exported

There's no good reason to export the interfaces in use in sqlb nor any of the interfaces' fields. Make them unexported/private.

Table alias should be passed down to projected columns

If I construct a SQL expression like so:

users := meta.Table("users").As("u")

sel := sqlb.Select(users.Column("email"))

currently sel.String() outputs "SELECT email FROM users AS u"; however, it should output the table alias as a prefix of the column name, i.e. "SELECT u.email FROM users AS u".

Build INSERT/DELETE/UPDATE statements from Table[Def] structs

Now that SELECT statements are handled pretty smoothly, I want to move on to supporting the construction of INSERT, UPDATE and DELETE statements.

A possible API for this might look like this:

users := meta.Table("users")
del := users.Delete().Where(sqlb.Equal(users.Column("id"), 100))
qs, qargs := del.StringArgs()

and qs would end up with the following SQL string:

DELETE FROM users WHERE users.id = ?

with qargs containing []interface{}{100}

For INSERT statements, a similar API would be driven from the Table[Def] struct:

users := meta.TableDef("users")
vals := make(map[string]interface{}, 0)
vals["id"] = nil
vals["name"] = ...
vals["created_at"] = ...
ins := users.Insert().Values(vals)

Not possible to pass *sqlFunc to Select()

When doing the following:

t := meta.Table("articles")
q := sqlb.Select(sqlb.Min(t.Column("created_on")))
qs, qargs := q.StringArgs()

we'd end up with the following SQL string in the qs variable:

SELECT ?

Turns out we don't have a *sqlFunc case in the type switch in Select()

Support subselects in the SELECT clause (scalar subselects)

This SQL:

SELECT id, name, (SELECT COUNT(*) FROM t2 WHERE t2.id = t1.id) AS t2_count
FROM t1;

has a subselect in the SELECT clause, which is sometimes called a scalar subselect.

We'd like to support this query construct naturally, so perhaps something like this calling convention might work:

t1 := meta.Table("t1")
t2 := meta.Table("t2")

ss := sqlb.Select(t2.Count()).Where(sqlb.Equal(t1.Column("id"), t2.Column("id")))
s := sqlb.Select(t1.Column("id"), t1.Column("name"), ss.Scalar())

One might think we could just do:

s := sqlb.Select(t1.Column("id"), t1.Column("name"), ss)

however because the behaviour of sqlb.Select() is to consider sqlb.SelectQuery structs supplied as parameters to be derived tables and not scalar subqueries, we need a way of saying that the intent is to use the subquery in the SELECT clause and not the FROM clause. Thus the need for the sqlb.SelectQuery.Scalar() function.

Joining with BETWEEN expression in the ON clause results in incorrect SQL

The following sqlb calls:

m := s.Meta()
o1tbl := m.TableDef("organizations").As("o1")
o2tbl := m.TableDef("organizations").As("o2")
colOrgId1 := o1tbl.Column("id")
colOrgId2 := o2tbl.Column("id")
colNestedSetLeft1 := o1tbl.Column("nested_set_left")
colNestedSetLeft2 := o2tbl.Column("nested_set_left")
colNestedSetRight2 := o2tbl.Column("nested_set_right")
colRootOrgId1 := o1tbl.Column("root_organization_id")
colRootOrgId2 := o2tbl.Column("root_organization_id")
joinCond := sqlb.And(
    sqlb.Equal(colRootOrgId1, colRootOrgId2),
    sqlb.Between(colNestedSetLeft1, colNestedSetLeft2, colNestedSetRight2),
)
q := sqlb.Select(colOrgId1).Join(o2tbl, joinCond)
q.Where(sqlb.Equal(colOrgId2, parentId))
qs, qargs := q.StringArgs()

should result in the following SQL:

SELECT o1.id
FROM organizations AS o1
JOIN organizations AS o2
ON o1.root_organization_id = o2.root_organization_id
AND o1.nested_set_left BETWEEN o2.nested_set_left AND o2.nested_set_right
WHERE o2.id = ?

However this is actually resulting in the following SQL:

SELECT o1.id FROM organizations AS o1 WHERE o2.id = ?

so clearly the SelectQuery.Join() method is not analyzing the ON condition properly and adding the joinClause as it should.

Allow prettyprint formatting

Instead of only using a single space character between clauses like WHERE and FROM, it would be nice to be able to control the formatting of the SQL string buffer in some ways.

For example, the following code:

orgs := meta.TableDef("organizations")
q := sqlb.Select(orgs.Column("id"), orgs.Column("root_organization_id")).Where(sqlb.Equal(orgs.Column("display_name"), "foo"))
qs, qargs := q.StringArgs()

will produce the following string in the qs buffer:

SELECT id, root_organization_id FROM organizations WHERE display_name = ?

It would be nice to be able to do something like this:

sqlf := &sqlb.FormatOptions{
    SeparateClausesWithNewline: true,
}
q.Format(sqlf)
qs, qargs := q.StringArgs()

if I wanted to have the SQL look like this:

SELECT id, root_organization_id
FROM organizations
WHERE display_name = ?

Handle INNER JOIN operations

A user should be able to manually specify a join to another table or Selectable, like so:

u := meta.Table("users").As("u")
a := meta.Table("articles").As("a")

join := sqlb.Join(u, a, sqlb.Equal(u.Column("id"), a.Column("author")))
sel := sqlb.Select(a.Column("title"), u.Column("name"), join).Limit(10)

// sel.String() == "SELECT a.title, u.name FROM users AS u JOIN articles AS a ON u.id = a.author LIMIT 10"

Support for LIMIT clause

We need support for the LIMIT clause so users can do:

users := meta.Table("users")

sel := sqlb.Select(users).Limit(10)

Consolidate aliased projections

The reason we have separate ColumnDef and Column structs is to provide a way to handle aliasing. I'd like to simplify sqlb's public metadata API to a single public Column struct, with no separate ColumnDef struct.

Instead of this:

users := meta.Table("users")
userId := users.ColumnDef("id")
userName := users.Column("name")

there would be only a single "TableDef.C()" method that would always return a Column struct for a given named field:

userId := users.C("id")

to alias a column definition, the user would continue to use the As() method:

userIdAliased := userId.As("user_id")

Add reference documentation for using derived tables

Derived tables (subqueries in the FROM clause) are a little bit of an advanced topic so should probably not be in the README. But it would be good to add some reference documentation that uses a nested sets model schema to demonstrate use of derived tables.

Reflection API

As documented in the README, there should be a sqlb.Reflect() function that accepts a DB driver name, a pointer to an opened sql.DB struct, and a pointer to a sqlb.Meta struct that will be populated with metadata about the database's objects.

  • Table descriptors populated for the database
  • Column descriptors populated for each table visited
  • Relation descriptors populated for each relation discovered

Handle LEFT and CROSS join variants

We need to handle outer and cross variants of the JOIN SQL statement.

I think it's probably best to go with separate sqlb.OuterJoin() and sqlb.CrossJoin() functions that will generate a JoinClause with a joinType of JOIN_OUTER or JOIN_CROSS, respectively.
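Following the style of the sqlb.Join() example elsewhere in these issues, the calling convention might look like this (illustrative only; sqlb.OuterJoin() and sqlb.CrossJoin() do not exist yet):

u := meta.Table("users").As("u")
a := meta.Table("articles").As("a")

// LEFT (outer) join: keep users even when they have no articles.
oj := sqlb.OuterJoin(u, a, sqlb.Equal(u.Column("id"), a.Column("author")))
sel := sqlb.Select(u.Column("name"), a.Column("title"), oj)

// sel.String() == "SELECT u.name, a.title FROM users AS u LEFT JOIN articles AS a ON u.id = a.author"

// CROSS join: every combination of rows, no ON condition at all.
cross := sqlb.CrossJoin(u, a)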

Support GROUP BY clause

We need to allow the user to specify one or more grouping fields in the GROUP BY clause for a Selectable.

Selectable.GroupBy() should accept Column or ColumnDefs. The following code should be runnable:

users := meta.Table("users")

emailCol := users.Column("email")

sel1 := sqlb.Select(users).GroupBy(emailCol)
sel2 := sqlb.Select(users).GroupBy(users.ColumnDef("email"))

// sel1.String() == sel2.String()

Handle derived tables (SELECT in FROM clause)

SELECTs in the FROM clause are also known as derived tables. We need to support the ability to pass sqlb.Select another sqlb.Selectable.

For example, we should be able to do the following:

a := meta.Table("articles")
u := meta.Table("users")

ss := sqlb.Select(u.Column("id")).Where(sqlb.Equal(u.Column("email"), "[email protected]"))

s := sqlb.Select(a).Join(ss).On(sqlb.Equal(ss.Column("id"), a.Column("author")))

Tests should validate that the following SQL is generated:

SELECT articles.* FROM articles
JOIN (SELECT id FROM users WHERE email = ?) AS derived1
ON derived1.id = articles.author

Put together benchmark tests

We want to know if all the work we're doing to minimize memory allocation and string manipulation of generated SQL strings is actually paying off compared to the more "traditional" way of building SQL strings. So, a benchmark package with tests is appropriate.
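A benchmark file might look something like the sketch below. The string-concatenation half is plain standard library; the sqlb half assumes the meta/Select/StringArgs API shown earlier in this document and assumes meta has already been populated (e.g. via sqlb.Reflect() in a TestMain, not shown).

package sqlb_bench

import (
    "fmt"
    "testing"

    "github.com/jaypipes/sqlb"
)

// Assumed to be populated by sqlb.Reflect() before the benchmarks run.
var meta sqlb.Meta

// The "traditional" approach: repeated string concatenation.
func BenchmarkStringConcat(b *testing.B) {
    for i := 0; i < b.N; i++ {
        qs := "SELECT users.email FROM users"
        qs += " WHERE users.name = ?"
        qs += fmt.Sprintf(" LIMIT %d", 10)
        _ = qs
    }
}

// The sqlb approach: build the expression and render it in one pass.
func BenchmarkSqlbStringArgs(b *testing.B) {
    users := meta.Table("users")
    for i := 0; i < b.N; i++ {
        q := sqlb.Select(users.C("email"))
        q.Where(sqlb.Equal(users.C("name"), "alice"))
        q.Limit(10)
        q.StringArgs()
    }
}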

Support for datetime-related SQL functions

Want to support the following SQL functions:

  • NOW()
  • CURRENT_TIMESTAMP()
  • CURRENT_DATE()
  • CURRENT_TIME()
  • EXTRACT()

All of the above functions exist in both MySQL and PostgreSQL.

Allow INSERT and UPDATE to reference columns or function results

I'd like to allow both the sqlb.Update() and sqlb.Insert() functions to be able to reference another column or the result of a SQL function in their column value lists.

For example, let's say I want to set the created_on column to the result of the NOW() SQL function.

I'd like to be able to do:

vals := map[string]interface{}{
    "name": username,
    "created_on": sqlb.Now(),
}
q := sqlb.Insert(users, vals)
qs, qargs := q.StringArgs()

and have the qs variable contain:

INSERT INTO users (name, created_on) VALUES (?, NOW())

Link up TravisCI

Get a TravisCI env setup and running on each proposed pull request and merge

Support ORDER BY clause

We need to allow the user to specify one or more sort fields and directions in the ORDER BY clause for a Selectable.

Selectable.OrderBy() should accept Column, ColumnDef or string literals. The following code should be runnable:

users := meta.Table("users")

emailCol := users.Column("email")

sel1 := sqlb.Select(users).OrderBy(emailCol)
sel2 := sqlb.Select(users).OrderBy(users.ColumnDef("email"))
sel3 := sqlb.Select(users).OrderBy("email")

// sel1.String() == sel2.String() == sel3.String()

Add UNION support

A UNION of two SELECT statements should be handled. I think it's best to be explicit about the union'ing. We should add a sqlb.Union(a *SelectClause, b *SelectClause) *SelectClause method that can be used to UNION two distinct sets together.
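With that signature, usage might look like the following sketch (illustrative only; sqlb.Union() does not exist yet, and the "Y"/"N" flag values for is_author are assumptions):

users := meta.Table("users")

authors := sqlb.Select(users.Column("email")).Where(sqlb.Equal(users.Column("is_author"), "Y"))
readers := sqlb.Select(users.Column("email")).Where(sqlb.Equal(users.Column("is_author"), "N"))

// Hypothetical: mirrors the proposed sqlb.Union() signature above.
both := sqlb.Union(authors, readers)
qs, qargs := both.StringArgs()

// qs would be something like:
//   "SELECT email FROM users WHERE is_author = ? UNION SELECT email FROM users WHERE is_author = ?"
// with qargs containing []interface{}{"Y", "N"}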

Allow function projections

We need a way for users to be able to specify SQL functions like COUNT(), SUM(), CONCAT(), etc in their Selectables.

I'm thinking of something akin to this interface:

users := meta.Table("users")

sel := sqlb.Select(sqlb.Max(users.Column("created_on")).As("max_created_on"))

// sel.String() == "SELECT MAX(created_on) AS max_created_on FROM users"

Functions to support include:

  • SUM(), MIN(), MAX(), AVG()
  • COUNT(*)
  • COUNT(DISTINCT col)
  • CAST(col AS type)
  • TRIM MySQL function extension (PostgreSQL has BTRIM())
  • CHAR_LENGTH() and BIT_LENGTH()
  • ASCII()
  • CONCAT(), CONCAT_WS()
