
pg's Introduction

PostgreSQL client and ORM for Golang

Maintenance mode

go-pg is in maintenance mode and only critical issues are addressed. New development happens in the Bun repo, which offers similar functionality but works with PostgreSQL, MySQL, MariaDB, and SQLite.


Installation

go-pg supports the two most recent Go versions and requires a Go version with module support. So make sure to initialize a Go module:

go mod init github.com/my/repo

And then install go-pg (note v10 in the import; omitting it is a popular mistake):

go get github.com/go-pg/pg/v10

Quickstart

package pg_test

import (
    "fmt"

    "github.com/go-pg/pg/v10"
    "github.com/go-pg/pg/v10/orm"
)

type User struct {
    Id     int64
    Name   string
    Emails []string
}

func (u User) String() string {
    return fmt.Sprintf("User<%d %s %v>", u.Id, u.Name, u.Emails)
}

type Story struct {
    Id       int64
    Title    string
    AuthorId int64
    Author   *User `pg:"rel:has-one"`
}

func (s Story) String() string {
    return fmt.Sprintf("Story<%d %s %s>", s.Id, s.Title, s.Author)
}

func ExampleDB_Model() {
    db := pg.Connect(&pg.Options{
        User: "postgres",
    })
    defer db.Close()

    err := createSchema(db)
    if err != nil {
        panic(err)
    }

    user1 := &User{
        Name:   "admin",
        Emails: []string{"admin1@admin", "admin2@admin"},
    }
    _, err = db.Model(user1).Insert()
    if err != nil {
        panic(err)
    }

    _, err = db.Model(&User{
        Name:   "root",
        Emails: []string{"root1@root", "root2@root"},
    }).Insert()
    if err != nil {
        panic(err)
    }

    story1 := &Story{
        Title:    "Cool story",
        AuthorId: user1.Id,
    }
    _, err = db.Model(story1).Insert()
    if err != nil {
        panic(err)
    }

    // Select user by primary key.
    user := &User{Id: user1.Id}
    err = db.Model(user).WherePK().Select()
    if err != nil {
        panic(err)
    }

    // Select all users.
    var users []User
    err = db.Model(&users).Select()
    if err != nil {
        panic(err)
    }

    // Select story and associated author in one query.
    story := new(Story)
    err = db.Model(story).
        Relation("Author").
        Where("story.id = ?", story1.Id).
        Select()
    if err != nil {
        panic(err)
    }

    fmt.Println(user)
    fmt.Println(users)
    fmt.Println(story)
    // Output: User<1 admin [admin1@admin admin2@admin]>
    // [User<1 admin [admin1@admin admin2@admin]> User<2 root [root1@root root2@root]>]
    // Story<1 Cool story User<1 admin [admin1@admin admin2@admin]>>
}

// createSchema creates database schema for User and Story models.
func createSchema(db *pg.DB) error {
    models := []interface{}{
        (*User)(nil),
        (*Story)(nil),
    }

    for _, model := range models {
        err := db.Model(model).CreateTable(&orm.CreateTableOptions{
            Temp: true,
        })
        if err != nil {
            return err
        }
    }
    return nil
}

pg's People

Contributors

0rax, 23doors, a-kr, anmic, burner-account, dankeder, dependabot[bot], elliotcourant, fernandez14, huttarichard, hypnoglow, iam047801, jayschwa, marvin-hansen, mattnibs, maxatome, mdelapenya, mik-volkov, moogle19, nathanbaulch, pymq, rbeuque74, rcmonitor, renovate-bot, shoshinnikita, szubtsovskiy, vmihailenco, xakep666, xin-tsla, yuzefovich


pg's Issues

Cannot map embedded structs

It seems that this package cannot map fields from embedded structs when executing queries. For example:

type Entity struct {
    Id string
}

type Article struct {
    *Entity
    Name string
}

...

result, err := db.ExecOne(`INSERT INTO "Articles" (id, name) VALUES (?id, ?name)`, &Article{"id-string", "Article name"})

It would be nice if it could.
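
Newer versions do flatten embedded structs into the parent model. A minimal sketch, assuming go-pg v10 semantics and a connected *pg.DB:

type Entity struct {
    Id string
}

type Article struct {
    Entity        // flattened: contributes the "id" column
    Name   string // maps to the "name" column
}

func insertArticle(db *pg.DB) error {
    article := &Article{Entity: Entity{Id: "id-string"}, Name: "Article name"}
    _, err := db.Model(article).Insert()
    return err
}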

"Cursor" support

Hi. Is there any support for queries over large datasets where you want to "stream" the results instead of loading them all into memory? In other words, I'd like memory consumption to not necessarily be proportional to the size of the result set.

In the standard library (the database/sql Rows struct) it seems to work like a cursor.
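
go-pg v10 can stream large result sets with ForEach, which decodes and hands over one row at a time instead of materialising everything. A minimal sketch, reusing the User model from the quickstart and assuming a connected db:

err := db.Model((*User)(nil)).
    OrderExpr("id ASC").
    ForEach(func(u *User) error {
        fmt.Println(u)
        return nil
    })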

Arrays not properly escaped

package pg_test

import (
    "fmt"
    "gopkg.in/pg.v4"
)

func ExampleArray() {
src := []string{"one@example.com", "two@example.com", "three@'example.com"}
    var dst []string
    _, err := db.QueryOne(pg.Scan(pg.Array(&dst)), `SELECT ?`, pg.Array(src))
    if err != nil {
        panic(err)
    }
    fmt.Println(dst)
    // Output: [one@example.com two@example.com three@'example.com]
}
go test
[...]
--- FAIL: ExampleArray (0.00s)
panic: ERROR #42601 syntax error at or near ".":  [recovered]
    panic: ERROR #42601 syntax error at or near ".":

How to call a function with array params

I have a function for batch insert, which has array parameters like so:

insert_batch(_dates date[], _values integer[])

I am trying to call it like the following:

SELECT * FROM insert_batch($1, $2)

I read through the source code, but I couldn't find a way to do this. Is it possible to call a raw SELECT with bound parameters?
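
Raw queries do accept bound parameters via ? placeholders, and slices can be passed as PostgreSQL arrays with pg.Array. A minimal sketch, assuming a connected db and the insert_batch function above (the explicit casts are an assumption, added to help resolve the function's signature):

dates := []string{"2020-01-01", "2020-01-02"}
values := []int{10, 20}
_, err := db.Exec(
    `SELECT * FROM insert_batch(?::date[], ?::integer[])`,
    pg.Array(dates), pg.Array(values),
)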

About many to many relations

I want a table that relates to itself. How do I define the model? The struct looks like:

type User struct {
    Id    int64
    Name  string
    Users []User
}
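
A hedged sketch of how a self-referencing many-to-many relation might look in go-pg v10: it needs an explicit join table registered via orm.RegisterTable, and the user_friends table plus the join_fk option are assumptions for illustration:

type User struct {
    Id      int64
    Name    string
    Friends []User `pg:"many2many:user_friends,join_fk:friend_id"`
}

type UserFriend struct {
    UserId   int64 `pg:",pk"`
    FriendId int64 `pg:",pk"`
}

func init() {
    // many2many join tables must be registered before use
    orm.RegisterTable((*UserFriend)(nil))
}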

Array_agg and array_to_json

Consider the following structs

type ItemWithTags struct { 
    *Item
    Tags  []Tag
}
type Tag struct {
    Id int
    Name string
}

Is there a way to bind the following result to ItemWithTags?:

SELECT i.*, array_to_json(array_remove(array_agg(t),NULL)) AS tags
FROM items i
LEFT OUTER JOIN tags t ON t.item_id=i.id
GROUP BY i.id

The example output from this query contains regular column values from Items table and the following tags array:

"tags": [
        {
            "tag_id": 304,
            "name": "Football",
        },
        {
            "tag_id": 311,
            "name": "Eurosport",
        }
    ]
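
go-pg falls back to encoding/json for field types it doesn't support natively, so a json expression aliased to a struct field should decode on its own. A hedged sketch, assuming a connected db and the structs above:

var items []ItemWithTags
_, err := db.Query(&items, `
    SELECT i.*, array_to_json(array_remove(array_agg(t), NULL)) AS tags
    FROM items i
    LEFT OUTER JOIN tags t ON t.item_id = i.id
    GROUP BY i.id
`)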

Canonical way to handle json columns

I have a few json columns in a table:

CREATE TABLE items (
    id serial PRIMARY KEY,
    group_name text NOT NULL REFERENCES groups(name),

    name text NOT NULL,
    description text,
    user_id integer NOT NULL REFERENCES users(id),
    -- ...
    data json,
    -- ...
);

Currently I'm using the lib/pq package (from before issue #14 was fixed) and doing something like this:

type Item struct {
    Id              int
    GroupName       string
    Name            string
    Description     string
    UserId          int
        // ...
    Data            map[string]interface{}
    dataEncoded     []byte
        // ...
}

    var items Items
    for rows.Next() {
        var item Item
        if err := rows.Scan(&item.Id, &item.Type, &item.Duration,
            &item.DefaultDuration, &item.dataEncoded); err != nil {
            return nil, err
        }
        if err := json.Unmarshal(item.dataEncoded, &item.Data); err != nil {
            return nil, err
        }
        items = append(items, &item)
    }

This is, of course, less than ideal, but it does work. Could I write this more cleanly (if at all) with go-pg/pg?
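
A hedged sketch of the go-pg equivalent: json columns map to Go maps and structs through encoding/json by default, so the manual Unmarshal step disappears (the field list is abridged to match the example above):

type Item struct {
    Id          int
    GroupName   string
    Name        string
    Description string
    UserId      int
    Data        map[string]interface{} // json column, decoded automatically
}

var items []Item
err := db.Model(&items).Select()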

Tag v2 is newer than v2.1

The version on gopkg.in/v2 uses tag v2.1. That tag had its latest commit on Jun 23. However, tag v2 has commits made after that. Was that done on purpose? It's very confusing.

connect over UNIX sockets

Hi, trying to connect yields a wrong-password error. This had me scratching my head, as I could connect just fine without a password on the command line (same user): psql -d db. After a bit of poking around I realized I allow passwordless connections only over UNIX sockets. This package seems to be capable of TCP connections only. Could there be an option for UNIX sockets too?
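
pg.Options does expose Network and Addr fields, so a UNIX-socket connection should look roughly like the sketch below; the socket path is an assumption and varies by installation:

db := pg.Connect(&pg.Options{
    Network:  "unix",
    Addr:     "/var/run/postgresql/.s.PGSQL.5432",
    User:     "postgres",
    Database: "db",
})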

support json array?

I have a json array column named 'content', with example values like:
[{"time": 1458033154, "user": "baoer", "detail": "hello"}, {"time": 1458033155, "user": "beiguo", "detail": "hi"}]
When I fetch a record from my table, it raises "invalid character ',' after object key". How can I fix this? My struct defines the content column as Content []interface{}.

nil time.Time pointer works in insert, but not in update

I have this code:

type FeedArticle struct {
    Id          int64
    Feed        int64
    Reference   string
    Title       string
    Body        sql.NullString
    PublishedOn *time.Time
}

func (m *Model) MergeArticles(articles ...*FeedArticle) error {
    tx, err := m.db.Begin()
    if err != nil {
        return err
    }

    update, err := tx.Prepare(`
        UPDATE feed_articles
        SET    title         = $3::text
        ,      body          = $4::text
        ,      published_on  = $5::timestamptz
        WHERE  feed          = $2::integer
        AND    reference     = $1::text
    `)
    if err != nil {
        tx.Rollback()
        return err
    }

    insert, err := tx.Prepare(`
        INSERT INTO feed_articles (reference, feed, title, body, published_on)
        VALUES ($1::text, $2::integer, $3::text, $4::text, $5::timestamptz)
    `)
    if err != nil {
        tx.Rollback()
        return err
    }

    for _, a := range articles {
        // do a naive loop, later we should generate a bulk statement client
        // side
        fmt.Printf("Processing: %s, %d, %s, %v, %p\n", a.Reference, a.Feed,
            a.Title, a.Body, a.PublishedOn)
        res, err := update.ExecOne(a.Reference, a.Feed, a.Title, a.Body,
            a.PublishedOn)
        if err != nil {
            tx.Rollback()
            return err
        }

        // if we've managed to update an article, stop here
        if res.Affected() != 0 {
            continue
        }

        _, err = insert.ExecOne(a.Reference, a.Feed, a.Title, a.Body,
            a.PublishedOn)
        if err != nil {
            return err
        }

    }

    return tx.Commit()
}

And the second time I run it, it errors out with:

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x5c23b0]

goroutine 16 [running]:
runtime.panic(0x777180, 0xa99cb3)
        /usr/lib/go/src/pkg/runtime/panic.c:279 +0xf5
github.com/go-pg/pg.assertOneAffected(0x0, 0xc208103938, 0x0, 0x0)
        /home/vagrant/go/src/github.com/go-pg/pg/db.go:388 +0x30
github.com/go-pg/pg.(*Stmt).ExecOne(0xc208140f80, 0xc208103938, 0x5, 0x5, 0x5, 0x0, 0x0)
        /home/vagrant/go/src/github.com/go-pg/pg/stmt.go:75 +0xa2
bitbucket.org/aktau/lumamodel.(*Model).MergeArticles(0xc2080384d8, 0xc208107b00, 0x12, 0x20, 0x0
, 0x0)

Note that the FeedArticle.PublishedOn member is a nil pointer in this invocation (I checked). What's strange is that this seems to work for the INSERT. I checked after a first run and the row is present in the DB (and the PublishedOn column is NULL).

Am I doing something wrong?

(I'm using a pointer to time.Time because I couldn't find a sql.NullTime type).

Fix setting zero values on NULL (take two)

Regarding #13

There are several problems with the way #7 was resolved:

  1. The original test described in the issue fails, see https://travis-ci.org/go-pg/pg/jobs/27562725

  2. There is some unclear code in tests. For example, what is the difference between lines 191 and 192, if in both cases dst is of type *bool?

    188 var conversionTests = []conversionTest{
    189     {src: false, dst: &boolv, pgtype: "bool"},
    190     {src: true, dst: &boolv, pgtype: "bool"},
    191     {src: nil, dst: &boolv, pgtype: "bool", wantzero: true},
    192     {src: nil, dst: boolptr, pgtype: "bool", wantnil: true},
    
  3. The intent of this code is not clear:

    37 func DecodeValue(dst reflect.Value, f []byte) error {
    38     if f == nil {
    39         dst = indirect(dst)
    40         if dst.IsValid() {
    41             dst.Set(reflect.Zero(dst.Type()))
    42         }
    43         return nil
    44     }
    

My original idea was along the following lines: suppose we have a table CREATE TABLE aaa (id INT NOT NULL, value INT), which is modelled by a struct

type Aaa struct {
    Id int
    Value *int
}

This struct has a zero state of {0, nil}. Currently pg can do the following things:

  • Write this struct into database, encoding Value=nil as NULL and Value=123 as 123;
  • Read into this struct from database, setting *Value = 123 even if *Value was nil prior to reading.

What it can't do is set Value back to its zero state of nil when NULL is read from the database.

Originally I thought that this problem can be fixed by a simple code like

 37 func DecodeValue(dst reflect.Value, f []byte) error {
 38     if f == nil {
 39         if dst.CanSet() {
 40             dst.Set(reflect.Zero(dst.Type()))
 41         }
 42         return nil
 43     }

However the current implementation (which was merged in #13) seems to do some other, more complex thing.

Compound items (item after JOIN)

I realize my requests are getting more outlandish by the day, but I'd like to ask just in case I'm missing something.

Suppose I have a table items and a table groups:

CREATE TABLE items (
    id serial PRIMARY KEY,
    group_name text NOT NULL REFERENCES groups(name),

    name text NOT NULL,
    description text,
    user_id integer NOT NULL REFERENCES users(id),
    -- ...
    data json,

    created_on timestamptz,    
    updated_on timestamptz
);

CREATE TABLE groups(
    name text,
    mediadir text,
    created_on timestamptz,    
    updated_on timestamptz
);

As you can see, the group_name field in the items table refers to the name field of the groups table. Sometimes I want the info in the groups table to be included in the item's fields:

SELECT  it.*, gr.*
FROM   items it
JOIN     groups gr ON it.group_name = gr.name;    

At first I had the idea of embedding both structs into a new struct:

type ItemWithGroup struct {
    Item
    Group
}

But then I realized there would be duplicate column names (the updated_on and created_on fields). Can I resolve this issue, or will I have to use a Scan-like interface? Or will I have to join on the application side?

Document how RunInTransaction works

We have a small Go coding test for potential new employees at my company. From what I can tell, a lot of our candidates don't understand how RunInTransaction works. The "tricky" part is that you have to actually use the inner pg.Tx to make use of the transaction, which many of them don't seem to get. Since this is a coding test it doesn't have any critical effect, except that it makes it easy for us to filter them out, but I suspect that a lot of normal users of go-pg will make the same mistake.

What I've seen people do is this:

db.RunInTransaction(func(tx *pg.Tx) error {
    db.Exec("insert into users (a) values(1)")
    tx.Rollback()
    return nil
})

Instead, you should of course do it this way if you want the insert to be rolled back:

db.RunInTransaction(func(tx *pg.Tx) error {
    tx.Exec("insert into users (a) values(1)")
    tx.Rollback()
    return nil
})

Maybe you can add an example / clarify the documentation of this function? I think that could save a lot of tricky debugging for inexperienced users of this library.

v4 Changelog

Please tell us what is new in v4 and if it is production ready. Thanks.

A connection is not put back to the pool if DB.Query panics

Hi guys,
We struggled with a very mystical problem of no available connections in the pool.
After a very long investigation, I figured out that if the first parameter (the model) in DB.Query is given as a value, simpleQueryData will panic. In that case the connection won't be freed.
Though this wasn't correct usage, DB.Query should always put the connection back into the pool; otherwise the pool will eventually have no available connections.

Basically,

db.freeConn(cn, err)

should be

defer db.freeConn(cn, err)

https://github.com/go-pg/pg/blob/v4/db.go#L151

And I believe this is not the only place with such a problem in the package.

It reminded me of exception safety in C++: https://en.wikipedia.org/wiki/Exception_safety
You'd better use the RAII technique (Go's defer) to free resources instead of relying on the happy path.

Zero value of slices

short version: It would be nice if null::text[] gets decoded as []string(nil) instead of []string{}

A column of an array type, let's say text[], can be mapped to a slice of strings, []string. In such a column we can store either a null array (null::text[]) or an empty array ('{}'::text[]).
Go has equivalents for both of them, a null array would be []string(nil) and an empty array would be []string{}.

Currently the library makes no distinction between null and empty arrays, and maps both to []string{}. The issue with this is that the zero value of a slice is a nil slice, not an empty one, so if I have a structure like the following:

type Object struct{
    id               string
    stringSlice []string
}

and then I create a new Object in the following way:

a := Object{id: "1"}

then a.stringSlice will have a value of []string(nil), but if I save a to the db and then retrieve it, the stringSlice property gets decoded as []string{}. Therefore, if I compare my original struct and my retrieved struct, they are different.

Here's a playground with the mentioned example: http://play.golang.org/p/N4rf_47sL-

Same thing applies to the other supported slice types.

Thanks for such a great package!

pg: unsupported dst: []float64

I noticed there's support for []int64, but []float64 is missing from the decoder. Do you plan on adding support, or is there some technical reason this can't be handled? Thanks!
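
Recent versions cover float slices through pg.Array. A minimal sketch, assuming go-pg v10 and a connected db:

var dst []float64
_, err := db.QueryOne(pg.Scan(pg.Array(&dst)), `SELECT ARRAY[1.5, 2.5]::float8[]`)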

Does pg.DB support concurrent calls?

Hello,

I see that unlike sql.DB, pg.DB needs to be closed explicitly via Close()

What I like about sql.DB is that I can open it once when my web application starts, and then safely use it in all go routines/HTTP requests.

Can I do the same with pg.DB, or do I need to call Connect()/Close() in every handler?

Thanks!
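
pg.DB maintains an internal connection pool and is documented as safe for concurrent use, so the sql.DB pattern carries over: connect once at startup and share the instance. A minimal sketch (the handler shape is illustrative):

var db = pg.Connect(&pg.Options{User: "postgres"})

func handler(w http.ResponseWriter, r *http.Request) {
    var one int
    if _, err := db.QueryOne(pg.Scan(&one), `SELECT 1`); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintln(w, one)
}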

How load collections?

I need to load a list of ids. How do I do it?

ids := []int64{}
Query(pg.LoadInto(&ids), "SELECT id FROM table")
// unsupported pg.valuesLoader

Query(&ids, "SELECT id FROM table")
// unsupported []int64
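
A hedged sketch of one way to do this in go-pg v10, mirroring a documented pattern that aggregates the column server-side and scans it with pg.Array; the Item model stands in for the real table:

var ids []int64
err := db.Model((*Item)(nil)).
    ColumnExpr("array_agg(id)").
    Select(pg.Array(&ids))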

How to get inserted auto_incremented value?

The table:

create table users (
    id bigserial primary key,
    name varchar(512) not null
)

I insert a user using:

type User struct {
    Id       int64
    Name     string
}

var db pg.DB
var u User
db.Create(&u)

So how can I get the auto-incremented id?
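
In go-pg v10, Insert appends a RETURNING clause for the primary key and writes the generated value back into the struct. A minimal sketch, assuming a connected db:

u := &User{Name: "bob"}
_, err := db.Model(u).Insert()
if err == nil {
    fmt.Println(u.Id) // populated from the bigserial column
}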

Logger or output of raw SQL generated

I've started migrating my project from beego ORM to pg.v2. So far it's been fantastic and handles a few of the PostgreSQL issues I've had, like UUID and array support. Is there a way to output the raw generated SQL?
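
go-pg v10 exposes the formatted SQL through its query-hook API. A hedged sketch of a logging hook (the queryLogger type name is an assumption):

type queryLogger struct{}

func (queryLogger) BeforeQuery(ctx context.Context, evt *pg.QueryEvent) (context.Context, error) {
    return ctx, nil
}

func (queryLogger) AfterQuery(ctx context.Context, evt *pg.QueryEvent) error {
    if query, err := evt.FormattedQuery(); err == nil {
        log.Println(string(query))
    }
    return nil
}

// register once after connecting
db.AddQueryHook(queryLogger{})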

pg.v3 production ready?

Thanks for all your work on pg.v3. Any idea when you think it will be production-ready? I'll switch my pg.v2 code over to v3 and file issues if you're ready for some user testing.

Escaping query strings

Are query strings safe, or do I have to escape them myself (or use prepared statements)?

db.QueryOne(tag, `INSERT INTO tags (name) VALUES (?name) RETURNING id, name`, tag)

A jsonb field problem

Sorry for my bad English first!

I have a struct like this:

type User struct {
    //...
    Settings *UserJsonb // it's a jsonb field in pg
}

type UserJsonb struct {
    Timezone    string
    Stockparams []float64
}

func (m *UserJsonb) Scan(value interface{}) error {
    return json.Unmarshal(value.([]byte), m)
}

func (m UserJsonb) Value() (driver.Value, error) {
    b, err := json.Marshal(m)
    if err != nil {
        return nil, err
    }
    return string(b), nil
}

And it worked well at first (both v2 and v3). My PostgreSQL version is 9.4.

But after I updated gopkg.in/pg.v3 (go get -u) and PostgreSQL (apt-get upgrade), my app got the error message json: Unmarshal(nil *mods.UserJsonb).

Then I changed Settings *UserJsonb to Settings UserJsonb, and it worked again!

So is it a bug? Or does it just work like this?

pg: cannot map timestamp with time zone/time.Time field

So I'm trying to run a simple SELECT query and get the resulting rows in a slice of their respective struct. This includes a time.Time value that is stored in postgres as a timestamp with time zone.

Here's the postgres result running the query:

testdb=# SELECT * FROM users WHERE device = 'android';
registrationid | tokens | device  |       registrationdate
---------------+--------+---------+-------------------------------
    foobar     |        | android | 2015-08-20 23:12:24.593388-04
(1 row)

And here's the relevant Go code:

type User struct {
     RegistrationID string
     Tokens *oauth2.Token
     Device string
     RegistrationDate time.Time
}

type Users struct {
     C []User
}

var _ pg.Collection = &Users{}

func (users *Users) NewRecord() interface{} {
     users.C = append(users.C, User{})
     return &users.C[len(users.C)-1]
}

func getUsers(device string) ([]User, error) {
     var users Users
     //DB is my active postgres DB connection
     _, err := DB.Query(&users, `SELECT * FROM users WHERE device = ?`, device)
     return users.C, err
}

The exact error I'm getting is pg: cannot map field "registrationdate" without a whole lot of other explanation or documentation on what that means.

There is one record retrieved, but RegistrationID is blank (not nil, just an empty string) and the RegistrationDate is "0001-01-01 00:00:00 +0000 UTC".

So far, everything I've done or tried seems to be exactly how it is in the documentation/examples, so I'm assuming either the documentation is wrong/scarce for time-based values or there's a bug mapping Go's time.Time to and from timestamp with time zone. I've also tried switching the column to timestamp without time zone, date, and bigint, to no avail.

Sorry if this doesn't really constitute a GitHub issue, but I couldn't find anywhere else to get specialized help with this library, like a mailing list or Slack channel. Any ideas what it could be?

How to define a custom type?

I want to use the ObjectId type from "gopkg.in/mgo.v2/bson" as the type of a primary key. Is that possible? If it is, how do I make it work?
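
go-pg honours database/sql's driver.Valuer and sql.Scanner interfaces, so wrapping bson.ObjectId should work roughly as sketched below; the hex round-trip is an assumption for illustration:

type ObjectId struct {
    bson.ObjectId
}

func (id ObjectId) Value() (driver.Value, error) {
    return id.Hex(), nil
}

func (id *ObjectId) Scan(v interface{}) error {
    b, ok := v.([]byte)
    if !ok {
        return fmt.Errorf("ObjectId.Scan: unexpected type %T", v)
    }
    id.ObjectId = bson.ObjectIdHex(string(b))
    return nil
}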

Ability to run psql file

I tried the library today and haven't seen a way, in the documentation or the code, to run psql files. I tried dumping the file contents into a string and doing an Exec, but that didn't work either.

How to implement simple COUNT(*)

db.Query and db.QueryOne expect a struct to populate. For a simple SELECT COUNT(*) query, what would you recommend?

My attempt:

type QueryResult struct {
    Count int
}

qr := QueryResult{}
_, err := db.QueryOne(&qr, `SELECT COUNT(*) FROM users`)
if err == nil {
   return qr.Count, nil
}
return -1, err

Is there a more idiomatic way to do this? Thanks.
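
go-pg v10 has a dedicated helper that issues the COUNT query for you. A minimal sketch:

count, err := db.Model((*User)(nil)).Count()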

another Go postgres driver

This is not really an issue, but I wanted to bring it to your attention, there's a new kid on the block: https://groups.google.com/forum/#!topic/Golang-nuts/ZvjNkpi1Fc4

That makes three in total. I asked him to add go-pg to his benchmark game and he obliged. I'll copy/paste the results at the bottom.

Of course that doesn't tell the whole story; ease of use is very important (that's why I'm using go-pg and not lib/pq).

Just wanted to mention it if you hadn't seen it yet.

# Run on a Core i7 4770, 16GB, Samsung 840 SSD
jack@edi:~/dev/go/src/github.com/jackc/go_db_bench$ uname -a
Linux edi 3.13.0-30-generic #55-Ubuntu SMP Fri Jul 4 21:40:53 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
jack@edi:~/dev/go/src/github.com/jackc/go_db_bench$ GO_DB_BENCH_PG_HOST=/var/run/postgresql go test -test.bench=. -test.benchmem -test.benchtime=5s
testing: warning: no tests to run
PASS
BenchmarkPgxNativeSelectSingleValueUnprepared     200000         49210 ns/op         304 B/op          9 allocs/op
BenchmarkPgxStdlibSelectSingleValueUnprepared     200000         56198 ns/op         627 B/op         21 allocs/op
BenchmarkPgSelectSingleValueUnprepared    100000         61146 ns/op         218 B/op          9 allocs/op
BenchmarkPqSelectSingleValueUnprepared    100000         84663 ns/op        1036 B/op         29 allocs/op
BenchmarkPgxNativeSelectSingleValuePrepared   200000         28481 ns/op         108 B/op          4 allocs/op
BenchmarkPgxStdlibSelectSingleValuePrepared   500000         30977 ns/op         516 B/op         21 allocs/op
BenchmarkPgSelectSingleValuePrepared      200000         28747 ns/op         118 B/op          5 allocs/op
BenchmarkPqSelectSingleValuePrepared      500000         33706 ns/op         663 B/op         23 allocs/op
BenchmarkRawSelectSingleValuePrepared     500000         21142 ns/op           0 B/op          0 allocs/op
BenchmarkPgxNativeSelectSingleRowUnprepared   200000         78815 ns/op        1069 B/op         21 allocs/op
BenchmarkPgxStdlibSelectSingleRowUnprepared   100000         92366 ns/op        1828 B/op         37 allocs/op
BenchmarkPgSelectSingleRowUnprepared      100000         74305 ns/op         866 B/op         26 allocs/op
BenchmarkPqSelectSingleRowUnprepared      100000        105567 ns/op        2017 B/op         48 allocs/op
BenchmarkPgxNativeSelectSingleRowPrepared     500000         32121 ns/op         218 B/op          5 allocs/op
BenchmarkPgxStdlibSelectSingleRowPrepared     200000         41183 ns/op        1056 B/op         26 allocs/op
BenchmarkPgSelectSingleRowPrepared    200000         43649 ns/op         453 B/op         14 allocs/op
BenchmarkPqSelectSingleRowPrepared    200000         48943 ns/op        1378 B/op         37 allocs/op
BenchmarkRawSelectSingleRowPrepared   500000         23256 ns/op           0 B/op          0 allocs/op
BenchmarkPgxNativeSelectMultipleRowsUnprepared     50000        289041 ns/op       13799 B/op        144 allocs/op
BenchmarkPgxStdlibSelectMultipleRowsUnprepared     50000        374134 ns/op       20369 B/op        303 allocs/op
BenchmarkPgSelectMultipleRowsUnprepared    20000        386107 ns/op       11559 B/op        338 allocs/op
BenchmarkPqSelectMultipleRowsUnprepared    50000        375961 ns/op       26526 B/op        473 allocs/op
BenchmarkPgxNativeSelectMultipleRowsPrepared      100000        130355 ns/op       11365 B/op         68 allocs/op
BenchmarkPgxStdlibSelectMultipleRowsPrepared      100000        147837 ns/op       17986 B/op        232 allocs/op
BenchmarkPgSelectMultipleRowsPrepared      50000        241844 ns/op       11150 B/op        326 allocs/op
BenchmarkPqSelectMultipleRowsPrepared      50000        230397 ns/op       25889 B/op        462 allocs/op
BenchmarkRawSelectMultipleRowsPrepared    200000         43388 ns/op           0 B/op          0 allocs/op
ok      github.com/jackc/go_db_bench    340.569s

About insert and update

If inserting a record fails, I want to update the record instead. I see the docs mention SelectOrCreate, but I have no idea how to write the upsert code.
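
A hedged sketch of an upsert using go-pg v10's OnConflict support; the table and column names are assumptions:

_, err := db.Model(&record).
    OnConflict("(id) DO UPDATE").
    Set("name = EXCLUDED.name").
    Insert()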

`Update` generates an extra comma when the updated field is not at the end

The following will generate SQL with a syntax error:

type Relationship struct {
    UserId      int64 `sql:",pk"`
    Relation    int16
    OtherUserId int64 `sql:",pk"`
}
newRelationship = Relationship{1, 1, 3}
db.Update(&newRelationship)

SQL generated:

UPDATE "relationships" SET "relation" = 1,  WHERE "user_id" = 1 AND "other_user_id" = 3 -- extra comma after relation field

If I move Relation to the end, it generates the correct SQL:

type Relationship struct {
    UserId      int64 `sql:",pk"`
    OtherUserId int64 `sql:",pk"`
    Relation    int16
}
...

SQL generated:

UPDATE "relationships" SET "relation" = 1 WHERE "user_id" = 1 AND "other_user_id" = 3

Is this a bug?

v3 multiple inserts returning values

I have written a query that inserts multiple rows using the following construct:

db.Query(hold, "INSERT INTO table (id, name, url) VALUES (?, ?, ?), (?, ?, ?), ... returning id, url", args...)

Is it possible to load the returned id and url values into the slice hold? Is there a way to access the result directly?
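
In go-pg v10, inserting a slice of models issues a single multi-row INSERT and writes the RETURNING values back into the slice elements. A minimal sketch (the Row type is an assumption):

rows := []Row{{Name: "a", Url: "http://a"}, {Name: "b", Url: "http://b"}}
_, err := db.Model(&rows).Insert()
// rows[i].Id is now populated from the returned values.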

sql parser error?

What am I doing wrong?
I get the error pg: cannot find field "subpath" in struct { Path string; Name string }

arrRes := []struct {
    Path string
    Name string
}{}
_, err := db.Query(&arrRes, query, "a.b.s.c")

For example, the SQL:

-- query
SELECT 
    subpath(tree, 1, nlevel(tree)-1), 
    (
        SELECT array_agg(name)
        FROM (SELECT md5(lex) AS name FROM table WHERE tree @> fcc.tree AND nlevel(tree) > 1 ORDER BY tree) AS ttt

    ) AS new_path,
    name
    FROM table fcc WHERE component_path @> ? AND nlevel(tree) > 1

Custom type struct

How do I add a custom type that doesn't cause a Formatter error? Is there an interface for this?

ex.

CREATE TABLE objects (
    object_id serial NOT NULL PRIMARY KEY,
    tags text[]
);

INSERT INTO objects(tags) VALUES
    ('{a, b, c}'),
    ('{a, c}'),
    ('{c}'),
    ('{b}');
var model = struct {
        ObjectId int64
        Tags  StringSlice
    }{}

QueryOne(&model, "SELECT object_id, tags FROM objects WHERE object_id = ?", 1)
// ok

model.Tags.Del("a")
model.Tags.Del("b")
model.Tags.Add("d")

QueryOne(&model, "UPDATE objects SET tags = ?tags WHERE object_id = ?object_id", model)
QueryOne(&model, "UPDATE objects SET tags = ? WHERE object_id = ?", model.Tags, model.ObjectId)
// Error: Format(unsupported StringSlice)

QueryOne(&model, "UPDATE objects SET tags = ? WHERE object_id = ?", []string(model.Tags), model.ObjectId)
// It's OK, but it isn't convenient to use
type StringSlice []string

func (c *StringSlice) Del(str string) {
    str = strings.TrimSpace(str)

    if !c.IsExist(str) {
        return
    }

    for index, value := range *c {

        if bytes.Equal([]byte(str), []byte(value)) {
            (*c)[index] = (*c)[len((*c))-1]
            (*c) = (*c)[:len((*c))-1]

            return
        }
    }
}

func (c *StringSlice) IsExist(str string) bool {
    str = strings.TrimSpace(str)

    for _, value := range *c {

        if bytes.Equal([]byte(str), []byte(value)) {

            return true
        }
    }

    return false
}

func (c *StringSlice) Add(str string) {
    str = strings.TrimSpace(str)

    if len(str) == 0 {
        return
    }

    if !c.IsExist(str) {
        *c = append(*c, str)
    }
}

Provide ability to read inner structCache

I'm building a model layer on top of this wonderful library, and it would be great to have read access to structs (structCache).
I've monkey-patched my fork with

func Structs() *structCache {
    return structs
}

and used it to access names from the map, along with pgValue.Source and pgValue.Index.

I think it's good to form a low-level library interface that allows building upper layers on top of it.
What do you think about it?

PK in nested struct

Hello, and thank you for your hard work.

I chose to switch my DB layer from gorm to go-pg, and now I'm wondering: is it possible to use a nested struct field as the PK?
For example:

type M struct {
    ID uint `sql:",pk"`
}

type Author struct {
    M
    Name  string
    Books []Book
}

type Book struct {
    M
    Name     string
    AuthorID uint
}

This loses all the benefits of model binding: no relations, no RETURNING id, and so on.
I realise that it is not a necessity, but it would be very convenient to have the choice to use a nested struct field as the PK.

JSON error

Trying to insert

{{63593732575 333450581 0x67f6c0} map[fb_adset_id:6032793131234 af_c_id:3e025 af_cost_model:cpi imei:358422050231234 android_id:449021d533061234 af_sub1:<nil> media_source:Facebook Ads carrier:Vodafone os_version:4.3 device_brand:samsung af_adset:3gvow fb_campaign_id:6032793121234 af_sub4:<nil> campaign:FACEBOOK_CAMPAIGN_TEST fb_campaign_name:FACEBOOK_CAMPAIGN_TEST agency:<nil> af_sub3:<nil> customer_user_id:1234-5678 fb_adgroup_id:6032793211234 operator:Vodafone IN appsflyer_device_id:1446034686662-8619927357095541234 app_version:4.4.1 af_adset_id:3gvow device_model:GT-I9300 app_name:[App Name Comes Here] attribution_type:regular event_type:install country_code:GB af_siteid:<nil> af_cost_currency:USD af_keywords:key1 key2 key3 af_ad_id:123245353 download_time:2015-10-28 00:18:00 cost_per_install:<nil> is_retargeting:false af_sub2:<nil> fb_adgroup_name:FACEBOOK_ADGROUP_TEST event_time:2015-10-28 12:18:36 event_value:<nil> mac:<nil> http_referrer:<nil> af_channel:youtube ip:115.98.114.125 re_targeting_conversion_type:<nil> sdk_version:1.17 language:English af_sub5:<nil> wifi:true click_url:<nil> platform:android af_ad_type:Banner_210X600 event_name:<nil> af_cost_value:0.0750 advertising_id:95e55150-1111-2222-3333-3e0c8c1d9978 app_id:[App ID Comes Here] city:London fb_adset_name:FACEBOOK_ADSET_TEST currency:<nil> install_time:2015-10-28 12:18:36 click_time:2015-10-28 12:17:37 af_ad:a86721874]}

into

type Postback struct {
    dt       time.Time
    postback map[string]interface{}
}

get

ERROR #42601 syntax error at or near ")":

Custom table name?

Is it possible to create a model but specify a custom table name? If so, how does one do so?
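
In go-pg v10 this is done with a zero-width tableName marker field. A minimal sketch:

type User struct {
    tableName struct{} `pg:"legacy_users,alias:u"` // overrides the derived name

    Id   int64
    Name string
}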

Formatting a WHERE field IN (array) query

How would you send a query with a WHERE field IN ('a', 'b', 'c') clause in it? Passing the string slice to FormatQ wraps the slice up as {"a","b","c"} if I use WHERE field IN (?).

I think I'm going to end up having to use AppendQ and loop over the members of the array. Is there a better way?

Having only really been exposed to Arel in recent years, I don't know if I've ended up with any Arel-specific expectations about how interpolation should happen.
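
go-pg's pg.In helper expands a slice into a parenthesised list rather than an array literal, which is what IN clauses need. A minimal sketch, assuming an Item model and a connected db:

var items []Item
err := db.Model(&items).
    Where("field IN (?)", pg.In([]string{"a", "b", "c"})).
    Select()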
