
sqlite's Introduction

Tailscale

https://tailscale.com

Private WireGuard® networks made easy

Overview

This repository contains the majority of Tailscale's open source code. Notably, it includes the tailscaled daemon and the tailscale CLI tool. The tailscaled daemon runs on Linux, Windows, macOS, and to varying degrees on FreeBSD and OpenBSD. The Tailscale iOS and Android apps use this repo's code, but this repo doesn't contain the mobile GUI code.

Other Tailscale repos of note:

For background on which parts of Tailscale are open source and why, see https://tailscale.com/opensource/.

Using

We serve packages for a variety of distros and platforms at https://pkgs.tailscale.com.

Other clients

The macOS, iOS, and Windows clients use the code in this repository but additionally include small GUI wrappers. The GUI wrappers on non-open source platforms are themselves not open source.

Building

We always require the latest Go release, currently Go 1.22. (While we build releases with our Go fork, its use is not required.)

go install tailscale.com/cmd/tailscale{,d}

If you're packaging Tailscale for distribution, use build_dist.sh instead, to burn commit IDs and version info into the binaries:

./build_dist.sh tailscale.com/cmd/tailscale
./build_dist.sh tailscale.com/cmd/tailscaled

If your distro has conventions that preclude the use of build_dist.sh, please do the equivalent of what it does in your distro's way, so that bug reports contain useful version information.

Bugs

Please file any issues about this code or the hosted service on the issue tracker.

Contributing

PRs welcome! But please file bugs. Commit messages should reference bugs.

We require Developer Certificate of Origin Signed-off-by lines in commits.

See git log for our commit message style. It's basically the same as Go's style.

About Us

Tailscale is primarily developed by the people at https://github.com/orgs/tailscale/people. For other contributors, see:

Legal

WireGuard is a registered trademark of Jason A. Donenfeld.

sqlite's People

Contributors

andrew-d, astrophena, bradfitz, crawshaw, creachadair, danderson, icio, kradalby, maisem, raggi, sailorfrag

sqlite's Issues

Using dlv fails with compile errors related to _GoStringLen and _GoStringPtr

When I try to use dlv in cgosqlite or anything that depends on it, I get compiler errors:

$ dlv test ./
# github.com/tailscale/sqlite.test
${GOROOT}/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
/usr/bin/ld: /tmp/go-link-2730833985/000000.o: in function `bind_parameter_index':
${HOME}/tailscale/sqlite/cgosqlite/cgosqlite.h:42: undefined reference to `_GoStringLen'
/usr/bin/ld: ${HOME}/tailscale/sqlite/cgosqlite/cgosqlite.h:43: undefined reference to `_GoStringPtr'
collect2: error: ld returned 1 exit status

exit status 1

For some reason go test ./ still succeeds and I haven't figured out what exactly is different.

Anyway, looking into this error I eventually came across golang/go#48824, and the very last comment says:

As far as I can tell _GoStringLen and _GoStringPtr are declared in the preamble--but only if you don't use //export. The static inline trick can often work but there is no intention of officially supporting it.

So it looks like the export in cgosqlite/cgosqlite.go needs to be moved to a new file.
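
A minimal sketch of that kind of split, using a hypothetical exported callback rather than the package's real code: with //export present in a file, cgo restricts that file's preamble and no longer emits the _GoStringPtr/_GoStringLen accessors for it, so the //export directives need to live in a file separate from the one whose preamble defines the static inline helpers.

// export.go (hypothetical): all //export directives move here, leaving
// cgosqlite.go's preamble (and its static inline helpers that call
// _GoStringPtr/_GoStringLen) free of the //export restriction.
package cgosqlite

import "C"

// goBusyHandler is a hypothetical exported callback standing in for the
// package's real exported functions, which would move here unchanged.
//
//export goBusyHandler
func goBusyHandler(count C.int) C.int {
	return 0
}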

Issues with interactions with custom Scanners

I am working on something involving this database driver that makes use of the sql.Null* types. When making queries over a column that contains a mix of NULL and non-NULL values, the NULL values come back as non-NULL. Consider this SQLite shell session:

sqlite> CREATE TABLE foo ( data TEXT );
sqlite> INSERT INTO foo(data) VALUES ('bar');
sqlite> INSERT INTO foo(data) VALUES (NULL);

When I run this in Go and query SELECT data FROM foo ORDER BY rowid, I get the following (after scanning it into a slice of sql.NullString):

[]sql.NullString{
  {String: "bar", Valid: true},
  {String: "", Valid: true},
}

This is confusing, because the SQLite shell returns the following for the moral equivalent of that datatype:

sqlite> SELECT data, data IS NOT NULL AS valid FROM foo ORDER BY rowid;
| data | valid |
|------|-------|
| bar  | 1     |
|      | 0     |

Here is a minimal reproduction case:

func TestSQLiteBug(t *testing.T) {
	db := sql.OpenDB(sqlite.Connector(":memory:", func(ctx context.Context, conn driver.ConnPrepareContext) error { return nil }, nil))
	err := db.Ping()
	if err != nil {
		t.Fatal(err)
	}

	_, err = db.Exec("CREATE TABLE foo ( data TEXT )")
	if err != nil {
		t.Fatal(err)
	}

	db.Exec("INSERT INTO foo(data) VALUES ('bar')")
	db.Exec("INSERT INTO foo(data) VALUES (NULL)")

	var result []sql.NullString
	rows, err := db.Query("SELECT data FROM foo ORDER BY rowid")
	if err != nil {
		t.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var val sql.NullString
		err := rows.Scan(&val)
		if err != nil {
			t.Fatal(err)
		}
		result = append(result, val)
	}

	if len(result) != 2 {
		t.Fatalf("wanted 2 results, not %d", len(result))
	}

	first := result[0]
	second := result[1]

	t.Logf("first:  %#+v", first)
	t.Logf("second: %#+v", second)

	if !first.Valid {
		t.Error("first is not valid but we wanted it to be")
	}

	if first.String != "bar" {
		t.Errorf("first is not \"bar\", got: %q", first.String)
	}

	if second.Valid {
		t.Error("second is valid but we wanted it to not be")
	}
}

This fails with the following message:

=== RUN   TestSQLiteBug
    sqlite_date_test.go:101: first:  sql.NullString{String:"bar", Valid:true}
    sqlite_date_test.go:102: second: sql.NullString{String:"", Valid:true}
    sqlite_date_test.go:113: second is valid but we wanted it to not be
--- FAIL: TestSQLiteBug (0.00s)

There have been other issues when adding left joins into the mix, but details will be provided in the comments when I am able to get a minimal reproduction.

Potential memory leak with sqlite3_expanded_sql?

According to the docs, strings returned by sqlite3_expanded_sql should be freed manually with sqlite3_free:

https://www.sqlite.org/c3ref/expanded_sql.html

The string returned by sqlite3_expanded_sql(P), on the other hand, is obtained from sqlite3_malloc() and must be freed by the application by passing it to sqlite3_free().

Here, the C value is passed directly to C.GoString. I'm not an expert in cgo, so I don't know whether this is actually a memory leak.

return C.GoString(C.sqlite3_expanded_sql(stmt.stmt))
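
If a free is indeed needed, a minimal sketch of the fix might look like this. It is only a fragment: it assumes the surrounding method and the stmt.stmt field from the line above, that the package's cgo preamble includes sqlite3.h, and that unsafe is imported.

	// Copy the expanded SQL into Go-managed memory first, then release the
	// sqlite3_malloc'd C buffer with sqlite3_free as the SQLite docs require.
	cstr := C.sqlite3_expanded_sql(stmt.stmt)
	if cstr == nil {
		return ""
	}
	s := C.GoString(cstr)
	C.sqlite3_free(unsafe.Pointer(cstr))
	return s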

Executing a Broken Statement Causes a Hang

Hi,

Whenever you try to execute an invalid statement in a transaction, no error is returned (and no panic is thrown); the program just hangs.

I initially noticed this when inserting twice into a table that expected a column to be unique. I assumed I'd see an error, but instead the program just hung.

This does not seem to be directly related to the unique constraint; it seems to happen with any invalid statement, even ones that aren't valid SQL.

Here's an example:

package main

import (
	"context"
	"log"

	"github.com/tailscale/sqlite/sqliteh"
	"github.com/tailscale/sqlite/sqlitepool"
)

var db *sqlitepool.Pool

func init() {
	var err error

	db, err = sqlitepool.NewPool("file:./db.sqlite", 10, func(init sqliteh.DB) error { return nil }, nil)

	if err != nil {
		panic(err)
	}
}

func main() {
	defer db.Close()

	transaction, err := db.BeginTx(context.Background(), "")

	if err != nil {
		panic(err)
	}

	log.Println("executing invalid statement")

	err = transaction.Exec("this should throw an error")

	if err != nil {
		panic(err)
	}

	log.Println("done")

	err = transaction.Commit()

	if err != nil {
		panic(err)
	}
}

Running this program yields the following output, then hangs forever:

% go run main.go
2023/03/01 22:43:26 executing invalid statement

This vs. crawshaw.io/sqlite?

I understand this library aims to be a database/sql driver, which is explicitly a non-goal for crawshaw.io/sqlite, although apparently there was a discussion about it: crawshaw/sqlite#30.

Is this package going to be a pure database/sql driver, or will the driver be something like a wrapper around a lower-level package that mimics SQLite's C interface?

Are there any major pitfalls in crawshaw.io/sqlite impeding Tailscale from using it or from building a database/sql wrapper on top of it?

No distinction between inserting nil vs empty []byte slices

Currently the driver stores both nil and empty byte slices as non-NULL, empty blob values.

I'm not sure whether it's a deliberate design choice, or just a corner case nobody hit yet.
With other database drivers I got used to the following behavior:

  • insert nil []byte as NULL;
  • insert non-nil zero-sized []byte as zero-sized blob.

Other Go SQLite driver implementations make this distinction: in both github.com/mattn/go-sqlite3 and modernc.org/sqlite (note that its versions [v1.16.0, v1.17.2] are buggy in this respect), if you put a nil byte slice into a column, you get a NULL value back.

One particular use case where it may be useful to make such a distinction is when dealing with optional stored JSON blobs and constraints such as the following:

CREATE TABLE t(
  Path TEXT PRIMARY KEY NOT NULL,
  Doc TEXT NOT NULL,
  Tags TEXT check(Tags is NULL OR(json_valid(Tags) AND json_type(Tags)='array'))
)

Please see this test:

func TestNilAndEmptyByteSlices(t *testing.T) {
	db := sql.OpenDB(sqlite.Connector("file:/dbname?vfs=memdb", nil, nil))
	defer db.Close()
	if _, err := db.Exec(`CREATE TABLE t(v TEXT)`); err != nil {
		t.Fatal(err)
	}
	for _, c := range [...]struct {
		val     []byte
		notNull bool
	}{
		{[]byte{}, true},
		{nil, false},
	} {
		if _, err := db.Exec(`DELETE FROM t`); err != nil {
			t.Fatalf("emptying the table: %v", err)
		}
		if _, err := db.Exec(`INSERT INTO t(v) VALUES(?)`, c.val); err != nil {
			t.Fatal(err)
		}
		var res sql.NullString
		if err := db.QueryRow(`SELECT v FROM t LIMIT 1`).Scan(&res); err != nil {
			t.Fatalf("reading result for original value of %#v: %v", c.val, err)
		}
		if res.Valid != c.notNull {
			t.Fatalf("got unexpected result %#v for original value of %#v", res, c.val)
		}
	}
}

Cross-DB query using temp doesn't work

Porting from modernc.org/sqlite to this package, one of my queries fails preparation, whereas it executed without issue on modernc:

sqlite.Prepare: SQLITE_ERROR: no such table: temp.seen (DELETE FROM dirty WHERE NOT EXISTS (SELECT 1 FROM temp.seen WHERE temp.seen.path = dirty.path))

Schema, for context:

CREATE TABLE IF NOT EXISTS dirty (
  id INTEGER PRIMARY KEY,
  path TEXT,
  mtime DATETIME,
  size INTEGER,
  dev INTEGER,
  inode INTEGER,
  canread INTEGER,

  dirty INTEGER
);

CREATE TEMPORARY TABLE temp.seen (path TEXT);

As far as I can tell, queries across regular and temporary tables should just work, but something about how this package sets things up makes it unable to reference a temporary table in the same query as a non-temporary table (other queries that touch only the temporary table, or only the non-temporary table, work fine). A minimal reproduction sketch follows.
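
This sketch is not from the original report: the Connector call mirrors the ones in the issues above, "crossdb.sqlite" is a hypothetical file name, and a single connection is pinned so the temporary table is visible to every statement run on it.

package main

import (
	"context"
	"database/sql"
	"log"

	"github.com/tailscale/sqlite"
)

func main() {
	ctx := context.Background()
	db := sql.OpenDB(sqlite.Connector("file:crossdb.sqlite", nil, nil))
	defer db.Close()

	// Pin one connection so the temporary table stays in scope.
	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for _, stmt := range []string{
		`CREATE TABLE IF NOT EXISTS dirty (id INTEGER PRIMARY KEY, path TEXT)`,
		`CREATE TEMPORARY TABLE temp.seen (path TEXT)`,
		// Statements touching only one of the tables work fine.
		`INSERT INTO temp.seen(path) VALUES ('a')`,
	} {
		if _, err := conn.ExecContext(ctx, stmt); err != nil {
			log.Fatalf("%s: %v", stmt, err)
		}
	}

	// Mixing the temporary and non-temporary table in one statement is what
	// reportedly fails to prepare with "no such table: temp.seen".
	_, err = conn.ExecContext(ctx, `DELETE FROM dirty WHERE NOT EXISTS
		(SELECT 1 FROM temp.seen WHERE temp.seen.path = dirty.path)`)
	if err != nil {
		log.Printf("prepare failed: %v", err)
	}
}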
