PostFDB is a proof-of-concept database that exposes an Apache CouchDB-like API, backed by FoundationDB. It supports:
- Create/Delete database API
- Insert/Update/Delete document API, without requiring revision tokens.
- Bulk Insert/Update/Delete API.
- Fetch all documents or a range using the primary index.
- Fetch documents by key or range of keys using secondary indexes.
- "Pull" replication i.e. fetching data from a remote URL.
It does not implement CouchDB's MVCC, Design Documents, attachments, MapReduce, "Mango" search or any other CouchDB feature.
It does, however, provide a consistent data store in which documents and their secondary indexes are updated in lock-step. Documents are limited to 100KB in size.
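The 100KB limit is simple to reason about: it applies to the serialized JSON body of a document. A minimal sketch of such a check (the function name `withinSizeLimit` is illustrative, not PostFDB's actual code):

```javascript
// Hypothetical size check mirroring the 100KB document limit described above.
// Not PostFDB's implementation - just an illustration of the rule.
const MAX_DOC_SIZE = 100 * 1024

function withinSizeLimit (doc) {
  // measure the document as it would be serialized for storage
  return Buffer.byteLength(JSON.stringify(doc), 'utf8') <= MAX_DOC_SIZE
}

console.log(withinSizeLimit({ x: 1, y: false, z: 'aardvark' })) // true
console.log(withinSizeLimit({ blob: 'x'.repeat(200 * 1024) })) // false
```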
Download this project and install the dependencies to run it on your machine:

```sh
npm install
npm run start
```
The application will connect to a local FoundationDB instance and start serving its API on port 5984 (CouchDB's usual port).
```sh
$ curl -X GET http://localhost:5984/
{"postFDB":"Welcome","pkg":"postfdb","node":"v12.8.1","version":"1.0.0"}
$ curl -X GET http://localhost:5984/_all_dbs
["_replicator","animaldb","cities"]
$ curl -X PUT http://localhost:5984/mydb
{"ok":true}
$ curl -X GET http://localhost:5984/mydb
{"db_name":"mydb"}
```
```sh
$ curl -X PUT \
  -H 'Content-type: application/json' \
  -d '{"x": 1, "y": false, "z": "aardvark"}' \
  http://localhost:5984/mydb/a
{"ok":true,"id":"a","rev":"0-1"}
$ curl -X POST \
  -H 'Content-type: application/json' \
  -d '{"x": 2, "y": true, "z": "bear"}' \
  http://localhost:5984/mydb
{"ok":true,"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","rev":"0-1"}
$ curl -X GET http://localhost:5984/mydb/a
{"x":1,"y":false,"z":"aardvark","_id":"a","_rev":"0-1","_i1":"","_i2":"","_i3":""}
$ curl -X GET http://localhost:5984/mydb/_all_docs
{"rows":[{"id":"a","key":"a","value":{"rev":"0-1"}},{"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","key":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","value":{"rev":"0-1"}},{"id":"b","key":"b","value":{"rev":"0-1"}},{"id":"c","key":"c","value":{"rev":"0-1"}},{"id":"d","key":"d","value":{"rev":"0-1"}},{"id":"e","key":"e","value":{"rev":"0-1"}},{"id":"f","key":"f","value":{"rev":"0-1"}}],"bookmark":"abc"}
```
Add `include_docs=true` to include document bodies:
```sh
$ curl -X GET http://localhost:5984/mydb/_all_docs?include_docs=true
{"rows":[{"id":"a","key":"a","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"aardvark","_id":"a","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","key":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","value":{"rev":"0-1"},"doc":{"x":2,"y":true,"z":"bear","_id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"b","key":"b","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"bat","_id":"b","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"c","key":"c","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"cat","_id":"c","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"d","key":"d","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"dog","_id":"d","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"e","key":"e","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"eagle","_id":"e","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"f","key":"f","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"fox","_id":"f","_rev":"0-1","_i1":"","_i2":"","_i3":""}}],"bookmark":"abc"}
```
Add a `limit` parameter to reduce the number of rows returned:
```sh
$ curl -X GET http://localhost:5984/mydb/_all_docs?limit=2
{"rows":[{"id":"a","key":"a","value":{"rev":"0-1"}},{"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","key":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","value":{"rev":"0-1"}}],"bookmark":"abc"}
```
Use `startkey`/`endkey` to fetch a range of document ids:
```sh
$ curl -X GET 'http://localhost:5984/mydb/_all_docs?startkey="b"&endkey="d"'
{"rows":[{"id":"b","key":"b","value":{"rev":"0-1"}},{"id":"c","key":"c","value":{"rev":"0-1"}},{"id":"d","key":"d","value":{"rev":"0-1"}}],"bookmark":"abc"}
```
Pass a `bookmark` from a previous call to get the next page of results (and a new bookmark):
```sh
$ curl -X GET 'http://localhost:5984/mydb/_all_docs?bookmark=abc123'
{"rows":[{"id":"b","key":"b","value":{"rev":"0-1"}},{"id":"c","key":"c","value":{"rev":"0-1"}},{"id":"d","key":"d","value":{"rev":"0-1"}}],"bookmark":"xyz987"}
```
Parameters:

- `startkey`/`endkey` - one or both supplied, for range queries.
- `limit` - the number of documents to return (default: 100).
- `bookmark` - a pointer into the next page of search results.
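The `_all_docs` semantics above (inclusive `startkey`/`endkey` bounds, then `limit`) can be modelled in miniature over a sorted list of ids. This is an illustrative sketch, not the server's code:

```javascript
// Illustrative model of _all_docs range queries: ids are sorted, and
// startkey/endkey are inclusive bounds, as in the curl examples above.
function allDocs (sortedIds, { startkey, endkey, limit = 100 } = {}) {
  return sortedIds
    .filter(id => (startkey === undefined || id >= startkey) &&
                  (endkey === undefined || id <= endkey))
    .slice(0, limit)
}

const ids = ['a', 'b', 'c', 'd', 'e', 'f']
console.log(allDocs(ids, { startkey: 'b', endkey: 'd' })) // [ 'b', 'c', 'd' ]
console.log(allDocs(ids, { limit: 2 })) // [ 'a', 'b' ]
```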
```sh
$ curl -X GET http://localhost:5984/mydb/_changes
{"last_seq":"000000603c16799200000061","results":[{"changes":[{"rev":"0-1"}],"id":"001hluy43gHHub3XakCv0Mt4DL0LpMRr","seq":"000000603c16799200000062"},{"changes":[{"rev":"0-1"}],"id":"001hluy41gCxKV2lM6oV1eaRTp2apBWS","seq":"000000603c16799200000061"}]}
```
Parameters:

- `since` - return changes after a known point. Default `0`.
- `include_docs` - if `true`, returns the document body too. Default `false`.
- `limit` - the number of documents to return. Default 100.
```sh
$ curl -X POST \
  -H 'Content-type: application/json' \
  -d '{"docs":[{"x": 2, "y": true, "z": "bear"},{"_id":"abc","_deleted":true}]}' \
  http://localhost:5984/mydb/_bulk_docs
[{"ok":true,"id":"001hlstC1aW4vf189ZLf2xZ9Rq4LriwV","rev":"0-1"},{"ok":true,"id":"abc","rev":"0-1"}]
```
```sh
$ curl -X DELETE http://localhost:5984/mydb/001hla5z2pEedb3wB5rI2Rkd0k2pzUQg
{"ok":true,"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","rev":"0-1"}
$ curl -X DELETE http://localhost:5984/mydb
{"ok":true}
```
PostFDB has no MapReduce or Mango search, but it does allow any number of fields to be indexed. Fields whose names start with `_` (except `_id`, `_rev` and `_deleted`) will be indexed, e.g. `_i1`, `_myindex` & `_sausages`. For example, your document could look like this:
```js
{
  "_id": "abc123",
  "_i1": "1561972800000",
  "_i2": "smith",
  "_i3": "person:uk:2019-07-01",
  "type": "person",
  "name": "Bob Smith",
  "dob": "1965-04-21",
  "country": "uk",
  "lastLogin": "2019-07-01 10:20:00"
}
```
In this case, `_i1` is used to extract users by a timestamp, perhaps last login time. The `_i2` index is used to extract users by surname, all lowercase. The third, `_i3`, compounds several fields: document type, country and last-login date.
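Index fields like these are the application's responsibility to populate before writing the document. A hedged sketch of how that might be done (the helper `indexedCopy` is hypothetical, not part of PostFDB):

```javascript
// Hypothetical helper that derives the _i1/_i2/_i3 index fields described
// above from a user document. The field choices are the application's own.
function indexedCopy (doc) {
  return {
    ...doc,
    // login time as a millisecond timestamp (treated as UTC here for simplicity)
    _i1: String(Date.parse(doc.lastLogin.replace(' ', 'T') + 'Z')),
    // surname, lowercased
    _i2: doc.name.split(' ').pop().toLowerCase(),
    // compound key: type, country and last-login date
    _i3: [doc.type, doc.country, doc.lastLogin.slice(0, 10)].join(':')
  }
}

const doc = {
  _id: 'abc123',
  type: 'person',
  name: 'Bob Smith',
  dob: '1965-04-21',
  country: 'uk',
  lastLogin: '2019-07-01 10:20:00'
}
console.log(indexedCopy(doc)._i3) // 'person:uk:2019-07-01'
```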
If documents don't need additional data indexed, then the fields can be omitted or left as empty strings.
The indexed data can be accessed using the `POST /db/_query` endpoint, which expects a JSON object specifying the indexed field to query and the key or range of keys to fetch:
```js
{
  "index": "i1",
  "startkey": "c",
  "endkey": "m"
}
```
e.g.

```sh
$ curl -X POST \
  -H 'Content-type: application/json' \
  -d '{"index": "i1", "startkey": "e", "endkey": "m"}' \
  http://localhost:5984/mydb/_query
{"docs":[...]}
```
Parameters:

- `index` - the name of the index to query (mandatory).
- `startkey`/`endkey` - one or both supplied, for range queries.
- `key` - the key in the index to search for, for selection queries.
- `limit` - the number of documents to return (default: 100).
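Those parameters amount to a scan over a single named index, by exact key or by inclusive range. A miniature, illustrative model (not the server's code):

```javascript
// Illustrative model of POST /db/_query: scan one index's sorted
// (value -> id) entries, honouring key or startkey/endkey, plus limit.
function query (index, { key, startkey, endkey, limit = 100 } = {}) {
  const inRange = v =>
    key !== undefined
      ? v === key
      : (startkey === undefined || v >= startkey) &&
        (endkey === undefined || v <= endkey)
  return index
    .filter(([value]) => inRange(value))
    .slice(0, limit)
    .map(([, id]) => id)
}

// index entries as [indexedValue, docId], sorted by value
const i1 = [['bat', 'b'], ['cat', 'c'], ['eagle', 'e'], ['fox', 'f']]
console.log(query(i1, { startkey: 'e', endkey: 'm' })) // [ 'e', 'f' ]
console.log(query(i1, { key: 'cat' })) // [ 'c' ]
```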
Only "pull" replication is supported, i.e. fetching data from a remote URL. A replication is started by writing a document to the `_replicator` database:
```sh
$ curl -X POST \
  -d '{"source":"https://U:[email protected]/cities","target":"cities"}' \
  http://localhost:5984/_replicator
{"ok":true,"id":"73106768769860315949fe301a75c18a","rev":"0-1"}
```
Parameters for a `_replicator` document:

- `source` - the source database (must be a URL).
- `target` - the target database (must be a local database name).
- `since` - the sequence token to start replication from (default `0`).
- `continuous` - if true, replicates from the source forever (default `false`).
- `create_target` - if true, a new target database is created (default `false`).
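In outline, a pull replication only needs to repeat two remote operations per batch: read a page of changes from the source (with document bodies), write those documents to the local target, then checkpoint the returned sequence token so an interrupted run can resume. A hedged sketch of that loop with the HTTP layer stubbed out (`readChanges`, `writeDocs` and `checkpoint` are hypothetical injected callbacks, not PostFDB's API):

```javascript
// Hypothetical pull-replication loop. readChanges(since) stands in for
// GET {source}/_changes?since=...&include_docs=true; writeDocs stands in
// for a local _bulk_docs write; checkpoint records progress for resumption.
async function pullReplicate ({ since = '0', readChanges, writeDocs, checkpoint }) {
  let seq = since
  for (;;) {
    const { last_seq, results } = await readChanges(seq)
    if (results.length === 0) return seq // caught up with the source
    await writeDocs(results.map(r => r.doc))
    seq = last_seq
    await checkpoint(seq) // a restart can resume from here
  }
}
```

Because the callbacks are injected, the same loop can be exercised against canned batches in a test without a running server.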
Replications are processed by a second process, which is run with:

```sh
$ npm run replicator
```

Only one such process should run. It polls for new replication jobs and sets them off. It will resume interrupted replications on restart.
You can check on the status of a replication by fetching the `_replicator` document you created:
```sh
$ curl http://localhost:5984/_replicator/73106768769860315949fe301a75c18a
{
  "source": "https://U:[email protected]/cities",
  "target": "cities",
  "continuous": false,
  "create_target": false,
  "state": "running",
  "seq": "5000-g1AAAAf7eJy9lM9NwzAUh",
  "doc_count": 5000,
  "_id": "73106768769860315949fe301a75c18a",
  "_rev": "0-1",
  "_i1": "running",
  "_i2": "",
  "_i3": ""
}
```
Note the additional fields:

- `state` - the state of the replication: `new`/`running`/`completed`/`error`.
- `doc_count` - the number of documents written so far.
This project doesn't come with a dashboard, but you can run PostFDB and Apache CouchDB's Fauxton dashboard alongside it:

```sh
npm install -g fauxton
fauxton
```
The application is configured using environment variables:

- `PORT` - the port that the database's web server will listen on. Default 5984.
- `USERNAME`/`PASSWORD` - to insist on authenticated connections, both `USERNAME` and `PASSWORD` must be set; the server will then require them to be supplied in every request using HTTP Basic Authentication.
- `DEBUG` - when set to `postfdb`, the PostFDB console will contain extra debugging information.
- `LOGGING` - the logging format. One of `combined`/`common`/`dev`/`short`/`tiny`/`none`. Default `dev`.
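Applying those defaults is a one-liner per option; a minimal illustrative sketch of reading the configuration (the `readConfig` function is hypothetical, though the variable names and defaults follow the list above):

```javascript
// Illustrative configuration reader for the environment variables listed
// above, applying the documented defaults.
function readConfig (env = process.env) {
  return {
    port: parseInt(env.PORT, 10) || 5984,
    username: env.USERNAME, // auth is only enforced when both are set
    password: env.PASSWORD,
    debug: env.DEBUG === 'postfdb',
    logging: env.LOGGING || 'dev'
  }
}

console.log(readConfig({})) // all defaults: port 5984, logging 'dev'
```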
See debugging messages by setting the `DEBUG` environment variable:

```sh
DEBUG=postfdb npm run start
```
Data is stored in FoundationDB, a multi-node key/value store.
Databases are stored in keys:

```
['_db', 'animaldb'] => { ... }
```

The contents of the databases are stored in keys:

```
['animaldb', 'doc', 'fox'] => { ... }
['animaldb', 'changes', 'fox', '45'] => { ... }
['animaldb', 'index', 'myindexname', 'value', 'fox'] => 'fox'
```
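The key shapes above can be produced by simple tuple builders; a sketch, where the builder functions are illustrative rather than the project's code (in a real deployment the arrays would be packed with FoundationDB's tuple layer before being stored):

```javascript
// Illustrative builders for the key layouts shown above. In practice these
// arrays would be encoded with FoundationDB's tuple layer before storage.
const dbKey = db => ['_db', db]
const docKey = (db, id) => [db, 'doc', id]
const changesKey = (db, id, seq) => [db, 'changes', id, seq]
const indexKey = (db, index, value, id) => [db, 'index', index, value, id]

console.log(docKey('animaldb', 'fox')) // [ 'animaldb', 'doc', 'fox' ]
console.log(indexKey('animaldb', 'myindexname', 'value', 'fox'))
```

Because tuple keys sort element by element, all of one database's documents (or one index's entries) form a contiguous key range, which is what makes the range queries above cheap.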