PostFDB

PostFDB is a proof-of-concept database that exposes an Apache CouchDB-like API but is backed by a FoundationDB database. It supports:

  • Create/Delete database API
  • Insert/Update/Delete document API, without requiring revision tokens.
  • Bulk Insert/Update/Delete API.
  • Fetch all documents or a range using the primary index.
  • Fetch documents by key or range of keys using secondary indexes.
  • "Pull" replication i.e. fetching data from a remote URL.

It does not implement CouchDB's MVCC, Design Documents, attachments, MapReduce, "Mango" search or any other CouchDB feature.

It does, however, provide a "consistent" data store where the documents and secondary indexes are kept in lock-step. Documents are limited to 100KB in size.


Running locally

Download this project and install the dependencies to run it on your machine:

npm install
npm run start

The application will connect to a local FoundationDB instance and start serving its API on port 5984 (CouchDB's usual port).

API Reference

Ping the service - GET /

$ curl -X GET http://localhost:5984/
{"postFDB":"Welcome","pkg":"postfdb","node":"v12.8.1","version":"1.0.0"}

Get list of databases - GET /_all_dbs

$ curl -X GET http://localhost:5984/_all_dbs
["_replicator","animaldb","cities"]

Create Database - PUT /db

$ curl -X PUT http://localhost:5984/mydb
{"ok":true}

Get Database Info - GET /db

$ curl -X GET http://localhost:5984/mydb
{"db_name":"mydb"}

Add a document (known ID) - PUT /db/id

$ curl -X PUT \
       -H 'Content-type: application/json' \
       -d '{"x": 1, "y": false, "z": "aardvark"}' \
       http://localhost:5984/mydb/a
{"ok":true,"id":"a","rev":"0-1"}

Add a document (generated ID) - POST /db

$ curl -X POST \
       -H 'Content-type: application/json' \
       -d '{"x": 2, "y": true, "z": "bear"}' \
       http://localhost:5984/mydb
{"ok":true,"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","rev":"0-1"}

Get a document by id - GET /db/id

$ curl -X GET http://localhost:5984/mydb/a
{"x":1,"y":false,"z":"aardvark","_id":"a","_rev":"0-1","_i1":"","_i2":"","_i3":""}

Get all documents - GET /db/_all_docs

$ curl -X GET http://localhost:5984/mydb/_all_docs
{"rows":[{"id":"a","key":"a","value":{"rev":"0-1"}},{"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","key":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","value":{"rev":"0-1"}},{"id":"b","key":"b","value":{"rev":"0-1"}},{"id":"c","key":"c","value":{"rev":"0-1"}},{"id":"d","key":"d","value":{"rev":"0-1"}},{"id":"e","key":"e","value":{"rev":"0-1"}},{"id":"f","key":"f","value":{"rev":"0-1"}}],"bookmark":"abc"}

Add include_docs=true to include document bodies:

$ curl -X GET http://localhost:5984/mydb/_all_docs?include_docs=true
{"rows":[{"id":"a","key":"a","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"aardvark","_id":"a","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","key":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","value":{"rev":"0-1"},"doc":{"x":2,"y":true,"z":"bear","_id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"b","key":"b","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"bat","_id":"b","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"c","key":"c","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"cat","_id":"c","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"d","key":"d","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"dog","_id":"d","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"e","key":"e","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"eagle","_id":"e","_rev":"0-1","_i1":"","_i2":"","_i3":""}},{"id":"f","key":"f","value":{"rev":"0-1"},"doc":{"x":1,"y":false,"z":"fox","_id":"f","_rev":"0-1","_i1":"","_i2":"","_i3":""}}],"bookmark":"abc"}

Add a limit parameter to reduce number of rows returned:

$ curl -X GET http://localhost:5984/mydb/_all_docs?limit=2
{"rows":[{"id":"a","key":"a","value":{"rev":"0-1"}},{"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","key":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","value":{"rev":"0-1"}}],"bookmark":"abc"}

Use startkey/endkey to fetch a range of document ids:

$ curl -X GET 'http://localhost:5984/mydb/_all_docs?startkey="b"&endkey="d"'
{"rows":[{"id":"b","key":"b","value":{"rev":"0-1"}},{"id":"c","key":"c","value":{"rev":"0-1"}},{"id":"d","key":"d","value":{"rev":"0-1"}}],"bookmark":"abc"}

Pass a bookmark from a previous call to get the next page of results (and a new bookmark):

$ curl -X GET 'http://localhost:5984/mydb/_all_docs?bookmark=abc123'
{"rows":[{"id":"b","key":"b","value":{"rev":"0-1"}},{"id":"c","key":"c","value":{"rev":"0-1"}},{"id":"d","key":"d","value":{"rev":"0-1"}}],"bookmark":"xyz987"}

Parameters:

  • startkey/endkey - one or both supplied, for range queries.
  • limit - the number of documents to return (default: 100)
  • bookmark - a pointer into the next page of search results
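
These parameters can be combined. For example, a range query capped at two rows (the output shown here is illustrative):

$ curl -X GET 'http://localhost:5984/mydb/_all_docs?startkey="b"&endkey="f"&limit=2'
{"rows":[{"id":"b","key":"b","value":{"rev":"0-1"}},{"id":"c","key":"c","value":{"rev":"0-1"}}],"bookmark":"abc"}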

Get changes feed - GET /db/_changes

$ curl -X GET http://localhost:5984/mydb/_changes
{"last_seq":"000000603c16799200000061","results":[{"changes":[{"rev":"0-1"}],"id":"001hluy43gHHub3XakCv0Mt4DL0LpMRr","seq":"000000603c16799200000062"},{"changes":[{"rev":"0-1"}],"id":"001hluy41gCxKV2lM6oV1eaRTp2apBWS","seq":"000000603c16799200000061"}]}

Parameters:

  • since - return changes after a known point. Default 0
  • include_docs - if true returns document body too. Default false
  • limit - the number of documents to return. Default 100
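
For example, to fetch only the changes after a previously seen sequence, including document bodies (the since value below is the last_seq from the earlier response, and the output is abbreviated):

$ curl -X GET 'http://localhost:5984/mydb/_changes?since=000000603c16799200000061&include_docs=true&limit=10'
{"last_seq":"...","results":[...]}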

Bulk operations - POST /db/_bulk_docs

$ curl -X POST \
       -H 'Content-type: application/json' \
       -d '{"docs":[{"x": 2, "y": true, "z": "bear"},{"_id":"abc","_deleted":true}]}' \
       http://localhost:5984/mydb/_bulk_docs
[{"ok":true,"id":"001hlstC1aW4vf189ZLf2xZ9Rq4LriwV","rev":"0-1"},{"ok":true,"id":"abc","rev":"0-1"}]

Delete a document - DELETE /db/id

$ curl -X DELETE http://localhost:5984/mydb/001hla5z2pEedb3wB5rI2Rkd0k2pzUQg
{"ok":true,"id":"001hla5z2pEedb3wB5rI2Rkd0k2pzUQg","rev":"0-1"}

Delete a database - DELETE /db

$ curl -X DELETE http://localhost:5984/mydb
{"ok":true}

Indexing

PostFDB has no MapReduce or Mango search, but it does allow any number of fields to be indexed. Fields starting with _ (except _id, _rev and _deleted) will be indexed, e.g. _i1, _myindex and _sausages. For example, your document could look like this:

{
  "_id": "abc123",
  "_i1": "1561972800000",
  "_i2": "smith",
  "_i3": "person:uk:2019-07-01",
  "type": "person",
  "name": "Bob Smith",
  "dob": "1965-04-21",
  "country": "uk",
  "lastLogin": "2019-07-01 10:20:00"
}

In this case _i1 is used to extract users by a timestamp, perhaps last login time. The _i2 index is used to extract users by surname, all lowercase. The _i3 index combines several fields: document type, country and last login date.

If documents don't need additional data indexed, then the fields can be omitted or left as empty strings.
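
Such a document is written with the ordinary document API; the indexed fields need no special treatment. For example (the values are illustrative, and the response mirrors the single-document PUT shown earlier):

$ curl -X PUT \
       -H 'Content-type: application/json' \
       -d '{"_i1": "1561972800000", "_i2": "smith", "_i3": "person:uk:2019-07-01", "type": "person", "name": "Bob Smith", "dob": "1965-04-21", "country": "uk", "lastLogin": "2019-07-01 10:20:00"}' \
       http://localhost:5984/mydb/abc123
{"ok":true,"id":"abc123","rev":"0-1"}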

The indexed data can be accessed using the POST /db/_query endpoint which expects a JSON object specifying the indexed field to query and the key or range of keys to fetch:

{ 
  "index": "i1",
  "startkey": "c",
  "endkey": "m"
}

e.g.

$ curl -X POST \
       -H 'Content-type: application/json' \
       -d '{"index": "i1", "startkey": "e", "endkey": "m"}' \
       http://localhost:5984/mydb/_query
{"docs":[...]}

Parameters:

  • index - the name of index to query (mandatory).
  • startkey/endkey - one or both supplied, for range queries.
  • key - the key in the index to search for, for selection queries.
  • limit - the number of documents to return (default: 100)
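
For example, a selection query against the surname index from the document shown earlier (a sketch; the documents returned depend on what has been indexed):

$ curl -X POST \
       -H 'Content-type: application/json' \
       -d '{"index": "i2", "key": "smith"}' \
       http://localhost:5984/mydb/_query
{"docs":[...]}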

Replication

Only "pull" replication is supported i.e. fetching data from a remote URL. A replication is started by writing a data to the _replicator database:

$ curl -X POST \
       -H 'Content-type: application/json' \
       -d '{"source":"https://U:[email protected]/cities","target":"cities"}' \
       http://localhost:5984/_replicator
{"ok":true,"id":"73106768769860315949fe301a75c18a","rev":"0-1"}

Parameters for a _replicator document:

  • source - the source database (must be a URL)
  • target - the target database (must be a local database name)
  • since - the sequence token to start replication from (default 0)
  • continuous - if true, replicates from the source forever (default false)
  • create_target - if true, a new target database is created (default false)
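
For example, to continuously replicate into a new local database (the source URL and target name here are illustrative):

$ curl -X POST \
       -H 'Content-type: application/json' \
       -d '{"source":"https://U:[email protected]/cities","target":"cities2","create_target":true,"continuous":true}' \
       http://localhost:5984/_replicator
{"ok":true,"id":"...","rev":"0-1"}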

Replications are processed by a second process which is run with:

$ npm run replicator

Only one such process should run. It polls for new replication jobs and starts them, and it will resume interrupted replications on restart.

You can check on the status of a replication by pulling the _replicator document you created:

$ curl http://localhost:5984/_replicator/73106768769860315949fe301a75c18a
{
  "source": "https://U:[email protected]/cities",
  "target": "cities",
  "continuous": false,
  "create_target": false,
  "state": "running",
  "seq": "5000-g1AAAAf7eJy9lM9NwzAUh",
  "doc_count": 5000,
  "_id": "73106768769860315949fe301a75c18a",
  "_rev": "0-1",
  "_i1": "running",
  "_i2": "",
  "_i3": ""
}

Note the additional fields:

  • state - the state of the replication new/running/completed/error
  • doc_count - the number of documents written so far.
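
Because the replication state is mirrored into the _i1 index field (as shown above), running replications can be listed with the _query endpoint. This is a sketch that assumes the _replicator database supports _query like any other database:

$ curl -X POST \
       -H 'Content-type: application/json' \
       -d '{"index": "i1", "key": "running"}' \
       http://localhost:5984/_replicator/_query
{"docs":[...]}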

Dashboard

This project doesn't come with a dashboard, but you can run PostFDB alongside Apache CouchDB's Fauxton dashboard:

npm install -g fauxton
fauxton

Configuring

The application is configured using environment variables:

  • PORT - the port that the database's web server will listen on. Default 5984.
  • USERNAME/PASSWORD - to require authenticated connections, set both USERNAME and PASSWORD; the server will then expect them on every request using HTTP Basic Authentication (see the example after this list).
  • DEBUG - when set to postfdb the PostFDB console will contain extra debugging information.
  • LOGGING - the logging format. One of combined/common/dev/short/tiny/none. Default dev.
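
For example, to run on a different port with authentication enabled (the credentials here are illustrative):

PORT=8080 USERNAME=admin PASSWORD=secret npm run start

Requests must then supply the same credentials, e.g. with curl's -u flag:

$ curl -u admin:secret http://localhost:8080/_all_dbs
["_replicator","animaldb","cities"]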

Debugging

See debugging messages by setting the DEBUG environment variable:

DEBUG=postfdb npm run start
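
The replicator process can be debugged in the same way, assuming it is started with the same environment variable:

DEBUG=postfdb npm run replicator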

How does it work?

Data is stored in FoundationDB, a multi-node key/value store.

Databases are stored in keys:

['_db','animaldb'] => { ... }

The contents of the databases are stored in keys:

['animaldb', 'doc', 'fox'] => { ... }
['animaldb', 'changes', 'fox', '45'] => { ... }
['animaldb', 'index', 'myindexname', 'value', 'fox'] => 'fox'
