
Dgraph

Scalable, Distributed, Low Latency, High Throughput Graph Database.


Dgraph's goal is to provide Google production-level scale and throughput, with low enough latency to serve real-time user queries over terabytes of structured data. Dgraph supports GraphQL as its query language and responds in JSON.


Note that we use the GitHub issue tracker for bug reports only. For feature requests or questions, visit https://discuss.dgraph.io.


We use the Git Flow branching model, so please send your pull requests against the develop branch.


The README is divided into these sections:

  • Current Status
  • Quick Testing
  • Installation
  • Usage
  • Queries and Mutations
  • Contributing to Dgraph
  • Contact
  • Talks
  • About

Current Status

Check out the demo at dgraph.io.

Upcoming - v0.4: Follow our Trello board for progress. Got questions or issues? Talk to us via discuss.

May 2016 - Tag v0.3: This release contains a more efficient binary protocol client and the ability to query the first N results (first: N). Please see the release notes and the Trello board for more information.

Mar 2016 - Branch v0.2: This is the first truly distributed version of Dgraph. Please see the release notes here.

MVP launch - Dec 2015 - Branch v0.1: This is a minimum viable product, alpha release of Dgraph. It's not meant for production use. This version is not distributed, and support for GraphQL is partial. See the Roadmap for a list of working and planned features.

Your feedback is welcome. Feel free to file issues when you encounter bugs, and to help direct the development of Dgraph.

There's an instance of Dgraph running at http://dgraph.xyz that you can query without installing Dgraph. This instance contains 21M facts from Freebase Film Data. See Queries and Mutations below for sample queries.

curl dgraph.xyz/query -XPOST -d '{}'

Quick Testing

Single instance via Docker

There's a Docker image that you can use to start playing with Dgraph right away.

$ docker pull dgraph/dgraph:latest
# Mounting a host directory (/somedir here) at /dgraph will persist your data.
$ docker run -t -i -v /somedir:/dgraph -p 80:8080 dgraph/dgraph:latest

Now that you're inside the Docker container, you can start the server.

$ mkdir /dgraph/m # Ensure mutations directory exists.
$ dgraph --mutations /dgraph/m --postings /dgraph/p --uids /dgraph/u

There are more options that you can change; run dgraph --help to see them.

Run some mutations and query the server, like so:

# Make Alice follow Bob, and give them names.
$ curl localhost:80/query -X POST -d $'mutation { set {<alice> <follows> <bob> . \n <alice> <name> "Alice" . \n <bob> <name> "Bob" . }}'

# Now run a query to find all the people Alice follows 2 levels deep. The query would only result in 1 connection, Alice to Bob.
$ curl localhost:80/query -X POST -d '{me(_xid_: alice) { name _xid_ follows { name _xid_ follows {name _xid_ } } }}'

# Make Bob follow Greg.
$ curl localhost:80/query -X POST -d $'mutation { set {<bob> <follows> <greg> . \n <greg> <name> "Greg" .}}'

# The same query as above would now show 2 connections: one from Alice to Bob, another from Bob to Greg.
$ curl localhost:80/query -X POST -d '{me(_xid_: alice) { name _xid_ follows { name _xid_ follows {name _xid_ } } }}'

Note how we can retrieve XIDs by using the _xid_ identifier.

Multiple distributed instances

We have loaded 21M RDFs from the Freebase Films data, along with their names, into 3 shards. They're located in the dgraph-io/benchmarks repository. To use them, install Git LFS first; a typical setup is sketched below. I've found the Linux download to be the easiest way to install it. Note that this repository holds over 1GB worth of data.
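For example, on Debian/Ubuntu, the setup might look like this (assuming the git-lfs package is available in your configured repositories; otherwise, use the Linux download from the Git LFS releases page):

# Install Git LFS and enable it for your user account.
$ sudo apt-get install git-lfs
$ git lfs install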

$ git clone https://github.com/dgraph-io/benchmarks.git
$ cd benchmarks/rocks
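# Set $DIR to wherever you want the extracted data to live;
# the path below is just an example.
$ export DIR=$HOME/dgraph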
$ tar -xzvf uids.async.tar.gz -C $DIR
$ tar -xzvf postings.tar.gz -C $DIR
# You should now see directories p0, p1, p2 and uasync.final. The last directory name is unfortunate, but made sense at the time.

For quick testing, you can bring up 3 different Dgraph processes. You can, of course, also set this up across multiple servers.

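# Each process lists all worker addresses in --workers and claims its
# own address via --workerport.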
go build . && ./dgraph --instanceIdx 0 --mutations $DIR/m0 --port 8080 --postings $DIR/p0 --workers ":12345,:12346,:12347" --uids $DIR/uasync.final --workerport ":12345" &
go build . && ./dgraph --instanceIdx 1 --mutations $DIR/m1 --port 8082 --postings $DIR/p1 --workers ":12345,:12346,:12347" --workerport ":12346" &
go build . && ./dgraph --instanceIdx 2 --mutations $DIR/m2 --port 8084 --postings $DIR/p2 --workers ":12345,:12346,:12347" --workerport ":12347" &

Now you can run any of the queries mentioned in Test Queries. You can hit any of the 3 processes; they'll all produce the same results.

curl localhost:8080/query -XPOST -d '{}'

Installation

The best way to install Dgraph is to refer to the Dockerfile, which has the most complete instructions for getting the right setup. All the instructions below assume a Debian/Ubuntu system.

Install Go 1.6

Download and install Go 1.6 from here.
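A typical tarball install on 64-bit Linux might look like this (the archive name and download URL depend on your platform; grab the right one from the Go downloads page):

# Download and unpack Go 1.6, then put it on your PATH.
$ wget https://storage.googleapis.com/golang/go1.6.linux-amd64.tar.gz
$ sudo tar -C /usr/local -xzf go1.6.linux-amd64.tar.gz
$ export PATH=$PATH:/usr/local/go/bin
$ go version  # should print go1.6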

Install RocksDB

Dgraph depends on RocksDB for storing posting lists.

# First install dependencies.
# For Ubuntu, follow the steps below. For others, refer to the INSTALL file in RocksDB.
$ sudo apt-get update && sudo apt-get install libgflags-dev libsnappy-dev zlib1g-dev libbz2-dev
$ git clone https://github.com/facebook/rocksdb.git
$ cd rocksdb
$ git checkout v4.2
$ make shared_lib
$ sudo make install

This installs the RocksDB library in /usr/local/lib. Make sure that your LD_LIBRARY_PATH points to it.

# In ~/.bashrc
export LD_LIBRARY_PATH="/usr/local/lib"

Install Dgraph

Now get the Dgraph code. Dgraph uses govendor to pin dependency versions. Version information for these dependencies is kept in the vendor.json file under the github.com/dgraph-io/dgraph/vendor directory.

go get -u github.com/kardianos/govendor
# cd to dgraph codebase root directory e.g. $GOPATH/src/github.com/dgraph-io/dgraph
govendor sync

# Optional
go test github.com/dgraph-io/dgraph/...

See govendor for more information.

Usage

Distributed Bulk Data Loading

Let's load up some data first. If you have RDF data, you can use that. Otherwise, there's Freebase film RDF data here.

Bulk data loading happens in 2 passes.

First Pass: UID Assignment

We first find all the entities in the data and allocate UIDs for them. You can run this either as a single instance or over multiple instances.

Here we set the number of instances to 2.

$ cd $GOPATH/src/github.com/dgraph-io/dgraph/dgraph/dgraphassigner

# Run instance 0.
$ go build . && ./dgraphassigner --numInstances 2 --instanceIdx 0 --rdfgzips $BENCHMARK_REPO/data/rdf-films.gz,$BENCHMARK_REPO/data/names.gz --uids ~/dgraph/uids/u0

# And either later, or on another server, run instance 1.
$ go build . && ./dgraphassigner --numInstances 2 --instanceIdx 1 --rdfgzips $BENCHMARK_REPO/data/rdf-films.gz,$BENCHMARK_REPO/data/names.gz --uids ~/dgraph/uids/u1

Once the shards are generated, you need to merge them before the second pass. If you ran this as a single instance, merging isn't required.

$ cd $GOPATH/src/github.com/dgraph-io/dgraph/tools/merge
$ go build . && ./merge --stores ~/dgraph/uids --dest ~/dgraph/uasync.final

The above command iterates over all the directories in ~/dgraph/uids and merges their data into a single ~/dgraph/uasync.final. Note that this merge step is important if you're generating UIDs across multiple instances, because all the loader instances need access to the global UID list.

Second Pass: Data Loader

Now that we have assigned UIDs for all the entities, the data is ready to be loaded.

Let's do this step with 3 instances.

$ cd $GOPATH/src/github.com/dgraph-io/dgraph/dgraph/dgraphloader
$ go build . && ./dgraphloader --numInstances 3 --instanceIdx 0 --rdfgzips $BENCHMARK_REPO/data/names.gz,$BENCHMARK_REPO/data/rdf-films.gz --uids ~/dgraph/uasync.final --postings ~/dgraph/p0
$ go build . && ./dgraphloader --numInstances 3 --instanceIdx 1 --rdfgzips $BENCHMARK_REPO/data/names.gz,$BENCHMARK_REPO/data/rdf-films.gz --uids ~/dgraph/uasync.final --postings ~/dgraph/p1
$ go build . && ./dgraphloader --numInstances 3 --instanceIdx 2 --rdfgzips $BENCHMARK_REPO/data/names.gz,$BENCHMARK_REPO/data/rdf-films.gz --uids ~/dgraph/uasync.final --postings ~/dgraph/p2

You can run these over multiple machines, or just one after another.

Loader performance

The loader is typically memory bound. Every mutation loads a posting list into memory, and mutations are applied in layers above the posting lists. While the loader doesn't write to disk every time a mutation happens, it does periodically merge all the mutations into the posting lists and writes them to RocksDB, which persists them.

There are 2 types of merging: gentle merge and aggressive merge. A gentle merge picks up N% of the dirty posting lists, where N is currently 7, and merges them; for example, with 100,000 dirty posting lists, each gentle pass merges about 7,000 of them. This happens every 5 seconds.

An aggressive merge happens when memory usage goes above stw_ram_mb. When that happens, the loader stops the world, starts the merge process, and evicts all posting lists from memory. The more memory the loader has to work with, the less frequently it needs to merge aggressively, and the faster the load.

As a reference point, for instances 0 and 1, it took 11 minutes each to load 21M RDFs from rdf-films.gz and names.gz (from the benchmarks repository) on an n1-standard-4 GCE instance using an SSD persistent disk. Instance 2 took a bit longer and finished in 15 minutes. The total output, including UIDs, was 1.3GB.

Note that stw_ram_mb is based on the memory usage perceived by Go. It currently doesn't take into account the memory used by RocksDB, so the actual usage is higher.

Server

Now that the data is loaded, you can run the Dgraph servers. To serve the 3 shards above, follow the same steps as here. You can then run GraphQL queries over the Freebase film data, like so:

curl localhost:8080/query -XPOST -d '{
	me(_xid_: m.06pj8) {
		type.object.name.en
		film.director.film {
			type.object.name.en
			film.film.starring {
				film.performance.character {
					type.object.name.en
				}
				film.performance.actor {
					type.object.name.en
					film.director.film {
						type.object.name.en
					}
				}
			}
			film.film.initial_release_date
			film.film.country
			film.film.genre {
				type.object.name.en
			}
		}
	}
}' > output.json

This query finds all movies directed by Steven Spielberg; their names, initial release dates, countries, genres, and cast, i.e. the characters and the actors playing those characters; and all the movies directed by those actors, if any.

The support for GraphQL is very limited right now. You can conveniently browse the Freebase film schema here. There are also some schema pointers in the README.

Query Performance

With the data loaded above, on the same hardware, it took 218ms to run the fairly complicated query above the first time after server start. Note that the JSON conversion step has a bit more overhead than what's captured here.

{
  "server_latency": {
      "json": "37.864027ms",
      "parsing": "1.141712ms",
      "processing": "163.136465ms",
      "total": "202.144938ms"
  }
}

Consecutive runs of the same query took much less time (80 to 100ms), due to the posting lists being available in memory.

{
  "server_latency": {
    "json": "38.3306ms",
    "parsing": "506.708µs",
    "processing": "32.239213ms",
    "total": "71.079022ms"
  }
}

Queries and Mutations

You can see a list of sample queries here. Dgraph also supports mutations via GraphQL syntax. Because GraphQL mutations don't contain complete data, the mutation syntax uses the RDF N-Quad format.

mutation {
  set {
    <subject> <predicate> <objectid> .
    <subject> <predicate> "Object Value" .
    <subject> <predicate> "объект"@ru .
    _uid_:0xabcdef <predicate> <objectid> .
  }
}

You can batch multiple N-Quads in a single GraphQL query. Dgraph assumes that any data in <> is an external ID (XID), and it automatically retrieves or assigns a unique internal ID (UID) for each one. You can also specify the UID directly, like so: _uid_: 0xhexval or _uid_: intval.

Note that a delete operation isn't supported yet.

In addition, you can couple a mutation with a follow-up query in a single GraphQL request, like so:

mutation {
  set {
    <alice> <follows> <greg> .
  }
}
query {
  me(_xid_: alice) {
    follows
  }
}

The query portion is executed after the mutation, so this would return greg as one of the results.
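For example, against the single-instance setup from Quick Testing, the combined request above could be sent in one curl call (a sketch; the $'...' quoting is just one way to embed the newlines):

$ curl localhost:80/query -X POST -d $'mutation { set { <alice> <follows> <greg> . }}\nquery { me(_xid_: alice) { follows } }'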

Contributing to Dgraph

  • See a list of issues that we need help with.
  • Please see contributing to Dgraph for guidelines on contributions.
  • Alpha Program: If you want to contribute to Dgraph on a continuous basis and need some Bitcoins to pay for healthy food, talk to us.

Contact

  • Please use discuss.dgraph.io for documentation, questions, feature requests and discussions.
  • Please use the GitHub issue tracker ONLY to file bugs. Any feature request should go to discuss.
  • Or, just join us on Slack.

Talks

About

I, Manish R Jain, the author of Dgraph, used to work on the Google Knowledge Graph. My experience building large-scale, distributed (web search and) graph systems at Google is what inspired me to build this.

