
flumeview-query

A flumeview with map-filter-reduce queries

Motivation

This module exists because I needed to query things in secure-scuttlebutt in a flexible way. A previous exploration of this general idea was mynosql. Yes, the joke is that it's SQL, but for NoSQL. SQL is actually totally functional (just with weird names):

map, filter, reduce == select, where, group-by !

Except with none of those icky schemas that just get you down! (Anyway, with ssb we really can't enforce schemas, because of privacy-oriented decentralization.)

Example

var pull = require('pull-stream')
var Flume = require('flumedb')
var FlumeQuery = require('flumeview-query')

//log is a flumelog instance, indexes is an array of index definitions (see below);
//the version (here 1) must be a number
var db = Flume(log).use('query', FlumeQuery(1, {indexes: indexes}))

//write a batch of data to the log
db.append([{
  foo: true,
  bar: 5,
  nested: {baz: 'okay'}
},
{
  foo: false,
  bar: 6,
  nested: {baz: 'okay'}
},
{
  foo: false,
  bar: 7,
  nested: {baz: 'not-okay'}
},
], function (err) {
  //filter for all records which match the query
  pull(
    db.query.read({query: [
      {$filter: {nested: {baz: 'okay'}}}
    ]}),
    pull.collect(function (err, ary) {
      console.log(ary)
      //outputs the first and second items inserted above.
    })
  )
})

New API (proposed)

flumeview-query shouldn't hold the indexes itself; it should just know what to do with them. Instead of passing in the indexes to create, pass in an index which can be used. That index would need to expose which paths it indexes, of course.

query.add(index, createStream) //add an index

Indexes

The indexes argument is an array of indexes that flumeview-query can consult to perform a fast query.

Each index is an object with a key (a short name for the index; this is stored with every record, so around 3 characters is recommended) and a value listing the fields being indexed.

For example, ssb-query's indexes look like:

[
  {key: 'log', value: ['timestamp']},
  {key: 'clk', value: [['value', 'author'], ['value', 'sequence']] },
  {key: 'typ', value: [['value', 'content', 'type'], ['timestamp']] },
  {key: 'cha', value: [['value', 'content', 'channel'], ['timestamp']] },
  {key: 'aty', value: [['value', 'author'], ['value', 'content', 'type'], ['timestamp']]}
]

Indexes can cover a single field or multiple fields (the latter are called "compound indexes"). Each item in an index must be unique; that is why most of the indexes end in timestamp. author:sequence is also unique in ssb, so that index doesn't need timestamp. The uniqueness is not enforced by flumeview-query; it is the responsibility of the index designer.
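To make the "paths" idea concrete, here is a minimal sketch of the values a compound index would combine for one record. This is illustrative only, not the actual flumeview-query internals; get is a hypothetical helper, and the record is made up.

```javascript
// Hypothetical helper: look up a nested value by path (an array of keys).
function get (obj, path) {
  return path.reduce(function (o, key) {
    return o == null ? undefined : o[key]
  }, obj)
}

// The 'aty' index paths from the example above:
var atyPaths = [['value', 'author'], ['value', 'content', 'type'], ['timestamp']]

var record = {
  timestamp: 1500000000000,
  value: { author: '@bob', content: { type: 'post', text: 'hi' } }
}

// A compound index stores, per record, the value at each path, in order:
var key = atyPaths.map(function (p) { return get(record, p) })
console.log(key) // ['@bob', 'post', 1500000000000]
```

Note how timestamp at the end makes the combined key unique even when two records share author and type.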

Compound indexes optimize queries with multiple fields. For example, a query like "all posts by @bob", which is {value: {author: @bob, content: {type: 'post'}}}, uses the aty index.

If a query matches all fields in the index, it will return one item (or zero, if no record has those values); otherwise, results are returned in the order of the index.

Properties in the index are used from left to right. A query for "messages from @bob received since yesterday", {author: @bob, timestamp: {$gt: yesterday}}, cannot use the author and timestamp fields in aty, because that leaves a gap at the value.content.type field. With these indexes, this query would use part of a compound index (clk, because it matches first), read all the messages by @bob, and filter out the records not matching the other query parameter (timestamp: {$gt: yesterday}). This is called a "partial scan". A partial scan is clearly less efficient than a fully matched index, but not as bad as a full scan (which reads the entire database!)

Queries with compound indexes will come back sorted by the last index field matched. Therefore, put the fields you expect to match exactly first!
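The left-to-right rule above can be sketched as a tiny prefix check. This is a simplification for illustration (the real selection logic in flumeview-query is more involved, and usablePrefix is a made-up name): an index can answer a query exactly only while each of its paths, in order, has an exact value in the query; a range (e.g. $gt) or a missing field ends the usable prefix.

```javascript
// How many leading paths of an index are exactly matched by a query?
// indexPaths and exactPaths are arrays of paths (arrays of keys).
function usablePrefix (indexPaths, exactPaths) {
  var has = exactPaths.map(String) // 'value,author' etc., for easy lookup
  var n = 0
  for (var i = 0; i < indexPaths.length; i++) {
    if (has.indexOf(String(indexPaths[i])) >= 0) n++
    else break // a gap: nothing to the right can be used
  }
  return n
}

var aty = [['value', 'author'], ['value', 'content', 'type'], ['timestamp']]

// "messages from @bob since yesterday": exact only on value.author,
// a range on timestamp, and nothing on value.content.type:
console.log(usablePrefix(aty, [['value', 'author']])) // 1 -> partial scan

// "all posts by @bob": exact on author and type, a longer usable prefix:
console.log(usablePrefix(aty, [['value', 'author'], ['value', 'content', 'type']])) // 2
```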

Queries

This module uses map-filter-reduce queries. If the filter stage uses fields that are in an index, then flumeview-query can choose the best index and perform many queries very quickly.

See map-filter-reduce for documentation of the syntax, and for example queries performed on top of secure-scuttlebutt.
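A couple of example queries, written as plain data. The syntax follows the map-filter-reduce module; the field names and values here are illustrative, not part of any real schema.

```javascript
// Filter: all records whose nested type field is 'post'.
var posts = [
  { $filter: { value: { content: { type: 'post' } } } }
]

// Filter with a range, then project just two fields with $map
// ($map values are paths into the record):
var recentByBob = [
  { $filter: { value: { author: '@bob' }, timestamp: { $gt: 1500000000000 } } },
  { $map: { author: ['value', 'author'], text: ['value', 'content', 'text'] } }
]

console.log(posts.length, recentByBob.length) // 1 2
```

These arrays are what you pass as the query option to query.read, described below.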

api : flumedb.use("query", FlumeViewQuery(version, opts))

version must be a number. When you update any options, change the version and the index will be rebuilt. opts is the options object; in particular, {indexes: [...]} is mandatory.

Here we use the name "query", but you can use any name.

api : FlumeViewQuery(version, {indexes, filter, map}) => FlumeView: query

As required by every flumeview, version is an integer: change it when the indexes or other settings change, and the view will be rebuilt. indexes is mandatory and lists the paths supported. links is an optional function used for mapping a value into one or more values for the index. When you change indexes or links, bump the version and the index will rebuild.

query.read({query:MFR_query, limit, reverse, live, old})

Perform the query! limit, reverse, live, and old are standard, as with other flume streams. unlinkedValues is an option that can be used to include values that are not part of the index in the return value.
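A sketch of a read call with the standard stream options spelled out (the db and pull come from the main example above; the values chosen here are just for illustration):

```javascript
var opts = {
  query: [{ $filter: { nested: { baz: 'okay' } } }],
  limit: 10,      // at most 10 results
  reverse: true,  // walk the index in reverse order
  live: false,    // don't keep the stream open for new records
  old: true       // include records already in the log
}
// Then, as in the example above:
// pull(db.query.read(opts), pull.collect(function (err, ary) { ... }))
console.log(Object.keys(opts)) // ['query', 'limit', 'reverse', 'live', 'old']
```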

query.explain({query:MFR_query, limit, reverse, live, old}) => obj

Figure out what indexes are best to use to perform a query, but do not actually run the query! If a query is slow or doesn't seem to be working right, this method can be used to understand what is really going on. If the return value is {scan: true} that means no indexes are being used. If an index is selected, that should mean it's more efficient, but it might still be filtering the output.
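The only result shape documented above is {scan: true}; a small sketch of how you might act on it (describePlan is a hypothetical helper, and the non-scan shape is assumed for illustration):

```javascript
// Interpret an explain() result: the documented signal is scan: true,
// meaning no index is used and the whole database will be read.
function describePlan (plan) {
  if (plan && plan.scan) return 'full scan: no index used'
  return 'index used (may still filter some output)'
}

console.log(describePlan({ scan: true }))   // full scan: no index used
console.log(describePlan({ index: 'aty' })) // index used (may still filter some output)
```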

License

MIT

flumeview-query's People

Contributors: arj03, clehner, dangerousbeans, dominictarr, mixmix, mmckegg

flumeview-query's Issues

idea: support high entropy matches

In secure-scuttlebutt, there are many values that are high entropy, i.e. ids (feeds, messages, and blobs). Since these are essentially random, there is no reason to query them as ranges; usually they are retrieved with exact queries.

for example, you could request all replies in a thread like this:

[{$filter: { value: {content: { root: <thread_id> } } } }]

this is a valid query, but would unfortunately produce a full-scan (i.e. read the entire database, a very inefficient query!).

Currently, we have indexes that match a given path, but we could also have indexes that match a given value. Such an index would match a particular value wherever it appears in the object. So this query would return replies and likes and backlinks, like https://github.com/ssbc/ssb-backlinks does, and then the non-matches would be filtered out (which would generally be an efficient query!). This means we could replace backlinks and do pretty much all the message queries via ssb-query.
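To make the proposal concrete, here is a sketch of what such a "match by value" index might record for one message. This is hypothetical (the issue is a proposal, not an implementation); collectIds is a made-up name, and the id test relies only on the ssb convention that feed, message, and blob ids start with @, %, and & respectively.

```javascript
// Walk a message and emit every id-like value together with the path
// where it appears; a value index would store (value -> path, seq) pairs.
function collectIds (obj, path, out) {
  path = path || []; out = out || []
  for (var k in obj) {
    var v = obj[k]
    if (typeof v === 'string' && /^[@%&]/.test(v)) // ssb ids start with @, % or &
      out.push({ path: path.concat(k), value: v })
    else if (v && typeof v === 'object')
      collectIds(v, path.concat(k), out)
  }
  return out
}

var msg = { value: { author: '@bob', content: { root: '%thread1', text: 'hi' } } }
var found = collectIds(msg)
console.log(found)
// [ { path: ['value','author'], value: '@bob' },
//   { path: ['value','content','root'], value: '%thread1' } ]
```

A lookup for %thread1 would then find this message whether the id sits in root, branch, or anywhere else.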

@arj03 @mmckegg @mixmix

allow passing in an unbox function

It would be nice to be able to use flumeview-query/links against private messages also.

Could be supported by adding the ability to pass an unbox function in that would be used to decode any encrypted messages.

In the ssbc/ssb-backlinks fork of links, I just whacked an unbox option on the constructor, but this isn't very nice. Maybe an options argument would do the trick?
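A rough sketch of what the option could do before indexing each message (hypothetical API; maybeUnbox and the stand-in decode function are made up, not real ssb code — only the '.box' suffix convention for encrypted content comes from ssb):

```javascript
// If content is an encrypted string, try the user-supplied unbox;
// if we can't decode it, skip it (return null) rather than index garbage.
function maybeUnbox (msg, unbox) {
  if (typeof msg.content === 'string' && /\.box$/.test(msg.content))
    return unbox ? unbox(msg) : null
  return msg
}

var fakeUnbox = function (msg) { return { content: { type: 'post', text: 'secret' } } }
var plain = maybeUnbox({ content: { type: 'post' } }, fakeUnbox)
var boxed = maybeUnbox({ content: 'abc.box' }, fakeUnbox)
console.log(plain.content.type, boxed.content.text) // post secret
```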

https://github.com/ssbc/ssb-backlinks/blob/master/lib/flumeview-links-raw.js#L36-L38

This would help close ssbc/ssb-backlinks#4

option to include original object in `read`

Maybe add a {values: true} option to links that allows you to retrieve the original message, so that you don't need to do a separate lookup in the client.

Or this could become the default behaviour and match flumeview-query/index.

This would help close ssbc/ssb-backlinks#4

Don't create FlumeViewLevel instance, if indexes.length === 0

In a test, I tried to

var Flume = require('flumedb')
var MemLog = require('flumelog-memory')
var FlumeQuery = require('flumeview-query')
var pull = require('pull-stream')

var indexes = []

var db = Flume(MemLog())
  .use('query', FlumeQuery(indexes))

db.append({foo: 1}, function (err, seq) {
  console.log(err, seq)
  pull(
    db.query.read({
      $filter: {foo: 1}
    }),
    pull.log()
  )
})

it fails with

Error: flumeview-level can only be used with a log that provides a directory
    at create (/Users/regular/dev/ssb/npm-ssb/node_modules/flumeview-level/index.js:22:15)
    at /Users/regular/dev/ssb/npm-ssb/node_modules/flumeview-level/index.js:13:14
    at /Users/regular/dev/ssb/npm-ssb/node_modules/flumeview-query/index.js:35:17

My understanding: FlumeViewLevel needs the log to have a file property so that it knows where to put its db for the index. The rest of the code seems to be fine if indexes=[], so not creating FlumeViewLevel in this case should enable combining flumelog-memory with flumeview-query, right? This would be nice for unit testing. Another path could be leveldb in memory.

If this makes sense to you, I could work on a PR. Any thoughts?
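A minimal sketch of the guard the reporter is suggesting (hypothetical; createIndexStore is a made-up name, not the actual flumeview-query code): only construct the level-backed store when there is at least one index to maintain.

```javascript
// Skip FlumeViewLevel entirely when there are no indexes, so logs
// without a directory (e.g. flumelog-memory) can still be used.
function createIndexStore (indexes, createFlumeViewLevel) {
  if (!indexes || indexes.length === 0) return null
  return createFlumeViewLevel(indexes)
}

var store = createIndexStore([], function () {
  throw new Error('should not be called for empty indexes')
})
console.log(store) // null: no FlumeViewLevel created
```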

Could most of this factor out?

I was reading through the implementation and it looks like the only link-specific thing is tied up in the emitter:

function links (doc, emit) {
  emit({source: s, dest: d, rel: r, ts: ts})
}
...
StreamviewLinks(dirname, indexes, links, version)

If this method didn't emit links, but some other extracted data, wouldn't this be pretty general?
