
tectonicdb's Introduction

tectonicdb

Crates: tectonicdb, tdb-core, tdb-server-core, tdb-cli (published on crates.io; API docs on docs.rs)

tectonicdb is a fast, highly compressed standalone database and streaming protocol for order book ticks.

Why

  • Uses a simple and efficient binary file format: Dense Tick Format (DTF)

  • Stores order book tick data as tuples of shape: (timestamp, seq, is_trade, is_bid, price, size).

  • Sorted by timestamp + seq

  • 12 bytes per orderbook event

  • 600,000 inserts per thread per second
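
The 12-byte figure above can be sanity-checked with a quick sketch. The layout below (u16 timestamp delta, u8 sequence delta, u8 flags, two f32s) is one plausible way to fit an event into 12 bytes; it is an illustration only, not the actual DTF wire format.

```python
import struct

# Hypothetical 12-byte row layout, for illustration only -- the real DTF
# encoding is delta-based, but its exact field layout may differ:
#   u16 ts delta (ms) | u8 seq delta | u8 flags | f32 price | f32 size
ROW = struct.Struct("<HBBff")

def pack_event(ts_delta_ms, seq_delta, is_trade, is_bid, price, size):
    # Pack the two booleans into a single flags byte.
    flags = (int(is_trade) << 0) | (int(is_bid) << 1)
    return ROW.pack(ts_delta_ms, seq_delta, flags, price, size)

row = pack_event(685, 1, True, False, 0.070362, 7.6506424)
assert len(row) == 12  # matches "12 bytes per orderbook event"
```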

Installation

There are several ways to install tectonicdb.

  1. Binaries

Binaries are available for download. Make sure to put the path to the binary into your PATH. Currently the only prebuilt binary is for Linux x86_64.

  2. Crates

cargo install tectonicdb

This command will download the tdb, tdb-server, and dtftools binaries from crates.io and build them locally.

  3. GitHub

To contribute, you will need a copy of the source code on your local machine.

git clone https://github.com/0b01/tectonicdb
cd tectonicdb
cargo build --release
cargo run --release --bin tdb-server

The binaries can be found under target/release.

How to use

It's very easy to set up.

./tdb-server --help

For example:

./tdb-server -vv -a -i 10000
# run the server at INFO verbosity
# turn on autoflush, writing to disk every 10000 inserts per orderbook

Configuration

To configure the Google Cloud Storage and Data Collection Backend integrations, the following environment variables are used:

Variable Name Default Description
TDB_HOST 0.0.0.0 The host to which the database will bind
TDB_PORT 9001 The port that the database will listen on
TDB_DTF_FOLDER db Name of the directory in which DTF files will be stored
TDB_AUTOFLUSH false If true, recorded orderbook data will automatically be flushed to DTF files every interval inserts.
TDB_FLUSH_INTERVAL 1000 Every interval inserts, if autoflush is enabled, DTF files will be written from memory to disk.
TDB_GRANULARITY 0 Record history granularity level
TDB_LOG_FILE_NAME tdb.log Filename of the log file for the database
TDB_Q_CAPACITY 300 Capacity of the circular queue for recording history
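
For instance, autoflush can be enabled through the environment instead of command-line flags. A minimal sketch (the variable names come from the table above; the actual launch is commented out because it assumes a built tdb-server binary):

```python
import os
import subprocess

# Build an environment roughly equivalent to `./tdb-server -a -i 10000`,
# using the variable names documented above.
env = dict(
    os.environ,
    TDB_HOST="0.0.0.0",
    TDB_PORT="9001",
    TDB_DTF_FOLDER="db",
    TDB_AUTOFLUSH="true",
    TDB_FLUSH_INTERVAL="10000",
)
# subprocess.run(["./tdb-server"], env=env)  # requires the tdb-server binary
```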

Client API

Command Description
HELP Prints help
PING Responds PONG
INFO Returns info about table schemas
PERF Returns the count of items over time
LOAD [orderbook] Load orderbook from disk to memory
USE [orderbook] Switch the current orderbook
CREATE [orderbook] Create orderbook
GET [n] FROM [orderbook] Returns n items from the given orderbook
GET [n] Returns n items from current orderbook
COUNT Count of items in current orderbook
COUNT ALL Returns total count from all orderbooks
CLEAR Deletes everything in current orderbook
CLEAR ALL Drops everything in memory
FLUSH Flush current orderbook to disk
FLUSHALL Flush everything from memory to disk
SUBSCRIBE [orderbook] Subscribe to updates from orderbook
EXISTS [orderbook] Checks if orderbook exists

Data commands

USE [dbname]
ADD [ts], [seq], [is_trade], [is_bid], [price], [size];
INSERT 1505177459.685, 139010, t, f, 0.0703620, 7.65064240; INTO dbname
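
A client only needs to send these commands as text over TCP. The helper below builds the command strings shown above from a tick tuple; it is an illustrative sketch, not part of the shipped client libraries, and response handling is omitted.

```python
def format_insert(ts, seq, is_trade, is_bid, price, size, dbname=None):
    """Build an ADD (current orderbook) or INSERT ... INTO command string.

    Illustrative helper only; follows the command syntax shown above.
    """
    flag = lambda b: "t" if b else "f"
    row = f"{ts}, {seq}, {flag(is_trade)}, {flag(is_bid)}, {price}, {size};"
    # ADD targets the orderbook selected with USE; INSERT ... INTO names one.
    return f"ADD {row}" if dbname is None else f"INSERT {row} INTO {dbname}"

cmd = format_insert(1505177459.685, 139010, True, False, 0.070362, 7.6506424,
                    dbname="dbname")
assert cmd == ("INSERT 1505177459.685, 139010, t, f, "
               "0.070362, 7.6506424; INTO dbname")
```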

Monitoring

TectonicDB supports monitoring/alerting by periodically sending its usage info to an InfluxDB instance:

    --influx-db <influx_db>                        influxdb db
    --influx-host <influx_host>                    influxdb host
    --influx-log-interval <influx_log_interval>    influxdb log interval in seconds (default is 60)

As a concrete example,

...
$ influx
> CREATE DATABASE market_data;
> ^D
$ tdb --influx-db market_data --influx-host http://localhost:8086 --influx-log-interval 20
...

TectonicDB will send the field values disk={COUNT_DISK},size={COUNT_MEM} with tag ob={ORDERBOOK} to the market_data measurement (the measurement name matches the dbname).

Additionally, you can query usage information directly with INFO and PERF commands:

  1. INFO reports the current tick count in memory and on disk.

  2. PERF returns recorded tick count history whose granularity can be configured.

Logging

Log file defaults to tdb.log.

Testing

export RUST_TEST_THREADS=1
cargo test

Tests must be run sequentially because some tests depend on dtf files that other tests generate.

Benchmark

The tdb client comes with a benchmark mode. This command inserts 1M records into the database:

tdb -b 1000000

Using dtf files

Tectonic comes with a command-line tool, dtfcat, to inspect the file metadata and dump all the stored events to either JSON or CSV.

Options:

USAGE:
    dtfcat [FLAGS] --input <INPUT>

FLAGS:
    -c, --csv         output csv
    -h, --help        Prints help information
    -m, --metadata    read only the metadata
    -V, --version     Prints version information

OPTIONS:
    -i, --input <INPUT>    file to read

As a library

It is possible to use the Dense Tick Format streaming protocol / file format in a different application. It works nicely with any buffer implementing the Write trait.

Requirements

TectonicDB is a standalone service.

  • Linux

  • macOS

Language bindings:

  • TypeScript

  • Rust

  • Python

  • JavaScript

Additional Features

  • Usage statistics like Cloud SQL

  • Commandline inspection tool for dtf file format

  • Logging

  • Query by timestamp

Changelog

  • 0.5.0: InfluxDB monitoring plugin and improved command line arguments
  • 0.4.0: iterator-based APIs for handling DTF files and various quality of life improvements
  • 0.3.0: Refactor to async

tectonicdb's People

Contributors

0b01, ameobea, coderfi, dependabot-preview[bot], dependabot[bot], erismart, gsalaz98, ivoscc, jhnsmth, john35, rickyhan, swoorup, vincent-liuwingsang, yurikoval


tectonicdb's Issues

Dockerfile build fails

Getting the following error when building a docker image:

~/git/tectonicdb master[+]> docker-compose build
Building server
Step 1/17 : FROM ekidd/rust-musl-builder:nightly AS builder
 ---> 199d4b052bb2
Step 2/17 : ADD . ./
 ---> 9e36062d21fe
Step 3/17 : RUN sudo chown -R rust:rust /home/rust
 ---> Running in 76a27d1789f7
Removing intermediate container 76a27d1789f7
 ---> a1311eea26df
Step 4/17 : RUN rm -rf ~/.rustup
 ---> Running in f5a1044bb295
Removing intermediate container f5a1044bb295
 ---> 74b082f3caba
Step 5/17 : RUN curl https://sh.rustup.rs -sSf |     sh -s -- -y --default-toolchain nightly-2018-06-13 &&     rustup target add x86_64-unknown-linux-musl
 ---> Running in 92eeff584e91
info: downloading installer
info: syncing channel updates for 'nightly-2018-06-13-x86_64-unknown-linux-gnu'
info: latest update on 2018-06-13, rust version 1.28.0-nightly (b68432d56 2018-06-12)
info: downloading component 'rustc'
info: downloading component 'rust-std'
info: downloading component 'cargo'
info: downloading component 'rust-docs'
info: installing component 'rustc'
info: installing component 'rust-std'
info: installing component 'cargo'
info: installing component 'rust-docs'

info: default toolchain set to 'nightly-2018-06-13'
  nightly-2018-06-13 installed - rustc 1.28.0-nightly (b68432d56 2018-06-12)


Rust is installed now. Great!

To get started you need Cargo's bin directory ($HOME/.cargo/bin) in your PATH 
environment variable. Next time you log in this will be done automatically.

To configure your current shell run source $HOME/.cargo/env
info: downloading component 'rust-std' for 'x86_64-unknown-linux-musl'
info: installing component 'rust-std' for 'x86_64-unknown-linux-musl'
Removing intermediate container 92eeff584e91
 ---> c843b98c3f21
Step 6/17 : WORKDIR ~
 ---> Running in 70d582da5535
Removing intermediate container 70d582da5535
 ---> 879b48808e4d
Step 7/17 : RUN PKG_CONFIG_PATH=/usr/local/musl/lib/pkgconfig     LDFLAGS=-L/usr/local/musl/lib     cargo build --bin tectonic-server --target x86_64-unknown-linux-musl --release
 ---> Running in 784d518d6175
error: failed to parse manifest at `/home/rust/src/Cargo.toml`

Caused by:
  editions are unstable

Caused by:
  feature `edition` is required

consider adding `cargo-features = ["edition"]` to the manifest
ERROR: Service 'server' failed to build: The command '/bin/sh -c PKG_CONFIG_PATH=/usr/local/musl/lib/pkgconfig     LDFLAGS=-L/usr/local/musl/lib     cargo build --bin tectonic-server --target x86_64-unknown-linux-musl --release' returned a non-zero code: 101


Implement query statements

Perhaps we can implement some query statements in the server to aggregate data? E.g. get the volume of bid/ask orders within a specific range?

how to reconstruct orderbook

You mentioned in this blog post:
[screenshot of blog post]

In another post, you mentioned:
[screenshot of blog post]

I thought you always need a base snapshot to reconstruct the orderbook. Can you explain how to reconstruct it with only the DTF data?

Thanks!

#![feature] may not be used on the stable release channel

cargo build --lib gives me an error:

laptop:tectonic ksanderer$ cargo build --lib
warning: unused manifest key: bin.2.publish
warning: unused manifest key: bin.3.publish
warning: unused manifest key: bin.4.publish
warning: unused manifest key: package.category
   Compiling tectonicdb v0.1.7 (file:///Users/ksanderer/Projects/tectonic)
error[E0554]: #![feature] may not be used on the stable release channel
 --> src/lib/lib.rs:1:1
  |
1 | #![feature(conservative_impl_trait)]
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error: aborting due to previous error

error: Could not compile `tectonicdb`.

Any ideas how to fix this?

liblibtectonic.so: cannot open shared object file: No such file or directory

Hi, I was following the python example code and encountered this error

Traceback (most recent call last):
  File "get_order_book.py", line 1, in <module>
    from tectonic import TectonicDB
  File "/mnt/960EVO/workspace/blockchain/myorderbook/tectonic.py", line 5, in <module>
    import ffi
  File "/mnt/960EVO/workspace/blockchain/myorderbook/ffi.py", line 32, in <module>
    lib = CDLL(lib_path)
  File "/home/mingrui/anaconda3/envs/py36_bricks/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /mnt/960EVO/workspace/target/debug/liblibtectonic.so: cannot open shared object file: No such file or directory

regarding this line:
https://github.com/rickyhan/tectonicdb/blob/4feaf1028032a3541610295c91073c0cabb3b5dc/cli/python/ffi.py#L31

Is it possible to store multiple tickers in one database?

I can't figure out whether it's possible to store data from multiple sources.

For example, if I have streaming data from two platforms:

platform1:
    ETH/BTC
    ETH/USDT

platform2:
    ETH/BTC
    ETH/USDT

How can I handle this with tectonicdb?

Building 'tectonic-server' fails if dependency 'reqwest' is not included

user@computer:~/tectonicdb$ cargo build --bin tectonic-server

.
.
.
error[E0599]: no function or associated item named `new_v4` found for type `uuid::Uuid` in the current scope
   --> src/bin/server/state.rs:423:42
    |
423 |                 fname: format!("{}--{}", Uuid::new_v4(), store_name).into(),
    |                                          ^^^^^^^^^^^^ function or associated item not found in `uuid::Uuid`

error[E0599]: no function or associated item named `new_v4` found for type `uuid::Uuid` in the current scope
   --> src/bin/server/state.rs:658:47
    |
658 |                 fname: format!("{}--default", Uuid::new_v4()).into(),
    |                                               ^^^^^^^^^^^^ function or associated item not found in `uuid::Uuid`

error[E0599]: no function or associated item named `new_v4` found for type `uuid::Uuid` in the current scope
   --> src/bin/server/state.rs:672:46
    |
672 |                     fname: format!("{}--{}", Uuid::new_v4(), store_name).into(),
    |                                              ^^^^^^^^^^^^ function or associated item not found in `uuid::Uuid`

error: aborting due to 3 previous errors

For more information about this error, try `rustc --explain E0599`.
error: Could not compile `tectonicdb`.
  • rustc version: rustc 1.27.0-nightly (ac3c2288f 2018-04-18)
  • cargo version: cargo 1.26.0-nightly (008c36908 2018-04-13)

By disabling the [features] tag in Cargo.toml, the dependency reqwest is never included, which results in a build failure when trying to build the tectonic-server binary.

Integration

Hello, I want to use tectonicdb and integrate it into my project, crypto-bank. I want to contribute as much as possible and avoid splitting the codebases where it can be helped. Are you up for a conversation about it on Gitter or somewhere else?

Thank you

How to organize datastores

Hi, really loving your project so far.

I have a question about organizing datastore files, this is my current workflow:

  1. read websocket data, flush to file every 1000 rows
  2. when file hits 100 thousand rows (this is just a limit I use, can be changed), start with a new file.
  3. buffer the incoming data when switching to the new file
  4. start writing to the new file.
  5. put the old filename into a database for example mongodb, along with timestamp of starting and ending time.

When later on I want to find data within a time frame, I will search the mongodb to get the correct datastore file.

I'm wondering if this is the best way to do this?
Thank you!

failed to compile

I received this error on running cargo install tectonicdb


error[E0554]: #![feature] may not be used on the stable release channel
 --> .\.cargo\registry\src\github.com-1ecc6299db9ec823\tectonicdb-0.2.0\src/lib/lib.rs:1:1
  |
1 | #![feature(conservative_impl_trait)]
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error: aborting due to previous error

error: failed to compile `tectonicdb v0.2.0`, intermediate artifacts can be found at \AppData\Local\Temp\cargo-install.aFn6bCR7Zc4B`

Caused by:
  Could not compile `tectonicdb`.

Server crashes when connected to from multiple clients under certain conditions

[tectonic-1]2018-03-31T04:56:04.448768213Z [2018-03-31][04:5604:448603709][tectonic_server::plugins::gstorage::run][INFO] Need to upload 0 files.
[tectonic-1]2018-03-31T04:56:04.861235932Z [2018-03-1][04:56:04:861008880][tectonic_server::server][INFO] Client connected. Current: 2.
[tectonic-1]2018-03-31T04:56:04.864033967Z thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', libcore/option.rs:335:21
[tectonic-1]2018-03-31T04:56:05.189609879Z stack backtrace:
[tectonic-1]2018-03-31T04:56:05.189687246Z    0:std::sys::unix::backtrace::tracing::imp::unwind_backtrace
[tectonic-1]2018-03-31T04:56:05.203499788Z              at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
[tectonic-1]2018-03-31T04:56:05.203560268Z    1: std::sys_common::backtrace::print
[tectonic-1]2018-03-31T04:56:05.205063950Z              at libstd/sys_common/backtrace.rs:71
[tectonic-1]2018-03-31T04:56:05.205087479Z              at libstd/sys_common/backtrace.rs:59
[tectonic-1]2018-03-31T04:56:05.205096772Z    2: std::panicking::default_hook::{{closure}}
[tectonic-1]2018-03-31T04:56:05.207197720Z              at libstd/panicking.rs:207
[tectonic-1]2018-03-31T04:56:05.207217797Z    3: std::panicking::default_hook
[tectonic-1]2018-03-31T04:56:05.207236221Z              at libstd/panicking.rs:223
[tectonic-1]2018-03-31T04:56:05.207250501Z    4: std::panicking::rust_panic_with_hook
[tectonic-1]2018-03-31T04:56:05.207255778Z              at libstd/panicking.rs:402
[tectonic-1]2018-03-31T04:56:05.207285279Z    5: std::panicking::begin_panic_fmt
[tectonic-1]2018-03-31T04:56:05.207294583Z              at libstd/panicking.rs:349
[tectonic-1]2018-03-31T04:56:05.207299475Z    6: rust_begin_unwind
[tectonic-1]2018-03-31T04:56:05.207304360Z              at libstd/panicking.rs:325
[tectonic-1]2018-03-31T04:56:05.207309087Z    7: core::panicking::panic_fmt
[tectonic-1]2018-03-31T04:56:05.209775588Z              at libcore/panicking.rs:72
[tectonic-1]2018-03-31T04:56:05.209795418Z    8: core::panicking::panic
[tectonic-1]2018-03-31T04:56:05.209802357Z              at libcore/panicking.rs:51
[tectonic-1]2018-03-31T04:56:05.209807435Z    9: tectonic_server::handler::gen_response
[tectonic-1]2018-03-31T04:56:05.209819673Z   10: <futures::stream::fold::Fold<S, F, Fut, T> as futures::future::Future>::poll
[tectonic-1]2018-03-31T04:56:05.209835370Z   11: <futures::future::chain::Chain<A, B, C>>::poll
[tectonic-1]2018-03-31T04:56:05.209844951Z   12: futures::task_impl::std::set
[tectonic-1]2018-03-31T04:56:05.209856753Z   13: tokio::executor::current_thread::CurrentRunner::set_spawn
[tectonic-1]2018-03-31T04:56:05.209890861Z   14: <tokio::executor::current_thread::scheduler::Scheduler<U>>::tick
[tectonic-1]2018-03-31T04:56:05.209905963Z   15: <scoped_tls::ScopedKey<T>>::set
[tectonic-1]2018-03-31T04:56:05.209934525Z   16: <std::thread::local::LocalKey<T>>::with
[tectonic-1]2018-03-31T04:56:05.209992676Z   17: <std::thread::local::LocalKey<T>>::with
[tectonic-1]2018-03-31T04:56:05.210003670Z   18: tokio_core::reactor::Core::poll
[tectonic-1]2018-03-31T04:56:05.210008512Z   19: tectonic_server::server::run_server
[tectonic-1]2018-03-31T04:56:05.210017277Z   20: tectonic_server::main
[tectonic-1]2018-03-31T04:56:05.210022304Z   21: std::rt::lang_start::{{closure}}
[tectonic-1]2018-03-31T04:56:05.210033606Z   22: std::panicking::try::do_call
[tectonic-1]2018-03-31T04:56:05.210078849Z              at libstd/rt.rs:59
[tectonic-1]2018-03-31T04:56:05.210094392Z              at libstd/panicking.rs:306
[tectonic-1]2018-03-31T04:56:05.210100069Z   23: __rust_maybe_catch_panic
[tectonic-1]2018-03-31T04:56:05.210380609Z              at libpanic_unwind/lib.rs:102
[tectonic-1]2018-03-31T04:56:05.210418051Z   24: std::rt::lang_start_internal
[tectonic-1]2018-03-31T04:56:05.210427495Z              at libstd/panicking.rs:285
[tectonic-1]2018-03-31T04:56:05.210432191Z              at libstd/panic.rs:361
[tectonic-1]2018-03-31T04:56:05.210436775Z              at libstd/rt.rs:58
[tectonic-1]2018-03-31T04:56:05.210442128Z   25: main 

Break out the python client, yah?

Hey @0b01, just found your project and am really digging it (all around).

I was wondering, what do you think about breaking the Python client out into a new repo and adding support for additional async frameworks such as trio and/or anyio? I'd be glad to help with this work and to write a test suite that could be used to audit the core db using a (standard) Docker image.

๐Ÿ„

GET range problem: catch will return everything in memory if range not found

I have trouble understanding this line:

https://github.com/rickyhan/tectonicdb/blob/79b67d05182773a822d232c7cecfdfcb06c5374e/src/bin/server/state.rs#L561

I am trying to search for a timestamp range with GET FROM x TO y, in both memory and filestore. I see in the code that if range is not in memory, it will return None. But then unwrap_to catches the None and returns everything that's in memory. Is this the correct behavior?

I can of course work around this by doing FLUSH and CLEAR ALL before my get range query. But it's a bit convoluted.

Thank you!

python example "Example program: Plotter" - is broken

Running the python Example program: Plotter (example-plot.md) on OSX with

(base) ➜  ~ python --version
Python 3.7.6
RuntimeWarning: coroutine 'TectonicDB.cmd' was never awaited
    self.db.cmd("USE {}".format(market).encode())[1]
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last):
    self.db.cmd("USE {}".format(market).encode())[1]
TypeError: 'coroutine' object is not subscriptable

rustc 1.49.0 (e1884a8e3 2020-12-29)

Example Algorithmic Trading bot - broken - crashes the server

when running the Example Algorithmic Trading bot

and the server is started with:

RUST_BACKTRACE=full ./tdb-server -vv -a -f "/tectonicdb/test/test-data" -i 10000

and the client code is:
from tectonic import TectonicDB
import json
import asyncio

async def subscribe(name):
    db = TectonicDB(host="localhost", port=9001)
    _success, _text = await db.subscribe(name)
    while 1:
        _, item = await db.poll()
        if b"NONE" == item:
            await asyncio.sleep(0.01)
        else:
            yield json.loads(item)

class TickBatcher(object):
    def __init__(self, db_name):
        self.one_batch = []
        self.db_name = db_name

    async def batch(self):
        async for item in subscribe(self.db_name):
            self.one_batch.append(item)

    async def timer(self):
        while 1:
            await asyncio.sleep(1)     # do work every n seconds
            print(len(self.one_batch)) # do work here
            self.one_batch = []        # clear queue

if __name__ == '__main__':
    loop = asyncio.get_event_loop()

    proc = TickBatcher("bt_btceth")
    loop.create_task(proc.batch())
    loop.create_task(proc.timer())

    loop.run_forever()
    loop.close()

where the bt_btceth - is the file located inside the test-data directory

when running the code it crashes the server with:

[2021-01-08][21:58:28:763015000][tdb_server_core::server][INFO] Accepting from: 127.0.0.1:54466
thread 'async-std/executor' panicked at 'range end index 1398096467 out of range for slice of length 1048576', /Users/megalodon/hook/tectonicdb/crates/tdb-server-core/src/server.rs:97:32
stack backtrace:
   0:        0x100fb9bb4 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::ha0848bb2602b5d05
   1:        0x100fd91e0 - core::fmt::write::h9f3ccac2ef682b93
   2:        0x100fb35e6 - std::io::Write::write_fmt::h0a47673aab280496
   3:        0x100fbb9b9 - std::panicking::default_hook::{{closure}}::h850c6aaf5e80c2f5
   4:        0x100fbb67d - std::panicking::default_hook::h037801299da6e1c6
   5:        0x100fbc03b - std::panicking::rust_panic_with_hook::h76436d4cf7a368ac
   6:        0x100fbbb65 - std::panicking::begin_panic_handler::{{closure}}::h516c76d70abf04f6
   7:        0x100fba028 - std::sys_common::backtrace::__rust_end_short_backtrace::h653290b4f930faed
   8:        0x100fbbaca - _rust_begin_unwind
   9:        0x100fe790f - core::panicking::panic_fmt::hde9134dd19c9a74f
  10:        0x100fe79e6 - core::slice::index::slice_end_index_len_fail::h1abfffb7603f7340
  11:        0x100e2b7a5 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::ha8ee21bb24b9c334
  12:        0x100e1b2db - async_task::raw::RawTask<F,R,S,T>::run::h3048d31c3e65fa36
  13:        0x100f19d9b - std::thread::local::LocalKey<T>::with::h22726983dfed64e7
  14:        0x100f1e039 - std::sys_common::backtrace::__rust_begin_short_backtrace::h751ab56d9cd682d2
  15:        0x100f1969d - core::ops::function::FnOnce::call_once{{vtable.shim}}::hef7b80dd4fa7d5bb
  16:        0x100fbf4cd - std::sys::unix::thread::Thread::new::thread_start::hedb7cc0d930a8f40
  17:     0x7fff2036a950 - __pthread_start
[1]    21857 abort      RUST_BACKTRACE=full ./tdb-server -vv -a -f  -i 10000

rustc 1.49.0 (e1884a8e3 2020-12-29)

The order book itself seems OK; running ffi.py with a big limit gives:

                    ts    seq  is_trade  is_bid     price       size
0        1509862963012   7050      True    True  0.040190  81.000000
1        1509862963012   7050      True    True  0.040190  28.550123
2        1509862963012   7050      True    True  0.036790   0.000000
3        1509862963012   7050      True    True  0.032791   0.000000
4        1509862963012   7050      True    True  0.040313   0.990000
...                ...    ...       ...     ...       ...        ...
6104686  1510340961171  16144      True    True  0.052534   0.000000
6104687  1510340961171  16144      True    True  0.060045   0.045147
6104688  1510340961171  16144      True    True  0.060048   0.350615
6104689  1510340961171  16144      True    True  0.060054   0.382365
6104690  1510340961171  16144      True    True  0.060059   0.067691

[6104691 rows x 6 columns]
53.56917119026184
