readysettech / readyset

ReadySet is a MySQL and Postgres wire-compatible caching layer that sits in front of existing databases to speed up queries and horizontally scale read throughput. Under the hood, ReadySet caches the results of SELECT statements and incrementally updates those results over time as the underlying data changes.
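
ReadySet's cache lifecycle is driven through SQL extensions issued over the normal wire protocol. A minimal sketch, assuming the command names documented for recent ReadySet releases (CREATE CACHE, SHOW CACHES, SHOW PROXIED QUERIES; verify against your version) and a hypothetical books table:

-- Run these through the ReadySet port, not the upstream database:
CREATE CACHE FROM SELECT id, title FROM books WHERE id = 1;  -- cache this query shape
SHOW CACHES;           -- list the caches ReadySet is maintaining
SHOW PROXIED QUERIES;  -- query shapes seen but still passed through upstream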

Home Page: https://readyset.io

License: Other

Rust 96.74% R 0.03% HTML 0.01% PLpgSQL 0.13% Smarty 0.10% Ruby 0.01% Starlark 1.35% Go 0.10% Shell 0.83% Gnuplot 0.01% Python 0.04% Clojure 0.65% Dockerfile 0.02%
backend cache caching caching-proxy databases mysql mysql-database postgres postgresql postgresql-database rust rust-lang sql streaming-data

readyset's People

Contributors

alanamarzoev, altmannmarcelo, amartin96, benesch, eeeeeta, ekmartin, ethan-readyset, fintelia, frannoriega, glittershark, goodsyntax808, gzsombor, harleyk-readyset, imeyer, jasobrown-rs, jbmcgill, jmbredenberg, jmftrindade, jonathangb, jonhoo, justinmir, larat7, lukoktonos, ms705, nickelization, nvzqz, prismaphonic, ronh-rs, staple, vladrodionov


readyset's Issues

Clickhouse Support

Is your feature request related to a problem? Please describe.

Support for Clickhouse DB

Additional context

ClickHouse is primarily an OLAP database for large data sets, so integrating with it could lead to wider adoption of ReadySet.

Bug: Error with enum type and geography type at Postgres

So, I just installed ReadySet locally against my existing Postgres 14.

Then I tried to select from my "Table" with a column of a specific type like enum or geography.
When I run the query, I get this error:

ERROR:  internal error: could not retrieve expected column index 3 from row while parsing psql result: error deserializing column 3: cannot convert between the Rust type `noria_data::DataType` and the Postgres type `geography`

Is this a bug, or did I install something incorrectly?

By the way, great work on this. Keep it up.

Cached query returns null for newly inserted data

I connected my Node API to the ReadySet Docker service.
A user-fetching query was cached in ReadySet.
But after creating a new user, I do not get the new user's data from the API.
I can see the newly inserted row in both the MySQL DB and the ReadySet service through the MySQL shell.

This is the config for ReadySet in the docker-compose file, which is running in swarm mode:

  readyset:
    image: public.ecr.aws/readyset/readyset:latest
    ports:
      - 5433:5433
      - 3307:3307
    depends_on:
      - mysql
    env_file: .env
    networks:
      - backend
    volumes:
      - ./readyset:/state
    environment:
      STANDALONE: "1"
      DB_DIR: "/state"
      DATABASE_TYPE: mysql
      QUERY_CACHING: explicit
      DEPLOYMENT: "quickstart_docker"
      UPSTREAM_DB_URL: mysql://root:root@mysql:3306/testdb
      LISTEN_ADDRESS: "0.0.0.0:5433"
    logging:
      <<: *logging

These are the ReadySet Docker logs when I fire the API request:


nf_readyset.1.msuofpokgq43@ip-172-31-27-196    | 2023-02-15T15:36:14.933119Z  INFO readyset: Accepted new connection context=LogContext({"deployment": "quickstart_docker"})
nf_readyset.1.msuofpokgq43@ip-172-31-27-196    | 2023-02-15T15:36:14.933265Z  INFO Connecting to MySQL upstream{host=mysql port=3306 user=root}: readyset_mysql::upstream: Establishing connection context=LogContext({"deployment": "quickstart_docker"})
nf_readyset.1.msuofpokgq43@ip-172-31-27-196    | 2023-02-15T15:36:14.939268Z  INFO Connecting to MySQL upstream{host=mysql port=3306 user=root}: readyset_mysql::upstream: Established connection to upstream context=LogContext({"deployment": "quickstart_docker"})
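
A hedged diagnostic sketch for this class of problem (commands per the ReadySet docs; the users table name is a guess at this schema): if a freshly inserted row is visible through the MySQL shell but the cached API query stays stale, checking snapshot/replication state and the cached query shapes through the ReadySet port is a reasonable first step.

SHOW READYSET STATUS;  -- is snapshotting complete and binlog replication connected?
SHOW READYSET TABLES;  -- has the users table been snapshotted?
SHOW CACHES;           -- which query shapes are actually served from cache?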

Type conversion error when selecting any UUID column

Using PG13:

Any query through Readyset that attempts to select a UUID column seems to result in the following type of error:

ERROR:  internal error: could not retrieve expected column index 0 from row while parsing psql result: error deserializing column 0: cannot convert between the Rust type `noria_data::DataType` and the Postgres type `uuid`

I.e., for a table that looks like:

CREATE TABLE books (
    id uuid DEFAULT uuid_generate_v4() PRIMARY KEY,
    title text NOT NULL
)
select id, title from books -- query will result in the above error
select title from books -- query will succeed
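
A possible workaround sketch, untested: cast the column on the Postgres side so the result reaches ReadySet as text rather than the unsupported uuid type.

select id::text as id, title from books -- hedged workaround; avoids deserializing uuid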

Support for Microsoft SQL Server (MSSQL)

Is your feature request related to a problem? Please describe.
As an organization, we make extensive use of Microsoft SQL Server (MSSQL) alongside Postgres and MySQL. MSSQL remains one of the most widely used enterprise databases worldwide.

Describe the solution you'd like
We would like to request the addition of MSSQL support in ReadySet. This will allow us to leverage the powerful caching and performance enhancements of ReadySet with our MSSQL-based applications.

Describe alternatives you've considered
We've explored other caching solutions (e.g. polyscale.ai), but they come at substantial cost. Hence, adding MSSQL support directly to ReadySet would be our preferred solution.

Additional context
With the widespread use of MSSQL in many organizations, we believe the addition of MSSQL support could not only benefit us but also many other users who might be looking to enhance the performance of their MSSQL applications. This could also serve to broaden ReadySet's user base, increasing its reach and effectiveness.

Looking forward to your consideration and response.

Best

App testing

  • Goal: log all supported & unsupported queries for each app, validate that cached query results are correct, and get this information into CI so we can track progress in query support over time. 
  • Deliverable: a dashboard, filterable by ORM, that shows the percentage of supported queries (while also making it easy to access specific information about which queries are supported and which are not, in addition to these aggregates).

Please view and update details about the various apps deployed here:

https://readysettech.atlassian.net/wiki/spaces/ENG/pages/edit-v2/46104577

From SyncLinear.com | REA-2766

Type conversion error when selecting any `character varying(n)[]` column

Similar to #6:

Any query through Readyset that attempts to select a character varying(n)[] column seems to result in the following type of error:

ERROR:  internal error: could not retrieve expected column index 1 from row while parsing psql result: error deserializing column 1: cannot convert between the Rust type `noria_data::DataType` and the Postgres type `_varchar`

I.e., for a table that looks like:

CREATE TABLE some_table (
    id uuid DEFAULT uuid_generate_v4() PRIMARY KEY,
    some_text_array character varying(5)[]
);
select id, some_text_array from some_table -- query will result in the above error
select id from some_table -- query will succeed
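
In the same hedged, untested spirit as the uuid workaround in #6, casting the array to text may sidestep the _varchar conversion:

select id, some_text_array::text as some_text_array from some_table -- returns e.g. '{a,b}' as text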

Support for table blacklist instead of whitelist

Is your feature request related to a problem? Please describe.
I'm currently trialling ReadySet, and I already know there are 2-3 tables that I'd rather not bother ReadySet with at all (because they're read from so infrequently and they're massive). Currently, the only option is to write out a whitelist of tables with the --table-replication flag, which doesn't work well for us because we have 300+ tables and we'll be adding new tables all the time.

Describe the solution you'd like
A command line flag for the tables that ReadySet shouldn't bother caching, similar to --table-replication (--table-replication-ignore might work?).

I would guess that --table-replication and --table-replication-ignore would be mutually exclusive.

Describe alternatives you've considered
None, beyond a long whitelist.

set_hard_pending_compaction_bytes_limit to 2TB+

If we restart partway through compaction for a large database/table, there may be enough sst files to trigger RocksDB to block writes until compaction finishes. We currently do one write as part of starting up persistent_state, so this ends up compacting a lot during startup (with log messages like "Still initializing persistent state") before continuing the snapshot + compaction process.

The overall time should be the same as if compaction hadn't restarted, but it may make sense to just [set_hard_pending_compaction_bytes_limit](https://docs.rs/rocksdb/latest/rocksdb/struct.Options.html#method.set_hard_pending_compaction_bytes_limit) to a value large enough that we can compact our target DB size (2 TB) without blocking writes in snapshot mode. Then, once we finish snapshotting, reset it to the default.

From SyncLinear.com | REA-2900

replication-tables changes are not enforced on restart

Describe the problem

ReadySet does not apply changes to the replication-tables config on restart.

To Reproduce

  1. Started a Postgres ReadySet instance with --replication-tables 'db.table1'.
  2. Shut down the ReadySet instance after snapshotting was complete and restarted with --replication-tables 'db.table1, db.table2'.
  3. After restart, ReadySet did not snapshot db.table2.

The same behavior is expected for a MySQL ReadySet instance, but this was not tested.

Expected behavior

Expected ReadySet to snapshot db.table2 on restart.

Environment

  • ReadySet version beta-2022-12-15 and newer
  • Tested by following the instructions in the quickstart documentation and building manually.
  • Postgres 14

panicked at 'Maximum offset must be present after snapshot'

Describe the problem

ReadySet had been running the 2023-01-18 image. Today I upgraded to image 2023-02-15 and changed a few settings:

  • added --replication-tables=schema1.*,schema12.*,schema13.*,schema14.*
  • added --prometheus-metrics

After the restart, ReadySet dropped some previously replicated tables (as expected), then crashed.

To Reproduce
current docker-compose.yml

readyset:
    image: public.ecr.aws/readyset/readyset:beta-2023-02-15
    ports:
      - "5444:5433"
      - "6034:6034"
    environment:
      DEPLOYMENT_ENV: staging
      RUST_BACKTRACE: full
    command:
      - --standalone
      - --replication-tables=schema1.*,schema12.*,schema13.*,schema14.*
      - --deployment=staging
      - --disable-telemetry
      - --database-type=postgresql
      - --upstream-db-url=postgresql://abc
      - --address=0.0.0.0:5433
      - --username=a
      - --password=b
      - --query-caching=explicit
      - --db-dir=/state
      # metrics
      - --query-log
      - --prometheus-metrics
      - --query-log-ad-hoc

Expected behavior
ReadySet restarts cleanly, applies the new settings, and resumes replication without panicking.

Additional data / screenshots

readyset_1                   | thread 'tokio-runtime-worker' panicked at 'Maximum offset must be present after snapshot', /tmp/readyset/replicators/src/noria_adapter.rs:553:14
readyset_1                   | stack backtrace:
readyset_1                   |    0:     0x55909bf950e0 - <unknown>
readyset_1                   |    1:     0x55909bfbc15c - <unknown>
readyset_1                   |    2:     0x55909bf8e575 - <unknown>
readyset_1                   |    3:     0x55909bf967e1 - <unknown>
readyset_1                   |    4:     0x55909bf964b3 - <unknown>
readyset_1                   |    5:     0x55909bf96ee3 - <unknown>
readyset_1                   |    6:     0x55909bf96dd7 - <unknown>
readyset_1                   |    7:     0x55909bf95604 - <unknown>
readyset_1                   |    8:     0x55909bf96b02 - <unknown>
readyset_1                   |    9:     0x55909bfb94e3 - <unknown>
readyset_1                   |   10:     0x55909bfb93a1 - <unknown>
readyset_1                   |   11:     0x55909bfb934b - <unknown>
readyset_1                   |   12:     0x55909bfb90a6 - <unknown>
readyset_1                   |   13:     0x559099c1bdae - <unknown>
readyset_1                   |   14:     0x559099c658db - <unknown>
readyset_1                   |   15:     0x559099947a98 - <unknown>
readyset_1                   |   16:     0x5590999dfcae - <unknown>
readyset_1                   |   17:     0x55909986aca7 - <unknown>
readyset_1                   |   18:     0x55909be27fff - <unknown>
readyset_1                   |   19:     0x55909be26f83 - <unknown>
readyset_1                   |   20:     0x559099b61c9a - <unknown>
readyset_1                   |   21:     0x5590999e0be8 - <unknown>
readyset_1                   |   22:     0x559099862eb6 - <unknown>
readyset_1                   |   23:     0x55909be17ad1 - <unknown>
readyset_1                   |   24:     0x55909be18237 - <unknown>
readyset_1                   |   25:     0x55909be295cf - <unknown>
readyset_1                   |   26:     0x55909bf9a933 - <unknown>
readyset_1                   |   27:     0x7f0185456609 - start_thread
readyset_1                   |   28:     0x7f0184ea0133 - clone
readyset_1                   |   29:                0x0 - <unknown>


Environment

  • ReadySet version: beta-2023-02-15
  • ReadySet deployment method: Docker
  • Upstream database and version: Postgres 13

Additional context
Changing back to the old config does not help.

I currently work around it by deleting the data directory /state.

Ensure Readyset works within private networks

Kubernetes can deploy services on internal private networks. We should ensure ReadySet works when it does not have access to the external internet, and provide support in the Helm chart for finding the upstream database over an internal private network.

From SyncLinear.com | REA-2800

Documentation on required readyset user privileges

I'm looking to set up a new user on an RDS Postgres database such that its credentials can be distributed to a ReadySet instance.

However, it is not clear to me what permissions/privileges this user's role needs in order for ReadySet to work. Could someone clarify what these should be? Thank you!

2023-03-01T21:16:54.003709Z ERROR replicators: Error in replication, will retry after timeout context=LogContext({"deployment": "brain_interfaces_readyset"}) error=Error during replication: PostgreSQL: db error: ERROR: permission denied for database <my database name> timeout_sec=30
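
A hedged sketch of the kinds of grants typically involved for logical replication on RDS Postgres. These are assumptions based on standard Postgres/RDS privileges, not official ReadySet guidance; the role name is invented, and the authoritative list should come from the ReadySet docs:

-- Assumptions: readyset_user and mydb are placeholders; rds_replication is RDS-specific.
GRANT rds_replication TO readyset_user;                  -- allow logical replication on RDS
GRANT CONNECT, CREATE ON DATABASE mydb TO readyset_user; -- connect, and create ReadySet's helper schema
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readyset_user; -- read tables for snapshotting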

Surrealdb support

SurrealDB is a NewSQL database. I couldn't find any cache layer for it. SurrealDB's SQL is very similar to regular SQL. It's still in beta but works fine. It would be great if ReadySet supported it. Thanks.

[REA-2863] File is too large for PlainTableReader!

Summary

If the snapshotting of a large (~100 GiB) table gets interrupted before compaction finishes (as can easily happen if one forgets to raise the file-descriptor limit with ulimit -n beyond the default 1024), ReadySet panics when re-opening persistent state.

Description

See summary

Expected behavior

ReadySet can open persistent state and complete the compaction of any snapshotted table.

Actual behavior

ReadySet permanently fails once RocksDB gets into a state where a file is too large to open with PlainTableReader.

Steps to reproduce

  1. Snapshot a ~100 GiB table (in my case, the table had columns (int, bytea), where the bytea was 64 KiB of random data per row). Not sure if this is the minimal size that reproduces the issue.
  2. Interrupt compaction before it finishes.
  3. Restart ReadySet.
  4. Observe the panic.

ReadySet version

readyset
release-version: unknown-release-version
commit_id:       6316109b9b3eee40328945c498c7d6eeb496c14e
platform:        x86_64-unknown-linux-gnu
rustc_version:   rustc 1.70.0-nightly (f63ccaf25 2023-03-06)
profile:         release
opt_level:       3

Upstream DB type and version

Postgres 14.5

Instance Details

  • i-0fde78bb18b00860a in sandbox account, us-east-2, m6a.8xlarge connected to RDS aurora serverless postgres

Logs

thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Error { message: "Not implemented: File is too large for PlainTableReader!" }', dataflow-state/src/persistent_state.rs:166:30
stack backtrace:
   0: rust_begin_unwind
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panicking.rs:579:5
   1: core::panicking::panic_fmt
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/panicking.rs:64:14
   2: core::result::unwrap_failed
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/result.rs:1750:5
   3: core::result::Result<T,E>::unwrap
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/result.rs:1090:23
   4: <&rocksdb::db::DBCommon<rocksdb::db::SingleThreaded,rocksdb::db::DBWithThreadModeInner> as dataflow_state::persistent_state::Put>::do_put
         	at /readyset/dataflow-state/src/persistent_state.rs:166:9
   5: dataflow_state::persistent_state::Put::save_meta
         	at /readyset/dataflow-state/src/persistent_state.rs:156:9
   6: dataflow_state::persistent_state::increment_epoch
         	at /readyset/dataflow-state/src/persistent_state.rs:185:5
   7: dataflow_state::persistent_state::PersistentState::new_inner
         	at /readyset/dataflow-state/src/persistent_state.rs:1485:20
   8: dataflow_state::persistent_state::PersistentState::new
         	at /readyset/dataflow-state/src/persistent_state.rs:1395:15
   9: readyset_dataflow::domain::initialize_state::{{closure}}::{{closure}}
         	at /readyset/readyset-dataflow/src/domain/mod.rs:705:21
  10: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/blocking/task.rs:42:21
  11: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/core.rs:223:17
  12: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/loom/std/unsafe_cell.rs:14:9
  13: tokio::runtime::task::core::Core<T,S>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/core.rs:212:13
  14: tokio::runtime::task::harness::poll_future::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/harness.rs:476:19
  15: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/panic/unwind_safe.rs:271:9
  16: std::panicking::try::do_call
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panicking.rs:487:40
  17: std::panicking::try
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panicking.rs:451:19
  18: std::panic::catch_unwind
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panic.rs:140:14
  19: tokio::runtime::task::harness::poll_future
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/harness.rs:464:18
  20: tokio::runtime::task::harness::Harness<T,S>::poll_inner
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/harness.rs:198:27
  21: tokio::runtime::task::harness::Harness<T,S>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/harness.rs:152:15
  22: tokio::runtime::task::raw::RawTask::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/raw.rs:200:18
  23: tokio::runtime::task::UnownedTask<S>::run
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/mod.rs:431:9
  24: tokio::runtime::blocking::pool::Task::run
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/blocking/pool.rs:159:9
  25: tokio::runtime::blocking::pool::Inner::run
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/blocking/pool.rs:513:17
  26: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/blocking/pool.rs:471:13
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'Domain 0.0.0' panicked at 'called `Result::unwrap()` on an `Err` value: JoinError::Panic(Id(65), ...)', readyset-dataflow/src/domain/mod.rs:712:10
stack backtrace:
   0: rust_begin_unwind
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panicking.rs:579:5
   1: core::panicking::panic_fmt
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/panicking.rs:64:14
   2: core::result::unwrap_failed
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/result.rs:1750:5
   3: core::result::Result<T,E>::unwrap
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/result.rs:1090:23
   4: readyset_dataflow::domain::initialize_state::{{closure}}
         	at /readyset/readyset-dataflow/src/domain/mod.rs:686:9
   5: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tracing-0.1.37/src/instrument.rs:272:9
   6: <F as futures_core::future::TryFuture>::try_poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-core-0.3.21/src/future.rs:82:9
   7: <futures_util::future::try_future::into_future::IntoFuture<Fut> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/future/try_future/into_future.rs:34:9
   8: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/future/future/map.rs:55:37
   9: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/lib.rs:91:13
  10: <futures_util::future::try_future::MapErr<Fut,F> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/lib.rs:91:13
  11: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::future::future::Future>::poll
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/panic/unwind_safe.rs:296:9
  12: <futures_util::future::future::catch_unwind::CatchUnwind<Fut> as core::future::future::Future>::poll::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/future/future/catch_unwind.rs:36:42
  13: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/panic/unwind_safe.rs:271:9
  14: std::panicking::try::do_call
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panicking.rs:487:40
  15: std::panicking::try
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panicking.rs:451:19
  16: std::panic::catch_unwind
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panic.rs:140:14
  17: <futures_util::future::future::catch_unwind::CatchUnwind<Fut> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/future/future/catch_unwind.rs:36:9
  18: <F as futures_core::future::TryFuture>::try_poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-core-0.3.21/src/future.rs:82:9
  19: <futures_util::future::try_future::into_future::IntoFuture<Fut> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/future/try_future/into_future.rs:34:9
  20: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/future/future/map.rs:55:37
  21: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/lib.rs:91:13
  22: <futures_util::future::try_future::UnwrapOrElse<Fut,F> as core::future::future::Future>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/lib.rs:91:13
  23: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/core.rs:223:17
  24: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/loom/std/unsafe_cell.rs:14:9
  25: tokio::runtime::task::core::Core<T,S>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/core.rs:212:13
  26: tokio::runtime::task::harness::poll_future::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/harness.rs:476:19
  27: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/core/src/panic/unwind_safe.rs:271:9
  28: std::panicking::try::do_call
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panicking.rs:487:40
  29: std::panicking::try
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panicking.rs:451:19
  30: std::panic::catch_unwind
         	at /rustc/f63ccaf25f74151a5d8ce057904cd944074b01d2/library/std/src/panic.rs:140:14
  31: tokio::runtime::task::harness::poll_future
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/harness.rs:464:18
  32: tokio::runtime::task::harness::Harness<T,S>::poll_inner
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/harness.rs:198:27
  33: tokio::runtime::task::harness::Harness<T,S>::poll
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/harness.rs:152:15
  34: tokio::runtime::task::LocalNotified<S>::run
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/task/mod.rs:394:9
  35: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/scheduler/current_thread.rs:584:25
  36: tokio::runtime::coop::with_budget
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/coop.rs:107:5
  37: tokio::runtime::coop::budget
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/coop.rs:73:5
  38: tokio::runtime::scheduler::current_thread::Context::run_task::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/scheduler/current_thread.rs:285:29
  39: tokio::runtime::scheduler::current_thread::Context::enter
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/scheduler/current_thread.rs:350:19
  40: tokio::runtime::scheduler::current_thread::Context::run_task
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/scheduler/current_thread.rs:285:9
  41: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/scheduler/current_thread.rs:583:34
  42: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}}
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/scheduler/current_thread.rs:615:57
  43: tokio::macros::scoped_tls::ScopedKey<T>::set
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/macros/scoped_tls.rs:61:9
  44: tokio::runtime::scheduler::current_thread::CoreGuard::enter
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/scheduler/current_thread.rs:615:27
  45: tokio::runtime::scheduler::current_thread::CoreGuard::block_on
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/scheduler/current_thread.rs:530:19
  46: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/scheduler/current_thread.rs:154:24
  47: tokio::runtime::runtime::Runtime::block_on
         	at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.28.1/src/runtime/runtime.rs:302:47
  48: readyset_server::worker::Worker::handle_worker_request::{{closure}}::{{closure}}
         	at /readyset/readyset-server/src/worker/mod.rs:337:33
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.


This is the max file size for PlainTable in RocksDB:
  static const uint64_t kMaxFileSize = (1u << 31) - 1;


Rocksdb logs:

428-2023/06/05-01:33:46.608577 361431 [db/compaction/compaction_job.cc:1586] [0] [JOB 4] Generated table #1865: 100300 keys, 6680082120 bytes, temperature: kUnknown
429-2023/06/05-01:33:46.608655 361431 EVENT_LOG_v1 {"time_micros": 1685928826608618, "cf_name": "0", "job": 4, "event": "table_file_creation", "file_number": 1865, "file_size": 6680082120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1459204, "largest_seqno": 1559503, "table_properties": {"data_size": 6679177600, "index_size": 778421, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 125375, "raw_key_size": 1504500, "raw_average_key_size": 15, "raw_value_size": 6677271900, "raw_average_value_size": 66573, "num_data_blocks": 1, "num_entries": 100300, "num_filter_entries": 0, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "", "column_family_name": "0", "column_family_id": 2, "comparator": "", "merge_operator": "", "prefix_extractor_name": "key", "property_collectors": "", "compression": "", "compression_options": "", "creation_time": 0, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0488e114-7f5f-4c48-b43f-4ef04d10f0ac", "db_session_id": "BRRRUZ9VLRYNNFPW34B1", "orig_file_number": 1865, "seqno_to_time_mapping": "N/A"}}
430:2023/06/05-01:33:46.610730 361431 [WARN] [db/db_impl/db_impl_compaction_flush.cc:3582] Compaction error: Not implemented: File is too large for PlainTableReader!
431:2023/06/05-01:33:46.610741 361431 [WARN] [db/error_handler.cc:395] Background IO error Not implemented: File is too large for PlainTableReader!
432-2023/06/05-01:33:46.610746 361431 [db/error_handler.cc:283] ErrorHandler: Set regular background error
433:2023/06/05-01:33:46.611247 361431 (Original Log Time 2023/06/05-01:33:46.610680) [db/compaction/compaction_job.cc:867] [0] compacted to: files[1490 10 0 0 0 0 0] max score 348.00, MB/sec: 725.2 rd, 725.2 wr, level 0, files in(0, 98) out(1 +0 blob) MB in(0.0, 6370.7 +0.0 blob) out(6370.6 +0.0 blob), read-write-amplify(0.0) write-amplify(0.0) Not implemented: File is too large for PlainTableReader!, records in: 100300, records dropped: 0 output_compression: LZ4
434-2023/06/05-01:33:46.611250 361431 (Original Log Time 2023/06/05-01:33:46.610716) EVENT_LOG_v1 {"time_micros": 1685928826610694, "job": 4, "event": "compaction_finished", "compaction_time_micros": 9211541, "compaction_time_cpu_micros": 4701659, "output_level": 0, "num_output_files": 1, "total_output_size": 6680082120, "num_input_records": 100300, "num_output_records": 100300, "num_subcompactions": 1, "output_compression": "LZ4", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [1490, 10, 0, 0, 0, 0, 0]}
435:2023/06/05-01:33:46.611252 361431 [ERROR] [db/db_impl/db_impl_compaction_flush.cc:3073] Waiting after background compaction error: Not implemented: File is too large for PlainTableReader!, Accumulated background error counts: 1
436-2023/06/05-01:33:48.194568 361431 [file/delete_scheduler.cc:73] Deleted file ./readyset-100/readyset_100-public-big_table-0.db/001865.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
437-2023/06/05-01:33:48.194600 361431 EVENT_LOG_v1 {"time_micros": 1685928828194596, "job": 4, "event": "table_file_deletion", "file_number": 1865}
438-2023/06/05-01:33:48.737250 361450 [db/db_impl/db_impl.cc:489] Shutdown: canceling all background work
439-2023/06/05-01:33:48.750233 361450 [db/db_impl/db_impl.cc:692] Shutdown complete

From SyncLinear.com | REA-2863

What to do if the query is out of the cache range?

Is your feature request related to a problem? Please describe.
What should happen when a query falls outside the range of cached data?
ReadySet is a very interesting project.
As I understand it, ReadySet caches data from the backend database (MySQL, Postgres), but I have two questions:

  1. Does the cache hold all of the data in the database? (I understand that only some data should be cached.)
  2. If a user's query exceeds the range of cached data, how is it handled? Is the user's SQL forwarded to the backend database (MySQL, Postgres) for processing?
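
For what it's worth, the project description above already implies the answers: ReadySet materializes only the results needed by the queries you cache, filling them in on demand (partial materialization), and any query without a cache is proxied transparently to the upstream database. A hedged sketch of how to observe the split (command names per the ReadySet docs):

SHOW CACHES;           -- query shapes served from ReadySet's dataflow
SHOW PROXIED QUERIES;  -- everything else, passed through to MySQL/Postgres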

Support CockroachDB

Is your feature request related to a problem? Please describe.
I casually expected that CockroachDB might be supported since there's a prominent quote from the CEO of Cockroach Labs on the front page of readyset.io, but the more I look at the implementation, it seems like it'd be a tricky thing to support.

Attempting to connect ReadySet to a Cockroach cluster fails because it cannot set up functions in PL/pgSQL:

2023-03-16T22:52:25.363762Z ERROR replicators: Error in replication, will retry after timeout context=LogContext({"deployment": "'github-postgres'"}) error=Error during replication: PostgreSQL: db error: ERROR: at or near "plpgsql": syntax error: language "plpgsql" does not exist
DETAIL: source SQL:
CREATE OR REPLACE FUNCTION readyset.is_pre14()
RETURNS boolean
LANGUAGE plpgsql
         ^ timeout_sec=30

But, looking further, it appears that the entire data-freshness and data-invalidation system is predicated on being able to replicate directly from the SQL topology being cached. This presents a problem, as Cockroach's replication topology is incredibly complicated compared to Postgres's, considering the ranges are fully dynamic and can move around between individual hosts.

Describe the solution you'd like
So, a possible solution would be to integrate with Cockroach's Change Data Capture API and use that to sink event updates from a Cockroach Cluster.

This has two slightly less than desirable implications:

  1. We're increasing asynchrony even further by including a ser/deser step, and possibly another network hop (via Kafka or another event-bus system if we used a broker of some kind as an intermediary).
  2. Because of the serialization changes, we have even fewer direct conversions we could use between the CDC event and what ReadySet expects.

I'm willing to accept that this isn't practical at all; I just wanted to express interest in a solution to this problem in some form.

Is there any appetite to look into this further?

Describe alternatives you've considered
I looked into just disabling the DDL functions to get this working, but without a streaming replica setup it doesn't look like any part of ReadySet can function, and I don't think I can get that working with Cockroach.


Does Planetscale work out of the box?

Since PlanetScale is built on MySQL/Vitess, it seems like it should work out of the box. Is this the case today? If not, is PlanetScale support planned for the future?

Fail starting up if ulimit is too low

When we finish snapshotting and run compaction, we open one file descriptor per sst file, which can mean roughly $DB_SIZE / 66 MB file descriptors. So a 2 TB database could potentially open ~32k file descriptors. I am not sure whether RocksDB actually opens them all or processes things in smaller chunks, but the default of 1024 is too low for even 100 GiB.

If ulimit -n is less than the number of sst files, compaction fails. This can happen hours after ReadySet starts up, which makes it a far less helpful error than if we checked ulimit -n at startup and errored when it looks too low.

We could either recommend that folks set a ulimit of 32k and fail startup if it's lower (perhaps with a flag to allow a smaller value without erroring), or query the upstream database to assess its size, estimate how many fds we need from that, and error before we spend a lot of time snapshotting and begin compaction.

From SyncLinear.com | REA-2899

[REA-2374] Memory bloat when running a sysbench workload.

Set up sysbench on the client.

Download the sysbench-tpcc tests from here:
https://github.com/Percona-Lab/sysbench-tpcc

Run the sysbench workload:

PHASE=cleanup; PGHOST=db-perf3.c3gkl30tsqmh.us-east-2.rds.amazonaws.com; PGPORT=5432; PGUSER=postgres; PGPASSWORD=ReadySet123; PGDATABASE=postgres; sysbench --db-driver=pgsql --report-interval=2 --tables=10 --scale=20 --threads=8 --time=360 --pgsql-host=$PGHOST --pgsql-port=$PGPORT --pgsql-user=$PGUSER --pgsql-password=$PGPASSWORD --pgsql-db=$PGDATABASE ./tpcc.lua $PHASE

ReadySet version is:

sh-4.2$ /usr/sbin/readyset --version
readyset
release-version: nightly-2023-03-14
commit_id:       ad3c494e48c20dcc091161042712305f1e6e8f20
platform:        x86_64-unknown-linux-gnu
rustc_version:   rustc 1.64.0-nightly (fe3342816 2022-08-01)
profile:         release
opt_level:       3

The backend database size is reasonable:

postgres=> SELECT pg_size_pretty( pg_database_size('postgres') );
 pg_size_pretty
----------------
 3905 MB
(1 row)

postgres=> \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 rdsadmin  | rdsadmin | UTF8     | en_US.UTF-8 | en_US.UTF-8 | rdsadmin=CTc/rdsadmin+
           |          |          |             |             | rdstopmgr=Tc/rdsadmin
 template0 | rdsadmin | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/rdsadmin          +
           |          |          |             |             | rdsadmin=CTc/rdsadmin
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(4 rows)

Many messages like this are seen in the logs:

{"timestamp":"2023-03-15T14:44:55.022793Z","level":"WARN","fields":{"message":"Skipping table action for earlier entry","context":"LogContext({\"deployment\": \"gg-rs-3-14_h9DStCSPCxCG\"})","table":"public.order_line4","pos":"wal<17/97F14808>","cur":"wal<17/97F7B8F0>"},"target":"replicators::noria_adapter"}
{"timestamp":"2023-03-15T14:44:55.023207Z","level":"WARN","fields":{"message":"Skipping table action for earlier entry","context":"LogContext({\"deployment\": \"gg-rs-3-14_h9DStCSPCxCG\"})","table":"public.order_line4","pos":"wal<17/97F27198>","cur":"wal<17/97F7B8F0>"},"target":"replicators::noria_adapter"}
{"timestamp":"2023-03-15T14:44:55.023597Z","level":"WARN","fields":{"message":"Skipping table action for earlier entry","context":"LogContext({\"deployment\": \"gg-rs-3-14_h9DStCSPCxCG\"})","table":"public.order_line4","pos":"wal<17/97F3ACB8>","cur":"wal<17/97F7B8F0>"},"target":"replicators::noria_adapter"}
{"timestamp":"2023-03-15T14:44:55.023989Z","level":"WARN","fields":{"message":"Skipping table action for earlier entry","context":"LogContext({\"deployment\": \"gg-rs-3-14_h9DStCSPCxCG\"})","table":"public.order_line4","pos":"wal<17/97F4CA90>","cur":"wal<17/97F7B8F0>"},"target":"replicators::noria_adapter"}

ReadySet command line arguments:

sh-4.2$ ps -ef | grep readyset
root     13928     1 75 13:23 ?        00:55:00 /usr/sbin/readyset --database-type postgresql --deployment=gg-rs-3-14_h9DStCSPCxCG --upstream-db-url=postgres://postgres:[email protected]:5432/postgres --username=postgres --password=ReadySet123 --address=0.0.0.0:5433 --log-format=json --prometheus-metrics --query-log --standalone

Observation:
Resident memory size for ReadySet increased from 4 GB to 76 GB, after which it was killed by the OOM killer.

<67965.936652> Out of memory: Kill process 6376 (readyset) score 958 or sacrifice child
<67965.942972> Killed process 6376 (readyset) total-vm:76136904kB, anon-rss:63694048kB, file-rss:0kB, shmem-rss:0kB
<67967.470299> oom_reaper: reaped process 6376 (readyset), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

Get in touch with <~accountid:6386c6e93c26ca7fa0d52b22> for a repro. I will upload logs soon.

From SyncLinear.com | REA-2374

Parse fractional seconds precision on MySQL `CURRENT_TIMESTAMP` function

See: https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_current-timestamp

Describe the problem

The following CREATE TABLE statement fails to parse:

CREATE TABLE `table` (
            `enabled` bit(1) NOT NULL DEFAULT b'0',
            `lastModified` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6)
)

There are two causes:

  • for the `enabled` column, the parser doesn't like b'0' (#54)
  • for the `lastModified` column, it doesn't like the CURRENT_TIMESTAMP argument 6 (this issue)

The statement was generated for a table in Percona XtraDB 8.0.27-18.1.


Originally reported by @cameronbraid
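
Splitting the statement into one table per default may make the two parser gaps easier to test in isolation; a hypothetical pair of minimal repros (table names invented for illustration):

CREATE TABLE `t_bit` (`enabled` bit(1) NOT NULL DEFAULT b'0'); -- bit-value literal (#54)
CREATE TABLE `t_ts` (`lastModified` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6)); -- fractional-seconds argument (this issue)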

Could not find view `q_...`, query caches not being created

I can't get it to work with Docker containers running on my local machine (macOS, Intel chip). So far the MySQL database snapshot works, but queries aren't cached. I've gotten a variety of errors, but the most common ones are about views not being found.

Here's my setup:

# docker-compose.yml
version: '3'
services:
  mysql:
    image: mysql:5.7.22
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    volumes:
      - ./data/db/mysql:/var/lib/mysql
    env_file: .env
    command: 'mysqld --log-bin --binlog-format=ROW --server-id=1'

  readyset:
    image: public.ecr.aws/readyset/readyset:beta-2022-12-15
    ports:
      - 5433:5433
      - 3307:3307
    platform: linux/amd64
    volumes:
      - ./data/readyset:/state
    environment:
      - DEPLOYMENT_ENV=quickstart_github
    env_file: .env
    depends_on:
      - mysql
    entrypoint: >
      sh -c "readyset \
          --prometheus-metrics \
          --standalone \
          --deployment=github-mysql \
          --database-type=mysql \
          --upstream-db-url=mysql://$${MYSQL_ROOT_USER}:$${MYSQL_ROOT_PASSWORD}@mysql:3306/$${MYSQL_DATABASE} \
          --address=0.0.0.0:5433 \
          --username=$${MYSQL_ROOT_USER} \
          --password=$${MYSQL_ROOT_PASSWORD} \
          --query-caching=explicit \
          --db-dir=/state
        "

Do I have to define the query caching patterns somewhere? Swapping the query-caching flag between the options listed in readyset -h doesn't help.

Some common error messages:

readyset_1   | 2023-01-28T17:05:02.356464Z ERROR prepare_select{statement_id=1 create_if_not_exist=false override_schema_search_path=None}: readyset_adapter::backend::noria_connector: getting view from noria failed context=LogContext({"deployment": "github-mysql"}) error=Could not find view `q_448e8f3d42afad54`
readyset_1   | 2023-01-28T17:05:02.358047Z  WARN readyset_adapter::backend: View not found during mirror_prepare() context=LogContext({"deployment": "github-mysql"}) error=Could not find view `q_448e8f3d42afad54`
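
With --query-caching=explicit (the mode configured above), ReadySet caches nothing until a cache is created by hand, so errors about missing q_... views are what uncached query shapes look like. A hedged sketch of the workflow per the ReadySet docs, run through the ReadySet port (3307 here); the users query is a hypothetical stand-in:

SHOW PROXIED QUERIES;                                -- shapes ReadySet has seen but not cached
CREATE CACHE FROM SELECT * FROM users WHERE id = ?;  -- explicitly cache one of them
SHOW CACHES;                                         -- confirm the cache now exists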

SET String type

Is your feature request related to a problem? Please describe.
A lot of popular CMS/e-commerce applications issue SET sql_mode="" alongside their SELECT queries.

Describe the solution you'd like
Add support for string-valued SET statements to ReadySet.

Describe alternatives you've considered
So far, ProxySQL can rewrite each query, but that is not a permanent solution.

Additional context
This could expand ReadySet's applicability and use cases.

Add "compaction" state to "Show readyset tables"

@KwilLuke said:

It looks like the SHOW READYSET TABLES output is determined by whether snapshot_mode is enabled or not. When we disable it, compaction starts, so there can be quite a while before SHOW READYSET STATUS, which only reports completion after all compaction is done, says things are done.

Tables can't be used for caches until compaction is done, hence the "pending" number.

We should probably consider changing SHOW READYSET TABLES to account for compaction if possible.

From SyncLinear.com | REA-2907

deployment error

Downloading ReadySet orchestrator
Welcome to the ReadySet orchestrator.

I found an existing deployment named crmeb. We can continue with this
deployment, or create a new one.

✔ Would you like to continue with the existing deployment? · no
Before proceeding we need to tear down all other deployments.
✔ Tear down other deployments now? · yes
Error: missing field advanced_settings at line 1 column 262

Parse MySQL Bit-Value literals

See: https://dev.mysql.com/doc/refman/8.0/en/bit-value-literals.html

Describe the problem

The following CREATE TABLE statement fails to parse:

CREATE TABLE `table` (
            `enabled` bit(1) NOT NULL DEFAULT b'0',
            `lastModified` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6)
)

There are two causes:

  • for the `enabled` column, the parser doesn't like b'0' (this issue)
  • for the `lastModified` column, it doesn't like the CURRENT_TIMESTAMP argument 6 (#56)

The statement was generated for a table in Percona XtraDB 8.0.27-18.1.

feature request

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Bail if upstream URL is malformed

In at least one case (a Postgres URL missing the database name), a malformed upstream URL results in errors that could be cleaner (indefinitely repeating replicator errors).

We should make this case a permanent failure with a user-friendly message, and investigate and fix other possible cases.
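
For reference, the component missing in the reported case: a libpq-style upstream URL carries the database name as its path segment, e.g. (placeholder values):

postgresql://user:password@host:5432/dbname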

From SyncLinear.com | REA-2917

Bounded Consistency

I couldn't find anything related to this in the docs, so please correct me if I'm wrong.

Is your feature request related to a problem? Please describe.

Systems like FlightTracker/TAO at Facebook or Zanzibar at Google use tokens (tickets, zookies respectively) to put a minimum bound on the required consistency for a request. This enables read-after-write consistency guarantees for only the requests that need it, rather than having to wait for eventual consistency to asynchronously propagate.

Describe the solution you'd like

Some form of bounded consistency to address this, or the ability to read a view as of a specific "time" (some SQL databases have an "AS OF SYSTEM TIME" predicate).
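
For illustration, the predicate alluded to, in CockroachDB's dialect (a hedged example of the concept, not ReadySet syntax today):

SELECT * FROM orders AS OF SYSTEM TIME '-10s'; -- read the table as it was ten seconds ago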

Describe alternatives you've considered

N/A

Additional context

N/A

Webhooks support to persist data to third party end-points

Is your feature request related to a problem? Please describe.
I see potential for a feature that moves third-party data integrations closer to the DB. For example, based on some condition or a CRUD operation on any db table, ReadySet could post data to third-party endpoints via HTTPS POST.

Describe the solution you'd like
Maybe add an easy-to-use, configurable user interface for building such integrations, along with a feature to view all request/response logs for traceability.

Describe alternatives you've considered

Additional context
Since keeping integrations in the application layer can be tricky, in some cases leading to data inconsistency, it is safer to handle data integrations closer to the DB, right before a CRUD operation.

A use case would be sending data generated in a legacy system to a new system.

enh: Support for PostgreSQL Row Level Security (RLS)

Are there any plans to support RLS with PostgreSQL, in particular when a policy is applied to isolate records based on a tenant_id stored in configuration settings?

ALTER TABLE products ENABLE ROW LEVEL SECURITY;

CREATE POLICY product_isolation_policy ON products
USING (tenant_id = current_setting('app.current_tenant'));

And that setting is applied with SET ahead of a query, e.g.:

SELECT set_config('app.current_tenant', tenant_id, false); -- or
SET app.current_tenant = 'x'

See also the multi-tenant example.
