
sig-transaction's People

Contributors

andylokandy, cfzjywxk, ekexium, longfangsong, lysu, myonkeminta, nrc, pramodbiligiri, srisatya12, sticnarf, youjiali1995


sig-transaction's Issues

Building for JDK9+

I noticed that JDK 8 is still needed for building. Is this intentional? Do you plan to keep supporting JDK 8 in the java-client? I'm looking at some warnings like:

[WARNING] /home/peter/work/git/tikv-client-java/src/main/java/org/tikv/common/util/MemoryUtil.java:[31,16] sun.misc.Cleaner is internal proprietary API and may be removed in a future release
[WARNING] /home/peter/work/git/tikv-client-java/src/main/java/org/tikv/common/util/MemoryUtil.java:[32,16] sun.misc.Unsafe is internal proprietary API and may be removed in a future release
[WARNING] /home/peter/work/git/tikv-client-java/src/main/java/org/tikv/common/util/MemoryUtil.java:[33,18] sun.nio.ch.DirectBuffer is internal proprietary API and may be removed in a future release
[WARNING] /home/peter/work/git/tikv-client-java/src/main/java/org/tikv/common/util/FastByteComparisons.java:[23,16] sun.misc.Unsafe is internal proprietary API and may be removed in a future release

There are alternatives in JDK 9+ that don't produce such warnings (which become runtime errors in JDK 16+). How much do you value being able to run on JDK 8? One option is to create alternative classes such as MemoryUtil for JDK 8 vs. JDK 9+ and pack them into a multi-release JAR, which would run correctly across all of JDK 8, 9, ..., 16+. I can try to do that and propose a PR if you're interested.
The prerequisite is to require JDK 9+ (preferably JDK 11) to build the project via Maven. javac would then use the -release 8 option to build most of the code so it runs on JDK 8; only the classes that need a different API on JDK 9+ would be built with the -release 9 option. Finally, a multi-release JAR would be built, which selects the correct classes for the platform it runs on. Are you OK with that approach?

That would be a really nice idea to support other Java versions! You see, before the client was separated from TiSpark, we required Java 8 so that users did not struggle with incompatible Java versions.

So are you OK with the approach of requiring JDK 9+ (or even JDK 11) for building client-java, while the produced jar would still work on JDK 8?

I think there is always a way to build with JDK 8 alongside other Java versions, without breaking the current environment.
Let's try to fulfill both goals while supporting more Java versions :)

The problem is that a build produced with JDK 8 javac will:

  • produce runtime warnings when run on JDK9...15
  • fail on JDK16+

Because it uses encapsulated internal JDK APIs which are deprecated for removal (in JDK 16 they are just disabled, but JDK 17 might remove them altogether).
You see, JDK 8 javac can't build code that uses the replacement API(s) introduced in JDK 9+.
OTOH when building with JDK 9, you can use the -release 8 javac option that does the following:

  • it produces bytecode compatible with JDK 8 runtime
  • it checks that only API(s) present in JDK 8 runtime are used in compiled code

so building with JDK 9 javac and the -release 8 option is equivalent to building with JDK 8 javac.
If you want to build alternative versions of classes like MemoryUtil, where one version is used when running on JDK 8 and the other when running on JDK 9+ (using a multi-release JAR), then only JDK 9 javac is suitable for that.
Alternatively, the Maven pom.xml could be structured so that building a multi-release (and thus JDK 8+ compatible) JAR is performed only when a particular Maven profile is enabled. By default, building with JDK 8 would not enable the profile, but building with JDK 9+ would. Is this acceptable?

I see, it seems logical to me. We could discuss this change in our meeting with other client contributors; if that's okay, I can schedule it for next week. Also, this change has a large enough impact that it may not be merged into release 3.x. Could we consider including it in the next big release, say release-4.0?

I'm going to try to modify the build procedure to use a Maven profile which would be enabled manually. I think this way default build procedure can be left unchanged so the impact is minimal. You can choose to include that in either 3.x or 4.0 if you think the change is suitable for inclusion. Perhaps in 4.0 the profile could be enabled automatically when building with JDK 9+ but in 3.x only manually?

[PROPOSAL] sig-transaction integrates GitHub Discussion and moves stuff

In an offline discussion with @andylokandy yesterday, I learned that we created tikv/sig-transaction to collect material and host a repository for design discussion.

However, design documents should be put under tikv/rfcs, and we have GitHub Discussions for open-ended discussion.

@andylokandy @sticnarf and I reached a consensus that we turn on GitHub Discussions on tikv/tikv and move discussion there under a category named transaction. We would also finalize design documents under tikv/rfcs.

What do you think?

Solve the green-gc related issues

Currently green GC is disabled by default, as it still has some corner cases and issues to solve.

As hibernate region is enabled by default only on the master branch, it's urgent to solve the green GC issues. Moreover, as green GC is already a GA released feature, it's better to solve the issues in the near sprints.

It's also necessary to think over the raftstore bypassing lock scan or collection, as missing a lock may cause data loss, which is critical. Besides, more tests are needed to cover these corner and exceptional paths.

Parallel commit initial design doc

Before we start implementation I think we should have a design doc that is a bit more concrete than the docs in the repo. It doesn't need to be very formal since I think initial implementation will be somewhat experimental. It should contain:

  • Goals of the project
  • Requirements
  • Constraints (i.e., things we cannot change or regress)
  • A description of the core protocol
  • What we plan to implement in the first iteration, how that relates to the finished product, and what we will deliberately not do in the first iteration.
  • People: RACI roles, who is available to work on the project.

Once we have the above in a document, we should figure out the work items and who will do that work, but let's do that as a second stage after the above doc.

Audit usage of commit_ts

Look for any places where a non-unique commit_ts might cause a problem (for example, it means that we cannot order two transactions with the same commit_ts). In the big picture, commit_ts only gives a partial order of transactions.

When looking at tools such as CDC and binlog, please also check if there might be other issues with the parallel commit design.

Docs PRs should only require one LGTM

I'd like to propose that PRs to TiKV transaction code which are only documentation (i.e., only add or change comments and whitespace) require only one LGTM, not two.

Committers, please check your box if you agree, or leave unchecked and make a comment if you do not. As laid out in the governance decision making rules, the decision requires a simple majority, but consensus is preferred.

Check here if you agree

Check here if you disagree

Compatibility between async commit fallback and CDC

If we fall back from async commit using solution 2 in #64, we need to amend the primary lock, clearing the async commit mark. Then there can be two mutations writing the same lock. I am not sure whether CDC can handle this case.

Website or landing page

Currently README.md has all the info, but it's not pretty. The SIG should have a landing page on the TiKV website or its own page to welcome new contributors and provide info and links, etc.

[Discussion] Reconsider the way we write UNIT test

I feel our unit tests in TiKV are somehow not satisfying.

For example, when reviewing tikv#9514, which changed the rollback collapse logic, I found it broke some test cases in mvcc::reader.

These tests are used to test the behaviour of the reader and the engine; rollbacks are used only to provide a testing environment. I.e., they are not supposed to really do the "rollback" work here; what we want is just the Modifies caused by the rollback. But a large part of these tests depends on the Modifies caused by unprotected rollbacks, so if we change the rollback collapse logic, the tests break.

Moreover, who can promise that the "rollback" or "cleanup" function we are using here is right? Maybe the tests for "cleanup", but "cleanup"'s correctness depends on the correctness of the reader, which leads us into circular reasoning.

It might be a bad habit to prepare a unit-testing environment for lower-level components (the scanner in this case) with higher-level components (txn and actions, or more specifically, cleanup in this case); after all, lower-level components should not even know of higher-level components' existence. But a huge number of tests in our code are written this way, and many of them are too hard, complex, or important to change.

Plan a client quest

Once the docs quest is well underway, I think finishing off the client would make a great next quest issue.

Create a duty roster for issue triage

We should have somebody on duty to do initial triage of issues. These are helpfully posted to Slack once per day (I think). They could also triage new PRs to make sure they get reviewed.

As well as a list of names, we should specify the work the person on duty should do.

Committing with calculated TS breaks CDC's resolved_ts constraint

From @MyonKeminta :

CDC requires every incoming "commit" operation to have a commit_ts greater than the current resolved_ts. But CDC receives these events by listening to apply operations. So the following is possible:

  1. Transaction T1 starts with start_ts = 90 while max_ts = 100, gets min_commit_ts = 101, then starts writing;
  2. CDC's resolved_ts advances to 200 and pushes max_ts to 200;
  3. T1's prewrite goes to the apply phase and is observed by CDC;
  4. T1 commits at commit_ts = 101;
  5. CDC receives T1 (90, 101) while its resolved_ts is 200, so its constraint is broken.

To fix this problem, after CDC pushes max_ts, it needs to wait for all operations that fetched the old max_ts but have not yet been applied to finish; only after that can it push its resolved_ts. That sounds kind of hard to implement.
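The broken invariant can be sketched as a tiny model. All names below (observe_commit, calculated_commit_ts, CdcCheck) are invented for illustration, not TiKV APIs: the commit_ts calculated at prewrite is max_ts + 1, and CDC's constraint is simply commit_ts > resolved_ts.

```rust
// Invented names for illustration only, not the TiKV/CDC implementation.
#[derive(Debug, PartialEq)]
pub enum CdcCheck {
    Ok,
    // The constraint commit_ts > resolved_ts was violated.
    ConstraintBroken { commit_ts: u64, resolved_ts: u64 },
}

/// CDC's invariant: every observed commit must carry a commit_ts
/// strictly greater than the current resolved_ts.
pub fn observe_commit(resolved_ts: u64, commit_ts: u64) -> CdcCheck {
    if commit_ts > resolved_ts {
        CdcCheck::Ok
    } else {
        CdcCheck::ConstraintBroken { commit_ts, resolved_ts }
    }
}

/// Async commit's rule from the scenario: min_commit_ts is derived from
/// the max_ts seen at prewrite time, and the final commit_ts may be
/// exactly that value.
pub fn calculated_commit_ts(max_ts_at_prewrite: u64) -> u64 {
    max_ts_at_prewrite + 1
}
```

Replaying the scenario: T1 prewrites while max_ts = 100 and later commits at 101, but CDC has meanwhile advanced resolved_ts to 200, so the check fails.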

Problems about usage of `max_read_ts`

We have done much work on updating and making use of max_read_ts in this PR: tikv/tikv#8363 . But there is still much work to do to make it perfectly correct.

We need to:

  • Update max_read_ts by CDC correctly (#42)
  • Find if it's necessary to keep a max_read_ts for every single region.
  • Optimizations (Some possible ideas about optimizations can be found here)
  • Requests that perform or potentially perform protected rollback operations (like rollback, cleanup, check_txn_status, check_secondary_locks, etc.) need to update max_read_ts with their start_ts. Perhaps the name max_read_ts is not perfectly accurate. This is required to guarantee the correctness of the current way of handling non-globally-unique commit_ts.
  • Handle non-tso read_ts. Discussed here: #21

cc @sticnarf @youjiali1995

Parallel commit testing plan

Beyond simple unit tests, we should have a plan to test parallel commit thoroughly. Some ideas:

  • Running our various test suites in CI with parallel commit enabled.
  • Benchmarking to ensure performance
  • Adding parallel commit to our TLA+ model

More ideas (and elaborating on the above) welcome!

Find a solution for schema version check problem for 1PC

Previously we found that schema version checking might be a problem. Later we thought it could be solved if we define a transaction to be committed iff all its keys are prewritten and the schema version doesn't change between its start_ts and commit_ts. However, that still doesn't solve the problem for 1PC, which has no chance to check the schema version at all. We need to find a solution for this if we want to implement 1PC.


From @sticnarf: Maybe we can have something like max_commit_ts; it would work like a lease.
We send it in prewrite. If the calculated min_commit_ts > max_commit_ts, the prewrite fails (and can fall back).
When doing DDL, we invalidate max_commit_ts to disable async commit (or 1PC), but ensure changes before max_commit_ts are valid with the previous schema.
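A minimal sketch of that lease check, with invented names (try_async_prewrite, PrewriteOutcome); TiKV's real prewrite path is far more involved, so treat this as an illustration of the idea only: compute min_commit_ts from max_ts and fall back when it exceeds the lease.

```rust
// Invented names for illustration; not TiKV's actual prewrite code.
#[derive(Debug, PartialEq)]
pub enum PrewriteOutcome {
    /// Async commit / 1PC may proceed with this commit_ts.
    Commit(u64),
    /// min_commit_ts exceeded the max_commit_ts lease; fall back to 2PC.
    Fallback,
}

pub fn try_async_prewrite(max_ts: u64, max_commit_ts: u64) -> PrewriteOutcome {
    // The commit_ts calculated on the TiKV side must exceed every
    // timestamp already observed for reads.
    let min_commit_ts = max_ts + 1;
    if min_commit_ts > max_commit_ts {
        PrewriteOutcome::Fallback
    } else {
        PrewriteOutcome::Commit(min_commit_ts)
    }
}
```

Invalidating max_commit_ts during DDL then amounts to shrinking the lease so that every async prewrite takes the Fallback path.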

Pending async commit transactions don't necessarily block read

Currently the min_commit_ts of async commit transactions cannot be advanced like that of legacy transactions. So if our read timestamp is larger than the min_commit_ts of an async-commit lock, we can only wait until the lock expires and then resolve the lock.

Actually, we can use the CheckSecondaryLocks API to achieve something like advancing min_commit_ts.

We can add caller_start_ts (like the one in the CheckTxnStatus API) to the request. TiKV uses this timestamp to update its max_ts.

message CheckSecondaryLocksRequest {
    ...

    // The start timestamp of the transaction which this request is part of.
    uint64 caller_start_ts = 4;
}

After the change, we don't need to wait until the TTL expires before checking secondary locks. If the TTL has not expired, we set caller_start_ts; otherwise, we keep caller_start_ts zero.

If caller_start_ts is zero, CheckSecondaryLocks writes rollbacks if the lock does not exist, which is the current behavior. If caller_start_ts is non-zero, CheckSecondaryLocks needn't write anything if the lock does not exist.

After calling CheckSecondaryLocks with caller_start_ts, we can skip the locks of this transaction, just as if we had advanced the transaction's min_commit_ts: if a following prewrite hits the same TiKV, its min_commit_ts must be greater than the caller_start_ts due to the updated max_ts; if it is sent to a different TiKV, the leader must have changed, so the max_ts should have been updated to a more recent timestamp from PD, and thus the commit_ts should also be greater than the caller_start_ts.

However, we still need to weigh the benefit against the cost. It is heavy to check all secondaries, so maybe we should not do this too eagerly (maybe only after several backoffs?). And the benefit is not so big: async commit transactions are typically small, and it is unusual for prewriting them to take a long time. So personally I don't think this is a high-priority task.
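The proposed behaviour can be summarised in a small decision function. This is a hedged sketch with invented names (check_secondary, SecondaryCheckAction), not the actual TiKV implementation:

```rust
// Invented names for illustration; not TiKV's CheckSecondaryLocks code.
#[derive(Debug, PartialEq)]
pub enum SecondaryCheckAction {
    /// The lock still exists: nothing to write.
    LockFound,
    /// TTL expired (caller_start_ts == 0) and the lock is gone:
    /// write a rollback record — the current behaviour.
    WriteRollback,
    /// caller_start_ts is set: only bump max_ts, write nothing.
    AdvanceMaxTsOnly,
}

pub fn check_secondary(
    lock_exists: bool,
    caller_start_ts: u64,
    max_ts: &mut u64,
) -> SecondaryCheckAction {
    if caller_start_ts != 0 {
        // TiKV uses caller_start_ts to update its max_ts, so later
        // prewrites of the same transaction get a larger min_commit_ts.
        *max_ts = (*max_ts).max(caller_start_ts);
    }
    if lock_exists {
        SecondaryCheckAction::LockFound
    } else if caller_start_ts == 0 {
        SecondaryCheckAction::WriteRollback
    } else {
        SecondaryCheckAction::AdvanceMaxTsOnly
    }
}
```

The key property sketched here is that a non-zero caller_start_ts never writes anything; it only advances the max_ts watermark.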

Docs quest planning

I'd like a 'docs quest' to be one of the first activities for the SIG.

Questions

  • How to track progress and list tasks? Tracking issue, milestone, project? In this repo or TiKV repo?
  • How to promote? TiKV blog post, Twitter, Slack. Where else?
  • Who will mentor?
  • How to manage the quest?
  • Who else should be looped in? (Community/ecosystem folk? i18n?)

Tasks

  • define goals
  • Make the list of tasks
  • Prepare issue text
  • Prepare blog post
  • find existing docs
  • find some general resources

Keep documentation of async commit up-to-date

We keep changing and improving our design during the development of async commit. However, our design document is now out of date and doesn't match the real implementation well. We need to update the document so that people new to async commit can get started more easily.

Handle CommitTsTooLarge error in an efficient way

As a solution to the schema version check issue (#51), we added a max_commit_ts limit to async commit's prewrite requests. When the calculated min_commit_ts exceeds max_commit_ts, a CommitTsTooLarge error is thrown. We need to find a proper way to handle this error; otherwise, when the load is high, the failure rate of async commit might be significant.

Solution 1:

When TiDB receives CommitTsTooLarge error, check the schema version again.

  • If the schema version has not changed, update the max_commit_ts and retry (no need to retry already-successfully-prewritten keys)
  • Otherwise, if the transaction is amended, update the primary lock first to update the secondary list, then continue prewriting the remaining keys. Note that the primary must be updated before all keys are prewritten to guarantee consistency.

Solution 2:

In solution 1, if the load is high enough, the prewrite is still likely to fail after a retry. Another choice is to fall back to a non-async-commit transaction when the CommitTsTooLarge error occurs. This might be more complicated to implement than solution 1; if we always rewrite the primary lock to a non-async-commit lock first when falling back, the implementation might be easier. We should confirm the correctness before adopting this approach.
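The two solutions can be combined into one client-side decision, sketched below with invented names (on_commit_ts_too_large, Reaction); treat it as an illustration of the retry-then-fallback idea, not TiDB's actual code:

```rust
// Invented names for illustration; not TiDB's actual error handling.
#[derive(Debug, PartialEq)]
pub enum Reaction {
    /// Schema unchanged: refresh max_commit_ts and retry the failed keys
    /// (solution 1).
    RetryWithNewLease { new_max_commit_ts: u64 },
    /// Schema changed or retries exhausted: fall back to a
    /// non-async-commit transaction (solution 2).
    FallbackToTwoPc,
}

pub fn on_commit_ts_too_large(
    schema_unchanged: bool,
    retries_left: u32,
    fresh_max_commit_ts: u64,
) -> Reaction {
    if schema_unchanged && retries_left > 0 {
        Reaction::RetryWithNewLease { new_max_commit_ts: fresh_max_commit_ts }
    } else {
        Reaction::FallbackToTwoPc
    }
}
```

Bounding retries_left captures the observation that under high load, solution 1 alone may keep failing, so a fallback path is still needed.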

Prove correctness of current implementation of handling non-globally-unique commit_ts

To implement async commit, we need to support using a timestamp that's not a globally unique TSO as a commit_ts. The hardest problem we met is the key collision in the write CF. This problem was solved in this PR. What it does is:

  1. For all operations that need to write a rollback record, check the write CF before writing it. If another transaction's write record with the same key and commit_ts as the current rollback operation is found, add a has_overlapped_rollback flag to it if this is a protected rollback, or do nothing if it is not protected.
  2. For all commit operations, we found that we can rule out all possibilities of a commit overwriting an already-existing rollback record, so we don't need any additional checks.
  3. Also note that the PR allows prewriting on a key where there's already a commit record whose commit_ts equals the current start_ts, as long as the record is not a rollback record and does not have its overlapped_rollback flag set. Previously this case reported WriteConflict.
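Point 1 above can be sketched as a small decision table (rollback_action and RollbackAction are invented names for illustration, not the functions in the PR):

```rust
// Invented names for illustration; not the actual TiKV rollback code.
#[derive(Debug, PartialEq)]
pub enum RollbackAction {
    /// No collision in the write CF: write an ordinary rollback record.
    WriteRollback,
    /// Another transaction committed at the same key and timestamp and
    /// this rollback is protected: set has_overlapped_rollback on that
    /// commit record instead of overwriting it.
    SetOverlappedFlag,
    /// Collision, but the rollback is unprotected: write nothing.
    DoNothing,
}

/// Decide what a rollback operation should write, given whether another
/// transaction's commit record already occupies (key, commit_ts).
pub fn rollback_action(commit_record_exists: bool, protected: bool) -> RollbackAction {
    match (commit_record_exists, protected) {
        (false, _) => RollbackAction::WriteRollback,
        (true, true) => RollbackAction::SetOverlappedFlag,
        (true, false) => RollbackAction::DoNothing,
    }
}
```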

The problem is, I think, that we need to prove the second point above in a more rigorous way (e.g. with TLA+). For reference, here is how we came to the conclusion that commit operations don't need to check for overlapping rollbacks.


Consider that we are performing a commit operation for transaction T2 on a key, and another transaction T1 also affects this key but rolled back.

First of all, we can push the max_read_ts before performing rollbacks, so prewrites after the rollback will surely have their min_commit_ts greater than T1.start_ts. So we just need to consider what happens if, on one of the keys affected by both transactions, T1's rollback operation happens between the prewriting and committing of T2.

  1. If T1 is an optimistic transaction, then its rollback record can be safely overwritten by T2's commit, since T2's commit can still cause WriteConflict for T1's later-coming prewrite requests.[1]
  2. If T1 is a pessimistic transaction, then it needs to write rollback records only once it has entered the prewrite phase, which means it should have already successfully acquired all its pessimistic locks. There are two cases, depending on whether T1 needs to acquire a pessimistic lock on the current key:
    • 2.1. If T1 needs to acquire a pessimistic lock on the key, then the rollback must happen after acquiring that lock. The pessimistic lock cannot be acquired after T2's prewrite, since T2's prewrite writes a lock on the key. If the pessimistic lock is acquired before T2's prewrite, then T2's prewrite cannot succeed before T1 rolls back. So in this case it is impossible for T1's rollback to happen between T2's prewrite and commit.
    • 2.2. If T1 doesn't need the pessimistic lock, then it must be an index key, so T1 and T2 must have another key in common that needs a pessimistic lock. Since the index key should not be T1's primary key, if T1 is a non-async-commit transaction, we don't need to care whether its rollback record on secondaries is overwritten. However, if T1 is an async-commit transaction, it may need to write a protected rollback record on secondaries when someone calls check_secondary_locks on it. (WIP here...)[2]


[1]: This is currently broken by allowing optimistic prewriting on a key where there's already a commit record whose commit_ts equals the current start_ts, as long as the commit record is not a rollback (here). This should be fixed later, but is relatively easy, hopefully.

[2]: I just found that this case seems to be incorrect while I was drafting this issue. We need to confirm it and find some way to fix it if necessary.

Update TLA+ Spec for async commit

TiKV has recently been building a new feature in the transaction model named async commit. It's an optimization that reduces the commit latency from 2PC to 1PC. However, this feature has not yet been reflected in the TLA+ specs.

This issue tracks updating the spec and also running TLC model checker on the new model to gain more confidence on the async commit design.

Documentation Quest

TiKV needs better documentation. The transaction SIG will do its part by documenting the concepts and code behind the TiKV/TiDB transaction system.

If the quest is successful, it should be easier for

  • users to understand the isolation properties and performance characteristics of TiKV/TiDB
  • users to reason about how to configure their transaction usage
  • new contributors to understand TiKV transactions and write code to change their implementation
  • outsiders to survey and understand TiKV's transaction implementation, and compare it to other databases

See the leader board for a ranking of documentation questers.

Mentors

If you want advice, help, or review, the following people are available and will prioritize quest issues:

Resources

Existing documentation and other resources are listed in the doc directory.

Tasks

To claim a task:

  • file an issue for it (either in this repository or TiKV)
  • include a reference to this issue, and ask a mentor (listed above) to add the issue link here
  • a mentor will be assigned to that issue and they will keep this issue up to date with the task's status
  • let the mentor know if you need help or if someone else should take on the task

Tasks may or may not have a related issue. Tasks with names are being worked on by that user and should have an open issue for discussion. Boxes get ticked when the documentation task is complete.

High level documentation

Describe the underlying concepts of transactions (mostly by linking to other material, but we should organize and document that material, ensure it covers everything, and fill in gaps where necessary). Describe, at an algorithmic level, the way transactions are implemented in TiKV.

  • Transactions overview, data format, latches, MVCC (TiDB docs, TinyKV docs, Xuelian's blog, VLDB paper)
  • Optimistic locking
  • Pessimistic locking (blog post)
  • Isolation/consistency properties (Jepsen report)
  • Lock management/waiter management, deadlock detection
  • GC, green GC
  • Pipelined pessimistic locking
  • Large transactions (blog post)
  • Parallel commit (repo)
  • 1pc
  • Replica read (aka follower read)
  • Timestamps and HLC
  • Alternative implementations

Module documentation

Describe the modules of the TiKV implementation. This should cover how the code works, why that design was chosen, and how it fits in with the larger transaction system. We should have documentation per concept, which should roughly correspond to a Rust module, but is unlikely to match exactly.

MongoDB has lots of great examples of module-level docs, e.g., storage.

Code comments

Not every type and function needs a comment, but it would be great to document the more important and more complex code.

TiKV:

Rust Client:

TiDB:

TODO more tasks

Need to disable async commit and 1PC during the upgrade from TiDB < 5.0

Async commit and 1PC are fundamental changes to the transaction protocol in TiDB. There will be new kinds of locks that need to be handled in new ways. Therefore, it is very hard for TiDB < 5.0 to handle these locks well, for example:

  1. The BatchGet API adds a new response-level error for returning the memory lock (see tikv/tikv#9077). TiDB 5.0 will be able to read the memory lock, so it can handle the case correctly. However, TiDB < 5.0 cannot see this newly added field and requires all KV pairs to be returned in the pairs field, which we cannot accomplish in TiKV 5.0 with async commit.

  2. When there is one 5.0 TiDB instance while the other instances are 4.0, during a rolling update, a 4.0 TiDB may read an async-commit lock prewritten by the 5.0 TiDB. However, TiDB 4.0 cannot handle these async-commit locks; it will just retry and never succeed. Then this TiDB connection will hang for a long time and prevent a graceful shutdown during the rolling update.

Reading group: nominations for January 2021

What would you like to read in January 2021?

Nominate papers in the comments. Any research paper or white paper which covers distributed transactions or related work is suitable, both recent work and classics.

How to change write CF?

Constraints:

  • backwards compatibility including rolling updates
  • distinguish between rollback and write where the rollback's start_ts is the same as the write's commit_ts

Draft a launch blog post

To post on the TiKV and my personal blog (and anywhere else we like). Announce the group, say what we do, how to join, etc. Initial activities.

Any other points we should be sure to add?

Identity

We should have (or check) at least:

  • a place to store info (this repo)
  • a place to store private info
  • web page (#7)
  • an email address
  • GitHub team

Anything else?

Adapt async commit/1pc with multi data center

Background

We are going to finish async commit in 5.0, and there's going to be another important feature in 5.0: Local/Global transaction in cross DC deployment.

In that feature, there may be multiple PDs in a cluster allocating timestamps simultaneously. There are two kinds of timestamps: local timestamps and global timestamps. A local timestamp is guaranteed to be globally unique (although this is not yet implemented), but is not guaranteed to be globally monotonic. A global timestamp is guaranteed to be greater than all previously allocated (global or local) timestamps and less than all (global or local) timestamps that will be allocated. A transaction can be a local transaction (that uses a local timestamp) or a global transaction (that uses a global timestamp).

The problem

The problem is that if multi-DC is used with async commit/1PC enabled, the commit_ts calculation becomes complicated. We can't record only one max_ts any more. For example, in this case:

  1. Read k1 from DC1, start_ts (allocated from local allocator 1) = 100, then the max_ts is updated to 100;
  2. Write k2 from DC2, start_ts (allocated from local allocator 2) = 50, calculated commit_ts = 101 (max_ts + 1)
  3. Commit k2 from DC2
  4. Read k2 from DC2, start_ts (allocated from local allocator 2) = 51, then the previous transaction's result is invisible.

Possible solution

Maybe we need to maintain a map (DC -> max_ts) instead of a single max_ts in TiKV. TiDB's requests to TiKV need to mark which DC they belong to, or whether they are part of a global transaction. TiKV updates (or gets) the max_ts corresponding to that DC, or updates all max_ts-es (or gets the maximum) if it's a global transaction. When a leader transfer, a region merge, or anything else happens that requires updating the max_ts, get a global TS from PD and update all max_ts-es.

There might still be many corner cases. For example, the number of DCs may change dynamically, which introduces more complexity in maintaining max_ts. One way is to record both a local max_ts and a global max_ts for ts calculation, like this:

struct MaxTs {
    global_max_ts: AtomicU64,
    local_max_ts: [AtomicU64; MAX_DC_COUNT],
}
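A hedged sketch of how this struct might be used. The method names (update_local, update_global, max_ts_for_local) and the fixed MAX_DC_COUNT are invented for illustration; a real design would also need to handle DC membership changes, leader transfers, and so on:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

const MAX_DC_COUNT: usize = 4; // illustration only; DCs may vary dynamically

pub struct MaxTs {
    global_max_ts: AtomicU64,
    local_max_ts: [AtomicU64; MAX_DC_COUNT],
}

impl MaxTs {
    pub fn new() -> Self {
        MaxTs {
            global_max_ts: AtomicU64::new(0),
            local_max_ts: std::array::from_fn(|_| AtomicU64::new(0)),
        }
    }

    /// A read with a local timestamp only bumps its own DC's watermark.
    pub fn update_local(&self, dc: usize, ts: u64) {
        self.local_max_ts[dc].fetch_max(ts, Ordering::SeqCst);
    }

    /// A global timestamp is above every allocated timestamp, so it bumps
    /// the global watermark and every local one.
    pub fn update_global(&self, ts: u64) {
        self.global_max_ts.fetch_max(ts, Ordering::SeqCst);
        for local in &self.local_max_ts {
            local.fetch_max(ts, Ordering::SeqCst);
        }
    }

    /// The max_ts used for commit_ts calculation of a local transaction
    /// in `dc`: the larger of its DC watermark and the global watermark.
    pub fn max_ts_for_local(&self, dc: usize) -> u64 {
        self.local_max_ts[dc]
            .load(Ordering::SeqCst)
            .max(self.global_max_ts.load(Ordering::SeqCst))
    }
}
```

fetch_max keeps each watermark monotonic without locks, which matches how a per-DC map of max_ts-es would need to behave under concurrent reads and writes.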

Tracking issue: async commit

Design docs, etc are in this repo in design/parallel-commit

All open issues

  • Audit uses of commit ts (to support future non-unique timestamps) #19 - seems that only issues are with tools now
  • Replica read #20
  • Syncing timestamps #21
  • Testing plan #23
  • Problems about usage of max_read_ts #45
  • Schema check issue #51
  • Async commit does not ensure linearizability tikv/8589

Open PRs:

Work in progress:

Plan and status

Demo-able version

The goal is to demonstrate async commit working with roughly the expected performance profile, though not fully optimised. Some corner cases may not be correct. Demonstrate that the work is feasible and beneficial.

Currently in implementation.

Outstanding issues:

5.0 version

Address all outstanding issues. Handle corner cases and tools. Test, benchmark, and optimise.

Currently in implementation.

Outstanding issues:

Further work

Not blocking 5.0 release.

High level documentation

These things are not code-specific and span TiKV and its client. They would be best documented in a separate docs area, rather than inline in the code, and/or in blog posts.

  • Transactions overview, data format, latches, MVCC (TiDB docs, TinyKV docs, Xuelian's blog, VLDB paper)
  • Optimistic locking
  • Pessimistic locking (blog post)
  • Isolation/consistency properties (Jepsen report)
  • Lock management/waiter management, deadlock detection
  • GC, green GC
  • Pipelined pessimistic locking
  • Large transactions (blog post)
  • Parallel commit (repo)
  • 1pc
  • Replica read (aka follower read)
  • Timestamps and HLC
  • Alternatives

Plan initial activities

Some ideas:

  • documentation quest
  • host a talk
  • first paper for discussion in a reading group
