
Comments (8)

GanZiheng commented on August 16, 2024

A question here: I see we keep the order of `Request`s sent to the write channel consistent with their commit_ts, and I wonder why. Is it to keep the WAL in order?

That's because transactions must be stored contiguously in the WAL to achieve atomicity. Badger checks the consistency of the WAL and truncates it if necessary.

I think it may not be necessary to use a transaction or go through the write channel. We could use the oracle to perform the conflict check and obtain a commit timestamp, and once we have the commit timestamp, just update the file and place it on level 0?
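The WAL consistency check mentioned above can be sketched roughly as follows. This is a simplified stand-in, not Badger's or agatedb's real structures: replay records in order and truncate at the first corrupt record or trailing partial transaction, so a half-written transaction never becomes visible.

```rust
// Simplified stand-in for a WAL record; real records carry keys,
// values, and a per-record checksum.
#[derive(Clone)]
struct WalRecord {
    checksum_ok: bool, // whether the record's checksum verified
    end_of_txn: bool,  // whether this record closes its transaction
}

/// Number of leading records that form whole, valid transactions;
/// everything past this offset would be truncated on recovery.
fn valid_prefix_len(records: &[WalRecord]) -> usize {
    let mut keep = 0;
    for (i, rec) in records.iter().enumerate() {
        if !rec.checksum_ok {
            break; // corruption: stop replaying here
        }
        if rec.end_of_txn {
            keep = i + 1; // transaction fully present, safe to keep
        }
    }
    keep
}
```

Because only records up to the last complete transaction survive, a commit is either fully replayed or fully dropped, which is the atomicity the contiguous layout buys.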

from agatedb.

wangnengjie commented on August 16, 2024

We could use the oracle to perform the conflict check and obtain a commit timestamp

The oracle's API requires a txn struct, and I just don't want to break the API. Also, the oracle's conflict check needs the hashes of the transaction's write keys, which are stored in the `conflict_keys` field of `Transaction`.
So it's unavoidable to record the write keys in ingested files.

pub(crate) fn new_commit_ts(&self, txn: &mut Transaction) -> (u64, bool) {

fn has_conflict(&self, txn: &Transaction) -> bool {

once we have the commit timestamp, just update the file and place it on level 0

Due to the particularity of level 0, putting SSTs only into level 0 may cause significant read amplification and write stalls. Putting files in lower levels reduces the read/write amplification caused by compaction, but causes space amplification.

I had a look at RocksDB's strategy.

We pick the lowest level in the LSM-tree that satisfies these conditions:

  • The file can fit in the level
  • The file's key range doesn't overlap with any keys in upper levels
  • The file doesn't overlap with the outputs of running compactions going to this level

That makes sense, and I had forgotten the third point.

As for the second point, it's because RocksDB guarantees that newer keys stay in upper levels, which is unnecessary in agatedb.

I'll try ingesting a file in RocksDB and see the result.
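The RocksDB-style target-level pick quoted above can be sketched roughly as below. The three predicates are hypothetical closures standing in for real checks against the LSM tree and CompactStatus; this is not agatedb's or RocksDB's actual API.

```rust
// Pick the lowest (bottommost) level that satisfies all three
// conditions from the RocksDB strategy; fall back to level 0.
fn pick_ingest_level(
    max_level: usize,
    fits_in_level: impl Fn(usize) -> bool,
    overlaps_upper_levels: impl Fn(usize) -> bool,
    overlaps_compaction_output: impl Fn(usize) -> bool,
) -> usize {
    for level in (1..=max_level).rev() {
        if fits_in_level(level)
            && !overlaps_upper_levels(level)
            && !overlaps_compaction_output(level)
        {
            return level;
        }
    }
    0 // level 0 always accepts the file, at the cost of stalls
}
```

Walking from the bottom up means the file lands as deep as the three conditions allow, which minimizes later compaction work on it.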


GanZiheng commented on August 16, 2024

The oracle's API requires a txn struct, and I just don't want to break the API. Also, the oracle's conflict check needs the hashes of the transaction's write keys, which are stored in the `conflict_keys` field of `Transaction`.
So it's unavoidable to record the write keys in ingested files.

We certainly should record the write keys for the conflict check. I mean adding a method to the oracle that obtains a commit timestamp for the file-ingestion scenario.
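Such an ingest-specific entry point might look roughly like the sketch below, mirroring what `new_commit_ts`/`has_conflict` do but taking the ingested file's write-key hashes directly instead of a full `Transaction`. Field and method names here are illustrative, not agatedb's real ones.

```rust
use std::collections::HashSet;

struct Oracle {
    next_ts: u64,
    // (commit_ts, write-key hashes) of recently committed writes
    committed: Vec<(u64, HashSet<u64>)>,
}

impl Oracle {
    /// Returns Some(commit_ts) if no transaction committed after
    /// `read_ts` wrote any of `ingest_keys`; otherwise None (conflict).
    fn new_ingest_ts(&mut self, read_ts: u64, ingest_keys: &HashSet<u64>) -> Option<u64> {
        let conflict = self
            .committed
            .iter()
            .filter(|(ts, _)| *ts > read_ts)
            .any(|(_, keys)| !keys.is_disjoint(ingest_keys));
        if conflict {
            return None;
        }
        self.next_ts += 1;
        let commit_ts = self.next_ts;
        // Record the ingested keys so later commits conflict-check
        // against them, just like ordinary transactions.
        self.committed.push((commit_ts, ingest_keys.clone()));
        Some(commit_ts)
    }
}
```

This keeps the existing `Transaction`-based API untouched while reusing the same conflict-detection rule for ingestion.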

Due to the particularity of level 0, putting SSTs only into level 0 may cause significant read amplification and write stalls. Putting files in lower levels reduces the read/write amplification caused by compaction, but causes space amplification.

Yes, it will cause write stalls if many files are ingested at the same time, but in my opinion it will not cause read amplification, since we always read every SSTable in the LSM tree.

Maybe we could take a cue from compaction into the base level and consider putting the file into the base level first?


wangnengjie commented on August 16, 2024

We certainly should record the write keys for the conflict check. I mean adding a method to the oracle that obtains a commit timestamp for the file-ingestion scenario.

Okay, I got it.

Yes, it will cause write stalls if many files are ingested at the same time, but in my opinion it will not cause read amplification, since we always read every SSTable in the LSM tree.

Maybe we could take a cue from compaction into the base level and consider putting the file into the base level first?

Oh yes, I was wrong about level 0's read amplification, but the higher we place ingested files, the more read/write amplification from compaction we get.

What does level base mean? The target level for L0 to compact into, or something else?


GanZiheng commented on August 16, 2024

What does level base mean? The target level for L0 to compact into, or something else?

Yes, the target level into which level 0 will merge its data.


wangnengjie commented on August 16, 2024

About how to check whether files to be ingested overlap with running compaction outputs:

I saw we have CompactStatus to track running compactions. When picking and checking $L_i$, I should check that the file does not overlap with levels[i].ranges. But one situation may arise.

Suppose a running compaction picks the range [10..50] at $L_{i-1}$, and we have the tables below at $L_i$:

Level i-1: [10..50] <- pick this to compact to Level i
Level   i: [1..5] [20..30] [40..60]

The next_range (on the next_level) we record for tracking is [20..60], but the actual compaction output is [10..60]. So if the file to ingest has range [6..15], it doesn't overlap [20..60] but does overlap [10..60].

So I can't use CompactStatus to check overlap with the current implementation.

Any good ideas? I think we could extend next_range with this_range, but I'm not sure whether that's feasible. @GanZiheng
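A minimal sketch of this fix, under a simplified stand-in `KeyRange` (not agatedb's real type): extend the tracked next_range with this_range so the ingestion overlap check is conservative about the eventual compaction output.

```rust
// Simplified closed key range over u64 keys.
#[derive(Clone, Copy, Debug, PartialEq)]
struct KeyRange {
    left: u64,
    right: u64,
}

impl KeyRange {
    // Smallest range covering both inputs.
    fn extend(self, other: KeyRange) -> KeyRange {
        KeyRange {
            left: self.left.min(other.left),
            right: self.right.max(other.right),
        }
    }
    // Closed-interval intersection test.
    fn overlaps(self, other: KeyRange) -> bool {
        self.left <= other.right && other.left <= self.right
    }
}
```

With this_range [10..50] and next_range [20..60], the extended range is [10..60], so a file with range [6..15] is correctly rejected even though it misses [20..60].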


GanZiheng commented on August 16, 2024

I think we could extend next_range with this_range

I think that's ok.

By the way, I think this problem only happens when ingesting files, and will not appear in the current implementation.


wangnengjie commented on August 16, 2024

By the way, I think this problem only happens when ingesting files, and will not appear in the current implementation.

Yes, it only happens when ingesting files. I didn't make myself clear.

