blockchain_big_bang's People

Contributors

mitchellpkt, vegaminer


blockchain_big_bang's Issues

Shopping list of dynamic blocksize algos to simulate

As we consider ways to adjust the dynamic block size algorithm to limit absurd growth, it will be necessary to model all of the options on a variety of transaction volume scenarios (low traffic, high traffic, bursts, attacks, see #1).

Please use this issue as a forum to brainstorm and share your ideas to be tested in the models.

The current protocol has only short-term memory, lasting about 3 hours: the block size limit is twice the median block size of the last 100 blocks. The algorithm is therefore unaware whether the block size has increased by 10x in the last day or 1000x in the last week.
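
The current rule can be sketched in a few lines (a simplified model; the ~300 kB sizes in the example are illustrative, not protocol constants):

```python
from statistics import median

def current_max_block_size(recent_sizes, window=100, multiplier=2):
    """Current rule: cap the next block at twice the median
    size of the last 100 blocks (~3 hours at 2-minute blocks)."""
    return multiplier * median(recent_sizes[-window:])

# With 100 blocks of ~300 kB, the next block may be up to ~600 kB:
limit = current_max_block_size([300_000] * 100)
```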

Proposed methods:

Long/short-term memory (2 medians)

Use the lower of two medians, e.g.

  • Short term: limit growth to 2x median of last 100 blocks (3 hours) < CURRENT METHOD
  • Long term: limit growth to 4x median of last 720 blocks (1 day)
    This allows short-term responsiveness, but adds reasonable limits on growth at longer timescales.

(Isthmus suggested & modeled in the big bang notebook.) I have not tuned these parameters whatsoever; the 4x limit over 1 day, for example, is an arbitrary number.
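
A minimal sketch of the two-median rule, using the untuned parameters above (the traffic numbers in the example are made up):

```python
from statistics import median

def two_median_limit(sizes):
    """Take the more restrictive of a short-term and a long-term cap."""
    short = 2 * median(sizes[-100:])   # ~3 hours (current method)
    long_ = 4 * median(sizes[-720:])   # ~1 day
    return min(short, long_)

# Steady ~300 kB traffic: short-term cap (600 kB) binds.
steady = [300_000] * 720

# A sudden burst of 1 MB blocks in the last 100 blocks: the long-term
# median is still ~300 kB, so the 1.2 MB long-term cap binds instead
# of the 2 MB short-term cap.
burst = [300_000] * 620 + [1_000_000] * 100
```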

Keep one median, adjust to longer memory

e.g. calculate maximum block size based on last 5000 blocks instead of last 100 blocks. (This might just give us the worst of both worlds)

Span multi-scale

Select the lowest value of the median block size at multiple timescales. This is one possible generalization of the long/short-term memory method described above. Take the lowest of:

  • 2x median of last 100 blocks (3 hours)
  • 4x median of last day
  • 10x median of last week
  • 100x median of last month
  • 10000x median of the last year

(Isthmus suggested & modeled in the big bang notebook.) I have not tuned these parameters whatsoever; the 100x limit over 1 month, for example, is an arbitrary number.
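
A sketch of the multi-scale variant. The window lengths assume ~720 blocks/day at Monero's 2-minute target (the week/month/year block counts are approximate), and the multipliers are the untuned examples above:

```python
from statistics import median

# (multiplier, window in blocks); windows assume ~720 blocks/day
SCALES = (
    (2, 100),           # ~3 hours
    (4, 720),           # ~1 day
    (10, 5_040),        # ~1 week
    (100, 21_600),      # ~1 month (approx.)
    (10_000, 262_800),  # ~1 year (approx.)
)

def multi_scale_limit(sizes, scales=SCALES):
    # Take the most restrictive cap across all timescales.
    # (Windows longer than the chain's history fall back to the
    # whole history via Python's slice semantics.)
    return min(m * median(sizes[-w:]) for m, w in scales)
```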

Switch from multiplicative to additive limit

This turns the exponential growth of the cap into linear growth.
(Idea from smooth and suraeNoether)
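
To illustrate the difference, compare the two rules applied repeatedly under sustained full blocks (the 300 kB starting size and additive step are arbitrary choices for illustration):

```python
def multiplicative_limit(med, factor=2):
    """Cap compounds each window: exponential growth when blocks stay full."""
    return factor * med

def additive_limit(med, step=300_000):
    """Cap grows by a fixed step each window: linear growth."""
    return med + step

# After 10 windows of full blocks, the multiplicative cap has grown
# 1024x, while the additive cap has grown 11x.
m = a = 300_000
for _ in range(10):
    m, a = multiplicative_limit(m), additive_limit(a)
```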

Add momentum term

Not quite sure what this means or who suggested it first. Feel free to explain in the comments.

Penalize miners for creating blocks smaller than a median

I haven't fully thought this through - found it in surae's summary, but not sure who suggested it first. Feel free to explain in the comments. (I'm curious how this works when traffic is low)

Block size calculation includes hash rate

Include the network hashrate (estimated from difficulty and timing) in the calculation of the maximum block size. The notion is that this attaches a PoW barrier to blockchain bloat, and uses hash rate as a rough proxy for actual adoption (certainly more meaningful than transaction volume). Since we can't forecast hash rate, we would model several different scenarios (increasing, decreasing, volatile, etc.).
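
One hypothetical way to fold hash rate into the cap (the scaling form, the 1x floor, and the 4x ceiling are all assumptions for illustration, not a concrete proposal):

```python
from statistics import median

def hashrate_aware_limit(recent_sizes, hashrate, baseline_hashrate, window=100):
    # Hypothetical sketch: scale the usual 2x-median cap by relative
    # hash rate, floored at 1x and capped at 4x (both cutoffs arbitrary).
    scale = min(max(hashrate / baseline_hashrate, 1.0), 4.0)
    return 2 * median(recent_sizes[-window:]) * scale
```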

Add your ideas in the comments so we can model them.

Incorrect estimations for the current algorithm

At the end of the 18th hour:

    # Current algorithm
    Blocksize = 614400.0 kB
    >>> Blockchain size = 117.5009 GB

This is where the estimate goes wrong: the current Monero node source code has a sanity check for all incoming blocks; see core::handle_incoming_block in src/cryptonote_core/cryptonote_core.cpp:

    if(block_blob.size() > get_max_block_size())
    {
      LOG_PRINT_L1("WRONG BLOCK BLOB, too big size " << block_blob.size() << ", rejected");
      bvc.m_verifivation_failed = true;
      return false;
    }

get_max_block_size() returns CRYPTONOTE_MAX_BLOCK_SIZE, which is set to 500000000 bytes (488281.25 KB), so the blockchain growth rate can never exceed ~335.271 GiB/day with the current code.
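
The arithmetic behind that ceiling, assuming Monero's 2-minute block target:

```python
CRYPTONOTE_MAX_BLOCK_SIZE = 500_000_000   # bytes, from the source above
BLOCKS_PER_DAY = 24 * 60 // 2             # 2-minute block target -> 720

max_daily_growth_bytes = CRYPTONOTE_MAX_BLOCK_SIZE * BLOCKS_PER_DAY
max_daily_growth_gib = max_daily_growth_bytes / 1024**3   # ~335.27 GiB/day
```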

List of transaction volume patterns/scenarios to test

We will be modeling different methods (see #2) for deciding how to add an upper bound to dynamic block size expansion. Preliminary work is available in a Jupyter Notebook

We must consider how each algorithm will perform under a variety of circumstances:

  • Low traffic
  • Medium traffic
  • Heavy traffic
  • Rapid adoption
  • Attack traffic (post-penalty fee regime)

What else?

We want to be sure to model all of the normal use patterns AND every edge case we can imagine. Add your ideas in the comments, please. If possible, please include a qualitative description and pseudocode (not mandatory).

"Tipping point" for block size

What size blocks are necessary to start crashing nodes? From Monero Research Lab meeting on 26-Nov, discussing an edge case that creates blocks so large that they knock out nodes:

@Gingeropolous: ... useful information needed to address the issue: current node processing ability. What is the existing blocksize tipping point for processing

Can we get some experimental data on this? Perhaps on a few different computers. This will be crucial in guiding selection of parameters for the dynamic blocksize algorithm.

(Perhaps instead of Monero testnet, we should make a separate hostile testnet for these types of studies)

What is the cost per block to override penalty and drive to max size?

My initial simulations assumed that the net fees for the attack transactions must be greater than the coinbase reward (to override the block size penalty financial incentive). This is a 0th order approximation (i.e. not great).

The reward might have to be slightly larger to offset fees.

I thought I saw somebody say that enticing miners to mine a maximum-size (total penalty) block was 4x the coinbase. (I don't quite understand why yet).
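For context, the CryptoNote-style penalty reduces the coinbase quadratically as block size grows from the median M toward the 2M cap; at exactly 2M the entire base reward is forfeited, which matches the 0th-order assumption above. A sketch (reward and size values are illustrative):

```python
def block_reward(base_reward, block_size, median_size):
    """Quadratic penalty: reward shrinks by (B/M - 1)^2 for
    blocks between the median M and the 2M cap."""
    if block_size <= median_size:
        return base_reward
    return base_reward * (1 - ((block_size / median_size) - 1) ** 2)

# A 1.5x-median block forfeits 25% of the coinbase; a 2x-median
# block forfeits all of it, so attack fees must at least replace
# the full base reward to entice a rational miner.
partial = block_reward(3.0, 450_000, 300_000)
full_cap = block_reward(3.0, 600_000, 300_000)
```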

Thoughts?
