exonum-btc-anchoring's Issues

Specify transaction payload data format

The payload must be contained in a single output with the OP_RETURN instruction.
The payload must carry version and type information.
There are two payload types: a regular payload with the anchored block hash and height, and a recovery payload with additional information about the previous anchoring chain (see the sketch below).
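
A minimal sketch of what such a payload could look like; the field names and sizes here are illustrative assumptions, not the agreed format:

// Illustrative payload layout (names and sizes are assumptions).
struct Payload {
    version: u8,
    kind: PayloadKind,
}

enum PayloadKind {
    // Regular payload: the anchored Exonum block height and hash.
    Regular {
        block_height: u64,
        block_hash: [u8; 32],
    },
    // Recovery payload: additionally references the previous anchoring chain.
    Recover {
        block_height: u64,
        block_hash: [u8; 32],
        prev_chain_txid: [u8; 32],
    },
}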

Improve `find_lect_deep` behaviour

Currently, when we do not know the previous tx referenced by a LECT, we check previous transactions recursively until we find a transaction that we know. This means that if a node has been offline for a long time, it has to check too many transactions.
Instead, we can check whether the previous tx is known to +2/3 of the other validators. That would mean the transaction is accepted by a majority of the network and we can trust it.
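
A self-contained sketch of this cut-off, with simplified types standing in for the service's real structures (the Tx type and the lect_votes callback below are assumptions for illustration):

struct Tx {
    id: u64,
    prev: Option<Box<Tx>>,
}

// Walk the previous-tx chain, but stop as soon as some transaction is already
// reported as LECT by at least `majority` validators.
fn find_trusted_ancestor<'a>(
    mut tx: &'a Tx,
    lect_votes: impl Fn(u64) -> usize, // how many validators report this tx as their LECT
    majority: usize,                   // +2/3 of the validator count
) -> Option<&'a Tx> {
    loop {
        if lect_votes(tx.id) >= majority {
            return Some(tx);
        }
        tx = tx.prev.as_deref()?;
    }
}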

Proposal to implement stand-alone procedure of anchoring verification.

Propose implementing full-node anchoring support as a stand-alone script/application, which can be started

  • dynamically (considerably later than the full-node start, and later than bitcoind deployment on the full node)
  • repeatedly
    It should use a trusted starting FundingTx as input and query
  1. the anchoring service API
  2. bitcoind

@slowli thoughts? Is it separable from / combinable with the Bitcoin oracle notion?

The current implementation in #21 has these restrictions:

  1. To verify starting from the Exonum genesis block / FundingTx, bitcoind has to be deployed/configured prior to the full-node start.
  2. If bitcoind is turned on at some Exonum block height, verification will start only at that height, omitting verification of the FundingTx and older LECTs.
  3. It is effectively run only once, performing the check in handle_commit for each new Exonum block.

Transition waiting state

update_our_lect: what should happen when listunspent returns no appropriate LECTs?
This situation can arise in two cases:

  1. We create a transitional anchoring transaction when we move to another anchoring multisig address. Then we return to the AnchoringState and resume the usual anchoring process. However, the transitional transaction may be lost, and listunspent may return no appropriate LECTs. We should somehow find the transitional transaction and re-broadcast it.
  2. We are far behind the rest of the Exonum network, and the anchoring chain has moved from the current multisig address to another one. We should do nothing but wait until the actual blockchain state is restored.

To distinguish between these cases, a WaitingState is proposed. We move into the WaitingState after the transitional transaction has been created, and we do not resume the anchoring process until the transitional tx receives a sufficient number of confirmations.
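
An illustrative sketch of how the states could be modeled (names and fields are assumptions, not the service's actual types):

enum AnchoringState {
    // Usual anchoring flow.
    Anchoring,
    // Entered right after a transitional tx to the new multisig address is
    // created; left once the tx has gathered enough Bitcoin confirmations.
    Waiting {
        transition_txid: [u8; 32],
        required_confirmations: u64,
    },
}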

P.S. Yes, such a state existed previously but was removed later. Now it seems we need it again.

Add `FullNode` support

Implement service logic for non-validator nodes:

  • If a full node has a bitcoind instance, it must validate the anchoring chain against it.
  • Other nodes must check the lect payload by the given rules.

Rules:

  • The anchoring chain must begin from the funding_tx in the genesis block.
  • Each anchoring transaction must be present in the Bitcoin blockchain.
  • Each lect must depend on a previous lect that is common to at least 2/3+1 validators.

MsgAnchoringSignature does not verify sender

The current implementation of MsgAnchoringSignature does not seem to verify the transaction sender in execute(). This means that anyone who sees a valid transaction of this type from a validator can change the from field and the Exonum sig and broadcast the modified transaction to the network. While MsgAnchoringSignature processing seems to be idempotent for the original and modified transaction(s), the attacker can at least waste computing resources of full nodes (spent on verifying the Bitcoin sig and the other execute steps).

Proposed solution:

  • Verify the from field of MsgAnchoringSignature as is done in MsgAnchoringUpdateLatest, and/or
  • Move resource-intensive operations (such as Bitcoin sig checking) after it has been verified that no sig is stored yet for the given validator, anchoring tx, and input index.

majority_count refactoring

majority_count is currently hard-coded as a function within the crate that returns +2/3 of its input, i.e., of the number of validators (= maintainers of the service). Maybe it makes sense to move it to the common configuration? While +2/3 is a reasonable threshold for Byzantine fault tolerance, there could be situations where another threshold is needed (e.g., when stronger security guarantees are required).

This could be further useful in a potential evolution of the service, in which the service maintainers may not correspond one-to-one to the validators.

Proposed solution: move majority_count to the configuration of the service.
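
For reference, a minimal sketch of the current hard-coded rule (+2/3 of the validator count), assuming this simple formula; the proposal is to read the threshold from the service configuration instead:

fn majority_count(validator_count: u8) -> u8 {
    // Widen to u16 to avoid overflow for large validator counts.
    (u16::from(validator_count) * 2 / 3 + 1) as u8
}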

mempool lost; re-broadcast existing anchoring txs

Problem: an honest node may create a fork of the anchoring chain without any Byzantine behavior.
We build a new anchoring tx using the common LECT as its input, but that LECT could already be spent by another legitimate anchoring tx.
While updating its LECT, a node defines the new value as the last transaction of the anchoring chain that it knows about. A validator gets this tx from its full node. But what if newer anchoring txs have already been generated, but are not yet known to our full node? In that case, the validator sets some tx in the middle of the anchoring chain as its new LECT:

tx_1->tx_2->...->new_lect->...->prev_lect

Proposed solution: it is better to re-broadcast our previously generated anchoring txs (from new_lect up to prev_lect) rather than create a new anchoring chain from new_lect.

Sign anchoring proposal even if we do not agree with it

We compare the LECTs of every validator. If a common LECT is found, then a new anchoring proposal (with signatures) is generated.
Currently, a node does not send its signature if its own LECT differs from the common LECT. Wouldn't it be better to sign the common anchoring proposal even if we do not agree with the common LECT?

Verify that anchoring transactions spend on agreed anchoring address

It seems that execute of the service's transactions does not verify that a proposed anchoring transaction spends its change to the Bitcoin address that follows from the current service configuration. While this check may be redundant if the service operates under the assumed security model, it would be nice to have it just in case. After all, execute of both MsgAnchoringSignature and MsgAnchoringUpdateLatest accesses the configuration anyway.
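
A hedged sketch of the check over simplified output types (the service's real Bitcoin transaction wrappers differ):

struct TxOutput {
    value: u64,
    script_pubkey: Vec<u8>,
}

// Accept the proposal only if some output pays to the script of the multisig
// address derived from the current service configuration.
fn spends_to_anchoring_address(outputs: &[TxOutput], expected_script: &[u8]) -> bool {
    outputs
        .iter()
        .any(|out| out.script_pubkey.as_slice() == expected_script)
}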

More complex funding transaction logic

As far as I understand the FundingTx logic, the service can get new anchoring funds by supplying a new funding transaction through the common configuration (i.e., the config stored in the blockchain and manipulated by a +2/3 majority of admins). This is not very flexible; it may be desirable to add funding txs automatically based on whitelist/blacklist rules (e.g., funds coming from KYC'd addresses, or having a minimal BTC value).

Question raised in PR #13.

Check that there are no signatures in received anchoring transactions?

I think it could be beneficial to verify that the scriptSig fields are empty in the anchoring transactions embedded in MsgAnchoringSignature messages:

  • scriptSig fields are discarded anyway.
  • A validator could submit multiple signatures for effectively the same anchoring transaction, but they would be registered under different txids each time. That is, the current implementation makes it easier to mount blockchain state bloat attacks. Although such attacks would not be prevented completely by this fix, it may still be a good start.

Proposed solution:

  • Check that the scriptSig fields are empty in MsgAnchoringSignature::verify() (see the sketch below)
  • Replace id() with ntxid() in MsgAnchoringSignature::execute()
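
A hedged sketch of the emptiness check over a simplified input type (the service's real transaction wrappers differ):

struct TxInput {
    script_sig: Vec<u8>,
}

// Every input of a not-yet-signed anchoring proposal must carry an empty
// scriptSig; otherwise the same proposal can be registered under many
// different txids.
fn has_empty_script_sigs(inputs: &[TxInput]) -> bool {
    inputs.iter().all(|input| input.script_sig.is_empty())
}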

Multiple signatures for same data in anchoring transactions

The MsgAnchoringSignature transaction implementation effectively allows a single validator to submit an arbitrary number of signatures for the same anchoring transaction, because it is possible to create different valid signatures over the same Bitcoin transaction, input index, and pubkey.

This bug may lead to various exploits.

Solution sketch: do not accept more than one signature from the same validator in MsgAnchoringSignature.
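
One possible way to enforce this, sketched with simplified types (the key layout below is an assumption): store at most one signature per (validator, normalized tx id, input index).

use std::collections::HashSet;

// (validator id, normalized txid, input index)
type SignatureKey = (u16, [u8; 32], u32);

struct SignatureStore {
    seen: HashSet<SignatureKey>,
}

impl SignatureStore {
    // Returns false if a signature for this key is already stored, so a
    // duplicate message can be rejected before any expensive verification.
    fn try_register(&mut self, key: SignatureKey) -> bool {
        self.seen.insert(key)
    }
}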

Reorganize crate

Improve the structure of the repository and write rules to organize service repositories.

Wrong `state_hash` computation

Sometimes, in a testnet with many validators, the following error occurs:

thread 'main' panicked at 'We are fucked up...', /Users/aleksey/.cargo/git/checkouts/exonum-core-090ad07d3901293f/fc8ca6e/exonum/src/node/consensus.rs:333
stack backtrace:
   1:        0x10f05924c - std::sys::imp::backtrace::tracing::imp::write::h36dcd6719b30e145
   2:        0x10f05c51e - std::panicking::default_hook::{{closure}}::hce0d7336eef94718
   3:        0x10f05c1c0 - std::panicking::default_hook::hd7b4c7a7e16abb43
   4:        0x10f05c9d7 - std::panicking::rust_panic_with_hook::h33761bada49f3713
   5:        0x10ef7c8d4 - std::panicking::begin_panic::h5871bdf0eba452a7
   6:        0x10efb9f85 - exonum::node::consensus::<impl exonum::node::NodeHandler<S>>::has_majority_precommits::hcfeadcac8fe08d53
   7:        0x10efb7978 - exonum::node::consensus::<impl exonum::node::NodeHandler<S>>::handle_consensus::h6c692b21e8fec4c4
   8:        0x10efb3f4d - exonum::node::basic::<impl exonum::node::NodeHandler<S>>::handle_message::h4f5b0c087c6d67be
   9:        0x10efc0259 - <exonum::node::NodeHandler<S> as exonum::events::EventHandler>::handle_event::h409503dfd98a8e4d
  10:        0x10ef7ed23 - <mio::event_loop::EventLoop<H>>::run_once::hcf57c014b80ee99c
  11:        0x10efc583f - exonum::node::Node::run::h386b2e1cd0988bd2
  12:        0x10edb77f3 - timestamping::main::h74681c16ded293a5
  13:        0x10f05d9ea - __rust_maybe_catch_panic
  14:        0x10f05cda6 - std::rt::lang_start::h745bea112c3e5e1a

And after restart:

thread 'main' panicked at 'Block_hash incorrect in received block=Block { from: "PublicKey(6e56b6f2...)", to: "PublicKey(24cde76e...)", time: Timespec { sec: 1493195641, nsec: 70267000 }, block: Block { height: 2521, propose_round: 1, time: Timespec { sec: 1493193323, nsec: 601853000 }, prev_hash: "Hash(1bf452c7...)", tx_hash: "Hash(f71d6969...)", state_hash: "Hash(84733b4c...)" }, precommits: [Precommit { validator: 0, height: 2521, round: 1, propose_hash: "Hash(fcf7bb14...)", block_hash: "Hash(4d1be8ad...)" }, Precommit { validator: 2, height: 2521, round: 1, propose_hash: "Hash(fcf7bb14...)", block_hash: "Hash(4d1be8ad...)" }, Precommit { validator: 4, height: 2521, round: 1, propose_hash: "Hash(fcf7bb14...)", block_hash: "Hash(4d1be8ad...)" }, Precommit { validator: 3, height: 2521, round: 1, propose_hash: "Hash(fcf7bb14...)", block_hash: "Hash(4d1be8ad...)" }], transactions: [MessageBuffer { raw: [0, 0, 1, 0, 3, 0, 221, 2, 0, 0, 247, 136, 120, 22, 216, 152, 72, 21, 125, 11, 30, 114, 41, 115, 72, 100, 30, 193, 97, 10, 220, 159, 150, 27, 52, 139, 55, 192, 5, 73, 112, 201, 2, 0, 0, 0, 62, 0, 0, 0, 95, 2, 0, 0, 11, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 82, 134, 64, 126, 113, 136, 213, 144, 54, 142, 214, 37, 245, 240, 14, 148, 76, 94, 202, 1, 189, 130, 110, 74, 72, 143, 98, 53, 85, 121, 164, 197, 0, 0, 0, 0, 253, 213, 1, 0, 72, 48, 69, 2, 32, 65, 139, 0, 13, 30, 64, 197, 246, 2, 183, 112, 99, 96, 77, 89, 10, 188, 131, 100, 82, 119, 249, 246, 68, 176, 4, 25, 59, 115, 56, 76, 35, 2, 33, 0, 141, 52, 123, 117, 86, 77, 84, 81, 238, 159, 198, 179, 190, 149, 44, 68, 149, 83, 179, 191, 200, 255, 176, 209, 120, 74, 60, 203, 166, 67, 249, 239, 1, 73, 48, 70, 2, 33, 0, 194, 250, 85, 221, 144, 146, 146, 111, 250, 156, 206, 244, 80, 249, 18, 219, 222, 141, 236, 211, 231, 13, 127, 184, 42, 86, 196, 20, 246, 145, 97, 118, 2, 33, 0, 131, 177, 109, 174, 109, 178, 201, 190, 192, 17, 222, 121, 216, 121, 220, 243, 153, 246, 219, 103, 141, 133, 63, 193, 205, 123, 135, 45, 237, 189, 200, 17, 1, 72, 48, 69, 2, 32, 97, 87, 36, 173, 203, 162, 70, 7, 218, 9, 113, 203, 132, 245, 221, 125, 154, 35, 123, 24, 84, 168, 159, 2, 4, 40, 237, 85, 75, 80, 49, 17, 2, 33, 0, 212, 27, 156, 201, 136, 126, 64, 11, 72, 21, 181, 28, 212, 115, 164, 56, 230, 61, 246, 233, 127, 100, 77, 172, 29, 156, 88, 219, 42, 57, 23, 155, 1, 72, 48, 69, 2, 32, 10, 203, 190, 241, 82, 112, 106, 210, 243, 135, 205, 235, 63, 129, 58, 72, 180, 179, 120, 101, 6, 107, 7, 2, 3, 251, 180, 184, 210, 32, 49, 206, 2, 33, 0, 198, 182, 22, 188, 89, 143, 198, 23, 8, 234, 248, 178, 120, 41, 185, 182, 98, 54, 129, 46, 177, 84, 25, 149, 64, 110, 105, 57, 210, 234, 210, 143, 1, 76, 173, 84, 33, 2, 176, 218, 130, 158, 156, 104, 146, 0, 213, 223, 221, 161, 223, 67, 107, 187, 240, 140, 72, 106, 194, 40, 153, 171, 230, 198, 15, 72, 252, 70, 22, 37, 33, 3, 118, 90, 189, 136, 79, 234, 30, 5, 224, 201, 207, 152, 33, 99, 53, 121, 147, 29, 156, 90, 21, 127, 72, 76, 26, 246, 125, 158, 109, 137, 1, 8, 33, 2, 158, 78, 71, 218, 128, 67, 220, 140, 90, 31, 113, 198, 222, 112, 240, 160, 157, 11, 132, 168, 67, 97, 81, 78, 161, 171, 76, 105, 32, 14, 90, 198, 33, 3, 253, 182, 237, 206, 225, 147, 136, 128, 229, 19, 132, 227, 207, 248, 176, 121, 170, 7, 38, 54, 11, 204, 248, 72, 79, 83, 243, 168, 120, 56, 241, 59, 33, 3, 247, 59, 77, 139, 42, 111, 56, 148, 201, 183, 163, 209, 62, 251, 28, 76, 33, 89, 184, 95, 77, 68, 88, 10, 119, 35, 163, 53, 161, 41, 184, 107, 85, 174, 255, 255, 255, 255, 2, 0, 119, 1, 0, 0, 0, 0,

This looks like a nondeterministic state_hash computation.

Move (some of) sandbox tests to main crate

I think it could be beneficial to move at least some of the "sandbox" tests to the main crate, creating or reorganizing mock implementations for components (e.g., the bitcoind connector) in the process.
Right now, the sandbox tests use a lot of boilerplate code, which may hinder isolating the functionality being tested.

Static/compile-time DI with the help of associated types may be used:

trait Dependency {
    fn do_something(&self);
}

trait ObjWithDependency {
    type D: Dependency;

    fn get_dependency(&self) -> &Self::D;

    // Business logic goes here; can be overridden by implementations.
    fn some_function(&self) {
        self.get_dependency().do_something();
    }
    // other methods
}

// Default implementations, used in production.
struct DependencyImpl;
struct ObjWithDependencyImpl {
    dependency: DependencyImpl, // injected, e.g., via the constructor
}

impl Dependency for DependencyImpl {
    fn do_something(&self) { /* ... */ }
}
impl ObjWithDependency for ObjWithDependencyImpl {
    type D = DependencyImpl;
    fn get_dependency(&self) -> &DependencyImpl {
        &self.dependency
    }
}

// Mock implementations, used in tests.
struct MockImpl;
struct ObjWithMockImpl {
    dependency: MockImpl,
}

impl Dependency for MockImpl {
    fn do_something(&self) { /* ... */ }
}
impl ObjWithDependency for ObjWithMockImpl {
    type D = MockImpl;
    fn get_dependency(&self) -> &MockImpl {
        &self.dependency
    }
}

I am new to Rust, though, so maybe I'm missing something.

This issue is not unique to this repo, but we can start small :)

SigHashType is not verified correctly for bitcoin transactions

The actual SigHashType of Bitcoin transaction signatures is seemingly ignored during signature verification in the verify_input() method in transactions.rs; SigHashType is assumed to always be All.

This may lead to an exploit (I did not test it, admittedly): the attacker may submit a signature that carries a different SigHashType but would be valid under SigHashType::All, and thus make all validators attempt to create and send an invalid Bitcoin transaction. The invalid transaction shouldn't make it past bitcoind nodes, but the anchoring process would effectively be stalled.

Solution sketch:

  • Add explicit checks of SigHashType (see the sketch below)
  • Verify the results of the bitcoind RPC calls (if this isn't done already)
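
A minimal sketch of the explicit check, assuming the signature is handled as raw bytes: a Bitcoin input signature is a DER-encoded signature with the sighash type appended as its final byte, and SIGHASH_ALL is 0x01.

const SIGHASH_ALL: u8 = 0x01;

// Reject any signature whose trailing sighash-type byte is not SIGHASH_ALL
// before doing the (more expensive) cryptographic verification.
fn sighash_type_is_all(signature_bytes: &[u8]) -> bool {
    signature_bytes.last() == Some(&SIGHASH_ALL)
}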
