exonum / exonum-btc-anchoring
Exonum Bitcoin anchoring service (using multisignatures)
Home Page: https://exonum.com/doc/advanced/bitcoin-anchoring/
License: Apache License 2.0
As far as I understand the `FundingTx` logic, the service can get new anchoring funds by supplying a new funding transaction through the common configuration (i.e., the config stored in the blockchain and manipulated by the +2/3 majority of admins). This is not very flexible; it may be desirable to add funding txs automatically based on whitelist/blacklist rules (e.g., funds coming from KYC'd addresses, or having a minimal BTC value).
Question raised in PR #13.
Improve the structure of the repository and write rules to organize service repositories.
Similar to the exonum/exonum#159 issue, we need to check the anchoring code too.
Handle the panic while generating the config when the BTC host is unavailable.
I think it could be beneficial to verify that `scriptSig` fields are empty in anchoring transactions within `MsgAnchoringSignature` transaction messages:

- the `scriptSig` fields are discarded anyway when the anchoring transaction is assembled;
- a non-empty `scriptSig` changes the transaction's `txid`, so equivalent messages can reference different `txid`-s each time.

That is, the current implementation makes it easier to organize blockchain state bloat attacks. Although such attacks would not be prevented completely by this fix, it may still be a good start.

Proposed solution:

- verify that `scriptSig` fields are empty in `MsgAnchoringSignature::verify()`;
- use `ntxid()` instead of `id()` in `MsgAnchoringSignature::execute()`.
Sometimes in testnet with many validators, the following error occurs:
thread 'main' panicked at 'We are fucked up...', /Users/aleksey/.cargo/git/checkouts/exonum-core-090ad07d3901293f/fc8ca6e/exonum/src/node/consensus.rs:333
stack backtrace:
1: 0x10f05924c - std::sys::imp::backtrace::tracing::imp::write::h36dcd6719b30e145
2: 0x10f05c51e - std::panicking::default_hook::{{closure}}::hce0d7336eef94718
3: 0x10f05c1c0 - std::panicking::default_hook::hd7b4c7a7e16abb43
4: 0x10f05c9d7 - std::panicking::rust_panic_with_hook::h33761bada49f3713
5: 0x10ef7c8d4 - std::panicking::begin_panic::h5871bdf0eba452a7
6: 0x10efb9f85 - exonum::node::consensus::<impl exonum::node::NodeHandler<S>>::has_majority_precommits::hcfeadcac8fe08d53
7: 0x10efb7978 - exonum::node::consensus::<impl exonum::node::NodeHandler<S>>::handle_consensus::h6c692b21e8fec4c4
8: 0x10efb3f4d - exonum::node::basic::<impl exonum::node::NodeHandler<S>>::handle_message::h4f5b0c087c6d67be
9: 0x10efc0259 - <exonum::node::NodeHandler<S> as exonum::events::EventHandler>::handle_event::h409503dfd98a8e4d
10: 0x10ef7ed23 - <mio::event_loop::EventLoop<H>>::run_once::hcf57c014b80ee99c
11: 0x10efc583f - exonum::node::Node::run::h386b2e1cd0988bd2
12: 0x10edb77f3 - timestamping::main::h74681c16ded293a5
13: 0x10f05d9ea - __rust_maybe_catch_panic
14: 0x10f05cda6 - std::rt::lang_start::h745bea112c3e5e1a
And after restart:
thread 'main' panicked at 'Block_hash incorrect in received block=Block { from: "PublicKey(6e56b6f2...)", to: "PublicKey(24cde76e...)", time: Timespec { sec: 1493195641, nsec: 70267000 }, block: Block { height: 2521, propose_round: 1, time: Timespec { sec: 1493193323, nsec: 601853000 }, prev_hash: "Hash(1bf452c7...)", tx_hash: "Hash(f71d6969...)", state_hash: "Hash(84733b4c...)" }, precommits: [Precommit { validator: 0, height: 2521, round: 1, propose_hash: "Hash(fcf7bb14...)", block_hash: "Hash(4d1be8ad...)" }, Precommit { validator: 2, height: 2521, round: 1, propose_hash: "Hash(fcf7bb14...)", block_hash: "Hash(4d1be8ad...)" }, Precommit { validator: 4, height: 2521, round: 1, propose_hash: "Hash(fcf7bb14...)", block_hash: "Hash(4d1be8ad...)" }, Precommit { validator: 3, height: 2521, round: 1, propose_hash: "Hash(fcf7bb14...)", block_hash: "Hash(4d1be8ad...)" }], transactions: [MessageBuffer { raw: [0, 0, 1, 0, 3, 0, 221, 2, 0, 0, 247, 136, 120, 22, 216, 152, 72, 21, 125, 11, 30, 114, 41, 115, 72, 100, 30, 193, 97, 10, 220, 159, 150, 27, 52, 139, 55, 192, 5, 73, 112, 201, 2, 0, 0, 0, 62, 0, 0, 0, 95, 2, 0, 0, 11, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 82, 134, 64, 126, 113, 136, 213, 144, 54, 142, 214, 37, 245, 240, 14, 148, 76, 94, 202, 1, 189, 130, 110, 74, 72, 143, 98, 53, 85, 121, 164, 197, 0, 0, 0, 0, 253, 213, 1, 0, 72, 48, 69, 2, 32, 65, 139, 0, 13, 30, 64, 197, 246, 2, 183, 112, 99, 96, 77, 89, 10, 188, 131, 100, 82, 119, 249, 246, 68, 176, 4, 25, 59, 115, 56, 76, 35, 2, 33, 0, 141, 52, 123, 117, 86, 77, 84, 81, 238, 159, 198, 179, 190, 149, 44, 68, 149, 83, 179, 191, 200, 255, 176, 209, 120, 74, 60, 203, 166, 67, 249, 239, 1, 73, 48, 70, 2, 33, 0, 194, 250, 85, 221, 144, 146, 146, 111, 250, 156, 206, 244, 80, 249, 18, 219, 222, 141, 236, 211, 231, 13, 127, 184, 42, 86, 196, 20, 246, 145, 97, 118, 2, 33, 0, 131, 177, 109, 174, 109, 178, 201, 190, 192, 17, 222, 121, 216, 121, 220, 243, 153, 246, 219, 103, 141, 133, 63, 193, 205, 123, 135, 45, 237, 189, 200, 
17, 1, 72, 48, 69, 2, 32, 97, 87, 36, 173, 203, 162, 70, 7, 218, 9, 113, 203, 132, 245, 221, 125, 154, 35, 123, 24, 84, 168, 159, 2, 4, 40, 237, 85, 75, 80, 49, 17, 2, 33, 0, 212, 27, 156, 201, 136, 126, 64, 11, 72, 21, 181, 28, 212, 115, 164, 56, 230, 61, 246, 233, 127, 100, 77, 172, 29, 156, 88, 219, 42, 57, 23, 155, 1, 72, 48, 69, 2, 32, 10, 203, 190, 241, 82, 112, 106, 210, 243, 135, 205, 235, 63, 129, 58, 72, 180, 179, 120, 101, 6, 107, 7, 2, 3, 251, 180, 184, 210, 32, 49, 206, 2, 33, 0, 198, 182, 22, 188, 89, 143, 198, 23, 8, 234, 248, 178, 120, 41, 185, 182, 98, 54, 129, 46, 177, 84, 25, 149, 64, 110, 105, 57, 210, 234, 210, 143, 1, 76, 173, 84, 33, 2, 176, 218, 130, 158, 156, 104, 146, 0, 213, 223, 221, 161, 223, 67, 107, 187, 240, 140, 72, 106, 194, 40, 153, 171, 230, 198, 15, 72, 252, 70, 22, 37, 33, 3, 118, 90, 189, 136, 79, 234, 30, 5, 224, 201, 207, 152, 33, 99, 53, 121, 147, 29, 156, 90, 21, 127, 72, 76, 26, 246, 125, 158, 109, 137, 1, 8, 33, 2, 158, 78, 71, 218, 128, 67, 220, 140, 90, 31, 113, 198, 222, 112, 240, 160, 157, 11, 132, 168, 67, 97, 81, 78, 161, 171, 76, 105, 32, 14, 90, 198, 33, 3, 253, 182, 237, 206, 225, 147, 136, 128, 229, 19, 132, 227, 207, 248, 176, 121, 170, 7, 38, 54, 11, 204, 248, 72, 79, 83, 243, 168, 120, 56, 241, 59, 33, 3, 247, 59, 77, 139, 42, 111, 56, 148, 201, 183, 163, 209, 62, 251, 28, 76, 33, 89, 184, 95, 77, 68, 88, 10, 119, 35, 163, 53, 161, 41, 184, 107, 85, 174, 255, 255, 255, 255, 2, 0, 119, 1, 0, 0, 0, 0,
This looks like a nondeterministic `state_hash` computation.
Implement service logic for non-validator nodes:

- If a full node has a `bitcoind`, it must validate the anchoring chain by it.
- Validate the `lect` payload by the given rules.

Rules:

- the `funding_tx` is set in the genesis block;
- a `lect` must depend on the previous `lect` that is common for at least 2/3+1 validators.

The current implementation of `MsgAnchoringSignature`
does not seem to verify the transaction sender in `execute()`. This means that anyone who sees a valid transaction of this type from a validator can change the `from` field and the Exonum signature, and broadcast the modified transaction to the network. While `MsgAnchoringSignature` processing seems to be idempotent for the original and modified transaction(s), the attacker can at least waste computing resources of full nodes (spent on verifying the Bitcoin signature and other `execute` steps).

Proposed solution: verify the `from` field of `MsgAnchoringSignature` like it is done in `MsgAnchoringUpdateLatest`, and/or …

Propose implementing full-node anchoring support as a stand-alone script/application, which can be started with a `FundingTx` as input and query …

@slowli, thoughts? Is it separable/combinable with the bitcoin oracle notion?
The current implementation in #21 has these restrictions:

- `exonum-genesis-block` / `FundingTx`: `bitcoind` has to be deployed/configured prior to full-node start;
- `FundingTx` --- older lects verification;
- `handle_commit` in each new exonum block.

I think it could be beneficial to move at least some of the "sandbox" tests to the main crate, creating or reorganizing mock implementations for components (e.g., the `bitcoind`
connector) in the process.
Right now, the sandbox tests use lots of boilerplate code, which may hinder isolating the functionality being tested.
Static/compile-time DI with the help of associated types may be used:
trait Dependency {
    fn do_something(&self);
}

trait ObjWithDependency {
    type D: Dependency;

    fn get_dependency(&self) -> &Self::D;

    // Business logic goes here, can be overridden by implementations
    fn some_function(&self) {
        self.get_dependency().do_something();
    }

    // other methods
}
// default implementations, used in production
impl Dependency for DependencyImpl {
    // ...
}

impl ObjWithDependency for ObjWithDependencyImpl {
    type D = DependencyImpl;
    // dependency injection, e.g., via the constructor
}

// mock implementations, used in tests
impl Dependency for MockImpl {
    // ...
}

impl ObjWithDependency for ObjWithMockImpl {
    type D = MockImpl;
    // ...
}
I am new to Rust, though, so maybe I'm missing something.
This issue is not unique to this repo, but we can start small :)
`update_our_lect`: what to do when `listunspent` returns no appropriate LECTs?

Such a situation may arise in two cases: … `listunspent` can return no appropriate LECTs, and we should somehow find the transitional transaction and re-broadcast it.

To distinguish between these cases, a `WaitingState` is proposed. We move into the `WaitingState` after the transitional transaction is created, and we do not resume the anchoring process until the transitional tx receives a sufficient number of confirmations.

P.S. Yes, such a state existed previously but was removed later. But now it seems we need it again.
Problem: an honest node may create a fork of the anchoring chain without any Byzantine behavior.

We build a new anchoring tx using the common LECT as its input, but that LECT could already be spent by another legitimate anchoring tx.

While updating its LECT, a node defines the new value as the last transaction of the anchoring chain. A validator gets this tx from its full node. But what if new anchoring txs were already generated but are not yet known to our full node? In that case, the validator sets some tx in the middle of the anchoring chain as its new LECT:

tx_1->tx_2->...->new_lect->...->prev_lect

Proposed solution: it is better to re-broadcast our previously generated anchoring txs (from `new_lect` to `prev_lect`) rather than create a new anchoring chain from `new_lect`.
The `MsgAnchoringSignature` transaction implementation effectively allows a single validator to submit an arbitrary number of signatures for the same anchoring transaction. This is possible because one can create different valid signatures over the same Bitcoin transaction and input index/pubkey.

This bug may lead to various exploits.

Solution sketch: do not accept more than one signature from the same validator in `MsgAnchoringSignature`.
Currently, when we do not know the previous tx in a lect, we check previous txs recursively until we find a transaction that we know. This means that if a node has been offline for too long, it needs to check too many transactions.

Instead, we can check that the previous tx is known by +2/3 of the other validators. That would mean the transaction is accepted by the majority of the network, and we can trust it.
Currently, the anchoring service can use only one funding transaction. We may add support for multiple funding transactions.
It seems that `execute` of the service's transactions does not verify that a proposed anchoring transaction spends change to the Bitcoin address that follows from the current service configuration. While this check may be redundant if the service operates under the assumed security model, it could be nice to have it just in case. After all, `execute` of both `MsgAnchoringSignature` and `MsgAnchoringUpdateLatest` accesses the configuration anyway.
If I take a correct signature, change its first byte, and then verify the signed message, the verification code from `secp256k1` returns true. But if I try to use this signature in `bitcoind`, it returns an error like: `64: non-mandatory-script-verify-flag (Non-canonical DER signature)`.

See this code for details.
The payload must be contained in a single output with the `OP_RETURN` instruction.

The payload must carry version and type information. There are two types: `regular`, with the block hash and height information, and `recover`, with additional information about the previous anchoring chain.

Currently, the transaction is just unwrapped from bytes without any checks.

Also, check that the message content contains the correct block hash for the given height.
I think we should move `bitcoinrpc` to a separate crate, because it is useful for other projects as well.
The actual `SigHashType` of Bitcoin transaction signatures is seemingly ignored during signature verification in the `verify_input()` method in transactions.rs; it is assumed that the `SigHashType` is always `All`.

This may lead to an exploit (I did not test it, admittedly): the attacker may submit an invalid signature with another `SigHashType`, which would be valid under `SigHashType::All`, and thus make all validators attempt to create and send an invalid Bitcoin transaction. The invalid transaction shouldn't make it past bitcoind nodes, but the anchoring process would be effectively stalled.

Solution sketch: take the actual `SigHashType` into account during signature verification.
We compare the LECTs of every validator. If a common LECT is found, then a new anchoring proposal (with signatures) is generated.

Currently, a node does not send its signature if its LECT differs from the common LECT. Wouldn't it be better to sign the common anchoring proposal even if we do not agree with the common LECT?
`majority_count` is now hard-coded as a function directly within the crate as +2/3 of the input, i.e., the number of validators (= maintainers of the service). Maybe it makes sense to move it to the common configuration? While +2/3 is a reasonable threshold for Byzantine fault tolerance, there could be situations where another threshold is needed (e.g., we need more security).

This could be further useful in a potential evolution of the service, in which the service maintainers may not correspond one-to-one to the validators.

Proposed solution: move `majority_count` to the configuration of the service.
Since this crate is a library, not a binary, we should avoid adding `Cargo.lock` to the git repository.