input-output-hk / jormungandr
privacy voting blockchain node
Home Page: https://input-output-hk.github.io/jormungandr/
License: Apache License 2.0
This starts to be a needed task now. We need to be able to generate a genesis file and to start a blockchain from it. This will allow us to quickly test that things are going fine without depending on the mainnet, staging, and testnet (so we can test fast without the long initial sync).
see delegation section 5.9.3
On start-up we need to figure out how far we are behind the blockchain.
When trying to install jormungandr on a NixOS machine, protobuf generation fails with a "No such file or directory" error:
Running `/home/qnikst/workspace/tweag/jormungandr/target/debug/build/jormungandr-1c68e7af3d29fb06/build-script-build`
[jormungandr 0.0.1] No such file or directory (os error 2)
error: failed to run custom build command for `jormungandr v0.0.1 (/home/qnikst/workspace/tweag/jormungandr)`
process didn't exit successfully: `/home/qnikst/workspace/tweag/jormungandr/target/debug/build/jormungandr-1c68e7af3d29fb06/build-script-build` (exit code: 1)
--- stderr
No such file or directory (os error 2)
It turns out that tower-grpc-build is trying to access non-existent locale directories:
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 write(2, "strace: exec: No such file or di"..., 40) = 40
Setting LOCALE_ARCHIVE doesn't help.
Expected behaviour: jormungandr installs successfully.
Additional context
Motivation:
We need to test that jormungandr can continue to produce blocks after nodes are added and removed.
Tests:
With the protobuf monitor from #46 running in the background these operations should be possible without stopping block creation.
delete 1
delete 4 sequence
delete 4
These test cases should also have variants that add:
with a passive node (i.e. no keys configured), ensuring that block creation keeps going
with nodes being restarted instead of just stopped (this may be TDD at this stage)
The current design is to distinguish the different components/workers/actors of the node (network-tokio-thread-pool, transaction pool, clock, ...).
Currently we start threads without waiting for them at the end of the process, causing the process to terminate while we want it to wait for all the different tasks to finish.
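A minimal sketch of the intended fix (hypothetical code, not the actual node implementation): keep the JoinHandle of every spawned task and join them all before the process exits.

```rust
use std::thread;

// run_all_tasks is a hypothetical stand-in for the node's workers: spawn
// them, keep every JoinHandle, and join them all before returning, so the
// process only exits once every task has finished.
fn run_all_tasks(n: u32) -> Vec<u32> {
    let handles: Vec<thread::JoinHandle<u32>> = (0..n)
        .map(|i| thread::spawn(move || i * 2)) // trivial stand-in workload
        .collect();
    // Wait for every task instead of letting the process fall off the end.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let results = run_all_tasks(4);
    assert_eq!(results.iter().sum::<u32>(), 12); // 0 + 2 + 4 + 6
    println!("all tasks finished: {:?}", results);
}
```

Joining in spawn order is enough here; the real node would likely select over shutdown signals instead of joining unconditionally.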
link to #4 (announcement part)
Definition
As a long term goal we need to be able to deliver jormungandr via docker to enable ease of deployment.
To do this we need to evaluate the various options available in the docker world for us, this task will cover that evaluation in the context of the following requirements:
Must haves
Nice to haves
Definitions
Deliverables
propagate to the subscribed connected nodes
Description: As a QA engineer I want to be able to run our Rust code in a repeatable manner so that I can develop and run tests for it.
Background:
In the Haskell code we have a demo script that allows people to run a simple setup to demonstrate the program; this also creates a standardised environment for testing.
This is the script that controls it.
https://github.com/input-output-hk/cardano-sl/blob/develop/scripts/launch/demo-nix.sh
That script covers generating the genesis data, importing it into the node, starting a wallet, and making the funds available in that wallet; this script should do the same.
I propose to keep the same parameters in the interests of parity:
-d run with client auth disabled
-w enable wallet
-i INT number of wallets to import (default: 0)
Requirements:
Acceptance Criteria:
When the script is run with -w -i 5, a wallet service with 5 imported wallets will be available.
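The flag parsing for the proposed parameters could be sketched with getopts like this (the variable names are illustrative, and the example invocation mirrors the acceptance criteria):

```shell
#!/bin/sh
# Example invocation from the acceptance criteria: -w -i 5
set -- -w -i 5

client_auth=1; wallet=0; import_count=0
while getopts "dwi:" opt; do
  case "$opt" in
    d) client_auth=0 ;;          # run with client auth disabled
    w) wallet=1 ;;               # enable wallet
    i) import_count="$OPTARG" ;; # number of wallets to import
  esac
done
echo "wallet=$wallet import_count=$import_count"
```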
relates to #55
As we have a big RFC that will be implemented someday, I'd like to go with the simplest possible solution.
The requirements for the solution: slog or the new structured logging RFC.
The idea is to implement a new macro (the examples are a very quick proof of concept, and println! was used instead of log):
macro_rules! nlog {
    ($msg:tt) => { println!($msg) };
    ($msg:tt, $($params:expr),+) => { println!($msg, $($params),+) };
    ($msg:tt, $($params:expr),+; $($name:ident = $i:expr),+) => {
        println!(concat!($("[", stringify!($name), "=", "{:#?}", "]",)+ $msg), $($i,)+ $($params),+)
    };
}
The macro should have 3 forms:
a., b. the default ones, fully compatible with log!;
c. an extended one that can carry additional parameters after the ;.
This syntax is very close to the slog one.
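For illustration, here is a format!-based variant of the proposed macro (nlog_fmt! is a hypothetical name; it returns the rendered message as a String instead of printing, so the output can be checked):

```rust
// nlog_fmt! mirrors the proposed nlog! arms, but uses format! instead of
// println! so the rendered message can be inspected as a String.
macro_rules! nlog_fmt {
    ($msg:tt) => { format!($msg) };
    ($msg:tt, $($params:expr),+) => { format!($msg, $($params),+) };
    ($msg:tt, $($params:expr),+; $($name:ident = $i:expr),+) => {
        format!(
            concat!($("[", stringify!($name), "=", "{:?}", "]",)+ $msg),
            $($i,)+ $($params),+
        )
    };
}

fn main() {
    // log!-compatible form with positional parameters.
    assert_eq!(nlog_fmt!("block {} received", 42), "block 42 received");
    // Extended form: structured key/value pairs after the `;`.
    let line = nlog_fmt!("syncing {} peers", 3; node_id = 7);
    assert_eq!(line, "[node_id=7]syncing 3 peers");
    println!("{}", line);
}
```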
Further extensions:
use the new form and get a wrapper for free, i.e. rename nlog to log;
switch nlog to use slog, as nlog(logger: msg, ...). Unfortunately, at that step we will have to either use a logger from the TLS or keep it in some other state, and this will require some refactoring (hopefully as simple as running sed).
P.S. This proposal does not suggest pluggable logging, only an easy switch to one solution or the other.
Start passing a logger context; to begin with it could be a HashMap<String, String> of the context, but in the future it can be a logger object.
P.P.S. Further extension: I personally like solutions that can create nested contexts, i.e. where we can extend the current logger with an additional context and pass it further down the call stack. This can be done in the katip framework in Haskell, but at this point I'm not sure there is an easy way of achieving it that stays compatible with the other solutions.
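The nested-context idea above could be sketched like this (a hypothetical API, loosely inspired by katip/slog child loggers, not an existing implementation):

```rust
use std::collections::HashMap;

// Hypothetical logger carrying a HashMap<String, String> context.
#[derive(Clone)]
struct Logger {
    context: HashMap<String, String>,
}

impl Logger {
    fn new() -> Self {
        Logger { context: HashMap::new() }
    }
    // A child logger inherits the parent context plus one extra entry;
    // the parent is left untouched.
    fn with(&self, key: &str, value: &str) -> Logger {
        let mut child = self.clone();
        child.context.insert(key.to_string(), value.to_string());
        child
    }
}

fn main() {
    let root = Logger::new().with("node", "1");
    let task = root.with("task", "network"); // nested context
    assert_eq!(task.context.get("node").map(String::as_str), Some("1"));
    assert_eq!(task.context.get("task").map(String::as_str), Some("network"));
    assert!(root.context.get("task").is_none()); // parent unchanged
    println!("{:?}", task.context);
}
```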
Note: copied from comment
In order to understand the current state of unit testing and its coverage, a review of the existing unit tests is required.
Update the test strategy to set some targets for coverage and definitions of the unit-testing requirements.
This is currently a command line option only.
NOTE: this issue needs to be evaluated first, as starting a node without leadership means we are accepting starting a node without any chance of writing a block.
we need to handle block streaming to enable faster block synchronization between nodes
We don't have to support finding the common block yet in this ticket. This will be part of the synchronous query/client task.
relates to #26
implement simple query commands to:
We need to extend the logging option in the configuration file:
I can see cases where we might want to have the logs redirected to syslog but also the warning and the error ones to be on the standard error output too.
might need to sync with #55
implement KES API
link to #3
link to #6
if a connection drops we need to be able to try to reconnect, and then either accept that we cannot reconnect or drop the node.
link to #2
abstract the current messages to prevent leakage of specific parts into the generic tasks of the node.
extend the configuration file to add the list of peers and merge them in the general settings.
I.e: add list connections there: https://github.com/input-output-hk/jormungandr/blob/master/src/settings/config.rs#L5-L9
merge with what the command line got here: https://github.com/input-output-hk/jormungandr/blob/master/src/settings/mod.rs#L62-L79
Something to note is that, by default, the command line overrides conflicting values in the configuration file.
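A sketch of the intended merge rule, with hypothetical types rather than the actual settings structs from src/settings: the command line wins on conflicting values, while the peer lists from both sources are combined.

```rust
// Hypothetical settings types to illustrate the merge rule.
#[derive(Debug)]
struct PartialSettings {
    listen: Option<String>,
    peers: Vec<String>,
}

// Command-line values override conflicting configuration-file values;
// the peer lists from both sources are combined.
fn merge(config_file: PartialSettings, command_line: PartialSettings) -> PartialSettings {
    PartialSettings {
        listen: command_line.listen.or(config_file.listen),
        peers: config_file
            .peers
            .into_iter()
            .chain(command_line.peers)
            .collect(),
    }
}

fn main() {
    let from_file = PartialSettings {
        listen: Some("0.0.0.0:8000".into()),
        peers: vec!["10.0.0.1:8000".into()],
    };
    let from_cli = PartialSettings {
        listen: Some("127.0.0.1:9000".into()),
        peers: vec!["10.0.0.2:8000".into()],
    };
    let merged = merge(from_file, from_cli);
    assert_eq!(merged.listen.as_deref(), Some("127.0.0.1:9000")); // CLI wins
    assert_eq!(merged.peers.len(), 2); // peer lists combined
    println!("{:?}", merged);
}
```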
related to #19
calculation of the probability-of-election function as per the Genesis paper
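For reference, the leader-election probability in the Praos/Genesis papers is phi_f(alpha) = 1 - (1 - f)^alpha, where f is the active slot coefficient and alpha the party's relative stake; a minimal sketch:

```rust
/// phi_f(alpha) = 1 - (1 - f)^alpha: probability that a party holding a
/// relative stake `alpha` (in 0..=1) is elected slot leader, with `f` the
/// active slot coefficient.
fn election_probability(f: f64, alpha: f64) -> f64 {
    1.0 - (1.0 - f).powf(alpha)
}

fn main() {
    // A party holding all the stake is elected with probability exactly f.
    assert!((election_probability(0.05, 1.0) - 0.05).abs() < 1e-12);
    // phi is concave with phi(0) = 0, so splitting stake across two keys
    // never increases the total election probability.
    let whole = election_probability(0.05, 0.2);
    let split = 2.0 * election_probability(0.05, 0.1);
    assert!(whole <= split);
    println!("phi_0.05(0.2) = {}", whole);
}
```

The concavity property (the "independent aggregation" argument in the papers) is what makes splitting or pooling stake probability-neutral at best.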
Unfortunately, structured logging is still only at the discussion stage in the main log crate (see rust-lang/log#296), so we won't be able to use it just yet.
We could consider using the slog crate, which seems to provide enough of the features for us to use. However, in the log crate's issue 296 the slog maintainer also said that his crate may become redundant once log supports structured logging.
So ideally we need some kind of abstraction glue on top of our current logs so that we can use slog but easily move back to the mainstream log crate.
This would let us move from log to slog easily later. The change will need to provide the same macros that log and slog do, but our macros will redirect to log's macros to start with, and can later redirect to slog. We will need to make sure that the logging system is still async and does not block threads.
mechanism to add faults in: transactions, blocks, network
e.g. network disruption (blocks blocked randomly, blocks man-in-the-middle changes, etc)
Currently the setup of the logger is done after the command line is parsed and after the configuration file is loaded and after they are both merged into the global setting of the application.
We need to allow an initial logging from the very beginning of the program allowing only for warnings and errors to be displayed and to always display to the standard error output.
This will allow us to display warnings and errors when merging the command line options and the configuration file options into the settings (incompatible options, ignored states, etc.).
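A sketch of such a bootstrap logger (hypothetical, not the actual implementation): until the settings are merged, only warnings and errors are emitted, always to stderr.

```rust
// Severity levels; deriving PartialOrd makes Error < Warn < Info < Debug
// by declaration order.
#[derive(Debug, PartialEq, PartialOrd, Clone, Copy)]
enum Level {
    Error,
    Warn,
    Info,
    Debug,
}

struct BootstrapLogger {
    max: Level,
}

impl BootstrapLogger {
    // Write the message to stderr if it passes the filter; the return
    // value reports whether anything was emitted.
    fn log(&self, level: Level, msg: &str) -> bool {
        if level <= self.max {
            eprintln!("[{:?}] {}", level, msg);
            true
        } else {
            false
        }
    }
}

fn main() {
    // During startup only warnings and errors reach stderr.
    let early = BootstrapLogger { max: Level::Warn };
    assert!(early.log(Level::Warn, "conflicting option, command line wins"));
    assert!(!early.log(Level::Info, "suppressed until settings are merged"));
}
```

Once the settings are merged, this logger would be swapped for the fully configured one.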
There are 4 kinds of commands to make jormungandr accept connections and connect to new nodes:
--legacy-listen 127.0.0.1:8000: start accepting connections at this address/port number
--legacy-connect 127.0.0.1:8000: connect to a node that has used --legacy-listen 127.0.0.1:8000
We will need the script to handle this and add these options to the node runner.
Organising the topology of the nodes is a bit difficult and complex. So as a start we can
make things a bit circular. Let's assume we have decided to start n nodes:
node k - 1 connects to node k;
node n - 1 connects to node n;
node n connects to node 1.
+------------------+
| | Node 1 connects to Node2
| |
| Node 1 | +-------------------------------------------------+
| | |
| | v
| |
| | +-------------------------+
+------------------+ | |
| |
^ | |
| Node3 connects | |
| to node1 | Node 2 |
| | |
| | |
| +-----------------------+ | |
| | | | |
| | | | |
| | | | |
| | | +------+------------------+
| | Node 3 | |
| | | |
+-----------+ | |
| | <-----------------------+
| |
| | Node2 connects to node3
| |
+-----------------------+
So you will need to have something like:

node number | legacy-listen option | legacy-connect option
---|---|---
1 | --legacy-listen 127.0.0.1:8001 | --legacy-connect 127.0.0.1:8002
2 | --legacy-listen 127.0.0.1:8002 | --legacy-connect 127.0.0.1:8003
k | --legacy-listen 127.0.0.1:$((8000 + ${k})) | --legacy-connect 127.0.0.1:$((8000 + ${k} + 1))
n | --legacy-listen 127.0.0.1:$((8000 + ${n})) | --legacy-connect 127.0.0.1:8001
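The port assignments above can be generated with a small shell loop (a sketch; n=4 is just an example value):

```shell
#!/bin/sh
# Derive the circular --legacy-listen/--legacy-connect pair for each node,
# using the 8000+k port scheme described above.
n=4
for k in $(seq 1 "$n"); do
  listen_port=$((8000 + k))
  if [ "$k" -lt "$n" ]; then
    connect_port=$((8000 + k + 1))
  else
    connect_port=8001   # the last node closes the ring back to node 1
  fi
  echo "node $k: --legacy-listen 127.0.0.1:$listen_port --legacy-connect 127.0.0.1:$connect_port"
done
```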
We need to have a bash script/function to generate the key pairs that we will need to launch the nodes in #18:
N key pairs;
N public keys.
To generate a key pair, one can use cardano-cli debug generate-xprv and the associated command to generate the public key from the private key.
We need to remember to save the N private keys for the next steps of #18.
Unify BFT and Genesis under one API to prevent duplication.
We need to handle timeouts on connections so we don't hold on to connections that are not responsive.
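A minimal sketch using std networking (the node itself uses tokio, so this is only illustrative): arm read/write timeouts so a silent peer causes an error instead of blocking forever.

```rust
use std::net::TcpStream;
use std::time::Duration;

// Hypothetical helper: connect and set read/write timeouts so an
// unresponsive peer cannot hold the connection indefinitely.
fn connect_with_timeout(addr: &str) -> std::io::Result<TcpStream> {
    let stream = TcpStream::connect(addr)?;
    stream.set_read_timeout(Some(Duration::from_secs(10)))?;
    stream.set_write_timeout(Some(Duration::from_secs(10)))?;
    Ok(stream)
}

fn main() -> std::io::Result<()> {
    // Local listener just so the example has something to connect to.
    let listener = std::net::TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?.to_string();
    let stream = connect_with_timeout(&addr)?;
    assert_eq!(stream.read_timeout()?, Some(Duration::from_secs(10)));
    println!("connected with timeouts to {}", addr);
    Ok(())
}
```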
so far we might only need to set the path to the storage directory.
Motivation:
To enable verification of the BFT consensus we will need some code that can query our nodes via GRPC to read state and monitor block creation.
Requirements:
This code will be reused later as a test fixture that can monitor underlying state while other operations are performed via the cardano-cli. At that stage it should be integrated with a wider framework such as Theseus but for this task it need only be standalone.
Acceptance Criteria:
The code is to be runnable from the command line against the demo; it should be observed reporting on the creation of at least 2 blocks.
link to #4 (the accept transaction part)
depends on #22
We need to continue the work from #31: based on the generated config file and the N key pairs, we need to start N nodes with the config file and the respective private keys.
At the beginning we can assume that every node knows of every other node's existence and they all connect to each other. This will be a good enough start for testing the protocol.
relates to #18
I have been following the instructions from demo/Readme.md and found the following error:
➜ demo git:(master) ✗ ./setup.sh demo 2 demo-genesis.json
Jormungandr - Setup generator
Using these options:
Folder: demo
Nodes: 2
Genesis: demo-genesis.json
CLI: /Users/nicolasdiprima/.cargo/bin/cardano-cli
Config: demo-config.yaml
Building Jormungandr
Build finished
Copying in template
template/bin -> demo/bin
template/bin/stop_nodes.sh -> demo/bin/stop_nodes.sh
template/bin/list_nodes.sh -> demo/bin/list_nodes.sh
template/bin/start_nodes.sh -> demo/bin/start_nodes.sh
template/bin/genkeypair.sh -> demo/bin/genkeypair.sh
Copying in binaries
Making Configs for 2 nodes
~/work/iohk/jormungandr/demo ~/work/iohk/jormungandr/demo
Making keys for node_1
PRIV = demo/nodes/1/node_1.xprv
PUB = demo/nodes/1/node_1.xpub
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1009:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1009:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
Adding key to global config
cat: demo/nodes/1/node_1.xpub: No such file or directory
./setup.sh: line 124: demo/config.yaml: No such file or directory
cat: demo/nodes/1/node_1.xprv: No such file or directory
Making keys for node_2
PRIV = demo/nodes/2/node_2.xprv
PUB = demo/nodes/2/node_2.xpub
./setup.sh: line 119: ../../bin/cardano-cli: No such file or directory
./setup.sh: line 120: ../../bin/cardano-cli: No such file or directory
Adding key to global config
cat: demo/nodes/2/node_2.xpub: No such file or directory
./setup.sh: line 124: demo/config.yaml: No such file or directory
cat: demo/nodes/2/node_2.xprv: No such file or directory
~/work/iohk/jormungandr/demo
./setup.sh: line 136: -1: substring expression < 0
Copying in genesis and patching in keys
./setup.sh: line 139: jq: command not found
Setup is complete
Its not Ragnarok yet but if you want to unleash your jormungandr do this:
cd demo/bin/
./start_nodes.sh
So to me it seems that there are multiple things that are wrong here:
why is the script looking for ../../bin/cardano-cli?
Steps to reproduce the behaviour:
cargo build
cd demo
cp ../cardano-deps/exe-common/genesis/5f20df933584822601f9e3f8c024eb5eb252fe8cefb24d1317dc3d432e940ebb.json demo-genesis.json
./setup.sh demo 2 demo-genesis.json
or even ./setup.sh demo 2 demo-genesis.json ~/.cargo/bin/cardano-cli
cardano-cli should have been found properly, and not in an expected location; nothing to add
related to #41
Implement blockchain bootstrapping/synchronization via gRPC if any of the peers configured in the settings use the gRPC protocol. The method to use is StreamBlocksToTip.
Part of #34.