
jormungandr's People

Contributors

amias-channer, amias-iohk, andrcmdr, codesandwich, dependabot-preview[bot], dependabot[bot], dkijania, ecioppettini, edolstra, eugene-babichenko, filip-dulic-bloxico, garbas, github-actions[bot], kukkok3, manveru, michaeljfazio, mmahut, mr-leshiy, mrzenioszeniou, mzabaluev, nahern, nicolasdp, nicopado, onicrom, qnikst, rinor, saibatizoku, sjmackenzie, vincenthz, zeegomo


jormungandr's Issues

being able to generate a genesis file easily

This is becoming a needed task now. We need to be able to generate a genesis file and to start a blockchain from it. This will allow us to quickly test that things are going fine without depending on mainnet, staging, and testnet (so we can test fast without the long initial sync).

Handle transaction announcement and request

  • handle the transaction announcement (talk to the transaction task);
  • reply if we have it (reply from the transaction task);
  • reply if we don't have it (reply from the transaction task);
  • accept the received new transaction (talk to the transaction task);
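The announcement/reply flow above could be sketched roughly as follows. The types and names here are illustrative placeholders, not the actual jormungandr task API:

```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum Reply {
    AlreadyHave,     // we already know this transaction
    SendTransaction, // ask the peer to send the full transaction
}

struct TxPool {
    known: HashSet<u64>, // transaction ids (hashes in the real node)
}

impl TxPool {
    fn new() -> Self {
        TxPool { known: HashSet::new() }
    }

    // Handle an announcement from a peer: reply whether we have it or want it.
    fn on_announcement(&self, tx_id: u64) -> Reply {
        if self.known.contains(&tx_id) {
            Reply::AlreadyHave
        } else {
            Reply::SendTransaction
        }
    }

    // Accept the full transaction once the peer delivers it.
    fn accept(&mut self, tx_id: u64) {
        self.known.insert(tx_id);
    }
}

fn main() {
    let mut pool = TxPool::new();
    assert_eq!(pool.on_announcement(42), Reply::SendTransaction);
    pool.accept(42);
    assert_eq!(pool.on_announcement(42), Reply::AlreadyHave);
    println!("announcement flow ok");
}
```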

bootstrap the blockchain on start up

On start up we need to figure out how far behind the blockchain we are.

  1. we need to load our storage system, find our current tip;
  2. connect to every remote we know of and fetch the necessary blocks;
  • grpc one is done in #45
  • implement the old ntt one too
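A rough sketch of the bootstrap flow, under the assumption that remotes expose a tip query and a block-range fetch; all names here are illustrative, not the real storage or network API:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Tip {
    height: u64,
}

trait Remote {
    fn tip(&self) -> Tip;
    // block heights stand in for real blocks in this sketch
    fn fetch_blocks(&self, from: u64, to: u64) -> Vec<u64>;
}

struct FakeRemote {
    height: u64,
}

impl Remote for FakeRemote {
    fn tip(&self) -> Tip {
        Tip { height: self.height }
    }
    fn fetch_blocks(&self, from: u64, to: u64) -> Vec<u64> {
        (from..=to).collect()
    }
}

fn bootstrap(local_tip: Tip, remotes: &[&dyn Remote]) -> Vec<u64> {
    // ask every remote for its tip and pick the most advanced one
    let best = remotes.iter().max_by_key(|r| r.tip().height);
    match best {
        Some(r) if r.tip().height > local_tip.height => {
            // fetch everything between our tip and theirs
            r.fetch_blocks(local_tip.height + 1, r.tip().height)
        }
        _ => Vec::new(), // we are already up to date
    }
}

fn main() {
    let a = FakeRemote { height: 5 };
    let b = FakeRemote { height: 9 };
    let fetched = bootstrap(Tip { height: 3 }, &[&a, &b]);
    assert_eq!(fetched, vec![4, 5, 6, 7, 8, 9]);
    println!("fetched {} blocks", fetched.len());
}
```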

Problem finding protoc on NixOS

When trying to install jormungandr on a NixOS machine, the protobuf generation fails with a "no such file or directory" error:

     Running `/home/qnikst/workspace/tweag/jormungandr/target/debug/build/jormungandr-1c68e7af3d29fb06/build-script-build`
[jormungandr 0.0.1] No such file or directory (os error 2)
error: failed to run custom build command for `jormungandr v0.0.1 (/home/qnikst/workspace/tweag/jormungandr)`
process didn't exit successfully: `/home/qnikst/workspace/tweag/jormungandr/target/debug/build/jormungandr-1c68e7af3d29fb06/build-script-build` (exit code: 1)
--- stderr
No such file or directory (os error 2)

It turns out that tower-grpc-build is trying to access non-existent locale directories:

24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 openat(AT_FDCWD, "/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
24897 write(2, "strace: exec: No such file or di"..., 40) = 40

Setting LOCALE_ARCHIVE doesn't help.

Expected behaviour
jormungandr installs successfully.

Additional context
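This class of build-script failure ("No such file or directory (os error 2)" from the spawning process) usually means the executable the build script tries to run, here protoc, cannot be found on PATH, rather than a data file being absent; on NixOS, adding the protobuf package to the build environment typically resolves it. A tiny diagnostic sketch, where check_tool is a hypothetical helper and not part of jormungandr:

```rust
use std::process::Command;

// Try to spawn a tool; a spawn failure is usually "not on PATH".
fn check_tool(name: &str) -> Result<(), String> {
    Command::new(name)
        .arg("--version")
        .output()
        .map(|_| ())
        .map_err(|e| format!("cannot run `{}`: {}", name, e))
}

fn main() {
    // On NixOS, putting protobuf into the build environment (e.g. a
    // nix-shell with the protobuf package) should make this succeed.
    match check_tool("protoc") {
        Ok(()) => println!("protoc found"),
        Err(e) => eprintln!("{}", e),
    }
}
```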

block creation

  • add basic leadership algorithm
  • abstract slot leadership criterions
  • gather transaction pool
  • create block
  • sign block
  • "append" to blockchain/network
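The steps above could be sketched as follows, with a trivial round-robin stand-in for the abstracted leadership criterion; every name here is illustrative:

```rust
struct Block {
    slot: u64,
    txs: Vec<u64>,  // transaction ids stand in for transactions
    signature: u64, // placeholder for a real block signature
}

// Abstracted slot-leadership criterion (here: round robin over node ids).
fn is_leader(node_id: u64, slot: u64, n_nodes: u64) -> bool {
    slot % n_nodes == node_id
}

// Gather the transaction pool, create the block, and "sign" it.
fn create_block(slot: u64, pool: &mut Vec<u64>, secret: u64) -> Block {
    let txs = std::mem::take(pool); // drain the pool into the block
    let signature = secret ^ slot;  // stand-in for signing
    Block { slot, txs, signature }
}

fn main() {
    let mut pool = vec![1, 2, 3];
    let (node_id, n_nodes, secret) = (1, 3, 0xdead);
    let slot = 7; // 7 % 3 == 1, so node 1 leads this slot
    assert!(is_leader(node_id, slot, n_nodes));
    let block = create_block(slot, &mut pool, secret);
    assert_eq!(block.txs, vec![1, 2, 3]);
    assert!(pool.is_empty()); // the pool was drained into the block
    println!("created block at slot {}", block.slot);
}
```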

Testing: survive removing nodes from a running demo

Motivation:
We need to test that jormungandr can continue to produce blocks after nodes are added and removed.

Tests:
With the protobuf monitor from #46 running in the background these operations should be possible without stopping block creation.

delete 1

  • launch the demo with 4 leader nodes
  • delete a node

delete 4 sequence

  • launch the demo with 5 nodes
  • delete 1
  • wait
  • delete 1
  • wait
  • delete 1
  • wait
  • delete 1

delete 4

  • launch the demo with 5 nodes
  • delete 4

These test cases should also have variants that add:

  • a passive node (i.e. no keys configured), ensuring that block creation keeps going
  • nodes being restarted instead of just stopped (this may be TDD at this stage)

Task management: join threads at the end of the main.rs' main function

The current design distinguishes the different boxed components/workers/actors of the node (network Tokio thread pool, transaction pool, clock, ...).

Currently we start threads without waiting for them at the end of the process, causing the process to terminate while we want it to await all the different tasks.
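A minimal sketch of the intended fix: keep every JoinHandle spawned from main and join them all before returning, so the process does not exit while workers are still running (run_task is a placeholder for a real worker loop):

```rust
use std::thread;

// Placeholder for a real task's event loop.
fn run_task(name: &str) -> String {
    format!("{} done", name)
}

fn main() {
    let mut handles = Vec::new();
    for name in ["network", "transaction-pool", "clock"] {
        handles.push(thread::spawn(move || run_task(name)));
    }
    // Without this loop, main would return immediately and the process
    // would terminate while the tasks are still running.
    for h in handles {
        let msg = h.join().expect("task panicked");
        println!("{}", msg);
    }
}
```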

Docker packaging

Definition
As a long term goal we need to be able to deliver jormungandr via docker to enable ease of deployment.

To do this we need to evaluate the various options available in the docker world for us, this task will cover that evaluation in the context of the following requirements:

Must haves

  • We should be able to persist the nodes' state folders between runs, or wipe them each time
  • We will need a way to swap in a known state
  • Keys and genesis data should be swappable independently of state data.

Nice to haves

  • ability to connect debuggers into the docker environment

Definitions

  • keys = the xprv and xpub files the node uses
  • genesis = the genesis.json file all the nodes share
  • state = the blob, epoch, chainstate, index, pack, tag and refpack folders each node creates.

Deliverables

  • A spec for our docker environment
  • A discussion of that spec with possible refinements
  • A branch containing a POC implementation of a docker environment

network v1.5

  • protobuf/grpc integration in parallel with the current network
  • add protobuf support for current (or similar) messages (send transaction/block, query getblocks)
  • improvement to current messages APIs
  • peer2peer APIs

Implement a demo launcher

Description: As a QA I want to be able to run our Rust code in a repeatable manner so that I can develop and run tests for it.

Background:
In the Haskell code we have a demo script that allows people to run a simple setup to demonstrate the program; this also creates a standardised environment for testing.

This is the script that controls it.
https://github.com/input-output-hk/cardano-sl/blob/develop/scripts/launch/demo-nix.sh

This script covers generating the genesis data, importing it to the node, starting a wallet and making the funds available in that wallet; the new launcher should do the same.

I propose to keep the same parameters in the interests of parity:
-d run with client auth disabled
-w enable wallet
-i INT number of wallets to import (default: 0)

Requirements:

  • Initially this will not be a nix script, but it should be callable from nix.
  • It should run until stopped with Ctrl+C.
  • It should handle any privilege escalations required in a way that still allows it to run in screen.

Acceptance Criteria:

The script should be runnable with -w -i 5, after which a wallet service with 5 imported wallets in it will be available.

structural logging, step 1: prepare for smooth transition

relates to #55

The proposal

As we have a big RFC that will be implemented at some point, I'd like to go with the simplest possible solution.
The requirements for the solution:

  1. The solution should be compatible with the current logging framework.
  2. The solution should provide enough API for using slog or new structured logging RFC.
  3. The solution should allow easy migration to the more complex frameworks.

The idea is to implement a new macro (the example below is a very quick proof of concept, and println! was used instead of log):

macro_rules! nlog {
    // $msg is matched as a single token so println!/concat! still see a string literal
    ($msg:tt) => { println!($msg) };
    ($msg:tt, $($params:expr),+) => { println!($msg, $($params),+) };
    ($msg:tt, $($params:expr),+; $($name:ident = $i:expr),+) => {
        println!(
            concat!($("[", stringify!($name), "=", "{:#?}", "]"),+, $msg),
            $($i),+, $($params),+
        )
    };
}

The macro should have three forms: the first two (a, b) are fully compatible with log!, and the third (c) is an extended form that keeps additional structured parameters after the ;.

This syntax is very close to the slog one.

Further extending:

  1. step 0: as the basic forms are fully compatible with the current framework, we can just adopt the new form and get a wrapper for free, i.e. rename nlog to log.
  2. step 1: if we choose to move to slog, we just need to update the definition of nlog to use slog; as the syntax is very close, the transformation will be pretty simple. The only requirement is that we will need to add an additional logger object (proposed syntax is nlog!(logger: msg, ...)). Unfortunately, at this step we will have to either use a logger from the TLS or keep it in some other state, and this will require some refactoring (hopefully as simple as running sed).

P.S. this proposal does not suggest pluggable logging, only an easy switch to one solution or the other.

Alternative

Start passing a logger context: at the start it could be a HashMap<Str,Str> of the context, but in the future that can become a logger object.

P.P.S. Further extending: I personally like solutions that can create nested contexts, i.e. where we can extend the current logger with an additional context and pass it further down the call stack, like can be done in the katip framework in Haskell. At this point, though, I'm not sure whether there is a way of achieving that which is both easy and compatible with the other solutions.

Note: copied from comment

Testing: Review existing unit tests

Motivation:

In order to understand the current state of unit testing and the coverage we have a review of existing unit tests is required.

Methodology:

  • Examine the repo
  • Talk to developers

Outcome:

Updates to the test strategy to set some targets for coverage and definitions of unit testing requirements.

Configuration: start node without-leadership

This is currently a command line option only.

NOTE: this issue needs to be evaluated first, as starting a node without leadership means we are accepting to start a node without any chance of writing a block.

handle block streaming

we need to handle block streaming to enable faster block synchronization between nodes

handle request GetBlockHeaders (respond BlockHeaders)

  • handle the request for GetBlockHeaders
    • get tip
    • get range
  • respond appropriately (we may not have the response from the synchronous query task, but we can prepare the response tooling).

We don't have to support finding the common block yet in this ticket. This will be part of the synchronous query/client task.

Add support for Genesis consensus

  • VRF implementation
  • KES implementation
  • Block leadership selection
  • Get stake distribution
  • Chain selection in k
  • Chain selection out of k

protobuf/GRPC: simple queries

relates to #26

implement simple query commands to:

  • GetTip. This command will return the head of the blockchain.
    • input: none;
    • output: BlockDate and Hash. We may want to send the BlockHeader for now for compatibility with the other blockchain though;
  • GetBlocks: get blocks
    • input: a range of block [from..to];
    • output: stream of blocks;
  • GetEpoch: get the pack file of a given epoch (to try to be faster to sync blocks);
    • input: epoch date;
    • output: pack file
  • SetTransaction: propose a new transaction to the node;
    • input: transaction
    • output: Either: Accepted/Rejected(Why), where Why is either: SignatureInvalid/DoubleSpend/AlreadyGotIt
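The four queries could be sketched as a Rust trait along these lines; the types are placeholders for the real protobuf messages, and ToyNode is a toy in-memory implementation for illustration only:

```rust
type Hash = u64;
type BlockDate = (u32, u32); // (epoch, slot)

#[derive(Debug, PartialEq)]
enum RejectReason {
    SignatureInvalid,
    DoubleSpend,
    AlreadyGotIt,
}

#[derive(Debug, PartialEq)]
enum SetTransactionReply {
    Accepted,
    Rejected(RejectReason),
}

trait NodeQueries {
    fn get_tip(&self) -> (BlockDate, Hash);
    // a stream of blocks in the real API; a Vec stands in here
    fn get_blocks(&self, from: Hash, to: Hash) -> Vec<Hash>;
    fn get_epoch(&self, epoch: u32) -> Vec<u8>; // pack file bytes
    fn set_transaction(&mut self, tx: u64) -> SetTransactionReply;
}

struct ToyNode {
    tip: Hash,
    seen: Vec<u64>,
}

impl NodeQueries for ToyNode {
    fn get_tip(&self) -> (BlockDate, Hash) {
        ((0, 0), self.tip)
    }
    fn get_blocks(&self, from: Hash, to: Hash) -> Vec<Hash> {
        (from..=to).collect()
    }
    fn get_epoch(&self, _epoch: u32) -> Vec<u8> {
        Vec::new()
    }
    fn set_transaction(&mut self, tx: u64) -> SetTransactionReply {
        if self.seen.contains(&tx) {
            SetTransactionReply::Rejected(RejectReason::AlreadyGotIt)
        } else {
            self.seen.push(tx);
            SetTransactionReply::Accepted
        }
    }
}

fn main() {
    let mut node = ToyNode { tip: 10, seen: Vec::new() };
    assert_eq!(node.set_transaction(1), SetTransactionReply::Accepted);
    assert_eq!(
        node.set_transaction(1),
        SetTransactionReply::Rejected(RejectReason::AlreadyGotIt)
    );
    println!("tip = {:?}", node.get_tip());
}
```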

Configuration: logging options

We need to extend the logging option in the configuration file:

  • where to dump the logs;
  • the verbosity level of the logs;

I can see cases where we might want the logs redirected to syslog, but with the warnings and errors also going to the standard error output.

might need to sync with #55

Generalize the network types

abstract the current messages to prevent leakage of protocol-specific parts into the generic tasks of the node.

  • specific protocols (ntt, protobuf, pigeon, ...) can be added/removed painlessly
  • node types are independent from specific protocols

Configuration: add the list of peers in the configuration file

extend the configuration file to add the list of peers and merge them in the general settings.

I.e.: add the list of connections there: https://github.com/input-output-hk/jormungandr/blob/master/src/settings/config.rs#L5-L9
and merge it with what the command line got here: https://github.com/input-output-hk/jormungandr/blob/master/src/settings/mod.rs#L62-L79

Something to note is that, by default, the command line overrides conflicting values in the configuration file.
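One plausible reading of the merge rule, sketched with illustrative settings types (not the actual structs in src/settings): command-line peers take precedence, and config-file peers that the command line did not mention are kept:

```rust
#[derive(Debug, PartialEq, Clone)]
struct Settings {
    peers: Vec<String>,
}

// Command-line entries come first; config-file entries are appended
// unless the command line already mentioned them.
fn merge(config_file: Settings, command_line: Settings) -> Settings {
    let mut peers = command_line.peers.clone();
    for p in config_file.peers {
        if !peers.contains(&p) {
            peers.push(p);
        }
    }
    Settings { peers }
}

fn main() {
    let from_file = Settings {
        peers: vec!["10.0.0.1:8000".into(), "10.0.0.2:8000".into()],
    };
    let from_cli = Settings {
        peers: vec!["10.0.0.1:8000".into()],
    };
    let merged = merge(from_file, from_cli);
    assert_eq!(merged.peers.len(), 2); // union, CLI entries first
    println!("{:?}", merged);
}
```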

Structural Logging: setup the foundation for structural logging

Unfortunately, structured logging is still only at the discussion stage in the main log crate (see rust-lang/log#296), so we won't be able to use that just yet.

We could consider using the slog crate, which seems to provide enough of the features we need. However, in the log crate's issue 296, the slog maintainer also mentioned that his crate may become redundant once log supports structured logging.

So ideally we need some kind of abstraction glue on top of our current logs, so that we can use slog but easily move back to the mainstream log crate.

  • make a first change to abstract the logging mechanism, so we can switch from log to slog easily afterwards; the change will need to provide the same macros log and slog do, but our macros will redirect to log's macros to start with.
  • in a second change, do the move from log to slog; we will need to make sure that the logging system is still async and is not blocking threads.

Fault injection

mechanism to add faults in : transactions, blocks, network

e.g. network disruption (blocks blocked randomly, blocks man-in-the-middle changes, etc)

Logging: have a default logging enabled from the very beginning

Currently the logger is set up after the command line is parsed, after the configuration file is loaded, and after they are both merged into the global settings of the application.

We need to allow initial logging from the very beginning of the program, displaying only warnings and errors, always on the standard error output.

This will allow us to display warnings and errors when merging the command line options and the configuration file options into the settings (incompatible options, ignored states, etc.)
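One plausible shape for this, sketched with illustrative names: a hard-wired bootstrap logger that filters at Warn and writes to stderr until the real logger is configured:

```rust
use std::io::Write;

#[derive(PartialEq, PartialOrd, Clone, Copy)]
enum Level {
    Error, // lowest discriminant: most severe
    Warn,
    Info,
}

// Before configuration is parsed, only warnings and errors are shown.
const BOOTSTRAP_LEVEL: Level = Level::Warn;

fn enabled(level: Level) -> bool {
    level <= BOOTSTRAP_LEVEL
}

fn early_log(level: Level, msg: &str) {
    if enabled(level) {
        // always stderr until the real logger takes over
        let _ = writeln!(std::io::stderr(), "[pre-config] {}", msg);
    }
}

fn main() {
    early_log(Level::Warn, "option ignored: overridden by config file");
    early_log(Level::Info, "suppressed until the settings are merged");
}
```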

Demo Script: make the node connect to each other

there are 4 kinds of commands to make jormungandr accept connections and connect to new nodes.

the NTT ones:

  • --legacy-listen 127.0.0.1:8000 start accepting connection at this address/port number
  • --legacy-connect 127.0.0.1:8000 connect to a node that has used --legacy-listen 127.0.0.1:8000

so we will need the script to handle this and add these options to the node runner.

Organising the topology of the nodes is a bit difficult and complex. So, as a start, we can make things a bit circular. Let's assume we have decided to start n nodes:

  • node 1 connects to node 2;
  • node 2 connects to node 3;
  • ...
  • node k - 1 connects to node k;
  • ...
  • node n - 1 connects to node n
  • node n connects to node 1.
+------------------+
|                  |  Node 1 connects to Node2
|                  |
|   Node 1         | +-------------------------------------------------+
|                  |                                                   |
|                  |                                                   v
|                  |
|                  |                                        +-------------------------+
+------------------+                                        |                         |
                                                            |                         |
     ^                                                      |                         |
     | Node3 connects                                       |                         |
     | to node1                                             |     Node 2              |
     |                                                      |                         |
     |                                                      |                         |
     |           +-----------------------+                  |                         |
     |           |                       |                  |                         |
     |           |                       |                  |                         |
     |           |                       |                  |                         |
     |           |                       |                  +------+------------------+
     |           |     Node 3            |                         |
     |           |                       |                         |
     +-----------+                       |                         |
                 |                       | <-----------------------+
                 |                       |
                 |                       |             Node2 connects to node3
                 |                       |
                 +-----------------------+

So you will need to have something like:

node number legacy-listen option legacy-connect option
1 --legacy-listen 127.0.0.1:8001 --legacy-connect 127.0.0.1:8002
2 --legacy-listen 127.0.0.1:8002 --legacy-connect 127.0.0.1:8003
k --legacy-listen 127.0.0.1:$((8000 + ${k})) --legacy-connect 127.0.0.1:$((8000 + ${k} + 1))
n --legacy-listen 127.0.0.1:$((8000 + ${n})) --legacy-connect 127.0.0.1:8001
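The same port scheme, generated programmatically (a sketch; the real demo script is shell, this just checks the arithmetic): node k listens on 8000 + k and connects to the next node's listen port, with node n wrapping around to node 1.

```rust
// Compute the legacy-listen / legacy-connect options for node k of n.
fn node_options(k: u32, n: u32) -> (String, String) {
    let listen = 8000 + k;
    // the last node closes the ring by connecting back to node 1
    let connect = if k == n { 8001 } else { 8000 + k + 1 };
    (
        format!("--legacy-listen 127.0.0.1:{}", listen),
        format!("--legacy-connect 127.0.0.1:{}", connect),
    )
}

fn main() {
    let n = 4;
    for k in 1..=n {
        let (listen, connect) = node_options(k, n);
        println!("node {}: {} {}", k, listen, connect);
    }
    // the ring closes: node n connects back to node 1's listen port
    assert_eq!(node_options(n, n).1, "--legacy-connect 127.0.0.1:8001");
}
```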

Consensus API

Unify BFT and Genesis under one API to prevent duplication

handle network timeouts

We need to handle timeouts on connections so we don't hold on to connections that are not responsive
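A sketch using std's built-in socket timeouts, which is one straightforward way to bound both the connect and every read on a silent peer; the helper name is illustrative:

```rust
use std::io::Read;
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::time::Duration;

// Bound the connect and every read/write so an unresponsive peer
// cannot hold the task forever.
fn connect_with_timeouts(addr: SocketAddr) -> std::io::Result<TcpStream> {
    let stream = TcpStream::connect_timeout(&addr, Duration::from_secs(5))?;
    stream.set_read_timeout(Some(Duration::from_millis(100)))?;
    stream.set_write_timeout(Some(Duration::from_secs(5)))?;
    Ok(stream)
}

fn main() -> std::io::Result<()> {
    // A local peer that accepts the connection but never sends anything:
    // without a read timeout we would block forever.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let accept = std::thread::spawn(move || listener.accept());
    let mut stream = connect_with_timeouts(addr)?;
    let _peer = accept.join().unwrap()?; // keep the silent peer's socket alive
    let mut buf = [0u8; 1];
    let err = stream.read(&mut buf).unwrap_err();
    assert!(matches!(
        err.kind(),
        std::io::ErrorKind::WouldBlock | std::io::ErrorKind::TimedOut
    ));
    println!("read timed out as expected");
    Ok(())
}
```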

testing: Create protobuf monitor

Motivation:

To enable verification of the BFT consensus we will need some code that can query our nodes via GRPC to read state and monitor block creation.

Requirements:

  • read the tip state of each node
  • observe each node progressing to the same tip state
  • ensure this happens within an agreed timescale
  • log failures without exiting

This code will be reused later as a test fixture that can monitor underlying state while other operations are performed via the cardano-cli. At that stage it should be integrated with a wider framework such as Theseus but for this task it need only be standalone.

Acceptance Criteria:

The code should be runnable from the command line against the demo, and it should be observed reporting on the creation of at least 2 blocks.
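The core convergence check could be sketched like this, with tips standing in for the per-node GRPC GetTip results; everything here is illustrative:

```rust
// All nodes agree when every reported tip is equal.
fn converged(tips: &[u64]) -> bool {
    tips.windows(2).all(|w| w[0] == w[1])
}

// Walk through polling rounds (one Vec of tips per round) and return the
// first round at which all nodes agreed, or None if they never did.
// Disagreements are logged, not fatal, per the requirements.
fn first_converged_round(rounds: &[Vec<u64>]) -> Option<usize> {
    for (i, tips) in rounds.iter().enumerate() {
        if converged(tips) {
            return Some(i);
        }
        eprintln!("round {}: nodes disagree: {:?}", i, tips); // log, don't exit
    }
    None
}

fn main() {
    let rounds = vec![vec![10, 9, 10], vec![10, 10, 10]];
    assert_eq!(first_converged_round(&rounds), Some(1));
    println!("nodes converged at round 1");
}
```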

testing: launch N participating nodes

We need to continue the work from #31: based on the generated config file and the N key pairs, we need to start N nodes with the config file and the respective private key.

At the beginning we can assume that every node knows about every other node's existence and that they all connect to each other. This will be a good enough start for testing the protocol.

relates to #18

demo script is not working

Describe the bug

I have been following the instructions from demo/Readme.md and found the following error:

➜  demo git:(master) ✗ ./setup.sh demo 2 demo-genesis.json 

Jormungandr - Setup generator

Using these options:
  Folder: demo
  Nodes: 2
  Genesis: demo-genesis.json
  CLI: /Users/nicolasdiprima/.cargo/bin/cardano-cli
  Config: demo-config.yaml

Building Jormungandr


Build finished

Copying in template
template/bin -> demo/bin
template/bin/stop_nodes.sh -> demo/bin/stop_nodes.sh
template/bin/list_nodes.sh -> demo/bin/list_nodes.sh
template/bin/start_nodes.sh -> demo/bin/start_nodes.sh
template/bin/genkeypair.sh -> demo/bin/genkeypair.sh

Copying in binaries

Making Configs for 2 nodes
~/work/iohk/jormungandr/demo ~/work/iohk/jormungandr/demo
Making keys for node_1
PRIV = demo/nodes/1/node_1.xprv
PUB  = demo/nodes/1/node_1.xpub
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1009:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1009:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
Adding key to global config
cat: demo/nodes/1/node_1.xpub: No such file or directory
./setup.sh: line 124: demo/config.yaml: No such file or directory

cat: demo/nodes/1/node_1.xprv: No such file or directory
Making keys for node_2
PRIV = demo/nodes/2/node_2.xprv
PUB  = demo/nodes/2/node_2.xpub
./setup.sh: line 119: ../../bin/cardano-cli: No such file or directory
./setup.sh: line 120: ../../bin/cardano-cli: No such file or directory
Adding key to global config
cat: demo/nodes/2/node_2.xpub: No such file or directory
./setup.sh: line 124: demo/config.yaml: No such file or directory

cat: demo/nodes/2/node_2.xprv: No such file or directory
~/work/iohk/jormungandr/demo
./setup.sh: line 136: -1: substring expression < 0
Copying in genesis and patching in keys
./setup.sh: line 139: jq: command not found

Setup is complete

Its not Ragnarok yet but if you want to unleash your jormungandr do this:

cd demo/bin/
./start_nodes.sh

So to me it seems that there are multiple things wrong here:

  1. the script does not fail at the first error it sees and continues until the end; every error reported by every command should be handled and taken care of;
  2. the CLI is found in the PATH (see the first lines of the output above), yet it tries to find it somewhere in ../../bin/cardano-cli?

To Reproduce

Steps to reproduce the behaviour:

  1. clone the repository fresh;
  2. cargo build
  3. cd demo
  4. cp ../cardano-deps/exe-common/genesis/5f20df933584822601f9e3f8c024eb5eb252fe8cefb24d1317dc3d432e940ebb.json demo-genesis.json
  5. ./setup.sh demo 2 demo-genesis.json or even ./setup.sh demo 2 demo-genesis.json ~/.cargo/bin/cardano-cli

Expected behavior

  1. the cardano-cli should have been found properly, and not looked up in a hard-coded relative location;
  2. the script should fail at the first error it sees instead of behaving unexpectedly and reporting that the setup was successful

Additional context

nothing to add
