
quorum-examples's Introduction

Quorum Examples

⚠️ Project Deprecation Notice ⚠️

quorum-examples will be deprecated on July 11th, 2022, after which we will stop supporting the project.

From now on, we encourage all users to use quorum-dev-quickstart, a similar tool offering extra compatibility with Quorum products, in particular Hyperledger Besu and Orchestrate.

We will continue to support quorum-examples, in particular fixing bugs, until the end of July 2022.

If you have any questions or concerns, please reach out to the ConsenSys protocol engineering team on Discord or by email.

Usage notice

This project is meant for Quorum users (mainly Quorum contributors) who are already familiar with Quorum deployments and who are looking for some configurations for their network.

If you have limited experience with Quorum, or if you are looking to start a Quorum network for testing purposes, then you should instead use our quorum-dev-quickstart.

We do not guarantee that all scripts in this project work out of the box; in particular, some scripts may be out of date and will require adjustments from users to work properly on the latest Quorum versions.

About

Current examples include:

  • 7nodes: Starts up a fully-functioning Quorum environment consisting of 7 independent nodes. From this example one can test consensus, privacy, and all the expected functionality of an Ethereum platform.

Additional examples exist highlighting and showcasing the functionality offered by the Quorum platform. An up-to-date list can be found in the Quorum Documentation site.

Installation

Clone the quorum-examples repo.

git clone https://github.com/Consensys/quorum-examples.git

Important note: Any account/encryption keys used in the quorum-examples repo are for demonstration and testing purposes only. Before running a real environment, new keys should be generated using Geth's account tool, Tessera's -keygen option, and Constellation's --generate-keys option.

Prepare your environment

A 7-node Quorum network must be running before the example can be run. The quorum-examples repo provides the means to create a pre-configured sample network in minutes.

There are 3 ways to start the sample network; each method is detailed below:

  1. By running a pre-configured Vagrant virtual-machine environment which comes complete with Quorum, Constellation, Tessera and the 7nodes example already installed. Bash scripts provided in the examples are used to create the sample network: Running with Vagrant
  2. By running docker-compose against a preconfigured compose file to create the sample network: Running with Docker
  3. By installing Quorum and Tessera/Constellation locally and using bash scripts provided in the examples to create the sample network: Running locally

Your environment must be prepared differently depending on the method being used to run the example.

Running with Vagrant

  1. Install VirtualBox

  2. Install Vagrant

  3. Download and start the Vagrant instance (note: running vagrant up takes approx 5 mins):

    git clone https://github.com/Consensys/quorum-examples
    cd quorum-examples
    vagrant up
    vagrant ssh
  4. To shutdown the Vagrant instance, run vagrant suspend. To delete it, run vagrant destroy. To start from scratch, run vagrant up after destroying the instance.

Troubleshooting Vagrant

  • If you are behind a proxy server, please see Consensys/quorum#23.
  • If you are using macOS and get an error saying that the ubuntu/xenial64 image doesn't exist, please run sudo rm -r /opt/vagrant/embedded/bin/curl. This is usually due to issues with the version of curl bundled with Vagrant.
  • If you receive the error default: cp: cannot open '/path/to/geth.ipc' for reading: Operation not supported after running vagrant up, run ./raft-init.sh within the 7nodes directory on your local machine. This will remove temporary files created after running 7nodes locally and will enable vagrant up to execute correctly.

Troubleshooting Vagrant: VBoxManage error during vagrant up

If encountering an error like

VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine

during vagrant up try the following:

  • macOS - Open Security & Privacy system preferences after VirtualBox installation. Allow installation of software from Oracle (as described here). Uninstalling and installing VirtualBox may be required to show the prompt again.
  • Download VM VirtualBox Extension Pack from VirtualBox downloads (macOS - Also allow installation as described above).

Troubleshooting Vagrant: Memory usage

  • The Vagrant instance is allocated 6 GB of memory. This is defined in the Vagrantfile, v.memory = 6144. This has been deemed a suitable value to allow the VM and examples to run as expected. The memory allocation can be changed by updating this value and running vagrant reload to apply the change.

  • If the machine you are using has less than 8 GB memory you will likely encounter system issues such as slowdown and unresponsiveness when starting the Vagrant instance, as your machine will not have the capacity to run the VM. There are several steps that can be taken to overcome this:

    1. Shutdown any running processes that are not required
    2. If running the 7nodes example, reduce the number of nodes started. See 7nodes: Reducing the number of nodes for details on how to do this.
    3. Set up and run the examples locally. Running locally reduces the load on your memory compared to running in Vagrant.

Running with Docker

  1. Install Docker (https://www.docker.com/get-started)
    • If your Docker distribution does not contain docker-compose, follow this to install Docker Compose
    • Make sure your Docker daemon has at least 4 GB of memory
    • Requires Docker Engine 18.02.0+ and Docker Compose 1.21+
  2. Download and run docker-compose
    git clone https://github.com/Consensys/quorum-examples
    cd quorum-examples
    docker-compose up -d
  3. By default, the Quorum network is created with Tessera privacy managers and Istanbul BFT consensus. To use Raft consensus, set the environment variable QUORUM_CONSENSUS=raft before running docker-compose. To start a Quorum node without its associated privacy transaction manager, set PRIVATE_CONFIG=ignore. QUORUM_CONSENSUS and PRIVATE_CONFIG can be set together.
    PRIVATE_CONFIG=ignore QUORUM_CONSENSUS=raft docker-compose up -d
    Note that additional geth command-line parameters can also be specified via the environment variable QUORUM_GETH_ARGS.
  4. Run docker ps to verify that all quorum-examples containers (7 nodes and 7 tx managers) are healthy
  5. Run docker logs <container-name> -f to view the logs for a particular container
  6. Note: to run the 7nodes demo, use the following snippet to open a geth JavaScript console on a desired node (using the container name from docker ps) and send a private transaction
    $ docker exec -it quorum-examples_node1_1 geth attach /qdata/dd/geth.ipc
    Welcome to the Geth JavaScript console!
    
    instance: Geth/node1-istanbul/v1.7.2-stable/linux-amd64/go1.9.7
    coinbase: 0xd8dba507e85f116b1f7e231ca8525fc9008a6966
    at block: 70 (Thu, 18 Oct 2018 14:49:47 UTC)
     datadir: /qdata/dd
     modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
    
    > loadScript('/examples/private-contract.js')
  7. Shutdown Quorum Network
    docker-compose down

Troubleshooting Docker

  1. Docker is frozen
    • Check whether your Docker daemon is allocated enough memory (minimum 4 GB)
  2. Tessera crashes due to a missing file/directory
    • This occurs when the location of the quorum-examples folder is not shared with Docker
    • Please refer to the Docker documentation on file sharing for more details
  3. If you run Docker inside Docker, make sure to run the container with --privileged

Running locally

Note: Quorum runs on Ubuntu-based and macOS machines, while Constellation runs only on Ubuntu-based machines. Running the examples therefore requires an Ubuntu-based or macOS machine; if running the examples with Constellation, an Ubuntu-based machine is required.

  1. Install Golang

  2. Download and build Quorum:

    git clone https://github.com/Consensys/quorum
    cd quorum
    make
    # add the freshly built geth binary to PATH
    GETHDIR=$(pwd); export PATH=$GETHDIR/build/bin:$PATH
    cd ..
  3. Download and build Tessera (see README for build options)

    git clone https://github.com/Consensys/tessera.git
    cd tessera
    mvn install
  4. Download quorum-examples

    git clone https://github.com/Consensys/quorum-examples

Starting the 7nodes sample network

Note: This is not required if docker-compose has been used to prepare the network, as the docker-compose command performs these actions for you.

Shell scripts are included in the examples to make it simple to configure the network and start submitting transactions.

All logs and temporary data are written to the qdata folder.

The sample network can be created to run using the Istanbul BFT, QBFT, Raft, or Clique POA consensus mechanisms. In the following commands, replace {consensus} with one of raft, istanbul, qbft, or clique depending on the consensus mechanism you want to use.
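
As a quick sanity check before substituting {consensus} into the commands below, the choice can be validated with a small shell guard (illustrative only, not part of the repo):

```shell
# Illustrative guard: reject anything other than the four supported mechanisms
# before substituting it into the ./{consensus}-init.sh / ./{consensus}-start.sh calls.
consensus="${1:-raft}"   # first script argument, defaulting to raft
case "$consensus" in
  raft|istanbul|qbft|clique)
    echo "./${consensus}-init.sh" ;;
  *)
    echo "unsupported consensus: $consensus" >&2
    exit 1 ;;
esac
```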

  1. Navigate to the 7nodes example directory, configure the Quorum nodes and initialize accounts & keystores:

    cd path/to/7nodes
    ./{consensus}-init.sh
  2. Start the Quorum and privacy manager nodes (Constellation or Tessera):

    • If running in Vagrant:

      ./{consensus}-start.sh

      By default, Tessera will be used as the privacy manager. To use Constellation run the following:

      ./{consensus}-start.sh constellation
      
    • If running locally:

      TESSERA_{JAR|SCRIPT}=/path/to/jar-or-startscript ./{consensus}-start.sh
      

      The {consensus}-start.sh scripts look for a Tessera executable at default paths unique to the Vagrant environment. When running locally, these defaults must be overridden with the TESSERA_SCRIPT or TESSERA_JAR environment variables. Set TESSERA_SCRIPT when using newer versions of Tessera distributed as a .tar: extract the tar and set TESSERA_SCRIPT to the contained runnable script. Set TESSERA_JAR when using older versions of Tessera distributed as an executable .jar.

  3. You are now ready to start sending private/public transactions between the nodes

  4. To stop the network:

    ./stop.sh
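
The executable lookup described in step 2 can be sketched roughly as follows. This is an illustration of the behaviour, not the actual script, and the fallback path shown is a made-up stand-in for the real Vagrant default:

```shell
# Illustrative sketch of how a {consensus}-start.sh script might pick the
# Tessera executable. TESSERA_SCRIPT/TESSERA_JAR are the documented overrides;
# the fallback path below is hypothetical.
resolve_tessera() {
  if [ -n "$TESSERA_SCRIPT" ]; then
    echo "$TESSERA_SCRIPT"            # newer releases: runnable script from the .tar
  elif [ -n "$TESSERA_JAR" ]; then
    echo "java -jar $TESSERA_JAR"     # older releases: executable .jar
  else
    echo "java -jar /path/to/vagrant-default/tessera.jar"  # hypothetical default
  fi
}

# Example: running locally with the jar override set
TESSERA_SCRIPT=""
TESSERA_JAR="/opt/tessera/tessera.jar"
cmd=$(resolve_tessera)
echo "$cmd"
```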

Running the example

quorum-examples includes some simple transaction contracts to demonstrate the privacy features of Quorum. See the 7nodes Example page for details on how to run them.

Variations

Reducing the number of nodes

It is easy to reduce the number of nodes used in the example network. You may want to do this for memory usage reasons or just to experiment with a different network configuration.

For example, to run the example with 5 nodes instead of 7, follow these steps:

  1. Update the list of nodes involved in consensus

    • If using Raft
      1. Remove node 6 and node 7's enode addresses from permissioned-nodes.json (i.e. the entries with raftport 50406 and 50407). Ensure that there is no trailing comma on the last row of enode details in the file.
    • If using IBFT
      1. Find the 20-byte address representations of node 6 and node 7's nodekey (nodekeys located at qdata/dd{i}/geth/nodekey). There are many ways to do this, one is to run a script making use of ethereumjs-wallet:
        const wlt = require('ethereumjs-wallet');
        
        var nodekey = '1be3b50b31734be48452c29d714941ba165ef0cbf3ccea8ca16c45e3d8d45fb0';
        var wallet = wlt.fromPrivateKey(Buffer.from(nodekey, 'hex'));
        
        console.log('addr: ' + wallet.getAddressString());
      2. Use istanbul-tools to decode the extraData field in istanbul-genesis.json
        git clone https://github.com/Consensys/istanbul-tools.git
        cd istanbul-tools
        make istanbul
        ./build/bin/istanbul extra decode --extradata <...>
      3. Copy the output into a new .toml file and update the formatting to the following:
        vanity = "0x0000000000000000000000000000000000000000000000000000000000000000"
        validators = [
          "0xd8dba507e85f116b1f7e231ca8525fc9008a6966",
          "0x6571d97f340c8495b661a823f2c2145ca47d63c2",
          ...
        ]
      4. Remove the addresses of node 6 and node 7 from the validators list
      5. Use istanbul-tools to encode the .toml as extraData
        ./build/bin/istanbul extra encode --config /path/to/conf.toml
      6. Update the extraData field in istanbul-genesis.json with output from the encoding
    • If using QBFT
      1. Find the 20-byte address representations of node 6 and node 7's nodekey (nodekeys located at qdata/dd{i}/geth/nodekey). There are many ways to do this, one is to run a script making use of ethereumjs-wallet:
        const wlt = require('ethereumjs-wallet');
        
        var nodekey = '1be3b50b31734be48452c29d714941ba165ef0cbf3ccea8ca16c45e3d8d45fb0';
        var wallet = wlt.fromPrivateKey(Buffer.from(nodekey, 'hex'));
        
        console.log('addr: ' + wallet.getAddressString());
      2. Use istanbul-tools to decode the extraData field in qbft-genesis.json
        git clone https://github.com/Consensys/istanbul-tools.git
        cd istanbul-tools
        make qbft
        ./build/bin/qbft extra decode --extradata <...>
      3. Copy the output into a new .toml file and update the formatting to the following:
        vanity = "0x0000000000000000000000000000000000000000000000000000000000000000"
        validators = [
          "0xd8dba507e85f116b1f7e231ca8525fc9008a6966",
          "0x6571d97f340c8495b661a823f2c2145ca47d63c2",
          ...
        ]
      4. Remove the addresses of node 6 and node 7 from the validators list
      5. Use istanbul-tools to encode the .toml as extraData
        ./build/bin/qbft extra encode --config /path/to/conf.toml
      6. Update the extraData field in qbft-genesis.json with output from the encoding
  2. After making these changes, the relevant init/start scripts can be run (replace {consensus} with the relevant consensus mechanism in the following):

    ./{consensus}-init.sh --numNodes 5
    ./{consensus}-start.sh
  3. private-contract.js by default sends a transaction to node 7. As node 7 will no longer be started this must be updated to instead send to node 5:

    1. Copy node 5's public key from ./keys/tm5.pub

    2. Replace the existing privateFor in private-contract.js with the key copied from tm5.pub, e.g.:

      var simple = simpleContract.new(42, {from:web3.eth.accounts[0], data: bytecode, gas: 0x47b760, privateFor: ["R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY="]}, function(e, contract) {...})

You can then follow steps described above to verify that node 5 can see the transaction payload and that nodes 2-4 are unable to see the payload.
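
If you prefer not to edit private-contract.js by hand, step 3 can be done with a one-liner like the following. This is a hypothetical helper, not part of the repo; it simply swaps whatever key is currently in privateFor for the contents of keys/tm5.pub. It is demonstrated here on a throwaway copy; in the real 7nodes directory you would run just the sed line against private-contract.js:

```shell
# Hypothetical helper for step 3: swap the privateFor key in private-contract.js
# for node 5's Tessera public key. Demonstrated on a temp copy of the file.
workdir=$(mktemp -d)
printf '%s\n' 'var simple = simpleContract.new(42, {privateFor: ["OLD_KEY="]}, function(e, contract) {});' > "$workdir/private-contract.js"
printf '%s\n' 'R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=' > "$workdir/tm5.pub"

NEW_KEY=$(cat "$workdir/tm5.pub")
# '|' as the sed delimiter avoids clashes with '/' characters in base64 keys
sed -i.bak "s|privateFor: \[\".*\"\]|privateFor: [\"$NEW_KEY\"]|" "$workdir/private-contract.js"
grep privateFor "$workdir/private-contract.js"
```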

Using a Tessera remote enclave

Tessera v0.9 introduced the ability to run the privacy manager's enclave as a separate process from the Transaction Manager. This is a more secure way to manage and interact with your keys.

To start a sample 7nodes network that uses remote enclaves run ./{consensus}-start.sh tessera-remote. By default this will start 7 Transaction Managers, the first 4 of which use a remote enclave. If you wish to change this number, you will need to add the extra parameter --remoteEnclaves X in the --tesseraOptions, e.g. ./{consensus}-start.sh tessera-remote --tesseraOptions "--remoteEnclaves 7".

Experimenting with alternative curves in Tessera

By default Tessera uses the NaCl (salt) library to encrypt private payloads. If you would like to experiment with or use alternative curves/symmetric ciphers, you can configure the EC Encryptor (which relies on JCA to perform logic similar to NaCl's). The Tessera initialization script uses the following environment variables to generate the encryptor section of the Tessera configuration file:

  • ENCRYPTOR_TYPE (default: NACL): The encryptor type. Possible values are EC or NACL.
  • ENCRYPTOR_EC_ELLIPTIC_CURVE (default: secp256r1): The elliptic curve to use. See the SunEC provider for other options; depending on the JCE provider you are using there may be additional curves available.
  • ENCRYPTOR_EC_SYMMETRIC_CIPHER (default: AES/GCM/NoPadding): The symmetric cipher used for encrypting data (GCM is mandatory, as an initialisation vector is supplied during encryption).
  • ENCRYPTOR_EC_NONCE_LENGTH (default: 24): The nonce length (used as the initialization vector, IV, for symmetric encryption).
  • ENCRYPTOR_EC_SHARED_KEY_LENGTH (default: 32): The key length used for symmetric encryption (the key derivation operation always produces 32-byte keys, so the encryption algorithm must support that length).

Based on the default values above (provided ENCRYPTOR_TYPE is defined as EC) the following configuration entry is produced:

...
    "encryptor": {
        "type":"EC",
        "properties":{
            "symmetricCipher":"AES/GCM/NoPadding",
            "ellipticCurve":"secp256r1",
            "nonceLength":"24",
            "sharedKeyLength":"32"
        }
    }
...

Example:

export ENCRYPTOR_TYPE=EC
export ENCRYPTOR_EC_ELLIPTIC_CURVE=sect571k1
./raft-init.sh
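
The mapping from these environment variables to the encryptor section can be sketched as a small shell function. This is an illustration of the behaviour described above, not the actual init script:

```shell
# Illustrative: emit the "encryptor" configuration fragment from the environment
# variables documented above; the defaults mirror the table.
encryptor_json() {
  if [ "${ENCRYPTOR_TYPE:-NACL}" != "EC" ]; then
    printf '{"type":"NACL"}\n'
    return 0
  fi
  printf '{"type":"EC","properties":{"symmetricCipher":"%s","ellipticCurve":"%s","nonceLength":"%s","sharedKeyLength":"%s"}}\n' \
    "${ENCRYPTOR_EC_SYMMETRIC_CIPHER:-AES/GCM/NoPadding}" \
    "${ENCRYPTOR_EC_ELLIPTIC_CURVE:-secp256r1}" \
    "${ENCRYPTOR_EC_NONCE_LENGTH:-24}" \
    "${ENCRYPTOR_EC_SHARED_KEY_LENGTH:-32}"
}

# Mirrors the export example above
ENCRYPTOR_TYPE=EC
ENCRYPTOR_EC_ELLIPTIC_CURVE=sect571k1
encryptor_json
```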

Next steps: Sending transactions

Some simple transaction contracts are included in quorum-examples to demonstrate the privacy features of Quorum. To learn how to use them see the 7nodes README.

Getting Help

Stuck at some step? Please join our discord community for support.

quorum-examples's People

Contributors

antonydenyer, apratt3377, bmcd, chetan, chris-j-h, fixanoid, jbhurat, joelburget, joshuafernandes, jpmsam, kepalas, krish1979, libby, markya0616, melowe, namtruong, nicolae-leonte-go, nmvalera, patrickmn, prd-fox, rgegriff, rsarres, satpalsandhu61, siziyman, szkered, trung, tylobban, viaruc, vietlq, vsmk98

quorum-examples's Issues

Missing static-nodes.json file

When we run the init scripts for ibft and raft, an error shows that the static-nodes.json file is not present in the raft directory under the 7nodes directory.

Generating Public Key for a Given Address

I stood up the 7nodes example. Runs fine, but there is a public key:

ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc=

provided for the privateFor field in order to change a contract value while keeping it private. How was this value derived? Can it be done for other accounts?

The source of the Solidity contract for both examples

Hi,
I'm exploring Quorum and am trying to find the Solidity contract files that lead to the code in the genesis.json in the examples. It would be great if someone could provide the location, and also whether there is documentation on the genesis.json as a whole: how each node is created and how one node is differentiated from another (voter vs creator).

How to send a transaction from account A to B on Quorum

Hi Team

I have done a private setup of a Raft-based Quorum network. The example is working properly.
But when I try to transfer coins from one account to another, I see the below error in the log. It's a fresh setup and no mining has been done, but I am still seeing the mining error.

eth.sendTransaction({from:web3.eth.accounts[0], to: web3.eth.accounts[1], privateFor: ["ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc="], value: "1" });
Error: Post http+unix://c/send: dial unix qdata/tm1.ipc: connect: connection refused
at web3.js:3143:20
at web3.js:6347:15
at web3.js:5081:36
at :1:1

ubuntu@blockchain03:~/quorum-examples$ tail qdata/logs/7.log
INFO [11-23|12:26:54] 🔨 Mined block number=1 hash=b2b24061 elapsed=7.067924ms
INFO [11-23|12:26:54] QUORUM-CHECKPOINT name=TX-ACCEPTED tx=0xd51abe3f6e5fa272aa6158d08c783191c19504d345374b0369d389504f164a8b
INFO [11-23|12:26:54] QUORUM-CHECKPOINT name=BLOCK-CREATED block=b2b240612fe20e580500c1f6f9949fe6bcf8ce133be7c44cd3a568fa6ee7dbb7
INFO [11-23|12:26:54] persisted the latest applied index index=9
INFO [11-23|12:38:55] Generated next block block num=2 num txes=1
INFO [11-23|12:38:55] 🔨 Mined block number=2 hash=7610b91a elapsed=2.865896ms
INFO [11-23|12:38:55] Non-extending block block=7610b9…99cfb0 parent=b2b240…e7dbb7 head=62c2c0…5a3c94
INFO [11-23|12:38:55] persisted the latest applied index index=10
INFO [11-23|12:38:55] Handling InvalidRaftOrdering invalid block=7610b9…99cfb0 current head=62c2c0…5a3c94
INFO [11-23|12:38:55] Someone else mined invalid block; ignoring block=7610b9…99cfb0

Our intention is to do a simple payment from account A to account B. When I don't specify privateFor, the transaction gets submitted successfully but does not succeed; it waits in the pending queue.

eth.sendTransaction({from:eth.accounts[0], to:eth.accounts[1], value:1})
"0x00cd1f4ea060c53aa1edacbcd11c3b4f9c32a9e04cf545018de02518e35c18cb"
txpool.inspect.pending
{
0x0638E1574728b6D862dd5d3A3E0942c3be47D996: {
0: "contract creation: 0 wei + 4700000 × 0 gas"
},
0x9186eb3d20Cbd1F5f992a950d808C4495153ABd5: {
0: "contract creation: 0 wei + 4700000 × 0 gas"
},
0xcA843569e3427144cEad5e4d5999a3D0cCF92B8e: {
0: "contract creation: 0 wei + 4700000 × 0 gas",
1: "0x2C80Eba934Fa0deE778FD0029bcD77A2cD31959e: 10000000000000000000000000 wei + 90000 × 0 gas",
2: "0x0fBDc686b912d7722dc86510934589E0AAf3b55A: 1 wei + 90000 × 0 gas",
3: "0x0fBDc686b912d7722dc86510934589E0AAf3b55A: 1 wei + 90000 × 0 gas"
},
0xed9d02e382b34818e88B88a309c7fe71E65f419d: {
0: "contract creation: 0 wei + 4700000 × 0 gas",
1: "contract creation: 0 wei + 4700000 × 0 gas",
2: "contract creation: 1 wei + 90000 × 0 gas",
3: "contract creation: 1 wei + 90000 × 0 gas"
}
}

7nodes error on private.get()

Version:
Geth
Version: 1.7.2-stable
Git Commit: 2a5a1d6146e38b7025868ad0cfa183f42de3554e
Quorum Version: 2.0.0
Architecture: amd64
Network Id: 1
Go Version: go1.9.3
Operating System: linux
GOPATH=
GOROOT=/usr/local/go

Context
After running the 7nodes example on terminal 1, node7 can't see the initialized value.

Expected output:
In terminal window 1 (node 1)

private.get()
42
In terminal window 2 (node 4) :
private.get()
0
In terminal window 3 (node 7) :
private.get()
42

Actual output:
In terminal window 1 (node 1) :

private.get()
42
In terminal window 1 (node 1) :
private.get()
0
In terminal window 1 (node 1) :
private.get()
0

(screenshot: 7nodes-error)

Examples do not work with latest Constellation

If using the latest Constellation from https://github.com/jpmorganchase/constellation, private transactions start failing with an "Unknown recipient" error. This is because the sample configurations contain something like:

archivalPublicKeyPath = "keys/tm1a.pub"
archivalPrivateKeyPath = "keys/tm1a.key"

Even though the archival feature is removed from latest Constellation, the public key from the config gets picked up due to this line:

https://github.com/jpmorganchase/constellation/blob/e6aed2ba7590d034c7213cfc86a940931e627ef0/Constellation/Node/Config.hs#L69

This causes all private transactions to be sent to keys/tm1a.pub, which nobody advertises, so all private transactions fail.

The proposed fix is to edit the examples to remove usage of the archival feature.

bitrot in the documentation?

I only recently learned of quorum and deployed it with quorum-examples on virtualbox & vagrant. I hit some gotchas trying to reconcile the documentation to reality. These include:

  • raft-start.sh is supposed to send an initial transaction, according to the README, but the last line of it echoes a command to invoke something else to send that transaction. That's easily missed. In addition, the command it says to run is runscript while the file is runscript.sh.
  • The log doesn't seem to have the address, despite the README saying

The address can be found in the node 1's log file 7nodes/qdata/logs/1.log, or alternatively by reading the contractAddress param after calling eth.getTransactionReceipt(txHash)

If in fact the address is there, please be clearer how to find it. The only address I found was of the wallet, but I don't think that's what you meant.

  • Finally, I didn't actually get the demo to progress as expected. When I opened the console on 1, 4, and 7, and I updated the address to reflect the new transaction I created manually with runscript.sh, I showed a balance of 42 on node 1 (correct), 0 on node 4 (correct, but you can't tell the difference between this and having a bad address), and 0 on node 7 (incorrect).

PrivateFor - adding new participant

A question to build on issue 21.

The solution was

The contract is only known to nodes that you put in privateFor (and your own node) for the transaction that instantiates the contract, so node 4 never saw a transaction that created the contract, and the new "set" transaction means nothing to it.

If you put all of the nodes in privateFor when you deploy the contract, then send transactions that are limited to fewer participants, you should get the behavior you want, although we recommend that you keep the recipients list constant for any given private contract.

That is awesome and will work. But I may not know all the participant nodes/participants of the contract upfront. Hence I may not be able to deploy the contract to a participant when the first stage of deployment happens, and will need to add a transaction to share with this participant at a later date. What other workarounds can I use to make the above scenario work?

Vagrant up issue

Hi,
I'm trying to use Quorum on my PC (Ubuntu 16.04) with Vagrant 2.0.2 and VirtualBox 5.2.6.
(vagrant error screenshot omitted)

Can you direct me to a fix?
I forgot to mention: I'm currently running Ubuntu as a virtual machine, so this involves nested virtualization (the host machine is Windows 10). If anyone can help me figure out nested virtualization, I'd appreciate it.

What determines the roles of nodes in 7 nodes raft example?

Could someone help me understand how the roles of nodes are classified as block-maker, voter, or observer in the Raft version of the 7nodes example? The genesis file does not have any reference to the addresses; is it pre-assigned somehow?

newer constellation doesn't work with the 7nodes example

I'm trying to figure out how to run the 7nodes example with the new Constellation release 0.1.0, and it seems that the existing configuration files break the example, as does my best guess at changing the config files to the new format.

With the existing config files, the node crashes with an error message like
"encrypt: Sender cannot be a recipient", and with the new ones I get something like

Errors while running sendPayload: [Left "Unknown recipient\nCallStack (from HasCallStack):\n  error, called at ./Constellation/Node.hs:157:21 in constellation-0.1.0.0-6e8lqFuBPtzCz3iEft1Zk0:Constellation.Node"]

here's what I'm using for the config file for node 1 for example

url = "http://127.0.0.1:9001/"
port = 9001
socket = "qdata/tm1.ipc"
othernodes = []
publickeys = ["keys/tm1.pub"]
privatekeys = ["keys/tm1.key"]
alwayssendto = []
storage = "qdata/constellation1"
verbosity = 3

I'm not really sure what I'm doing wrong; any comments would help.

raftport querystring missing when running raft 7 nodes example

Hi,

When running ./raft-init.sh, then ./raft-start.sh the nodes fail to startup due to this error:

Fatal: raftport querystring parameter not specified in static-node enode ID: enode://ac6b1096ca56b9f6d004b779ae3728bf83f8e22453404cc3cef16a3d9b96608bc67c4b30db88e0a5a6c6390213f7acbe1153ff6d23ce57380104288ae19373ef@127.0.0.1:21000?discport=0. please check your static-nodes.json file.

From the raft docs, it looks like the static-nodes.json file is missing the raftport query string mentioned here https://github.com/jpmorganchase/quorum/blob/master/raft/doc.md#initial-configuration-and-enacting-membership-changes. Could I be running the script incorrectly and it handles adding the raftport somewhere else?
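
For reference, the raft documentation linked above describes static-nodes.json enode entries that carry a raftport querystring alongside discport, along these lines (the raftport value 50401 here follows the 7nodes numbering mentioned earlier in this page, where nodes 6 and 7 use 50406 and 50407; treat the exact value as illustrative):

```
enode://ac6b1096ca56b9f6d004b779ae3728bf83f8e22453404cc3cef16a3d9b96608bc67c4b30db88e0a5a6c6390213f7acbe1153ff6d23ce57380104288ae19373ef@127.0.0.1:21000?discport=0&raftport=50401
```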

-Charles

Generating keys for Quorum nodes

Hello guys,
I'm trying to build a Quorum network from scratch, taking inspiration from the 7nodes example of quorum-examples.

Here is what I did:
I used the Quorum-Genesis utility to create the genesis file. Then I set up my bootnode and saved its key and enode address to use later when starting my nodes.

Then I wanted to make init.sh and start.sh scripts similar to those in the 7nodes example, to make it easier to start my network. However, I found some issues.

In the 7nodes example there is a folder called "keys" which contains the UTC files for all the accounts specified in the genesis.json file; it also has the public keys, private keys, and archival keys needed to start the Constellation nodes. I don't understand how I am supposed to generate all those keys, and obviously I need the public keys and archival keys to populate the "tm.conf" files so I can start my Constellation nodes later by running:
constellation-node tm.conf

If anyone could clarify this for me I would really appreciate it,
thank you.

Question: private state partially updated

Hello Quorum experts.

I'm just wondering if it is possible to construct the following situation in Quorum (based on SimpleStorage privacy example):

Step 1:
from:N1 privateFor:N2,N3 - instantiate new contract
N2, N3 can call get on the contract and will see the same number: 42

Step 2:
from:N1 privateFor:N2 - set(73)
I assume N2 calling get will see 73
But what about N3? Will his get still return 42?

Regards,
Ivica

making a permissioned network with raft

Hello guys,
I'm working on the 7nodes example and have started the cluster using Raft consensus. I then tried to make a permissioned network consisting of 3 nodes, following the steps mentioned in the wiki, but it's not working. I'm starting to wonder: is it perhaps not possible to make a permissioned network if you are using Raft?

Thank you,

Can not get contract address in a permissioned network

Hello guys,
I'm working on the 7nodes example of quorum-examples and there is a behaviour that I don't understand.
First, when I ran the start.sh script it deployed the SimpleStorage contract and returned the transaction hash; when I run eth.getTransactionReceipt("transactionHash") from any of the 7 nodes it returns the contract address and other information.
However, when I set up a permissioned network and then ran start.sh, it did return a transaction hash, but when I run eth.getTransactionReceipt("transactionHash") it returns null.
I don't understand why, after setting up the permissioned network, I can no longer get the contract address. I would really appreciate it if anyone could clarify this point.
PS: I ran nodes 3, 6, and 7 with the --permissioned flag. I noticed that if I run node 7 without that flag everything works fine and I can get the contract address, but when I run it with the flag I get the behaviour described above.
Thank you

Error with raft-start.sh

When I run raft-init.sh there is no problem. But when I then run raft-start.sh, it shows the error:
Fatal: Unable to attach to remote geth: dial unix qdata/dd1/geth.ipc: connect: no such file or directory
I checked the folder qdata/dd1, there is no file named geth.ipc.
Did I miss something?

Error(`index out of range`) with istanbul-start.sh in quorum-examples/7node

When I run ./istanbul-start.sh in quorum-examples/7node I get a panic.
The output of ./istanbul-start.sh is:
[*] Starting Constellation nodes
[*] Starting node 1
[*] Starting node 2
[*] Starting node 3
[*] Starting node 4
[*] Starting node 5
[*] Starting node 6
[*] Starting node 7
[*] Waiting for nodes to start
[*] Sending first transaction
Contract transaction send: TransactionHash: 0x95fc627555a6089a35377393e9b4280c4482b89e940db2f6ec4d3c508b743f31 waiting to be mined... true
All nodes configured. See 'qdata/logs' for logs, and run e.g. 'geth attach qdata/dd1/geth.ipc' to attach to the first Geth node

But when I run geth attach qdata/dd1/geth.ipc I get:
Fatal: Unable to attach to remote geth: dial unix qdata/dd1/geth.ipc: connect: connection refused
and the log of the geth node behind qdata/dd1/geth.ipc is:

`nohup: appending output to 'nohup.out'
INFO [11-30|06:47:22] Starting peer-to-peer node instance="Geth/vquorum 2.0.0 (geth 1.7.2-stable)-3d91976f/linux-amd64/go1.7.3"
INFO [11-30|06:47:22] Allocated cache and file handles database=/home/ubuntu/quorum-examples/7nodes/qdata/dd1/geth/chaindata cache=128 handles=1024
WARN [11-30|06:47:22] Upgrading database to use lookup entries
INFO [11-30|06:47:22] Reconfiguring old genesis as Quorum
INFO [11-30|06:47:22] Initialised chain configuration config="{ChainID: Homestead: 1 DAO: DAOSupport: false EIP150: 2 EIP155: 3 EIP158: 3 Byzantium: IsQuorum: true Engine: unknown}"
INFO [11-30|06:47:22] Initialising Ethereum protocol versions="[63 62]" network=1
INFO [11-30|06:47:22] Loaded most recent local header number=0 hash=62c2c0…5a3c94 td=1
INFO [11-30|06:47:22] Loaded most recent local full block number=0 hash=62c2c0…5a3c94 td=1
INFO [11-30|06:47:22] Loaded most recent local fast block number=0 hash=62c2c0…5a3c94 td=1
INFO [11-30|06:47:22] Regenerated local transaction journal transactions=0 accounts=0
INFO [11-30|06:47:22] Starting P2P networking
INFO [11-30|06:47:22] Database deduplication successful deduped=0
INFO [11-30|06:47:24] UDP listener up self=enode://ac6b1096ca56b9f6d004b779ae3728bf83f8e22453404cc3cef16a3d9b96608bc67c4b30db88e0a5a6c6390213f7acbe1153ff6d23ce57380104288ae19373ef@[::]:21000
INFO [11-30|06:47:24] HTTP endpoint opened: http://0.0.0.0:22000
INFO [11-30|06:47:24] IPC endpoint opened: /home/ubuntu/quorum-examples/7nodes/qdata/dd1/geth.ipc
INFO [11-30|06:47:24] RLPx listener up self=enode://ac6b1096ca56b9f6d004b779ae3728bf83f8e22453404cc3cef16a3d9b96608bc67c4b30db88e0a5a6c6390213f7acbe1153ff6d23ce57380104288ae19373ef@[::]:21000
WARN [11-30|06:47:24] Removing static dial candidate id=ac6b1096ca56b9f6 addr=127.0.0.1:21000 err="is self"
INFO [11-30|06:47:25] Unlocked account address=0xed9d02e382b34818e88B88a309c7fe71E65f419d
INFO [11-30|06:47:25] Transaction pool price threshold updated price=18000000000
INFO [11-30|06:47:25] Starting mining operation
INFO [11-30|06:47:25] Commit new mining work number=1 txs=0 uncles=0 elapsed=116.289µs
INFO [11-30|06:47:32] sending private tx data=6060604052341561000f57600080fd5b604051602080610149833981016040528080519060200190919050505b806000819055505b505b610104806100456000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680632a1afcd914605157806360fe47b11460775780636d4ce63c146097575b600080fd5b3415605b57600080fd5b606160bd565b6040518082815260200191505060405180910390f35b3415608157600080fd5b6095600480803590602001909190505060c3565b005b341560a157600080fd5b60a760ce565b6040518082815260200191505060405180910390f35b60005481565b806000819055505b50565b6000805490505b905600a165627a7a72305820d5851baab720bba574474de3d09dbeaabc674a15f4dd93b974908476542c23f00029000000000000000000000000000000000000000000000000000000000000002a privatefrom= privatefor="[ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc=]"
INFO [11-30|06:47:32] sent private tx data=1c1a05a947980098fb9fa0373afb6683d56d248bc1b0ab5ee3646923230848a4da9ba3b29e91453eaa5e817881676c58f67eacf7484d30022e7629cf4a7c69de privatefrom= privatefor="[ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc=]"
INFO [11-30|06:47:32] Submitted contract creation fullhash=0x95fc627555a6089a35377393e9b4280c4482b89e940db2f6ec4d3c508b743f31 to=0x1932c48b2bF8102Ba33B4A6B545C32236e342f34
INFO [11-30|06:47:34] Committed address=0xd8Dba507e85F116b1f7e231cA8525fC9008A6966 hash=f4bafb…a29842 number=1
INFO [11-30|06:47:34] Imported new chain segment blocks=1 txs=0 mgas=0.000 elapsed=1.145ms mgasps=0.000 number=1 hash=f4bafb…a29842
panic: runtime error: index out of range

goroutine 56 [running]:
panic(0xef9020, 0xc4200100d0)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/ethereum/go-ethereum/core/types.NewTransactionsByPriceAndNonce(0x192b620, 0xc420d710c0, 0xc420d9f620, 0x0)
/home/ubuntu/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/core/types/transaction.go:399 +0x475
github.com/ethereum/go-ethereum/miner.(*worker).commitNewWork(0xc420274b40)
/home/ubuntu/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/miner/worker.go:463 +0x97b
github.com/ethereum/go-ethereum/miner.(*worker).update(0xc420274b40)
/home/ubuntu/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/miner/worker.go:259 +0x232
created by github.com/ethereum/go-ethereum/miner.newWorker
/home/ubuntu/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/miner/worker.go:158 +0x5ea`

Permission Denied

When running ./raft-init.sh on Vagrant, I get a long list of "permission denied" errors. I've included the output below:

vagrant@ubuntu-xenial:~/quorum-examples/7nodes$ ./raft-init.sh
[*] Cleaning up temporary data directories
rm: cannot remove 'qdata/constellation5/payloads': Permission denied
rm: cannot remove 'qdata/constellation7/payloads': Permission denied
rm: cannot remove 'qdata/dd5/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd5/keystore/key5': Permission denied
rm: cannot remove 'qdata/dd5/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/constellation6/payloads': Permission denied
rm: cannot remove 'qdata/constellation2/payloads': Permission denied
rm: cannot remove 'qdata/dd3/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd3/keystore': Permission denied
rm: cannot remove 'qdata/dd3/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd3/permissioned-nodes.json': Permission denied
rm: cannot remove 'qdata/dd6/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd6/keystore': Permission denied
rm: cannot remove 'qdata/dd6/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd1/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd1/keystore/key1': Permission denied
rm: cannot remove 'qdata/dd1/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd1/permissioned-nodes.json': Permission denied
rm: cannot remove 'qdata/dd2/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd2/keystore/key3': Permission denied
rm: cannot remove 'qdata/dd2/keystore/key2': Permission denied
rm: cannot remove 'qdata/dd2/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd2/permissioned-nodes.json': Permission denied
rm: cannot remove 'qdata/constellation1/payloads': Permission denied
rm: cannot remove 'qdata/constellation3/payloads': Permission denied
rm: cannot remove 'qdata/dd4/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd4/keystore/key4': Permission denied
rm: cannot remove 'qdata/dd4/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd4/permissioned-nodes.json': Permission denied
rm: cannot remove 'qdata/logs/constellation1.log': Permission denied
rm: cannot remove 'qdata/logs/4.log': Permission denied
rm: cannot remove 'qdata/logs/5.log': Permission denied
rm: cannot remove 'qdata/logs/constellation6.log': Permission denied
rm: cannot remove 'qdata/logs/2.log': Permission denied
rm: cannot remove 'qdata/logs/7.log': Permission denied
rm: cannot remove 'qdata/logs/3.log': Permission denied
rm: cannot remove 'qdata/logs/constellation3.log': Permission denied
rm: cannot remove 'qdata/logs/constellation7.log': Permission denied
rm: cannot remove 'qdata/logs/constellation5.log': Permission denied
rm: cannot remove 'qdata/logs/6.log': Permission denied
rm: cannot remove 'qdata/logs/1.log': Permission denied
rm: cannot remove 'qdata/logs/constellation4.log': Permission denied
rm: cannot remove 'qdata/logs/constellation2.log': Permission denied
rm: cannot remove 'qdata/dd7/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd7/keystore': Permission denied
rm: cannot remove 'qdata/dd7/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/constellation4/payloads': Permission denied

Any tips on how to get "permission" to move forward?

Thanks in advance!
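A common cause (an assumption, since the report doesn't show file ownership) is that qdata was created by another user, e.g. by an earlier sudo'd run, so the current user may not delete its contents. The sketch below reproduces the failure mode with plain permissions and shows the class of fix; the `qdata-demo` path is illustrative.

```shell
# Reproduce: rm fails when the parent directory is not writable by the caller,
# mirroring the qdata/ errors above.
mkdir -p qdata-demo/dd1
touch qdata-demo/dd1/nodekey
chmod a-w qdata-demo/dd1                      # simulate lost write permission
rm qdata-demo/dd1/nodekey 2>/dev/null || echo "rm blocked, as in the report"
chmod u+w qdata-demo/dd1                      # the fix; for a real qdata try: sudo chown -R "$USER" qdata
rm -f qdata-demo/dd1/nodekey && result="removed"
rm -rf qdata-demo
echo "$result"
```

After restoring ownership (or write permission) on qdata and everything under it, ./raft-init.sh should be able to clean the directories again.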

Code stays in infinite loop

The code below, from the file constellation-start.sh, never exits, because the seven conditions are never all satisfied (the tm.ipc sockets never appear).

while $DOWN; do
    sleep 0.1
    DOWN=false
    for i in {1..7}
    do
	if [ ! -S "qdata/c$i/tm.ipc" ]; then
            DOWN=true
	fi
    done
done
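A bounded variant of the loop above avoids spinning forever when the sockets never appear (a sketch: the qdata/c$i/tm.ipc paths come from the snippet above, while the iteration budget and the diagnostic message are illustrative).

```shell
# Wait for the 7 constellation tm.ipc sockets, but give up after ~5 seconds
# instead of looping forever.
DOWN=true
tries=0
while $DOWN; do
    sleep 0.1
    DOWN=false
    for i in 1 2 3 4 5 6 7; do
        if [ ! -S "qdata/c$i/tm.ipc" ]; then
            DOWN=true
        fi
    done
    tries=$((tries + 1))
    if $DOWN && [ "$tries" -ge 50 ]; then
        echo "timed out waiting for tm.ipc sockets; check qdata/logs"
        break
    fi
done
```

With a timeout in place, a misconfigured socket path at least surfaces as a diagnostic rather than a hung script.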

key copying errors in raft-init.sh and istanbul-init.sh

Those files contain the lines:

cp keys/key2 qdata/dd2/keystore
cp keys/key3 qdata/dd2/keystore

so node 2's key is overwritten and replaced by node 3's key, and further down in the files there is no code to copy node 3's key to its proper folder.
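The presumed intent per the report would be one key per node's keystore (a sketch only; the repo may deliberately give node 2 both accounts, so verify against upstream before changing the scripts — the `touch` lines here just create stand-in key files so the snippet is self-contained).

```shell
# Stand-in key files for this sketch (the real keys ship in keys/).
mkdir -p keys && touch keys/key2 keys/key3
mkdir -p qdata/dd2/keystore qdata/dd3/keystore
cp keys/key2 qdata/dd2/keystore   # node 2's key -> node 2's keystore
cp keys/key3 qdata/dd3/keystore   # node 3's key -> node 3's keystore (was dd2)
```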

Vagrant error

Running vagrant up yields an error:

    default: chown: cannot access '/home/ubuntu/quorum'
    default: : No such file or directory

... probably because quorum wasn't installed in /home/ubuntu, it was installed in /home/vagrant.

When vagrant up is run, it does this:

    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key

.. which means it logs in as 'vagrant', right? So when it later does git clone, it installs to /home/vagrant, NOT /home/ubuntu, which means the chown will fail, right? Am I missing something? How does this work for anyone? Furthermore, when doing vagrant ssh into the VirtualBox VM, the directions just say to cd to the examples directory, but that's been copied to /home/ubuntu. Am I missing directions for how to actually ssh into the VM as ubuntu?

init.sh : command not found

Situation:
I'm running through the tutorial and have successfully ssh'd into:
ubuntu@ubuntu-xenial:~/quorum-examples/7nodes$

I'm trying to execute init.sh and start.sh

Listing the files, I see both init.sh and start.sh available in 7nodes.

Expected Behavior:
I expected to have the 7 nodes up and running.

Actual Behavior:

Instead I received:
"ubuntu@ubuntu-xenial:~/quorum-examples/7nodes$ init.sh
init.sh: command not found"
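The shell only searches $PATH for bare command names, and the current directory is normally not on $PATH, hence "command not found". The fix is to prefix the script with ./ (or invoke the interpreter explicitly); the sketch below demonstrates this with a throwaway script, where the `demo` directory and its contents are illustrative.

```shell
# Create a tiny executable script and show the two invocations that work.
mkdir -p demo && printf '#!/bin/sh\necho ok\n' > demo/init.sh
chmod +x demo/init.sh
cd demo
out1=$(./init.sh)     # explicit path: found and executed
out2=$(sh init.sh)    # also works: interpreter plus filename
cd .. && rm -rf demo
echo "$out1 $out2"    # prints: ok ok
```

So in the 7nodes directory, `./init.sh` and `./start.sh` should run where bare `init.sh` does not.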

privateFor not working

Hi,
I followed every step shown here: https://github.com/jpmorganchase/quorum-examples/tree/master/examples/7nodes.
When I execute
private.set(4,{from:eth.coinbase,privateFor:["ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc="]});
node 7 gets updated as expected.
But when i execute private.set(4,{from:eth.coinbase,privateFor:["oNspPPgszVUFw0qmGFfWwh1uxVUXgvBxleXORHj07g8="]});
to update node 4, it doesn't get updated.
When I try to update node 1 or node 7 from any other node, both get updated, but when I try to update nodes 2-6 from any other node, nothing gets updated.
Is this expected behaviour?

Private Transaction is throwing error and bringing all the 7nodes down.

I am writing this contract:
var testContract = web3.eth.contract([{"constant":false,"inputs":[],"name":"getValue","outputs":[{"name":"a","type":"uint256"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"a","type":"uint256"}],"name":"setValue","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"}]);
var test = testContract.new(
  {
    from: web3.eth.accounts[0],
    data: '0x6060604052341561000f57600080fd5b60ce8061001d6000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff168063209652551460465780635524107714606c57600080fd5b3415605057600080fd5b6056608c565b6040518082815260200191505060405180910390f35b3415607657600080fd5b608a60048080359060200190919050506098565b005b60008054905080905090565b80600081905550505600a165627a7a7230582024e016d5e5847222cc207652bed2e9ed9b95abe6ae8f9e2acf46d6ac5f401d740029',
    gas: '4700000',
    privateFor: ["QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=", "1iTZde/ndBHvzhcl7V68x44Vx7pl8nwx9LqnM/AfJUg="]
  }, function (e, contract) {
    console.log(e, contract);
    if (typeof contract.address !== 'undefined') {
      console.log('Contract mined! address: ' + contract.address + ' transactionHash: ' + contract.transactionHash);
    }
  })

But I am getting an error and all the nodes come down.

The error output:
INFO [11-15|12:58:28] sending private tx data=6060604052341561000f57600080fd5b60ce8061001d6000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff168063209652551460465780635524107714606c57600080fd5b3415605057600080fd5b6056608c565b6040518082815260200191505060405180910390f35b3415607657600080fd5b608a60048080359060200190919050506098565b005b60008054905080905090565b80600081905550505600a165627a7a7230582024e016d5e5847222cc207652bed2e9ed9b95abe6ae8f9e2acf46d6ac5f401d740029 privatefrom= privatefor="[QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc= 1iTZde/ndBHvzhcl7V68x44Vx7pl8nwx9LqnM/AfJUg=]"
INFO [11-15|12:58:28] sent private tx data=3935801486d7c735609b68952f96f08e4032df71f8911ece30ac152097178b78a98a79609e57b78107e28075c22e346a2e69abef3789ae5e7ed8019256b312d8 privatefrom= privatefor="[QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc= 1iTZde/ndBHvzhcl7V68x44Vx7pl8nwx9LqnM/AfJUg=]"
INFO [11-15|12:58:28] Submitted contract creation fullhash=0xc28f32887f553b0e167c3cc5872e4918ce363559d26286472dfe9905c1eddb97 to=0x1932c48b2bF8102Ba33B4A6B545C32236e342f34
INFO [11-15|12:58:30] Committed address=0xd8Dba507e85F116b1f7e231cA8525fC9008A6966 hash=5e4fdb…df3e77 number=70
INFO [11-15|12:58:30] Imported new chain segment blocks=1 txs=0 mgas=0.000 elapsed=1.840ms mgasps=0.000 number=70 hash=5e4fdb…df3e77
panic: runtime error: index out of range

Can you suggest what the issue is?

This happens when we start the nodes in Istanbul BFT mode.

vagrant up failing on OSX VBox 5.6.2

I am trying to run vagrant up. It gets through most of the provisioning; however, later in the script it throws an error and exits:

    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm1.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm2.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm3.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm7.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm6.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm4.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm5.ipc' for reading: Operation not supported

Is 4Gb of RAM really needed to run 7Nodes?

I've just cloned quorum-examples and I ran out of memory, because another virtual machine was running at the time I started vagrant.

Is 4Gb of RAM really needed to run this example project?

If yes: I suggest updating the README.md file with the required VM RAM.
If no: I think we could decrease the memory in the Vagrantfile (v.memory = 4096) to the needed amount.

Thank you.

Running raft-start.sh doesn't create geth.ipc

I was following the 5nodesRTGS example for Quorum.
Steps performed:

INFO [01-31|15:23:00] Successfully wrote genesis state database=lightchaindata hash=c23b4e…8b1b71

  • Next I run: sudo ./raft-start.sh

[*] Starting Constellation nodes
[*] Starting node 1 - Bank 1
[*] Starting node 2 - Bank 2
[*] Starting node 3 - Bank 3
[*] Starting node 4 - Regulator
[*] Starting node 5 - Observer
[*] Waiting for nodes to start

All nodes configured. See 'qdata/logs' for logs, and run e.g. 'geth attach qdata/dd4/geth.ipc' to attach to the RTGS Regulator Geth node

  • geth attach ipc:qdata/dd1/geth.ipc

Fatal: Unable to attach to remote geth: dial unix qdata/dd1/geth.ipc: connect: no such file or directory

This is where I get the error. After running raft-start.sh, if I check qdata/dd1, it has no geth.ipc generated.

Contract deployment transaction executes successfully but no contract is deployed...

quorum -version: v2.0.0
consensus: raft

We have done a four-node setup; node1 shows up as the minter and the others as verifiers.
When we try to deploy the 7nodes example contract from node1, it returns a transaction receipt as below:

eth.getTransactionReceipt("0xab500348ffc57f231d9569064f6da7a5b3912ace829070862d15e1273fbb265f")
{
blockHash: "0x8b31dc8a9074909c228867b3bed240df5b33963ba95dd392caff3f46d316493c",
blockNumber: 12,
contractAddress: "0x3f7cb7b26dad554cde28e4933b8e4861137dd081",
cumulativeGasUsed: 90000,
from: "0x2ac8e4d477bf82e3988f8b1d1071b6e275483dfe",
gasUsed: 90000,
logs: [],
logsBloom: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
root: "0x0ec0ef674a4436bd4509ea8f50c3fe89cc82f21bcb0e82a4679e11023fd304dc",
to: null,
transactionHash: "0xab500348ffc57f231d9569064f6da7a5b3912ace829070862d15e1273fbb265f",
transactionIndex: 0
}

But getCode returns "0x", which means the contract is not deployed:

eth.getCode("0x3f7cb7b26dad554cde28e4933b8e4861137dd081")
"0x"

could you help us understand this behavior?
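One heuristic worth checking (an assumption about the receipt above, not a definitive diagnosis): gasUsed equals 90000, which is also the gas limit the transaction appears to have been sent with — when the two match, contract creation has likely run out of gas, so the receipt still carries a contractAddress but no code was stored and eth.getCode returns "0x". The sketch below just encodes that check with the numbers from the receipt.

```shell
# Compare gasUsed from the receipt against the gas limit that was sent.
GAS_SENT=90000     # gas limit on the deployment tx (per the receipt above)
GAS_USED=90000     # gasUsed from eth.getTransactionReceipt(...)
if [ "$GAS_USED" -ge "$GAS_SENT" ]; then
  verdict="likely out of gas"
  echo "$verdict: redeploy with an explicit, higher gas value (e.g. gas: '4700000')"
fi
```

If that is the cause, redeploying with an explicit higher gas limit should produce a receipt where gasUsed is strictly below the limit and getCode returns the contract bytecode.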

Fatal: invalid genesis file: missing 0x prefix for hex data

This is a fresh install... do I need to install/start geth on my Ubuntu instance? I'm running Mac OS.

13inch:7nodes tom$ pwd
/Users/tom/GitHub/quorum-examples/examples/7nodes
13inch:7nodes tom$ ls
README.md raft stop.sh
bench-private-async.sh raft-init.sh tm1.conf
bench-private-sync.sh raft-start.sh tm2.conf
bench-public-sync.sh runscript.sh tm3.conf
genesis.json script1.js tm4.conf
init.sh send-private-async.lua tm5.conf
keys send-private-sync.lua tm6.conf
passwords.txt send-public-sync.lua tm7.conf
qdata start.sh
13inch:7nodes tom$ ./init.sh
[*] Cleaning up temporary data directories
[*] Configuring node 1
Fatal: invalid genesis file: missing 0x prefix for hex data
13inch:7nodes tom$

Unable to execute contracts with privateFor in Docker

I am working on a POC where the 7 nodes are split across 7 Docker containers, each container holding a geth node and a constellation node.
I have exposed the constellation TCP port as well as the geth TCP and UDP ports so the containers can interact with each other, and the nodes are in fact able to discover each other.
I run a plain geth with --nodekeyhex as a bootnode, with the other nodes connecting to the bootnode's enode.
I am able to run normal Ethereum contracts from one of the geth RPC terminals, but whenever I run a contract containing "privateFor", it throws the error below:

Error: Invalid JSON RPC response: {"error":{"EOF":"EOF","code":-32603},"id":8,"version":"2.0"} undefined

Contract:
contract SimpleStorage {

    uint val;

    function setValue(uint _val) {
        val = _val;
    }

    function getValue() returns (uint) {
        return val;
    }
}

Web3Deploy:
var simplestorageContract = web3.eth.contract([{"constant":false,"inputs":[],"name":"getValue","outputs":[{"name":"","type":"uint256"}],"payable":false,"type":"function"},{"constant":false,"inputs":[{"name":"_val","type":"uint256"}],"name":"setValue","outputs":[],"payable":false,"type":"function"}]);
var simplestorage = simplestorageContract.new(
  {
    from: web3.eth.accounts[0],
    data: '0x6060604052341561000c57fe5b5b60c68061001b6000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680632096525514604457806355241077146067575bfe5b3415604b57fe5b60516084565b6040518082815260200191505060405180910390f35b3415606e57fe5b60826004808035906020019091905050608f565b005b600060005490505b90565b806000819055505b505600a165627a7a7230582082de0a01ef4b2c7d677328ded3bee7d0b70e2f06f1036ec3991461de7b3aa1160029',
    gas: '4700000',
    privateFor: ["BULeR8JyUWhiuuCMU/HLA0Q5pzkYT+cHII3ZKBey3Bo=","QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc="]
  }, function (e, contract) {
    console.log(e, contract);
    if (typeof contract.address !== 'undefined') {
      console.log('Contract mined! address: ' + contract.address + ' transactionHash: ' + contract.transactionHash);
    }
  })
Error from geth logs:
E0829 12:36:42.399089 rpc/server.go:152] goroutine 16110 [running]:
github.com/ethereum/go-ethereum/rpc.(*Server).serveRequest.func1(0xc42035b620, 0x1735580, 0xc422d324b0)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:151 +0x133
panic(0xd828a0, 0x16fa5e0)
/usr/local/go/src/runtime/panic.go:489 +0x2cf
github.com/ethereum/go-ethereum/internal/ethapi.(*PublicTransactionPoolAPI).SendTransaction(0xc4200d8310, 0x7ff317f85780, 0xc420b122c0, 0x1848b382e3029ded, 0x71fec709a3888be8, 0x9d415fe6, 0x0, 0xc42240d040, 0xc42240d0e0, 0xc42240d100, ...)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/internal/ethapi/api.go:1226 +0xebe
reflect.Value.call(0xc4263fc380, 0xc42000f130, 0x13, 0xe8f221, 0x4, 0xc420b0ed80, 0x3, 0x4, 0x1, 0xdd56a0, ...)
/usr/local/go/src/reflect/value.go:434 +0x91f
reflect.Value.Call(0xc4263fc380, 0xc42000f130, 0x13, 0xc420b0ed80, 0x3, 0x4, 0x1, 0x1, 0xc42240d0a0)
/usr/local/go/src/reflect/value.go:302 +0xa4
github.com/ethereum/go-ethereum/rpc.(*Server).handle(0xc42035b620, 0x7ff317f85780, 0xc420b122c0, 0x1735580, 0xc422d324b0, 0xc4223fb730, 0xf, 0xd83c40, 0xc422d32520)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:318 +0x752
github.com/ethereum/go-ethereum/rpc.(*Server).exec(0xc42035b620, 0x7ff317f85780, 0xc420b122c0, 0x1735580, 0xc422d324b0, 0xc4223fb730)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:340 +0x1ba
github.com/ethereum/go-ethereum/rpc.(*Server).serveRequest(0xc42035b620, 0x1735580, 0xc422d324b0, 0x9a3901, 0x1, 0x0, 0x0)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:208 +0x412
github.com/ethereum/go-ethereum/rpc.(*Server).ServeSingleRequest(0xc42035b620, 0x1735580, 0xc422d324b0, 0x1)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:232 +0x4e
github.com/ethereum/go-ethereum/rpc.(*Server).ServeHTTP(0xc42035b620, 0x1730120, 0xc420147500, 0xc4223a7200)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/http.go:162 +0x36c
github.com/ethereum/go-ethereum/vendor/github.com/rs/cors.(*Cors).Handler.func1(0x1730120, 0xc420147500, 0xc4223a7200)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/vendor/github.com/rs/cors/cors.go:190 +0xe9
net/http.HandlerFunc.ServeHTTP(0xc4201c7100, 0x1730120, 0xc420147500, 0xc4223a7200)
/usr/local/go/src/net/http/server.go:1942 +0x44
net/http.serverHandler.ServeHTTP(0xc420093e40, 0x1730120, 0xc420147500, 0xc4223a7200)
/usr/local/go/src/net/http/server.go:2568 +0x92
net/http.(*conn).serve(0xc4207cd400, 0x17311e0, 0xc421002880)
/usr/local/go/src/net/http/server.go:1825 +0x612
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2668 +0x2ce

New node to join the network

Hi, I understand that we have to define all the maker and voter nodes in the storage section of the genesis JSON file.

Is it possible for a new, late-coming node to join an existing Quorum network that is actively running and was initialized from a genesis JSON file which doesn't contain the storage info of the new node?

Thank you.

transaction performance

How can I test the transaction performance of 7nodes? For example, how many transactions per second can it handle?
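A back-of-the-envelope approach is to submit a batch of transactions and divide by wall-clock time (the 7nodes directory also ships bench-*.sh scripts for this). The sketch below stubs out the submission step, since it needs a running node; in a real run each iteration would send a transaction via RPC.

```shell
# Crude throughput estimate: N submissions over elapsed seconds.
N=1000
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
  :                      # stand-in for one transaction submission to a node
  i=$((i + 1))
done
end=$(date +%s)
elapsed=$((end - start)); [ "$elapsed" -gt 0 ] || elapsed=1
echo "sent $N stub transactions in ${elapsed}s (~$((N / elapsed)) tx/s)"
```

Note that with real transactions the meaningful number is transactions actually mined per second, so a proper benchmark should count included transactions per block rather than just submissions.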

What happens when bootnode goes down ?

So I have set up 2 nodes, one on Azure and one on AWS, and was able to successfully sync the data between them. Node 1 is started as the single block maker and boot node, and node 2 as a voter. Later I added node 2 as a block maker using the 0x20 contract with node 1's permission, so this is at the contract level. Now, what happens when node 1 goes down? Can I still publish my Tx to node 2, which was made a block maker by node 1? Or, in this case, can I define node 2 as a boot node as well?

Voter nodes don't vote on ChainHeadEvent (New Block)

A node will only vote on a ChainHeadEvent if it is synced with the chain. In the 7 node example the nodes never set their sync status to true, as they never undertake a chain download.

The maker node has the --singleblockmaker flag set so its synced flag is set to true.

The chain progresses because the maker node votes twice and the voting nodes vote when their deadline timer expires and a CreateBlock event occurs.

Expected behaviour can be achieved by setting the --singleblockmaker flag on all nodes.

7nodes example does not work

ubuntu@ubuntu-xenial:~/quorum-examples/7nodes$ ./istanbul-init.sh
[*] Cleaning up temporary data directories
[*] Configuring node 1
cp: cannot stat 'raft/static-nodes.json': No such file or directory

Also, there is no such file in the repo under quorum-examples/examples/7nodes/raft/.

Many empty transactions in local cluster

I am running a 5 node cluster (followed instructions here for the most part). When I paused the Block Maker, I noticed a ton of transactions stacking up in the txpool. Here is the output of txpool.inspect.pending:

> txpool.inspect.pending
{
  0x0638e1574728b6d862dd5d3a3e0942c3be47d996: {
    0: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    1: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    // ... nonces 2–65: identical entries elided ...
    66: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas"
  },
  0x0fbdc686b912d7722dc86510934589e0aaf3b55a: {
    0: "0x0000000000000000000000000000000000000020: 0 wei + 91996 × 0 gas"
  },
  0x9186eb3d20cbd1f5f992a950d808c4495153abd5: {
    0: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    1: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    // ... nonces 2–71: identical entries elided ...
    72: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas"
  }
}
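Each pending entry above has the form `<to address>: <value> wei + <gas limit> × <gas price> gas`, and all of them target the 0x…0020 voting contract; with the block maker paused, nothing creates blocks, so the votes accumulate in the pool. A throwaway parser for that string format (an illustrative sketch, not part of geth):

```python
import re

# Matches txpool.inspect strings such as
# "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas"
_PENDING = re.compile(
    r"(?P<to>0x[0-9a-fA-F]{40}): (?P<value>\d+) wei "
    r"\+ (?P<gas>\d+) [×x] (?P<gas_price>\d+) gas"
)

def parse_pending(entry: str) -> dict:
    """Split one txpool.inspect entry into its recipient, value, and gas fields."""
    m = _PENDING.fullmatch(entry.strip())
    if m is None:
        raise ValueError(f"unrecognised txpool entry: {entry!r}")
    return {
        "to": m.group("to"),
        "value_wei": int(m.group("value")),
        "gas": int(m.group("gas")),
        "gas_price_wei": int(m.group("gas_price")),
    }
```

Running it over the dump above would confirm that every stuck transaction is a zero-value call to the same voting contract address.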

7nodes example raft-start.sh error

After running raft-init.sh without problems, running raft-start.sh returns the following error:

./raft-start.sh
[*] Starting Constellation nodes
[*] Starting node 1 (permissioned)
[*] Starting node 2 (permissioned)
[*] Starting node 3 (permissioned)
[*] Starting node 4 (permissioned)
[*] Starting node 5 (unpermissioned)
[*] Starting node 6 (unpermissioned)
[*] Starting node 7 (unpermissioned)
[*] Waiting for nodes to start
[*] Sending first transaction
panic: MustNew error: Get http+unix://c/upcheck: dial unix: missing address

goroutine 1 [running]:
panic(0xc1f6e0, 0xc4201d8b70)
        /usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/ethereum/go-ethereum/private/constellation.MustNew(0xc42000e00f, 0x8, 0xc42000e00f)
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/private/constellation/constellation.go:76 +0x114
github.com/ethereum/go-ethereum/private.FromEnvironmentOrNil(0xd5a99d, 0xe, 0x1538f40, 0xc420047ea0)
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/private/private.go:19 +0x52
github.com/ethereum/go-ethereum/private.init()
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/private/private.go:22 +0x67
github.com/ethereum/go-ethereum/core.init()
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/core/vm_env.go:165 +0xf4
github.com/ethereum/go-ethereum/cmd/utils.init()
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/cmd/utils/version.go:65 +0x76
main.init()
        github.com/ethereum/go-ethereum/cmd/geth/_obj/_cgo_import.go:6 +0x58

Am I missing something?
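Judging from the stack trace, the `http+unix://c/upcheck` panic comes from geth's Constellation client (`MustNew`) failing its startup upcheck over the IPC socket configured via PRIVATE_CONFIG, most often because Constellation has not finished creating the socket yet (or the configured path is wrong). A pre-flight wait along these lines can help (an illustrative sketch; the socket path is an assumption, use whatever your config names):

```python
import os
import time

def wait_for_socket(sock_path: str, tries: int = 10, delay: float = 1.0) -> bool:
    """Poll until the Constellation IPC socket appears, or give up after `tries`."""
    for _ in range(tries):
        if os.path.exists(sock_path):
            return True
        time.sleep(delay)
    return False

# Hypothetical usage before launching geth:
#   if not wait_for_socket("qdata/c1/tm.ipc"):
#       raise SystemExit("Constellation never came up; check its log")
```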

Data sync is not happening between nodes deployed on different machines

I have a boot node and node 1 deployed on one machine as block maker and voter, and the corresponding tm2.conf file looks like this:


url = "http://10.10.10.5:9000/"
port = 9000
socketPath = "qdata/tm2.ipc"
otherNodeUrls = []
publicKeyPath = "keys/tm2.pub"
privateKeyPath = "keys/tm2.key"
archivalPublicKeyPath = "keys/tm2a.pub"
archivalPrivateKeyPath = "keys/tm2a.key"
storagePath = "qdata/constellation2"

and node 3 is deployed on another machine, is ONLY a voter, and uses tm5.conf, whose config looks like this:

url = "http://10.11.11.4:9000/"
port = 9000
socketPath = "qdata/tm5.ipc"
otherNodeUrls = ["http://10.10.10.5:9000/"]
publicKeyPath = "keys/tm5.pub"
privateKeyPath = "keys/tm5.key"
archivalPublicKeyPath = "keys/tm5a.pub"
archivalPrivateKeyPath = "keys/tm5a.key"
storagePath = "qdata/constellation5"

Both nodes' constellation logs show no errors. The constellation log for node 1 shows:


nohup: appending output to 'nohup.out'
2017 Sep-03 05:24:32216663 [INFO] Constellation initializing using config file tm2.conf
2017 Sep-03 05:24:32217548 [INFO] Log level is LevelDebug
2017 Sep-03 05:24:32217620 [INFO] Utilizing 2 core(s)
2017 Sep-03 05:24:32220587 [INFO] Constructing Enclave using keypairs (keys/tm2.pub, keys/tm2.key) (keys/tm2a.pub, keys/tm2a.key)
2017 Sep-03 05:24:32220802 [INFO] Initializing storage qdata/constellation2
2017 Sep-03 05:24:32276282 [INFO] Internal API listening on qdata/tm2.ipc
2017 Sep-03 05:24:32276271 [INFO] External API listening on port 9000
2017 Sep-03 05:24:32276219 [INFO] Node started
2017 Sep-03 05:24:38344272 [DEBUG] Request from : ApiUpcheck; Response: ApiUpcheckR
2017 Sep-03 05:25:09751069 [DEBUG] Request from 10.11.11.4:49940: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:29:35051830 [DEBUG] Request from 10.10.10.5:48254: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:30:09760130 [DEBUG] Request from 10.11.11.4:50490: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:34:37725427 [DEBUG] Request from 10.10.10.5:49104: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:35:09798877 [DEBUG] Request from 10.11.11.4:51044: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:39:40398721 [DEBUG] Request from 10.10.10.5:49964: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:40:09837242 [DEBUG] Request from 10.11.11.4:51600: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:44:42984722 [DEBUG] Request from 10.10.10.5:50820: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})

and the node 3 constellation log shows:

nohup: appending output to 'nohup.out'
2017 Sep-03 05:25:09524816 [INFO] Constellation initializing using config file tm5.conf
2017 Sep-03 05:25:09528401 [INFO] Log level is LevelDebug
2017 Sep-03 05:25:09528491 [INFO] Utilizing 2 core(s)
2017 Sep-03 05:25:09528924 [INFO] Constructing Enclave using keypairs (keys/tm5.pub, keys/tm5.key) (keys/tm5a.pub, keys/tm5a.key)
2017 Sep-03 05:25:09529209 [INFO] Initializing storage qdata/constellation5
2017 Sep-03 05:25:09606851 [INFO] External API listening on port 9000
2017 Sep-03 05:25:09606862 [INFO] Internal API listening on qdata/tm5.ipc
2017 Sep-03 05:25:09606804 [INFO] Node started

So the problem is: I am able to deploy a contract and add data through contract functions on node 1. However, pointing at node 3 and calling the same function on the contract address from node 1 returns null. Are transactions not being synced between the nodes? These are public transactions, so all participants should be able to see and read the data, but that is not happening on node 3. Can somebody please help me resolve this?
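Since these are public transactions, Constellation is not involved in replicating them; state diverging between machines usually means the geth nodes themselves are not peered over p2p (the constellation logs above only show the privacy layer talking). A quick check is the standard `net_peerCount` JSON-RPC call against each node's RPC endpoint; the helpers below are an illustrative sketch, and the port you POST to is whatever `--rpcport` each geth was started with:

```python
import json

def peer_count_request(request_id: int = 1) -> str:
    """Build the JSON-RPC payload to POST to a node's geth RPC endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "net_peerCount",
        "params": [],
        "id": request_id,
    })

def parse_peer_count(response_body: str) -> int:
    """net_peerCount returns a hex quantity, e.g. '0x1'."""
    return int(json.loads(response_body)["result"], 16)
```

If either node reports `0x0` peers, fix the enode URLs / boot node address and any firewall rules on the geth p2p port between 10.10.10.5 and 10.11.11.4 before debugging the contract itself.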

