quorum-examples's Issues

Voter nodes don't vote on ChainHeadEvent (New Block)

A node will only vote on a ChainHeadEvent if it is synced with the chain. In the 7 node example the nodes never set their sync status to true, as they never undertake a chain download.

The maker node has the --singleblockmaker flag set so its synced flag is set to true.

The chain progresses because the maker node votes twice and the voting nodes vote when their deadline timer expires and a CreateBlock event occurs.

Expected behaviour can be achieved by setting the --singleblockmaker flag on all nodes.
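The suggested workaround is a one-flag change per node (an illustrative command-line fragment only; in the 7nodes scripts the per-node geth flags are set in the start script, and all other flags stay as they are):

```shell
# Hypothetical excerpt of a voting node's start command: adding
# --singleblockmaker makes the node mark itself as synced, so it
# votes when a ChainHeadEvent arrives.
geth --datadir qdata/dd2 --singleblockmaker <existing flags unchanged>
```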

Error with raft-start.sh

Running raft-init.sh works without problems, but when I then run raft-start.sh it shows the error:
Fatal: Unable to attach to remote geth: dial unix qdata/dd1/geth.ipc: connect: no such file or directory
I checked the folder qdata/dd1, there is no file named geth.ipc.
Did I miss something?
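A missing geth.ipc usually means the geth process for that node exited during startup, so the place to look is the node's log under qdata/logs (a debugging sketch; file names assume the standard 7nodes layout):

```shell
# The IPC socket is created only after geth starts successfully, so if it
# is missing, the startup error will be in the node's log:
tail -n 30 qdata/logs/1.log

# Optionally wait for the socket to appear before attaching:
for i in $(seq 1 30); do
  [ -S qdata/dd1/geth.ipc ] && break
  sleep 1
done
geth attach qdata/dd1/geth.ipc
```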

Vagrant error

Running vagrant up yields an error:

    default: chown: cannot access '/home/ubuntu/quorum'
    default: : No such file or directory

... probably because quorum wasn't installed in /home/ubuntu; it was installed in /home/vagrant.

When vagrant up is run, it does this:

    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key

.. which means it logs in as 'vagrant', right? So when it later does the git clone, it installs into /home/vagrant, NOT /home/ubuntu, which means the chown will fail, right? Am I missing something? How does this work for anyone? Furthermore, when doing vagrant ssh into the virtual box, the directions just say to cd to the examples directory, but that has been copied to the ubuntu user's home. Am I missing directions for how to actually ssh into the virtual box as ubuntu?

privateFor not working

Hi,
I followed every step shown here: https://github.com/jpmorganchase/quorum-examples/tree/master/examples/7nodes
When I execute
private.set(4,{from:eth.coinbase,privateFor:["ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc="]});
node 7 gets updated as expected.
But when I execute
private.set(4,{from:eth.coinbase,privateFor:["oNspPPgszVUFw0qmGFfWwh1uxVUXgvBxleXORHj07g8="]});
to update node 4, it doesn't get updated.
When I try to update node 1 or node 7 from any other node, both get updated, but when I try to update nodes 2-6 from any other node, nothing gets updated.
Is this expected behaviour?
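Not an answer, but a quick sanity check that rules out one class of mistakes: each privateFor entry must be the exact base64-encoded constellation public key of the target node (the contents of its tm.pub file). A standalone Node.js helper to catch truncated or malformed entries (the two key strings are the ones from this issue; note that a well-formed key can still fail if no running node advertises it):

```javascript
// A constellation public key is a Curve25519 key: 32 raw bytes,
// base64-encoded. This check does NOT prove any node advertises the key -
// it only catches copy/paste mistakes (truncation, stray whitespace,
// pasting the wrong file).
function looksLikeConstellationKey(s) {
  const decoded = Buffer.from(s, "base64");
  // round-trip to reject strings that only partially decode
  return decoded.length === 32 && decoded.toString("base64") === s;
}

console.log(looksLikeConstellationKey("ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc=")); // true
console.log(looksLikeConstellationKey("oNspPPgszVUFw0qmGFfWwh1uxVUXgvBxleXORHj07g8=")); // true
console.log(looksLikeConstellationKey("not-a-key")); // false
```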

Error (`index out of range`) with istanbul-start.sh in quorum-examples/7nodes

When I run ./istanbul-start.sh in quorum-examples/7nodes I get a panic.
The log of ./istanbul-start.sh is:
[*] Starting Constellation nodes
[*] Starting node 1
[*] Starting node 2
[*] Starting node 3
[*] Starting node 4
[*] Starting node 5
[*] Starting node 6
[*] Starting node 7
[*] Waiting for nodes to start
[*] Sending first transaction
Contract transaction send: TransactionHash: 0x95fc627555a6089a35377393e9b4280c4482b89e940db2f6ec4d3c508b743f31 waiting to be mined... true
All nodes configured. See 'qdata/logs' for logs, and run e.g. 'geth attach qdata/dd1/geth.ipc' to attach to the first Geth node

But when I run geth attach qdata/dd1/geth.ipc I get:
Fatal: Unable to attach to remote geth: dial unix qdata/dd1/geth.ipc: connect: connection refused
and the geth log for node 1 (under qdata/logs) is:

nohup: appending output to 'nohup.out'
INFO [11-30|06:47:22] Starting peer-to-peer node instance="Geth/vquorum 2.0.0 (geth 1.7.2-stable)-3d91976f/linux-amd64/go1.7.3"
INFO [11-30|06:47:22] Allocated cache and file handles database=/home/ubuntu/quorum-examples/7nodes/qdata/dd1/geth/chaindata cache=128 handles=1024
WARN [11-30|06:47:22] Upgrading database to use lookup entries
INFO [11-30|06:47:22] Reconfiguring old genesis as Quorum
INFO [11-30|06:47:22] Initialised chain configuration config="{ChainID: Homestead: 1 DAO: DAOSupport: false EIP150: 2 EIP155: 3 EIP158: 3 Byzantium: IsQuorum: true Engine: unknown}"
INFO [11-30|06:47:22] Initialising Ethereum protocol versions="[63 62]" network=1
INFO [11-30|06:47:22] Loaded most recent local header number=0 hash=62c2c0…5a3c94 td=1
INFO [11-30|06:47:22] Loaded most recent local full block number=0 hash=62c2c0…5a3c94 td=1
INFO [11-30|06:47:22] Loaded most recent local fast block number=0 hash=62c2c0…5a3c94 td=1
INFO [11-30|06:47:22] Regenerated local transaction journal transactions=0 accounts=0
INFO [11-30|06:47:22] Starting P2P networking
INFO [11-30|06:47:22] Database deduplication successful deduped=0
INFO [11-30|06:47:24] UDP listener up self=enode://ac6b1096ca56b9f6d004b779ae3728bf83f8e22453404cc3cef16a3d9b96608bc67c4b30db88e0a5a6c6390213f7acbe1153ff6d23ce57380104288ae19373ef@[::]:21000
INFO [11-30|06:47:24] HTTP endpoint opened: http://0.0.0.0:22000
INFO [11-30|06:47:24] IPC endpoint opened: /home/ubuntu/quorum-examples/7nodes/qdata/dd1/geth.ipc
INFO [11-30|06:47:24] RLPx listener up self=enode://ac6b1096ca56b9f6d004b779ae3728bf83f8e22453404cc3cef16a3d9b96608bc67c4b30db88e0a5a6c6390213f7acbe1153ff6d23ce57380104288ae19373ef@[::]:21000
WARN [11-30|06:47:24] Removing static dial candidate id=ac6b1096ca56b9f6 addr=127.0.0.1:21000 err="is self"
INFO [11-30|06:47:25] Unlocked account address=0xed9d02e382b34818e88B88a309c7fe71E65f419d
INFO [11-30|06:47:25] Transaction pool price threshold updated price=18000000000
INFO [11-30|06:47:25] Starting mining operation
INFO [11-30|06:47:25] Commit new mining work number=1 txs=0 uncles=0 elapsed=116.289µs
INFO [11-30|06:47:32] sending private tx data=6060604052341561000f57600080fd5b604051602080610149833981016040528080519060200190919050505b806000819055505b505b610104806100456000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680632a1afcd914605157806360fe47b11460775780636d4ce63c146097575b600080fd5b3415605b57600080fd5b606160bd565b6040518082815260200191505060405180910390f35b3415608157600080fd5b6095600480803590602001909190505060c3565b005b341560a157600080fd5b60a760ce565b6040518082815260200191505060405180910390f35b60005481565b806000819055505b50565b6000805490505b905600a165627a7a72305820d5851baab720bba574474de3d09dbeaabc674a15f4dd93b974908476542c23f00029000000000000000000000000000000000000000000000000000000000000002a privatefrom= privatefor="[ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc=]"
INFO [11-30|06:47:32] sent private tx data=1c1a05a947980098fb9fa0373afb6683d56d248bc1b0ab5ee3646923230848a4da9ba3b29e91453eaa5e817881676c58f67eacf7484d30022e7629cf4a7c69de privatefrom= privatefor="[ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc=]"
INFO [11-30|06:47:32] Submitted contract creation fullhash=0x95fc627555a6089a35377393e9b4280c4482b89e940db2f6ec4d3c508b743f31 to=0x1932c48b2bF8102Ba33B4A6B545C32236e342f34
INFO [11-30|06:47:34] Committed address=0xd8Dba507e85F116b1f7e231cA8525fC9008A6966 hash=f4bafb…a29842 number=1
INFO [11-30|06:47:34] Imported new chain segment blocks=1 txs=0 mgas=0.000 elapsed=1.145ms mgasps=0.000 number=1 hash=f4bafb…a29842
panic: runtime error: index out of range

goroutine 56 [running]:
panic(0xef9020, 0xc4200100d0)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/ethereum/go-ethereum/core/types.NewTransactionsByPriceAndNonce(0x192b620, 0xc420d710c0, 0xc420d9f620, 0x0)
/home/ubuntu/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/core/types/transaction.go:399 +0x475
github.com/ethereum/go-ethereum/miner.(*worker).commitNewWork(0xc420274b40)
/home/ubuntu/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/miner/worker.go:463 +0x97b
github.com/ethereum/go-ethereum/miner.(*worker).update(0xc420274b40)
/home/ubuntu/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/miner/worker.go:259 +0x232
created by github.com/ethereum/go-ethereum/miner.newWorker
/home/ubuntu/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/miner/worker.go:158 +0x5ea

Examples do not work with latest Constellation

If using the latest Constellation from https://github.com/jpmorganchase/constellation, private transactions start failing with an "Unknown recipient" error. This is because the sample configurations contain something like:

archivalPublicKeyPath = "keys/tm1a.pub"
archivalPrivateKeyPath = "keys/tm1a.key"

Even though the archival feature has been removed from the latest Constellation, the public key from the config still gets picked up due to this line:

https://github.com/jpmorganchase/constellation/blob/e6aed2ba7590d034c7213cfc86a940931e627ef0/Constellation/Node/Config.hs#L69

This causes all private transactions to be sent to keys/tm1a.pub, which nobody advertises, so all private transactions fail.

The proposed fix is to edit the examples to remove the usage of the archival feature.
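The proposed edit would look like this (a sketch only; the file name and the non-archival key names follow the naming pattern of the snippet above and may differ from the actual example configs):

```toml
# constellation config for node 1, e.g. tm1.conf
publicKeyPath = "keys/tm1.pub"
privateKeyPath = "keys/tm1.key"
# removed: the archival feature no longer exists in latest Constellation
# archivalPublicKeyPath = "keys/tm1a.pub"
# archivalPrivateKeyPath = "keys/tm1a.key"
```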

Question: private state partially updated

Hello Quorum experts.

I'm just wondering if it is possible to construct the following situation in Quorum (based on SimpleStorage privacy example):

Step 1:
from:N1 privateFor:N2,N3 - instantiate new contract
N2, N3 can call get on the contract and will see the same number: 42

Step 2:
from:N1 privateFor:N2 - set(73)
I assume N2 calling get will see 73
But what about N3? Will his get still return 42?

Regards,
Ivica
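Not an authoritative answer, but the behaviour being asked about can be modeled with a toy per-node private state map (plain Node.js, no Quorum involved; node names and values follow the scenario above):

```javascript
// Toy model: each node keeps its own private copy of the contract storage.
// A private transaction updates only the sender and the privateFor
// recipients; every other node keeps its last-seen value.
function makeNetwork(nodes) {
  const state = {};
  nodes.forEach((n) => (state[n] = undefined));
  return {
    // sender plus recipients apply the write; other nodes see nothing
    set(from, privateFor, value) {
      [from, ...privateFor].forEach((n) => (state[n] = value));
    },
    get(node) {
      return state[node];
    },
  };
}

const net = makeNetwork(["N1", "N2", "N3", "N4"]);

// Step 1: from N1, privateFor [N2, N3] - the whole group sees 42
net.set("N1", ["N2", "N3"], 42);

// Step 2: from N1, privateFor [N2] only - N3 is left behind on 42
net.set("N1", ["N2"], 73);

console.log(net.get("N2")); // 73
console.log(net.get("N3")); // 42  (stale - the partial update asked about)
console.log(net.get("N4")); // undefined (never a party to the contract)
```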

Permission Denied

When running ./raft-init.sh on vagrant, I get a long list of "Permission denied" errors. I've pasted the output below:

vagrant@ubuntu-xenial:~/quorum-examples/7nodes$ ./raft-init.sh
[*] Cleaning up temporary data directories
rm: cannot remove 'qdata/constellation5/payloads': Permission denied
rm: cannot remove 'qdata/constellation7/payloads': Permission denied
rm: cannot remove 'qdata/dd5/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd5/keystore/key5': Permission denied
rm: cannot remove 'qdata/dd5/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd5/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd5/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/constellation6/payloads': Permission denied
rm: cannot remove 'qdata/constellation2/payloads': Permission denied
rm: cannot remove 'qdata/dd3/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd3/keystore': Permission denied
rm: cannot remove 'qdata/dd3/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd3/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd3/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd3/permissioned-nodes.json': Permission denied
rm: cannot remove 'qdata/dd6/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd6/keystore': Permission denied
rm: cannot remove 'qdata/dd6/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd6/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd6/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd1/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd1/keystore/key1': Permission denied
rm: cannot remove 'qdata/dd1/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd1/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd1/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd1/permissioned-nodes.json': Permission denied
rm: cannot remove 'qdata/dd2/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd2/keystore/key3': Permission denied
rm: cannot remove 'qdata/dd2/keystore/key2': Permission denied
rm: cannot remove 'qdata/dd2/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd2/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd2/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd2/permissioned-nodes.json': Permission denied
rm: cannot remove 'qdata/constellation1/payloads': Permission denied
rm: cannot remove 'qdata/constellation3/payloads': Permission denied
rm: cannot remove 'qdata/dd4/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd4/keystore/key4': Permission denied
rm: cannot remove 'qdata/dd4/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd4/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd4/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd4/permissioned-nodes.json': Permission denied
rm: cannot remove 'qdata/logs/constellation1.log': Permission denied
rm: cannot remove 'qdata/logs/4.log': Permission denied
rm: cannot remove 'qdata/logs/5.log': Permission denied
rm: cannot remove 'qdata/logs/constellation6.log': Permission denied
rm: cannot remove 'qdata/logs/2.log': Permission denied
rm: cannot remove 'qdata/logs/7.log': Permission denied
rm: cannot remove 'qdata/logs/3.log': Permission denied
rm: cannot remove 'qdata/logs/constellation3.log': Permission denied
rm: cannot remove 'qdata/logs/constellation7.log': Permission denied
rm: cannot remove 'qdata/logs/constellation5.log': Permission denied
rm: cannot remove 'qdata/logs/6.log': Permission denied
rm: cannot remove 'qdata/logs/1.log': Permission denied
rm: cannot remove 'qdata/logs/constellation4.log': Permission denied
rm: cannot remove 'qdata/logs/constellation2.log': Permission denied
rm: cannot remove 'qdata/dd7/static-nodes.json': Permission denied
rm: cannot remove 'qdata/dd7/keystore': Permission denied
rm: cannot remove 'qdata/dd7/geth/nodekey': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd7/geth/chaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/LOG': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/CURRENT': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/LOCK': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/000001.log': Permission denied
rm: cannot remove 'qdata/dd7/geth/lightchaindata/MANIFEST-000000': Permission denied
rm: cannot remove 'qdata/constellation4/payloads': Permission denied

Any tips on how to get "permission" to move forward?

Thanks in advance!
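One likely cause (a guess, not confirmed from the output above): qdata was created by a process running as root, for example a script previously run with sudo, so an unprivileged user can no longer remove the files. Reclaiming ownership before re-running the script usually clears this:

```shell
# Assumes the "Permission denied" errors come from root-owned files in qdata
sudo chown -R "$USER:$USER" qdata
./raft-init.sh
```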

Private transaction throws an error and brings all 7 nodes down.

I am deploying this contract:

var testContract = web3.eth.contract([{"constant":false,"inputs":[],"name":"getValue","outputs":[{"name":"a","type":"uint256"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"a","type":"uint256"}],"name":"setValue","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"}]);
var test = testContract.new(
  {
    from: web3.eth.accounts[0],
    data: '0x6060604052341561000f57600080fd5b60ce8061001d6000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff168063209652551460465780635524107714606c57600080fd5b3415605057600080fd5b6056608c565b6040518082815260200191505060405180910390f35b3415607657600080fd5b608a60048080359060200190919050506098565b005b60008054905080905090565b80600081905550505600a165627a7a7230582024e016d5e5847222cc207652bed2e9ed9b95abe6ae8f9e2acf46d6ac5f401d740029',
    gas: '4700000',
    privateFor: ["QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=", "1iTZde/ndBHvzhcl7V68x44Vx7pl8nwx9LqnM/AfJUg="]
  }, function (e, contract) {
    console.log(e, contract);
    if (typeof contract.address !== 'undefined') {
      console.log('Contract mined! address: ' + contract.address + ' transactionHash: ' + contract.transactionHash);
    }
  })

But I get an error and all the nodes go down.

The issue:
INFO [11-15|12:58:28] sending private tx data=6060604052341561000f57600080fd5b60ce8061001d6000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff168063209652551460465780635524107714606c57600080fd5b3415605057600080fd5b6056608c565b6040518082815260200191505060405180910390f35b3415607657600080fd5b608a60048080359060200190919050506098565b005b60008054905080905090565b80600081905550505600a165627a7a7230582024e016d5e5847222cc207652bed2e9ed9b95abe6ae8f9e2acf46d6ac5f401d740029 privatefrom= privatefor="[QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc= 1iTZde/ndBHvzhcl7V68x44Vx7pl8nwx9LqnM/AfJUg=]"
INFO [11-15|12:58:28] sent private tx data=3935801486d7c735609b68952f96f08e4032df71f8911ece30ac152097178b78a98a79609e57b78107e28075c22e346a2e69abef3789ae5e7ed8019256b312d8 privatefrom= privatefor="[QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc= 1iTZde/ndBHvzhcl7V68x44Vx7pl8nwx9LqnM/AfJUg=]"
INFO [11-15|12:58:28] Submitted contract creation fullhash=0xc28f32887f553b0e167c3cc5872e4918ce363559d26286472dfe9905c1eddb97 to=0x1932c48b2bF8102Ba33B4A6B545C32236e342f34
INFO [11-15|12:58:30] Committed address=0xd8Dba507e85F116b1f7e231cA8525fC9008A6966 hash=5e4fdb…df3e77 number=70
INFO [11-15|12:58:30] Imported new chain segment blocks=1 txs=0 mgas=0.000 elapsed=1.840ms mgasps=0.000 number=70 hash=5e4fdb…df3e77
panic: runtime error: index out of range

Can you suggest what the issue is?

This happens when we start the nodes in Istanbul BFT mode.

Vagrant up issue

Hi,
I'm trying to use Quorum on my PC (Ubuntu 16.04), Vagrant version 2.0.2, VirtualBox version 5.2.6.

Can you direct me to a fix?
I forgot to mention: I'm currently running Ubuntu as a virtual machine (the host machine is Windows 10), so this may involve nested virtualization. Can anyone help me figure out nested virtualization?

7nodes example does not work

ubuntu@ubuntu-xenial:~/quorum-examples/7nodes$ ./istanbul-init.sh
[*] Cleaning up temporary data directories
[*] Configuring node 1
cp: cannot stat 'raft/static-nodes.json': No such file or directory

Also, there is no such file in the repo at quorum-examples/examples/7nodes/raft/.

Running raft-start.sh doesn't create geth.ipc

I was following the 5nodesRTGS example for Quorum.
Steps performed:

INFO [01-31|15:23:00] Successfully wrote genesis state database=lightchaindata hash=c23b4e…8b1b71

  • Next I run: sudo ./raft-start.sh

[*] Starting Constellation nodes
[*] Starting node 1 - Bank 1
[*] Starting node 2 - Bank 2
[*] Starting node 3 - Bank 3
[*] Starting node 4 - Regulator
[*] Starting node 5 - Observer
[*] Waiting for nodes to start

All nodes configured. See 'qdata/logs' for logs, and run e.g. 'geth attach qdata/dd4/geth.ipc' to attach to the RTGS Regulator Geth node

  • geth attach ipc:qdata/dd1/geth.ipc

Fatal: Unable to attach to remote geth: dial unix qdata/dd1/geth.ipc: connect: no such file or directory

This is where I get the error. After running raft-start.sh, if I check qdata/dd1, no geth.ipc has been generated.

What determines the roles of nodes in 7 nodes raft example?

Could someone help me understand how the roles of nodes are classified as block-maker, voter, or observer in the raft version of the 7nodes example. The genesis file does not have any reference to the addresses, or is it pre-assigned somehow?

Cannot get contract address in a permissioned network

Hello guys,
I'm working on the 7nodes example of quorum-examples and there is a behaviour that I don't get:
First, when I ran the start.sh script it deployed the simple storage contract and returned the transactionHash; when I do eth.getTransactionReceipt("transactionHash") from any of the 7 nodes, it returns the contract address and other information.
However, when I set up a permissioned network and then ran the start.sh script, it did return a transactionHash, but when I try running eth.getTransactionReceipt("transactionHash") it returns null.
I don't understand why, after setting up the permissioned network, I can no longer get the contract address. I would really appreciate it if anyone could clarify this point.
PS: I ran nodes 3, 6, and 7 with the --permissioned flag. I noticed that if I run node 7 without that flag, everything works fine and I can get the contract address, but when I run it with the flag I get the behaviour described above.
Thank you

The source of the Solidity contract for the examples

Hi,
I am exploring Quorum and am trying to find the Solidity contract files that produce the code in the genesis.json in the examples. It would be great if someone could point me to their location, and to documentation on genesis.json as a whole: how each node is created and how one node is differentiated from another (voter vs. block maker).

Is 4GB of RAM really needed to run 7nodes?

I've just cloned quorum-examples and I ran out of memory because another virtual machine was running at the time I started vagrant.

Is 4GB of RAM really needed to run this example project?

If yes: I suggest updating the README.md file with the required VM RAM.
If no: I think we could decrease the memory in the Vagrantfile (v.memory = 4096) to the needed amount.

Thank you.
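For reference, the setting mentioned above lives in the Vagrantfile's provider block; lowering it is a one-line change (2048 is an illustrative value, not a tested minimum):

```ruby
config.vm.provider "virtualbox" do |v|
  v.memory = 2048  # was 4096; reduce if the host is memory-constrained
end
```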

Many empty transactions in local cluster

I am running a 5 node cluster (I followed the instructions here for the most part). When I paused the Block Maker, I noticed a ton of transactions stacking up in the txpool. Here is the output of txpool.inspect.pending:

> txpool.inspect.pending
{
  0x0638e1574728b6d862dd5d3a3e0942c3be47d996: {
    0: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    1: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    10: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    11: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    12: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    13: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    14: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    15: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    16: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    17: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    18: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    19: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    2: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    20: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    21: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    22: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    23: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    24: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    25: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    26: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    27: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    28: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    29: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    3: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    30: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    31: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    32: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    33: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    34: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    35: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    36: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    37: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    38: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    39: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    4: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    40: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    41: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    42: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    43: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    44: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    45: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    46: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    47: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    48: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    49: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    5: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    50: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    51: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    52: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    53: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    54: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    55: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    56: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    57: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    58: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    59: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    6: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    60: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    61: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    62: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    63: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    64: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    65: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    66: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    7: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    8: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    9: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas"
  },
  0x0fbdc686b912d7722dc86510934589e0aaf3b55a: {
    0: "0x0000000000000000000000000000000000000020: 0 wei + 91996 × 0 gas"
  },
  0x9186eb3d20cbd1f5f992a950d808c4495153abd5: {
    0: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    1: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    10: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    11: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    12: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    13: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    14: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    15: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    16: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    17: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    18: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    19: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    2: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    20: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    21: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    22: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    23: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    24: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    25: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    26: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    27: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    28: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    29: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    3: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    30: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    31: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    32: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    33: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    34: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    35: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    36: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    37: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    38: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    39: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    4: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    40: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    41: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    42: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    43: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    44: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    45: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    46: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    47: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    48: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    49: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    5: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    50: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    51: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    52: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    53: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    54: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    55: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    56: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    57: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    58: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    59: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    6: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    60: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    61: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    62: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    63: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    64: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    65: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    66: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    67: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    68: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    69: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    7: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    70: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    71: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    72: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    8: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas",
    9: "0x0000000000000000000000000000000000000020: 0 wei + 31355 × 0 gas"
  }
}

key copying errors in raft-init.sh and istanbul-init.sh

Those files contain the lines:

cp keys/key2 qdata/dd2/keystore
cp keys/key3 qdata/dd2/keystore

so the key for node 2 is overwritten and replaced by node 3's key, and further down in the files there is no code to copy node 3's key to its proper folder
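Presumably the intent is for each key to land in its own node's keystore. A minimal reproduction of the fix, using stand-in files since the real keys directory isn't assumed here:

```shell
# stand-in layout mirroring the scripts' structure
mkdir -p keys qdata/dd2/keystore qdata/dd3/keystore
echo "key2-data" > keys/key2
echo "key3-data" > keys/key3

# corrected copies: each key goes to its own node's keystore
cp keys/key2 qdata/dd2/keystore
cp keys/key3 qdata/dd3/keystore
```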

Unable to execute contracts with privateFor in Docker

I am working on a POC where the 7 nodes are split across 7 separate Docker containers, each container running a geth node and a constellation node.
I have exposed the constellation TCP port as well as the geth TCP and UDP ports so the containers can interact with each other, and in fact the nodes are able to discover each other.
I am running a plain geth with --nodekeyhex as a bootnode, with the other nodes connecting to the bootnode's enode.
I am able to run normal Ethereum contracts from one of the geth RPC terminals, but whenever I run a contract containing "privateFor", it throws the error below:

Error: Invalid JSON RPC response: {"error":{"EOF":"EOF","code":-32603},"id":8,"version":"2.0"} undefined

Contract:
contract SimpleStorage {
    uint val;

    function setValue(uint _val) {
        val = _val;
    }

    function getValue() returns (uint) {
        return val;
    }
}

Web3Deploy:
var simplestorageContract = web3.eth.contract([{"constant":false,"inputs":[],"name":"getValue","outputs":[{"name":"","type":"uint256"}],"payable":false,"type":"function"},{"constant":false,"inputs":[{"name":"_val","type":"uint256"}],"name":"setValue","outputs":[],"payable":false,"type":"function"}]);
var simplestorage = simplestorageContract.new(
  {
    from: web3.eth.accounts[0],
    data: '0x6060604052341561000c57fe5b5b60c68061001b6000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680632096525514604457806355241077146067575bfe5b3415604b57fe5b60516084565b6040518082815260200191505060405180910390f35b3415606e57fe5b60826004808035906020019091905050608f565b005b600060005490505b90565b806000819055505b505600a165627a7a7230582082de0a01ef4b2c7d677328ded3bee7d0b70e2f06f1036ec3991461de7b3aa1160029',
    gas: '4700000',
    privateFor: ["BULeR8JyUWhiuuCMU/HLA0Q5pzkYT+cHII3ZKBey3Bo=","QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc="]
  }, function (e, contract){
    console.log(e, contract);
    if (typeof contract.address !== 'undefined') {
      console.log('Contract mined! address: ' + contract.address + ' transactionHash: ' + contract.transactionHash);
    }
  })
Error from geth logs:
E0829 12:36:42.399089 rpc/server.go:152] goroutine 16110 [running]:
github.com/ethereum/go-ethereum/rpc.(*Server).serveRequest.func1(0xc42035b620, 0x1735580, 0xc422d324b0)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:151 +0x133
panic(0xd828a0, 0x16fa5e0)
/usr/local/go/src/runtime/panic.go:489 +0x2cf
github.com/ethereum/go-ethereum/internal/ethapi.(*PublicTransactionPoolAPI).SendTransaction(0xc4200d8310, 0x7ff317f85780, 0xc420b122c0, 0x1848b382e3029ded, 0x71fec709a3888be8, 0x9d415fe6, 0x0, 0xc42240d040, 0xc42240d0e0, 0xc42240d100, ...)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/internal/ethapi/api.go:1226 +0xebe
reflect.Value.call(0xc4263fc380, 0xc42000f130, 0x13, 0xe8f221, 0x4, 0xc420b0ed80, 0x3, 0x4, 0x1, 0xdd56a0, ...)
/usr/local/go/src/reflect/value.go:434 +0x91f
reflect.Value.Call(0xc4263fc380, 0xc42000f130, 0x13, 0xc420b0ed80, 0x3, 0x4, 0x1, 0x1, 0xc42240d0a0)
/usr/local/go/src/reflect/value.go:302 +0xa4
github.com/ethereum/go-ethereum/rpc.(*Server).handle(0xc42035b620, 0x7ff317f85780, 0xc420b122c0, 0x1735580, 0xc422d324b0, 0xc4223fb730, 0xf, 0xd83c40, 0xc422d32520)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:318 +0x752
github.com/ethereum/go-ethereum/rpc.(*Server).exec(0xc42035b620, 0x7ff317f85780, 0xc420b122c0, 0x1735580, 0xc422d324b0, 0xc4223fb730)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:340 +0x1ba
github.com/ethereum/go-ethereum/rpc.(*Server).serveRequest(0xc42035b620, 0x1735580, 0xc422d324b0, 0x9a3901, 0x1, 0x0, 0x0)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:208 +0x412
github.com/ethereum/go-ethereum/rpc.(*Server).ServeSingleRequest(0xc42035b620, 0x1735580, 0xc422d324b0, 0x1)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/server.go:232 +0x4e
github.com/ethereum/go-ethereum/rpc.(*Server).ServeHTTP(0xc42035b620, 0x1730120, 0xc420147500, 0xc4223a7200)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/rpc/http.go:162 +0x36c
github.com/ethereum/go-ethereum/vendor/github.com/rs/cors.(*Cors).Handler.func1(0x1730120, 0xc420147500, 0xc4223a7200)
/quorum/build/_workspace/src/github.com/ethereum/go-ethereum/vendor/github.com/rs/cors/cors.go:190 +0xe9
net/http.HandlerFunc.ServeHTTP(0xc4201c7100, 0x1730120, 0xc420147500, 0xc4223a7200)
/usr/local/go/src/net/http/server.go:1942 +0x44
net/http.serverHandler.ServeHTTP(0xc420093e40, 0x1730120, 0xc420147500, 0xc4223a7200)
/usr/local/go/src/net/http/server.go:2568 +0x92
net/http.(*conn).serve(0xc4207cd400, 0x17311e0, 0xc421002880)
/usr/local/go/src/net/http/server.go:1825 +0x612
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2668 +0x2ce
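One hedged guess at the cause: the EOF response and the panic inside SendTransaction are consistent with geth being unable to reach its local constellation socket inside the container. A pre-flight check along these lines can rule that out (the socket path below is the 7nodes convention and an assumption here):

```shell
# verify the constellation IPC socket exists before sending private transactions
TM_IPC="qdata/tm1.ipc"   # assumption: adjust to the socket path in your tm.conf
if [ ! -S "$TM_IPC" ]; then
    echo "constellation socket $TM_IPC is missing; start constellation-node first" >&2
fi
```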

Generating keys for Quorum nodes

Hello guys,
I'm trying to build a Quorum network from scratch, taking the 7 nodes example from quorum-examples as my inspiration.

Here is what I did:
I used the Quorum-Genesis utility to create the genesis file. Then I set up my bootnode and saved its key and enode address to use later when starting my nodes.

Then I wanted to write init.sh and start.sh scripts similar to those in the 7 nodes example, to make it easier to start my network. However, I ran into some issues.

In the 7 nodes example there is a folder called "keys" which contains the UTC files for all the accounts specified in the genesis.json file, as well as the public, private, and archival keys used to start the constellations. I don't understand how I am supposed to generate all those keys, and I obviously need the public and archival keys to populate the "tm.conf" files so I can later start my constellations by running:
constellation-node tm.conf

If anyone could clarify this for me I would really appreciate it.
Thank you.
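For what it's worth, a sketch of how those keys can be generated, assuming the constellation CLI's --generatekeys option and a stock geth; the paths follow the 7nodes conventions and are assumptions, and these commands need the actual binaries installed:

```shell
# generate a constellation key pair for node 1 (writes keys/tm1.pub and keys/tm1.key);
# repeat with tm1a for the archival pair, and likewise for each further node
constellation-node --generatekeys=keys/tm1
constellation-node --generatekeys=keys/tm1a

# generate an Ethereum account for node 1: the UTC-- file lands in the keystore,
# and its address is what goes into the genesis alloc
geth --datadir qdata/dd1 account new
```

The contents of keys/tm1.pub are then exactly what goes into the publickeys entry of tm1.conf and into privateFor lists.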

Missing static-nodes.json file

When we run the init scripts for ibft and raft, they report that the static-nodes.json file is not present in the raft directory under the 7nodes directory.

How to do Send Transaction from account A to B on quorum

Hi Team

I have done a private setup of a raft-based quorum. The example is working properly.
But when I try to transfer coins from one account to another, I see the error below in the log. It's a fresh setup and no mining has been done, but I'm still seeing a mining-related error.

eth.sendTransaction({from:web3.eth.accounts[0], to: web3.eth.accounts[1], privateFor: ["ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc="], value: "1" });
Error: Post http+unix://c/send: dial unix qdata/tm1.ipc: connect: connection refused
at web3.js:3143:20
at web3.js:6347:15
at web3.js:5081:36
at :1:1

ubuntu@blockchain03:~/quorum-examples$ tail qdata/logs/7.log
INFO [11-23|12:26:54] 🔨 Mined block number=1 hash=b2b24061 elapsed=7.067924ms
INFO [11-23|12:26:54] QUORUM-CHECKPOINT name=TX-ACCEPTED tx=0xd51abe3f6e5fa272aa6158d08c783191c19504d345374b0369d389504f164a8b
INFO [11-23|12:26:54] QUORUM-CHECKPOINT name=BLOCK-CREATED block=b2b240612fe20e580500c1f6f9949fe6bcf8ce133be7c44cd3a568fa6ee7dbb7
INFO [11-23|12:26:54] persisted the latest applied index index=9
INFO [11-23|12:38:55] Generated next block block num=2 num txes=1
INFO [11-23|12:38:55] 🔨 Mined block number=2 hash=7610b91a elapsed=2.865896ms
INFO [11-23|12:38:55] Non-extending block block=7610b9…99cfb0 parent=b2b240…e7dbb7 head=62c2c0…5a3c94
INFO [11-23|12:38:55] persisted the latest applied index index=10
INFO [11-23|12:38:55] Handling InvalidRaftOrdering invalid block=7610b9…99cfb0 current head=62c2c0…5a3c94
INFO [11-23|12:38:55] Someone else mined invalid block; ignoring block=7610b9…99cfb0

Our intention is to do a simple payment from account A to account B. When I don't specify privateFor, the transaction is submitted successfully but never succeeds; it waits in the pending queue.

eth.sendTransaction({from:eth.accounts[0], to:eth.accounts[1], value:1})
"0x00cd1f4ea060c53aa1edacbcd11c3b4f9c32a9e04cf545018de02518e35c18cb"
txpool.inspect.pending
{
  0x0638E1574728b6D862dd5d3A3E0942c3be47D996: {
    0: "contract creation: 0 wei + 4700000 × 0 gas"
  },
  0x9186eb3d20Cbd1F5f992a950d808C4495153ABd5: {
    0: "contract creation: 0 wei + 4700000 × 0 gas"
  },
  0xcA843569e3427144cEad5e4d5999a3D0cCF92B8e: {
    0: "contract creation: 0 wei + 4700000 × 0 gas",
    1: "0x2C80Eba934Fa0deE778FD0029bcD77A2cD31959e: 10000000000000000000000000 wei + 90000 × 0 gas",
    2: "0x0fBDc686b912d7722dc86510934589E0AAf3b55A: 1 wei + 90000 × 0 gas",
    3: "0x0fBDc686b912d7722dc86510934589E0AAf3b55A: 1 wei + 90000 × 0 gas"
  },
  0xed9d02e382b34818e88B88a309c7fe71E65f419d: {
    0: "contract creation: 0 wei + 4700000 × 0 gas",
    1: "contract creation: 0 wei + 4700000 × 0 gas",
    2: "contract creation: 1 wei + 90000 × 0 gas",
    3: "contract creation: 1 wei + 90000 × 0 gas"
  }
}

newer constellation doesn't work with the 7nodes example

I'm trying to figure out how to run the 7nodes example with the new constellation release 0.1.0, and the example breaks both with the existing configuration files and with my best guess at converting them to the new format.

With the existing config files, the node crashes with an error message like
"encrypt: Sender cannot be a recipient", and with the new ones I get something like

Errors while running sendPayload: [Left "Unknown recipient\nCallStack (from HasCallStack):\n  error, called at ./Constellation/Node.hs:157:21 in constellation-0.1.0.0-6e8lqFuBPtzCz3iEft1Zk0:Constellation.Node"]

Here's the config file I'm using for node 1, for example:

url = "http://127.0.0.1:9001/"
port = 9001
socket = "qdata/tm1.ipc"
othernodes = []
publickeys = ["keys/tm1.pub"]
privatekeys = ["keys/tm1.key"]
alwayssendto = []
storage = "qdata/constellation1"
verbosity = 3

I'm not really sure what I'm doing wrong, any comments would help.
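A hedged guess at what is going on: "Sender cannot be a recipient" with the old config suggests the node's own public key ended up in a privateFor list, while "Unknown recipient" with the new config suggests this node never learned any peer keys — and indeed othernodes is empty above, so the party-info exchange has nobody to talk to. Under that assumption, node 1's config would point at at least one peer (the peer URL below is hypothetical):

```
url = "http://127.0.0.1:9001/"
port = 9001
socket = "qdata/tm1.ipc"
othernodes = ["http://127.0.0.1:9002/"]
publickeys = ["keys/tm1.pub"]
privatekeys = ["keys/tm1.key"]
alwayssendto = []
storage = "qdata/constellation1"
verbosity = 3
```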

transaction performance

How can I test the transaction performance of the 7nodes example, e.g. how many transactions per second it can handle?

Generating Public Key for a Given Address

I stood up the 7nodes example. Runs fine, but there is a public key:

ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc=

provided for the privateFor field in order to change a contract value while keeping it private. How was this value derived? Can it be done for other accounts?

Code stays in infinite loop

The code below, in the file constellation-start.sh, never exits because the 7 socket checks never all pass.

while $DOWN; do
    sleep 0.1
    DOWN=false
    for i in {1..7}
    do
	if [ ! -S "qdata/c$i/tm.ipc" ]; then
            DOWN=true
	fi
    done
done
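The loop's logic itself is fine; it spins forever only because the tm.ipc sockets never appear, i.e. the constellation nodes failed to start, so their logs are the place to look. One way to make the failure visible is to bound the wait; a sketch, where the attempt limit is an arbitrary assumption:

```shell
# same wait, but bounded so a startup failure surfaces instead of hanging;
# the limit (20 attempts, ~2 seconds) is an assumption -- raise it for slow machines
DOWN=true
ATTEMPTS=0
MAX_ATTEMPTS=20
while $DOWN && [ "$ATTEMPTS" -lt "$MAX_ATTEMPTS" ]; do
    sleep 0.1
    ATTEMPTS=$((ATTEMPTS + 1))
    DOWN=false
    for i in 1 2 3 4 5 6 7; do
        if [ ! -S "qdata/c$i/tm.ipc" ]; then
            DOWN=true
        fi
    done
done
if $DOWN; then
    echo "constellation sockets never appeared; check the constellation logs under qdata/" >&2
fi
```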

New node to join the network

Hi, I understand that we have to define all the maker and voter nodes in the storage section of the genesis JSON file.

Is it possible for a new, late-arriving node to join an existing quorum network that is actively running and was previously initialized from a genesis JSON file that doesn't contain the storage info for the new node?

Thank you.

PrivateFor - adding new participant

A question to build on issue 21.

The solution was

The contract is only known to nodes that you put in privateFor (and your own node) for the transaction that instantiates the contract, so node 4 never saw a transaction that created the contract, and the new "set" transaction means nothing to it.

If you put all of the nodes in privateFor when you deploy the contract, then send transactions that are limited to fewer participants, you should get the behavior you want, although we recommend that you keep the recipients list constant for any given private contract.

That is awesome and will work. But I may not know all of the contract's participant nodes upfront, so I may not be able to deploy the contract to a given participant when the first stage of deployment happens, and will instead need a later transaction to share it with that participant. What kind of workaround can make this scenario work?

What happens when bootnode goes down ?

So I have set up 2 nodes, one on Azure and one on AWS, and was able to successfully sync data between them. Node 1 is started as the single block maker and boot node, and node 2 as a voter. Later, with node 1's permission, I added node 2 as a block maker via the 0x20 contract, so this is at the contract level. Now, if node 1 goes down, can I still publish my transactions to node 2, which node 1 made a block maker? Or, in that case, can I define node 2 as a boot node as well?

7nodes error on private.get()

Version:
Geth
Version: 1.7.2-stable
Git Commit: 2a5a1d6146e38b7025868ad0cfa183f42de3554e
Quorum Version: 2.0.0
Architecture: amd64
Network Id: 1
Go Version: go1.9.3
Operating System: linux
GOPATH=
GOROOT=/usr/local/go

Context
After running the 7nodes example on terminal 1, node7 can't see the initialized value.

Expected output:
In terminal window 1 (node 1)

private.get()
42
In terminal window 2 (node 4) :
private.get()
0
In terminal window 3 (node 7) :
private.get()
42

Actual output:
In terminal window 1 (node 1) :

private.get()
42
In terminal window 2 (node 4) :
private.get()
0
In terminal window 3 (node 7) :
private.get()
0


Contract deployment transaction executes successfully, but no contract is deployed...

quorum -version: v2.0.0
consensus: raft

We have done a four-node setup; node 1 shows up as the minter and the others as verifiers.
When we try to deploy the 7nodes example contract from node 1, it returns the transaction receipt below:

eth.getTransactionReceipt("0xab500348ffc57f231d9569064f6da7a5b3912ace829070862d15e1273fbb265f")
{
  blockHash: "0x8b31dc8a9074909c228867b3bed240df5b33963ba95dd392caff3f46d316493c",
  blockNumber: 12,
  contractAddress: "0x3f7cb7b26dad554cde28e4933b8e4861137dd081",
  cumulativeGasUsed: 90000,
  from: "0x2ac8e4d477bf82e3988f8b1d1071b6e275483dfe",
  gasUsed: 90000,
  logs: [],
  logsBloom: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
  root: "0x0ec0ef674a4436bd4509ea8f50c3fe89cc82f21bcb0e82a4679e11023fd304dc",
  to: null,
  transactionHash: "0xab500348ffc57f231d9569064f6da7a5b3912ace829070862d15e1273fbb265f",
  transactionIndex: 0
}

but getCode returns empty, which means the contract is not deployed.

eth.getCode("0x3f7cb7b26dad554cde28e4933b8e4861137dd081")
"0x"

Could you help us understand this behavior?
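Two common causes would be consistent with this receipt. First, gasUsed is 90000, which is also geth's default gas limit when a transaction doesn't specify one; a contract creation that consumes its entire gas limit fails and stores no code, yet (on this pre-Byzantium geth) still produces a receipt with a contractAddress. Second, if the deployment used privateFor, eth.getCode returns "0x" on any node not named as a recipient. A sketch of the out-of-gas check with the receipt's numbers (the 90000 limit is an assumption):

```shell
GAS_LIMIT=90000   # assumed: geth's default gas when none is specified
GAS_USED=90000    # from eth.getTransactionReceipt(...) above
if [ "$GAS_USED" -ge "$GAS_LIMIT" ]; then
    echo "all gas consumed: the constructor likely ran out of gas, so no code was stored"
fi
```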

making a permissioned network with raft

Hello guys,
I'm working with the 7nodes example and have started the cluster using raft consensus. I then tried to make a permissioned network consisting of 3 nodes, following the steps in the wiki, but it's not working, and I'm starting to wonder whether it's even possible to make a permissioned network when using raft?

Thank you,

Data sync is not happening between nodes deployed on different machines

I have the boot node and node 1 deployed on one machine as block maker and voter, and the relevant tm2.conf file used to start its constellation looks like this:


url = "http://10.10.10.5:9000/"
port = 9000
socketPath = "qdata/tm2.ipc"
otherNodeUrls = []
publicKeyPath = "keys/tm2.pub"
privateKeyPath = "keys/tm2.key"
archivalPublicKeyPath = "keys/tm2a.pub"
archivalPrivateKeyPath = "keys/tm2a.key"
storagePath = "qdata/constellation2"

and node 3 is deployed on another machine as ONLY a voter and uses tm5.conf, whose config looks like this:

url = "http://10.11.11.4:9000/"
port = 9000
socketPath = "qdata/tm5.ipc"
otherNodeUrls = ["http://10.10.10.5:9000/"]
publicKeyPath = "keys/tm5.pub"
privateKeyPath = "keys/tm5.key"
archivalPublicKeyPath = "keys/tm5a.pub"
archivalPrivateKeyPath = "keys/tm5a.key"
storagePath = "qdata/constellation5"

Both nodes' constellation logs show no errors. The constellation log for node 1 shows:


nohup: appending output to 'nohup.out'
2017 Sep-03 05:24:32216663 [INFO] Constellation initializing using config file tm2.conf
2017 Sep-03 05:24:32217548 [INFO] Log level is LevelDebug
2017 Sep-03 05:24:32217620 [INFO] Utilizing 2 core(s)
2017 Sep-03 05:24:32220587 [INFO] Constructing Enclave using keypairs (keys/tm2.pub, keys/tm2.key) (keys/tm2a.pub, keys/tm2a.key)
2017 Sep-03 05:24:32220802 [INFO] Initializing storage qdata/constellation2
2017 Sep-03 05:24:32276282 [INFO] Internal API listening on qdata/tm2.ipc
2017 Sep-03 05:24:32276271 [INFO] External API listening on port 9000
2017 Sep-03 05:24:32276219 [INFO] Node started
2017 Sep-03 05:24:38344272 [DEBUG] Request from : ApiUpcheck; Response: ApiUpcheckR
2017 Sep-03 05:25:09751069 [DEBUG] Request from 10.11.11.4:49940: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:29:35051830 [DEBUG] Request from 10.10.10.5:48254: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:30:09760130 [DEBUG] Request from 10.11.11.4:50490: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:34:37725427 [DEBUG] Request from 10.10.10.5:49104: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:35:09798877 [DEBUG] Request from 10.11.11.4:51044: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:39:40398721 [DEBUG] Request from 10.10.10.5:49964: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:40:09837242 [DEBUG] Request from 10.11.11.4:51600: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:44:42984722 [DEBUG] Request from 10.10.10.5:50820: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})

and node 3's constellation log shows:

nohup: appending output to 'nohup.out'
2017 Sep-03 05:25:09524816 [INFO] Constellation initializing using config file tm5.conf
2017 Sep-03 05:25:09528401 [INFO] Log level is LevelDebug
2017 Sep-03 05:25:09528491 [INFO] Utilizing 2 core(s)
2017 Sep-03 05:25:09528924 [INFO] Constructing Enclave using keypairs (keys/tm5.pub, keys/tm5.key) (keys/tm5a.pub, keys/tm5a.key)
2017 Sep-03 05:25:09529209 [INFO] Initializing storage qdata/constellation5
2017 Sep-03 05:25:09606851 [INFO] External API listening on port 9000
2017 Sep-03 05:25:09606862 [INFO] Internal API listening on qdata/tm5.ipc
2017 Sep-03 05:25:09606804 [INFO] Node started

So the problem here is: I am able to deploy a contract and add data using contract functions on node 1, but pointing at node 3 and calling the same function on the contract address from node 1 gives me back null. Are transactions not being synced between the nodes? I am performing a public transaction here, so all participants should be able to see and read the data, but that is not happening for node 3. Can somebody please help me resolve this?


7nodes example raft-start.sh error

After running raft-init.sh without problems, running raft-start.sh returns the following error:

./raft-start.sh
[*] Starting Constellation nodes
[*] Starting node 1 (permissioned)
[*] Starting node 2 (permissioned)
[*] Starting node 3 (permissioned)
[*] Starting node 4 (permissioned)
[*] Starting node 5 (unpermissioned)
[*] Starting node 6 (unpermissioned)
[*] Starting node 7 (unpermissioned)
[*] Waiting for nodes to start
[*] Sending first transaction
panic: MustNew error: Get http+unix://c/upcheck: dial unix: missing address

goroutine 1 [running]:
panic(0xc1f6e0, 0xc4201d8b70)
        /usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/ethereum/go-ethereum/private/constellation.MustNew(0xc42000e00f, 0x8, 0xc42000e00f)
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/private/constellation/constellation.go:76 +0x114
github.com/ethereum/go-ethereum/private.FromEnvironmentOrNil(0xd5a99d, 0xe, 0x1538f40, 0xc420047ea0)
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/private/private.go:19 +0x52
github.com/ethereum/go-ethereum/private.init()
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/private/private.go:22 +0x67
github.com/ethereum/go-ethereum/core.init()
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/core/vm_env.go:165 +0xf4
github.com/ethereum/go-ethereum/cmd/utils.init()
        /quorum/build/_workspace/src/github.com/ethereum/go-ethereum/cmd/utils/version.go:65 +0x76
main.init()
        github.com/ethereum/go-ethereum/cmd/geth/_obj/_cgo_import.go:6 +0x58

Am I missing something?
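A hedged guess: "dial unix: missing address" inside constellation.MustNew is consistent with the PRIVATE_CONFIG environment variable (which Quorum's geth reads to locate its constellation config/socket) pointing at a config whose socket path is empty, or with the constellation nodes never having created their sockets. A quick pre-flight check (the qdata paths follow the 7nodes conventions and are assumptions):

```shell
# confirm PRIVATE_CONFIG is set and points at an existing file
if [ -z "$PRIVATE_CONFIG" ] || [ ! -e "$PRIVATE_CONFIG" ]; then
    echo "PRIVATE_CONFIG is unset or missing: '$PRIVATE_CONFIG'" >&2
fi
# confirm the constellation sockets actually exist
ls -l qdata/tm*.ipc 2>/dev/null || echo "no constellation sockets found under qdata/" >&2
```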

init.sh : command not found

Situation:
I'm running through the tutorial and have successfully ssh'd into...
ubuntu@ubuntu-xenial:~/quorum-examples/7nodes$

I'm trying to execute init.sh and start.sh

Listing the files, I see both init.sh and start.sh available in 7nodes.

Expected Behavior:
I expected to have the 7 nodes up and running.

Actual Behavior:

instead I received...
"ubuntu@ubuntu-xenial:~/quorum-examples/7nodes$ init.sh
init.sh: command not found"
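The shell only searches $PATH for bare command names, and the current directory isn't on $PATH, so the script has to be invoked with an explicit path: ./init.sh (or bash init.sh). A self-contained demonstration with a stand-in script (demo.sh is hypothetical, standing in for init.sh):

```shell
# create a stand-in executable script in the current directory
printf '#!/bin/sh\necho started\n' > demo.sh
chmod +x demo.sh

# a bare "demo.sh" would fail just like "init.sh" did;
# the explicit path is found and run
OUT=$(./demo.sh)
echo "$OUT"   # prints: started
```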

Fatal: invalid genesis file: missing 0x prefix for hex data

This is a fresh install... do I need to install/start geth on my Ubuntu instance? I'm running Mac OS.

13inch:7nodes tom$ pwd
/Users/tom/GitHub/quorum-examples/examples/7nodes
13inch:7nodes tom$ ls
README.md raft stop.sh
bench-private-async.sh raft-init.sh tm1.conf
bench-private-sync.sh raft-start.sh tm2.conf
bench-public-sync.sh runscript.sh tm3.conf
genesis.json script1.js tm4.conf
init.sh send-private-async.lua tm5.conf
keys send-private-sync.lua tm6.conf
passwords.txt send-public-sync.lua tm7.conf
qdata start.sh
13inch:7nodes tom$ ./init.sh
[*] Cleaning up temporary data directories
[*] Configuring node 1
Fatal: invalid genesis file: missing 0x prefix for hex data
13inch:7nodes tom$

vagrant up failing on OSX VBox 5.6.2

I am trying to run vagrant up. It gets through most of the provisioning; however, later in the script it throws an error and exits...

    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm1.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm2.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm3.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm7.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm6.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm4.ipc' for reading: Operation not supported
    default: cp: cannot open '/vagrant/examples/7nodes/qdata/tm5.ipc' for reading: Operation not supported

bitrot in the documentation?

I only recently learned of quorum and deployed it with quorum-examples on virtualbox & vagrant. I hit some gotchas trying to reconcile the documentation to reality. These include:

  • raft-start.sh is supposed to send an initial transaction, according to README, but the last line of it echoes a command to invoke something else to send that transaction. That's easily missed. In addition, the command it says to run is runscript while the file is runscript.sh.
  • The log doesn't seem to have the address, despite README saying

The address can be found in the node 1's log file 7nodes/qdata/logs/1.log, or alternatively by reading the contractAddress param after calling eth.getTransactionReceipt(txHash)

If in fact the address is there, please be clearer how to find it. The only address I found was of the wallet, but I don't think that's what you meant.

  • Finally, I didn't actually get the demo to progress as expected. When I opened the console on nodes 1, 4, and 7 and updated the address to reflect the new transaction I had created manually with runscript.sh, I saw 42 on node 1 (correct), 0 on node 4 (correct, though indistinguishable from having a bad address), and 0 on node 7 (incorrect).

raftport querystring missing when running raft 7 nodes example

Hi,

When running ./raft-init.sh followed by ./raft-start.sh, the nodes fail to start up with this error:

Fatal: raftport querystring parameter not specified in static-node enode ID: enode://ac6b1096ca56b9f6d004b779ae3728bf83f8e22453404cc3cef16a3d9b96608bc67c4b30db88e0a5a6c6390213f7acbe1153ff6d23ce57380104288ae19373ef@127.0.0.1:21000?discport=0. please check your static-nodes.json file.

From the raft docs, it looks like the static-nodes.json file is missing the raftport querystring described here: https://github.com/jpmorganchase/quorum/blob/master/raft/doc.md#initial-configuration-and-enacting-membership-changes. Could I be running the script incorrectly, or does something else handle adding the raftport?
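If I understand the linked doc correctly, each enode URI in static-nodes.json needs a raftport query parameter appended. A hypothetical fixed entry for the node in my error message might look like this (the port value 50401 is my guess based on the example's usual port scheme, not something I've verified):

```json
[
  "enode://ac6b1096ca56b9f6d004b779ae3728bf83f8e22453404cc3cef16a3d9b96608bc67c4b30db88e0a5a6c6390213f7acbe1153ff6d23ce57380104288ae19373ef@127.0.0.1:21000?discport=0&raftport=50401"
]
```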

-Charles
