

Hyperledger Fabric Samples


You can use Fabric samples to get started working with Hyperledger Fabric, explore important Fabric features, and learn how to build applications that can interact with blockchain networks using the Fabric SDKs. To learn more about Hyperledger Fabric, visit the Fabric documentation.

Getting started with the Fabric samples

To use the Fabric samples, you need to download the Fabric Docker images and the Fabric CLI tools. First, make sure that you have installed all of the Fabric prerequisites. You can then follow the instructions to Install the Fabric Samples, Binaries, and Docker Images in the Fabric documentation. In addition to downloading the Fabric images and tool binaries, the Fabric samples will also be cloned to your local machine.

Test network

The Fabric test network in the samples repository provides a Docker Compose-based test network with two organization peers and an ordering service node. You can use it on your local machine to run the samples listed below. You can also use it to deploy and test your own Fabric chaincodes and applications. To get started, see the test network tutorial.

The Kubernetes Test Network sample builds upon the Compose network, constructing a Fabric network with peer, orderer, and CA infrastructure nodes running on Kubernetes. In addition to providing a sample Kubernetes guide, the Kube test network can be used as a platform to author and debug cloud-ready Fabric client applications on a development or CI workstation.

Asset transfer samples and tutorials

The asset transfer series provides sample smart contracts and applications that demonstrate how to store and transfer assets using Hyperledger Fabric. Each sample and its associated tutorial demonstrates a different core capability of Hyperledger Fabric. The Basic sample introduces how to write smart contracts and how to interact with a Fabric network using the Fabric SDKs. The Ledger queries, Private data, and State-based endorsement samples demonstrate those additional capabilities. Finally, the Secured agreement sample shows how to bring these capabilities together to securely transfer an asset in a more realistic transfer scenario.
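To make the contract shape concrete, here is a minimal sketch of the Basic sample's create/read/transfer logic in plain JavaScript, with an in-memory Map standing in for the ledger world state. This is an illustration only: the real sample is built on fabric-contract-api and reads/writes state through ctx.stub, and the class and method names below are simplifications.

```javascript
// Sketch of the Basic asset-transfer contract logic. The Map below is a
// hypothetical stand-in for the ledger world state; the actual sample
// serializes assets and stores them via ctx.stub.putState/getState.
class AssetTransfer {
  constructor() {
    this.ledger = new Map();
  }

  createAsset(id, color, size, owner, appraisedValue) {
    if (this.ledger.has(id)) {
      throw new Error(`The asset ${id} already exists`);
    }
    const asset = { ID: id, Color: color, Size: size, Owner: owner, AppraisedValue: appraisedValue };
    // The world state stores serialized values, so we serialize here too.
    this.ledger.set(id, JSON.stringify(asset));
    return asset;
  }

  readAsset(id) {
    const data = this.ledger.get(id);
    if (!data) {
      throw new Error(`The asset ${id} does not exist`);
    }
    return JSON.parse(data);
  }

  transferAsset(id, newOwner) {
    const asset = this.readAsset(id);
    const oldOwner = asset.Owner;
    asset.Owner = newOwner;
    this.ledger.set(id, JSON.stringify(asset));
    return oldOwner;
  }
}
```

The real contract follows the same read-modify-write pattern, but every read and write becomes part of the transaction's read/write set and is endorsed and ordered before it reaches the ledger.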

Each smart contract sample below lists its tutorial, smart contract languages, and application languages.

  • Basic: A sample smart contract that allows you to create and transfer an asset by putting data on the ledger and retrieving it. This sample is recommended for new Fabric users. Tutorial: Writing your first application. Smart contract languages: Go, JavaScript, TypeScript, Java. Application languages: Go, JavaScript, TypeScript, Java.

  • Ledger queries: Demonstrates range queries and transaction updates using range queries (applicable to both LevelDB and CouchDB state databases), and how to deploy an index with your chaincode to support JSON queries (applicable to the CouchDB state database only). Tutorial: Using CouchDB. Smart contract languages: Go, JavaScript. Application languages: Java, JavaScript.

  • Private data: Demonstrates the use of private data collections, how to manage private data collections with the chaincode lifecycle, and how the private data hash can be used to verify private data on the ledger. Also demonstrates how to control asset updates and transfers using client-based ownership and access control. Tutorial: Using Private Data. Smart contract languages: Go, Java. Application languages: JavaScript.

  • State-based endorsement: Demonstrates how to override the chaincode-level endorsement policy to set endorsement policies at the key level (data/asset level). Tutorial: Using State-based endorsement. Smart contract languages: Java, TypeScript. Application languages: JavaScript.

  • Secured agreement: A smart contract that uses implicit private data collections, state-based endorsement, and organization-based ownership and access control to keep data private and securely transfer an asset with the consent of both the current owner and the buyer. Tutorial: Secured asset transfer. Smart contract languages: Go. Application languages: JavaScript.

  • Events: Demonstrates how smart contracts can emit events that are read by the applications interacting with the network. Tutorial: README. Smart contract languages: JavaScript, Java. Application languages: JavaScript.

  • Attribute-based access control: Demonstrates attribute- and identity-based access control using a simple asset transfer scenario. Tutorial: README. Smart contract languages: Go. Application languages: none.

Additional samples

Additional samples demonstrate various Fabric use cases and application patterns.

  • Off-chain data: Learn how to use block events to build an off-chain database for reporting and analytics. Documentation: Peer channel-based event services.

  • Token SDK: A sample REST API around the Hyperledger Labs Token SDK for privacy-friendly (zero-knowledge proof) UTXO transactions. Documentation: README.

  • Token ERC-20: A smart contract demonstrating how to create and transfer fungible tokens using an account-based model. Documentation: README.

  • Token UTXO: A smart contract demonstrating how to create and transfer fungible tokens using a UTXO (unspent transaction output) model. Documentation: README.

  • Token ERC-1155: A smart contract demonstrating how to create and transfer multiple tokens (both fungible and non-fungible) using an account-based model. Documentation: README.

  • Token ERC-721: A smart contract demonstrating how to create and transfer non-fungible tokens using an account-based model. Documentation: README.

  • High throughput: Learn how to design your smart contract to avoid transaction collisions in high-volume environments. Documentation: README.

  • Simple Auction: Run an auction where bids are kept private until the auction is closed, after which users can reveal their bids. Documentation: README.

  • Dutch Auction: Run an auction in which multiple items of the same type can be sold to more than one buyer. This example also includes the ability to add an auditor organization. Documentation: README.
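The collision-avoidance idea behind the High throughput sample can be sketched in plain JavaScript: instead of many concurrent transactions rewriting one hot key (which causes MVCC read/write conflicts), each update appends its own delta row under a unique key, and the total is aggregated at read time. The class below is a hypothetical in-memory illustration, not the sample's actual chaincode.

```javascript
// Sketch of the delta pattern for high-volume updates. The Map stands in
// for the world state; in real chaincode each delta would be a composite
// key written with putState, and aggregation would use a range query.
class DeltaCounter {
  constructor() {
    this.state = new Map();
    this.seq = 0; // stands in for a unique per-transaction suffix (e.g. txID)
  }

  // Each update writes a brand-new key, so concurrent transactions never
  // read or write the same key and therefore never conflict.
  addDelta(name, value) {
    const key = `${name}:${this.seq++}:${value}`;
    this.state.set(key, String(value));
  }

  // Reading the total scans all delta rows for the variable instead of
  // reading a single hot key.
  getTotal(name) {
    let total = 0;
    for (const key of this.state.keys()) {
      if (key.startsWith(`${name}:`)) {
        total += Number(this.state.get(key));
      }
    }
    return total;
  }
}
```

The trade-off is that reads become range scans, so the sample also typically prunes or compacts delta rows periodically.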

License

Hyperledger Project source code files are made available under the Apache License, Version 2.0 (Apache-2.0), located in the LICENSE file. Hyperledger Project documentation files are made available under the Creative Commons Attribution 4.0 International License (CC-BY-4.0), available at http://creativecommons.org/licenses/by/4.0/.

fabric-samples's People

Contributors

asararatnakar, bestbeforetoday, c0rwin, christo4ferris, davidkel, denali49, denyeart, dependabot[bot], dereckluo, fravlaca, harrisob, jimthematrix, jkneubuh, jrc-ibm, jt-nti, lehors, lindluni, mastersingh24, mbwhite, nikhil550, r2roc, rajat-dlt, rameshthoomu, ryjones, sapthasurendran, satota2, stephyee, wlahti, yacovm, yuki-kon


fabric-samples's Issues

test-network-k8s: "./network chaincode deploy" crashed on installing chaincode, error on invoke

Hey,

so I installed and verified everything according to "Quickstart", but it seems that the deploy command did not finish:

root@vmblockchainkubernetes:/home/michal/test-network-k8s# ./network chaincode deploy
Deploying chaincode "asset-transfer-basic":
✅ - Packaging chaincode folder chaincode/asset-transfer-basic ...
✅ - Transferring chaincode archive to org1 ...
✅ - Installing chaincode for org org1 ...
☠️

When trying to invoke the chaincode I get following message:

root@vmblockchainkubernetes:/home/michal/test-network-k8s# ./network chaincode invoke '{"Args":["CreateAsset","1","blue","35","tom","1000"]}'
Error: endorsement failure during invoke. response: status:500 message:"make sure the chaincode asset-transfer-basic has been successfully defined on channel mychannel and try again: chaincode asset-transfer-basic not found"

I verified pod's logs:

2021-12-12 11:04:01.981 UTC [core.comm] ServerHandshake -> DEBU 2e9 Server TLS handshake completed in 1.104809ms server=PeerServer remoteaddress=10.244.0.22:32786
2021-12-12 11:04:01.983 UTC [core.comm] ServerHandshake -> DEBU 2ea Server TLS handshake completed in 973.808µs server=PeerServer remoteaddress=10.244.0.22:32788
2021-12-12 11:04:01.986 UTC [endorser] ProcessProposal -> DEBU 2eb request from 10.244.0.22:32786
2021-12-12 11:04:01.987 UTC [endorser] Validate -> DEBU 2ec creator is valid channel=mychannel txID=62681fff mspID=Org1MSP
2021-12-12 11:04:01.987 UTC [endorser] Validate -> DEBU 2ed signature is valid channel=mychannel txID=62681fff mspID=Org1MSP
2021-12-12 11:04:01.987 UTC [blkstorage] retrieveTransactionByID -> DEBU 2ee retrieveTransactionByID() - txId = [62681fff23b2d68bdce353e935674f3348db980ed02a2e1624df1f9a22a11784]
2021-12-12 11:04:01.987 UTC [aclmgmt] CheckACL -> DEBU 2ef acl policy not found in config for resource peer/Propose
2021-12-12 11:04:01.987 UTC [lockbasedtxmgr] NewTxSimulator -> DEBU 2f0 constructing new tx simulator
2021-12-12 11:04:01.987 UTC [lockbasedtxmgr] newQueryExecutor -> DEBU 2f1 constructing new query executor txid = [62681fff23b2d68bdce353e935674f3348db980ed02a2e1624df1f9a22a11784]
2021-12-12 11:04:01.988 UTC [lockbasedtxmgr] newTxSimulator -> DEBU 2f2 constructing new tx simulator txid = [62681fff23b2d68bdce353e935674f3348db980ed02a2e1624df1f9a22a11784]
2021-12-12 11:04:01.988 UTC [stateleveldb] GetState -> DEBU 2f3 GetState(). ns=_lifecycle, key=namespaces/fields/asset-transfer-basic/Sequence
2021-12-12 11:04:01.988 UTC [stateleveldb] GetState -> DEBU 2f4 GetState(). ns=lscc, key=asset-transfer-basic
2021-12-12 11:04:01.988 UTC [lockbasedtxmgr] Done -> DEBU 2f5 Done with transaction simulation / query execution [62681fff23b2d68bdce353e935674f3348db980ed02a2e1624df1f9a22a11784]
2021-12-12 11:04:01.988 UTC [endorser] ProcessProposal -> WARN 2f6 Failed to invoke chaincode channel=mychannel chaincode=asset-transfer-basic error="make sure the chaincode asset-transfer-basic has been successfully defined on channel mychannel and try again: chaincode asset-transfer-basic not found"
2021-12-12 11:04:01.988 UTC [comm.grpc.server] 1 -> INFO 2f7 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.244.0.22:32786 grpc.code=OK grpc.call_duration=1.711115ms

OK, so it seems it's not installed. But when I checked the logs right after the deployment, it looks like the chaincode was installed properly:

2021-12-12 11:01:52.909 UTC [core.comm] ServerHandshake -> DEBU 2ca Server TLS handshake completed in 1.343111ms server=PeerServer remoteaddress=10.244.0.22:59428
2021-12-12 11:01:52.912 UTC [core.comm] ServerHandshake -> DEBU 2cb Server TLS handshake completed in 1.101309ms server=PeerServer remoteaddress=10.244.0.22:59430
2021-12-12 11:01:52.913 UTC [endorser] ProcessProposal -> DEBU 2cc request from 10.244.0.22:59428
2021-12-12 11:01:52.913 UTC [endorser] Validate -> DEBU 2cd creator is valid channel= txID=7d94c182 mspID=Org1MSP
2021-12-12 11:01:52.913 UTC [endorser] Validate -> DEBU 2ce signature is valid channel= txID=7d94c182 mspID=Org1MSP
2021-12-12 11:01:52.913 UTC [chaincode] CheckInvocation -> DEBU 2cf [7d94c182] getting chaincode data for _lifecycle on channel
2021-12-12 11:01:52.913 UTC [chaincode] Execute -> DEBU 2d0 Entry
2021-12-12 11:01:52.914 UTC [lifecycle] InstallChaincode -> DEBU 2d1 received invocation of InstallChaincode for install package 1f8b0800a0d6b56100034bce4f49d52b492cd24baf62a015300002331313300d04e8b4...
2021-12-12 11:01:52.914 UTC [ccprovider] MetadataAsTarEntries -> DEBU 2d2 Created metadata tar
2021-12-12 11:01:52.919 UTC [chaincode.externalbuilder.ccs-builder] waitForExit -> INFO 2d3 ::Detect 002 command=detect
2021-12-12 11:01:52.919 UTC [chaincode.externalbuilder.ccs-builder] waitForExit -> INFO 2d4 ::Type detected as external command=detect
2021-12-12 11:01:52.922 UTC [chaincode.externalbuilder.ccs-builder] waitForExit -> INFO 2d5 ::Build command=build
2021-12-12 11:01:52.922 UTC [chaincode.externalbuilder.ccs-builder] waitForExit -> INFO 2d6 ::Type detected as external command=build
2021-12-12 11:01:52.925 UTC [chaincode.externalbuilder.ccs-builder] waitForExit -> INFO 2d7 ::Build command=release
2021-12-12 11:01:52.925 UTC [chaincode.externalbuilder.ccs-builder] waitForExit -> INFO 2d8 ::Type detected as external command=release
2021-12-12 11:01:52.926 UTC [lifecycle] ProcessInstallEvent -> DEBU 2d9 ProcessInstallEvent() - localChaincode = &{basic_1.0:43298a391987f3a5af7609957d1306343b9999dc32e799f95eca1525b6ef95fb external  basic_1.0}
2021-12-12 11:01:52.926 UTC [chaincode.externalbuilder] PackageMetadata -> DEBU 2da Walking package release dir '/var/hyperledger/fabric/data/org1-peer1.org1.example.com/externalbuilder/builds/basic_1.0-43298a391987f3a5af7609957d1306343b9999dc32e799f95eca1525b6ef95fb/release'
2021-12-12 11:01:52.926 UTC [chaincode.externalbuilder] func1 -> DEBU 2db Adding file '/var/hyperledger/fabric/data/org1-peer1.org1.example.com/externalbuilder/builds/basic_1.0-43298a391987f3a5af7609957d1306343b9999dc32e799f95eca1525b6ef95fb/release' to tar with header name 'META-INF/'
2021-12-12 11:01:52.926 UTC [chaincode.externalbuilder] func1 -> DEBU 2dc Adding file '/var/hyperledger/fabric/data/org1-peer1.org1.example.com/externalbuilder/builds/basic_1.0-43298a391987f3a5af7609957d1306343b9999dc32e799f95eca1525b6ef95fb/release/chaincode' to tar with header name 'META-INF/chaincode/'
2021-12-12 11:01:52.927 UTC [chaincode.externalbuilder] func1 -> DEBU 2de Adding file '/var/hyperledger/fabric/data/org1-peer1.org1.example.com/externalbuilder/builds/basic_1.0-43298a391987f3a5af7609957d1306343b9999dc32e799f95eca1525b6ef95fb/release/chaincode/server' to tar with header name 'META-INF/chaincode/server/'
2021-12-12 11:01:52.927 UTC [chaincode.externalbuilder] func1 -> DEBU 2df Adding file '/var/hyperledger/fabric/data/org1-peer1.org1.example.com/externalbuilder/builds/basic_1.0-43298a391987f3a5af7609957d1306343b9999dc32e799f95eca1525b6ef95fb/release/chaincode/server/connection.json' to tar with header name 'META-INF/chaincode/server/connection.json'
2021-12-12 11:01:52.926 UTC [lifecycle] Work -> DEBU 2dd skipping build of chaincode 'basic_1.0:43298a391987f3a5af7609957d1306343b9999dc32e799f95eca1525b6ef95fb' as it is already in progress
2021-12-12 11:01:52.927 UTC [lifecycle] InstallChaincode -> INFO 2e0 Successfully installed chaincode with package ID 'basic_1.0:43298a391987f3a5af7609957d1306343b9999dc32e799f95eca1525b6ef95fb'
2021-12-12 11:01:52.927 UTC [chaincode] handleMessage -> DEBU 2e1 [7d94c182] Fabric side handling ChaincodeMessage of type: COMPLETED in state ready
2021-12-12 11:01:52.927 UTC [chaincode] Notify -> DEBU 2e2 [7d94c182] notifying Txid:7d94c182fe8037a629a96e0aa5468b78d45446f40726d778496f890b9d29d39a, channelID:
2021-12-12 11:01:52.927 UTC [chaincode] Execute -> DEBU 2e3 Exit
2021-12-12 11:01:52.927 UTC [endorser] callChaincode -> INFO 2e4 finished chaincode: _lifecycle duration: 13ms channel= txID=7d94c182
2021-12-12 11:01:52.927 UTC [comm.grpc.server] 1 -> INFO 2e5 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.244.0.22:59428 grpc.code=OK grpc.call_duration=14.416925ms

So I checked with the "peer" command whether the chaincode was installed, and here is the result:

root@vmblockchainkubernetes:/home/michal/test-network-k8s# kubectl exec -n test-network deploy/org1-peer1 -- peer chaincode list --installed
Defaulted container "main" out of: main, fabric-ccs-builder (init)
2021-12-12 11:08:59.469 UTC [bccsp] GetDefault -> DEBU 001 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2021-12-12 11:08:59.471 UTC [bccsp] GetDefault -> DEBU 002 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2021-12-12 11:08:59.483 UTC [bccsp] GetDefault -> DEBU 003 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2021-12-12 11:08:59.485 UTC [bccsp_sw] openKeyStore -> DEBU 004 KeyStore opened at [/var/hyperledger/fabric/organizations/peerOrganizations/org1.example.com/peers/org1-peer1.org1.example.com/msp/keystore]...done
2021-12-12 11:08:59.486 UTC [bccsp_sw] loadPrivateKey -> DEBU 005 Loading private key [ce2898fb23e9a5ef541fc1f6189f7ef506a4ed0d57d2b1675ef474c3cbd304ae] at [/var/hyperledger/fabric/organizations/peerOrganizations/org1.example.com/peers/org1-peer1.org1.example.com/msp/keystore/ce2898fb23e9a5ef541fc1f6189f7ef506a4ed0d57d2b1675ef474c3cbd304ae_sk]...
2021-12-12 11:08:59.497 UTC [comm.tls] ClientHandshake -> DEBU 006 Client TLS handshake completed in 1.818416ms remoteaddress=10.96.42.195:7051
2021-12-12 11:08:59.500 UTC [comm.tls] ClientHandshake -> DEBU 007 Client TLS handshake completed in 1.397112ms remoteaddress=10.96.42.195:7051
Error: bad response: 500 - access denied for [getinstalledchaincodes]: Failed verifying that proposal's creator satisfies local MSP principal during channelless check policy with policy [Admins]: [The identity is not an admin under this MSP [Org1MSP]: The identity does not contain OU [ADMIN], MSP: [Org1MSP]]
command terminated with exit code 1

It seems that maybe something is not OK with the certificates. I had no errors though, and it seems all pods are deployed:

NAME                                  READY   STATUS    RESTARTS   AGE
pod/org0-admin-cli-54f58b4854-vrt5q   1/1     Running   0          51m
pod/org0-ecert-ca-794c88b566-ltxvf    1/1     Running   0          53m
pod/org0-orderer1-7c5b8ffd8-8d2hp     1/1     Running   0          53m
pod/org0-orderer2-66fbccf96-2tj8r     1/1     Running   0          53m
pod/org0-orderer3-5d66b57d64-phnsq    1/1     Running   0          53m
pod/org0-tls-ca-7b57d675f8-5nftm      1/1     Running   0          54m
pod/org1-admin-cli-78945bf58-nvb87    1/1     Running   0          51m
pod/org1-ecert-ca-566f79bc4d-4z5l9    1/1     Running   0          53m
pod/org1-peer1-54f767569c-chcsp       1/1     Running   0          53m
pod/org1-peer2-b5659c784-glx5t        1/1     Running   0          53m
pod/org1-tls-ca-c6685b4cb-k8smd       1/1     Running   0          54m
pod/org2-admin-cli-76d6784799-mdc4q   1/1     Running   0          51m
pod/org2-ecert-ca-5d55c7f974-b8gsm    1/1     Running   0          53m
pod/org2-peer1-6f97b8df86-zl9hl       1/1     Running   0          53m
pod/org2-peer2-585cc87cb5-jf64t       1/1     Running   0          53m
pod/org2-tls-ca-85959dcf9d-2hqmj      1/1     Running   0          54m

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/org0-ecert-ca           ClusterIP   10.96.69.255    <none>        443/TCP                      53m
service/org0-orderer1           ClusterIP   10.96.113.30    <none>        6050/TCP,8443/TCP,9443/TCP   53m
service/org0-orderer2           ClusterIP   10.96.124.49    <none>        6050/TCP,8443/TCP,9443/TCP   53m
service/org0-orderer3           ClusterIP   10.96.145.227   <none>        6050/TCP,8443/TCP,9443/TCP   53m
service/org0-tls-ca             ClusterIP   10.96.219.138   <none>        443/TCP                      54m
service/org1-ecert-ca           ClusterIP   10.96.185.225   <none>        443/TCP                      53m
service/org1-peer-gateway-svc   ClusterIP   10.96.209.244   <none>        7051/TCP                     53m
service/org1-peer1              ClusterIP   10.96.42.195    <none>        7051/TCP,7052/TCP,9443/TCP   53m
service/org1-peer2              ClusterIP   10.96.183.96    <none>        7051/TCP,7052/TCP,9443/TCP   53m
service/org1-tls-ca             ClusterIP   10.96.111.103   <none>        443/TCP                      54m
service/org2-ecert-ca           ClusterIP   10.96.14.117    <none>        443/TCP                      53m
service/org2-peer-gateway-svc   ClusterIP   10.96.157.9     <none>        7051/TCP                     53m
service/org2-peer1              ClusterIP   10.96.232.145   <none>        7051/TCP,7052/TCP,9443/TCP   53m
service/org2-peer2              ClusterIP   10.96.197.98    <none>        7051/TCP,7052/TCP,9443/TCP   53m
service/org2-tls-ca             ClusterIP   10.96.61.214    <none>        443/TCP                      54m

To be honest I have no idea how to continue from here; it would be awesome if someone could give me a hand. I am performing this install on Ubuntu 18 on an Azure VM B2ms (2 vCPU, 8 GB RAM); I'm planning to move to AKS, but for now I use a local "kind" cluster.

Cheers,
Sullson

Evaluate the compose test-network using Rancher Desktop and Moby / dockerd

Run a test of the test-network (docker-compose) using a combination of Rancher Desktop and moby/containerd.

By default Rancher will get set up to run with containerd and nerdctl / nerdctl compose. There are a couple of edge conditions and differences in how nerdctl compose works that will force us to restructure the test network descriptors:

  • volume mounts with explicit file references to missing files will not mount in the container.

  • nerdctl compose does not support "overlay compose descriptors." This is used in the test network setup of couchdb.

  • ps and ls actions are not supported by nerdctl. This makes the network teardown a bit ... messy.

Instead of refactoring the compose descriptors to juggle these compatibility issues, configure Rancher to use moby/dockerd behind the scenes and see if this gets the test network up and running with minimal issue. The steps should be:

  1. Turn "off" Docker Desktop.
  2. Install Rancher Desktop.
  3. Switch the Kubernetes Settings from containerd -> dockerd
  4. Start the test network (compose)

Identify any rough edges with this approach and report back to Discussions #594 on results.

Add example of using Prometheus with a sample Grafana dashboard to the test network

A useful feature for the test network would be the ability to start a separate Prometheus and Grafana setup configured to collect and display metrics for the test network. It would demonstrate how to quickly set up an environment to capture metrics in real time and display them. It should:

  • update the test network to generate prometheus metrics
  • add the ability to separately start a Prometheus and Grafana server, either in the Fabric test Docker network or as a separate network
  • provide a sample prometheus configuration that connects to the fabric test network to collect metrics
  • provide a sample grafana dashboard to collect some of the more interesting statistics
  • provide a README explaining how to set up this environment

This issue covers the test-network only and does not include the nano or K8s samples.
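As a starting point, a scrape configuration for such a setup might look like the fragment below. This is a sketch only: the hostnames and operations ports are assumptions and would need to match the test network's actual CORE_OPERATIONS_LISTENADDRESS / ORDERER_OPERATIONS_LISTENADDRESS settings, and the peers and orderer must run with the Prometheus metrics provider enabled (CORE_METRICS_PROVIDER=prometheus / ORDERER_METRICS_PROVIDER=prometheus).

```yaml
# Hypothetical prometheus.yml fragment for scraping Fabric operations
# endpoints; all targets and ports below are assumptions, not the test
# network's confirmed values.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "fabric-peers"
    metrics_path: /metrics
    static_configs:
      - targets: ["peer0.org1.example.com:9444", "peer0.org2.example.com:9445"]
  - job_name: "fabric-orderer"
    metrics_path: /metrics
    static_configs:
      - targets: ["orderer.example.com:9443"]
```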

test-network-k8s: `./network kind` error creating cluster should be reported in the terminal

I have docker and kubectl installed as snaps, and since I'm running a VPN I needed to add a manual network using brctl per docker/for-linux #1105.

My ./network kind command is only getting a little further however:

$ ./network kind
Initializing KIND cluster "kind":
✅ - Pulling docker images for Fabric 2.3.2 ...
⚠️  - Creating cluster "kind" ...
$

This is pretty, but not that helpful. At the very least the user should get a message to check out the file network-debug.log, but publishing the error message to the console would be even better. I've attached that file (network-debug.log) for reference (and maybe for help with whatever my config issue is), but here are the relevant lines from it too (which look similar to my earlier VPN issue):

Status: Downloaded newer image for hyperledger/fabric-tools:2.3.2
docker.io/hyperledger/fabric-tools:2.3.2
Deleting cluster "kind" ...
Creating cluster "kind" ...
 • Ensuring node image (kindest/node:v1.21.1) 🖼  ...
ERROR: failed to create cluster: failed to ensure docker network: command "docker network create -d=bridge -o com.docker.network.bridge.enable_ip_masquerade=true -o com.docker.network.driver.mtu=1500 --ipv6 --subnet fc00:f853:ccd:e793::/64 kind" failed with error: exit status 1

Command Output: Error response from daemon: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
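For reference, one common workaround for this "non-overlapping IPv4 address pool" error (an assumption about this environment, not a confirmed fix for the reporter's setup) is to give dockerd additional address pools to draw from in /etc/docker/daemon.json and restart the daemon. The base range below is arbitrary and would need to avoid the VPN's routes:

```json
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
```

Pruning unused networks with `docker network prune` can also free up pools if many stale bridge networks have accumulated.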

test-network-k8s: `./network up` cannot launch the peer pods

The console log looks like this:
Launching network "test-network":
✅ - Creating namespace "test-network" ...
✅ - Provisioning volume storage ...
✅ - Creating fabric config maps ...
✅ - Launching TLS CAs ...
✅ - Enrolling bootstrap TLS CA users ...
✅ - Registering and enrolling ECert CA bootstrap users ...
✅ - Launching ECert CAs ...
✅ - Enrolling bootstrap ECert CA users ...
✅ - Creating local node MSP ...
✅ - Launching orderers ...
⚠️ - Launching peers ...

Part of the debug log:
configmap/org1-peer1-config created
deployment.apps/org1-peer1 created
service/org1-peer1 created
configmap/org1-peer2-config created
deployment.apps/org1-peer2 created
service/org1-peer2 created
configmap/org2-peer1-config created
deployment.apps/org2-peer1 created
service/org2-peer1 created
configmap/org2-peer2-config created
deployment.apps/org2-peer2 created
service/org2-peer2 created
Waiting for deployment "org1-peer1" rollout to finish: 0 of 1 updated replicas are available...
error: deployment "org1-peer1" exceeded its progress deadline

Current stable version according to documentation

The documentation corresponding to the fabric-samples repository is quite old and is not easy to work with. For instance, the documentation refers to a script byfn.sh which does not exist in the repository. This makes it really challenging to work with the documentation, given it is the only official source available as of now.

It would be helpful if the documentation were updated, but that will take a long time, so in the meantime can somebody suggest a version that works with the documentation? And preferably, how can I install that version?

currentState variable is not defined in the State object nor set in its subclass CP

Hello,
I get this error while deserializing the byte[] cp into the CP object.

org.json.JSONException: JSONObject["currentState"] not found while deserializing cp using java

And I found out that the object being serialized (CP --> byte[]) does not have the variable currentState set.

To reproduce :
Follow the cp tutorial using Java until the response of the issue transaction.
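The failure mode can be reproduced outside Fabric with a plain JavaScript sketch. The class below is a hypothetical simplification of the tutorial's State class, not its actual code: if currentState is only assigned by a lifecycle helper and the object is serialized before any helper runs, the field is simply absent from the JSON, and deserialization that requires it fails with exactly this kind of "currentState not found" error.

```javascript
// Hypothetical reproduction of the bug: currentState is never initialized
// in the constructor, only set by lifecycle helpers like setIssued().
class State {
  constructor(stateClass, keyParts) {
    this.class = stateClass;
    this.key = keyParts.join(':');
    // note: this.currentState is deliberately NOT initialized here
  }

  setIssued() { this.currentState = 'ISSUED'; }

  serialize() { return Buffer.from(JSON.stringify(this)); }

  static deserialize(data) {
    const json = JSON.parse(data.toString());
    if (!('currentState' in json)) {
      // mirrors org.json.JSONException: JSONObject["currentState"] not found
      throw new Error('JSONObject["currentState"] not found');
    }
    return json;
  }
}
```

So the fix direction suggested by this sketch is to ensure currentState is assigned (by the constructor or a lifecycle helper) before the object is serialized.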

test-network-k8s: Connection profile using gateway service for peer high availability

I am trying to configure the connection profile to connect to the gateway-svc as mentioned in the HIGH_AVAILABILITY file.
I am sure I am missing something, because I am getting the errors described below.
I would appreciate it if someone could give me some directions or hints on how to implement this. Thank you in advance!

Error log from the application log file that uses the connection profile options:

2022-02-08T13:19:24.972Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Endorser- name: org1-peer-gateway-svc, url:grpcs://org1-peer-gateway-svc:7051, connected:false, connectAttempted:true
2022-02-08T13:19:24.975Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server org1-peer-gateway-svc url:grpcs://org1-peer-gateway-svc:7051 timeout:100000
2022-02-08T13:19:25.034Z - info: [NetworkConfig]: buildPeer - Unable to connect to the endorser org1-peer-gateway-svc due to Error: 
Failed to connect before the deadline on Endorser- name: org1-peer-gateway-svc, url:grpcs://org1-peer-gateway-svc:7051, connected:false, connectAttempted:true
    at checkState (/app/node_modules/@grpc/grpc-js/src/client.ts:172:18)
    at Timeout._onTimeout (/app/node_modules/@grpc/grpc-js/src/channel.ts:708:9)
    at listOnTimeout (internal/timers.js:557:17)
    at processTimers (internal/timers.js:500:7) {
  connectFailed: true
}

Error log in the peers log files:

2022-02-08 13:24:51.423 UTC 0612 ERRO [core.comm] ServerHandshake -> Server TLS handshake failed in 32.620981ms with error EOF server=PeerServer remoteaddress=10.244.0.32:33702
2022-02-08 13:24:53.706 UTC 0613 ERRO [core.comm] ServerHandshake -> Server TLS handshake failed in 6.420937ms with error EOF server=PeerServer remoteaddress=10.244.0.32:33706
2022-02-08 13:24:58.274 UTC 0614 ERRO [core.comm] ServerHandshake -> Server TLS handshake failed in 7.273443ms with error EOF server=PeerServer remoteaddress=10.244.0.32:33710
2022-02-08 13:25:10.115 UTC 0615 ERRO [core.comm] ServerHandshake -> Server TLS handshake failed in 6.862989ms with error EOF server=PeerServer remoteaddress=10.244.0.32:33714

Connection profile config options regarding the organizations and peers:

"organizations": {
  "Org1": {
    "mspid": "Org1MSP",
    "peers": [
      "org1-peers"
    ],
    "certificateAuthorities": [
      "org1-ca"
    ]
  }
},
"peers": {
  "org1-peers": {
    "url": "grpcs://org1-peer-gateway-svc:7051",
    "tlsCACerts": {
      "pem": "-----BEGIN CERTIFICATE-----\nMIIBdjCC . . . L7KBr\n-----END CERTIFICATE-----\n"
    },
    "grpcOptions": {
      "ssl-target-name-override": "org1-peer-gateway-svc",
      "hostnameOverride": "org1-peer-gateway-svc"
    }
  }
},

Deploying Typescript Chaincodes Fail using DeployCC.sh In CLI Container for Test Network

Using the deployCC.sh script (https://github.com/hyperledger/fabric-samples/blob/main/test-network/scripts/deployCC.sh) from the cli container errors out because npm is not installed. A simple fix: apk add npm.

This fails at:

https://github.com/hyperledger/fabric-samples/blob/main/test-network/scripts/deployCC.sh#L82

Compiling TypeScript code into JavaScript...
/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode-ts /peer
./scripts/deployCC.sh: line 72: npm: command not found
./scripts/deployCC.sh: line 73: npm: command not found
/peer
Finished compiling TypeScript code into JavaScript

Without this, the chaincode is deployed improperly, and you can easily miss the error message because I believe it packages up an empty chaincode package. Another useful feature: if the compile fails, the script should halt and not try to continue publishing a bad chaincode package.

Multiple minor issues running custom chaincode image on host docker container + Workarounds

Hey Team,

I was playing around with custom image of asset-transfer-basic using this tutorial:

https://github.com/hyperledger/fabric-samples/blob/main/test-network-k8s/docs/CHAINCODE.md#build-a-chaincode-docker-image

then

https://github.com/hyperledger/fabric-samples/blob/main/test-network-k8s/docs/CHAINCODE.md#debugging-chaincode

Following the tutorial left some gaps that had to be worked around:

  1. The docker run command should be executed with -d to run the container as a daemon (point 3 of this section):
$ docker run -d \
  --rm \
  --name asset-transfer-basic-debug \
  -e CHAINCODE_ID \
  -e CHAINCODE_SERVER_ADDRESS=0.0.0.0:9999 \
  -p 9999:9999 \
  localhost:5000/asset-transfer-basic
  2. Pods cannot reach the Docker host using the DNS name suggested in connection.json - host.docker.internal:9999 does not work (this section).

As a workaround I simply modified the line to point to the docker host IP:

{
  "address": "192.168.1.10:9999",
  "dial_timeout": "10s",
  "tls_required": false
}

So my question is how does DNS config work, so that the k8s cluster can see the docker host using DNS names?
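Until that question is answered, the workaround above can be scripted instead of hard-coding an IP. This is a hedged sketch assuming KIND on Linux with the default "kind" Docker network (where host.docker.internal is not available by default); the fallback address is just the example IP from above:

```shell
# Resolve the Docker bridge gateway as the host address reachable from
# the KIND nodes; fall back to the example IP if docker is unavailable.
HOST_IP=$(docker network inspect kind \
  --format '{{(index .IPAM.Config 0).Gateway}}' 2>/dev/null || true)
HOST_IP=${HOST_IP:-192.168.1.10}

# Regenerate the chaincode connection.json pointing at the host.
cat > connection.json <<EOF
{
  "address": "${HOST_IP}:9999",
  "dial_timeout": "10s",
  "tls_required": false
}
EOF
```

The bridge gateway address is the host's own address on the "kind" network, so pods routing out through the KIND node can usually reach a port bound on the host there.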

Kube test network : Start prom/grafana metrics services

@fraVlaca and @davidkel have done some great work wiring up Prometheus / Grafana services in the test-network and bare-metal networks derived from fabric-test-operator.

Tracking time-series metrics of system behavior is great. It's virtually impossible to do any useful diagnosis and/or benchmarking on a distributed system without the ability to correlate "what happened" and "when did it happen" at a system-wide level.

Carry forward the metrics work by setting up a yaml descriptor and ingress to launch prom/grafana on the kube test network. This will allow us to poke + prod Fabric networks with a consistent lens across the different runtime platforms.

Maybe:

./network viz

or

./network metrics

?

asset-transfer-ledger-queries/chaincode-javascript and chaincode/marbles02/javascript have pagination issues

The asset-transfer-ledger-queries and marbles02 (JavaScript) chaincodes do not send the ResponseMetadata (bookmark and fetchedRecordsCount) correctly to the application; they read properties that do not exist on the metadata object:

    results.ResponseMetadata = {
        RecordsCount: metadata.fetched_records_count,
        Bookmark: metadata.bookmark,
    };

But the correct structure of the object metadata is:

    // StateQueryResponse
    {
        metadata: { // QueryResponseMetadata
            fetchedRecordsCount: number;
            bookmark: string;
        };
        iterator: StateQueryIterator;
    }

Define best practices for handling errors in chaincode

Note: this is related to the REST sample review and recent discussions on error handling during Fabric Gateway development which have resulted in some changes to chaincode implementations, e.g. Java chaincode.

The asset transfer samples are currently the primary chaincode samples, and they all fail with human-readable errors if the asset exists when it shouldn't, and vice versa. This kind of text error is not good practice and makes error handling in client applications more difficult. As if to prove the point, the samples use slightly different error messages in the different language implementations.

The underlying interface allows chaincode implementations to return errors with an error code, error message, and error payload; however, due to lack of support in the current chaincode and client SDK implementations, I think the error message is the only piece of information you can rely on.

Ignoring the current implementations, how should chaincode report errors, and "business logic" errors in particular?

Being able to use a domain specific error code, human readable message, and domain specific payload all seem potentially useful to me, e.g. error 2035 might indicate that an asset cannot be updated because it is currently being held for inspection, and a payload with the asset's current state, or inspection details could be useful.
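As a concrete illustration of that idea (the field names are hypothetical, not an existing Fabric convention), a structured business error for the inspection scenario above could look like:

```json
{
  "code": 2035,
  "message": "asset cannot be updated: currently held for inspection",
  "details": {
    "assetId": "asset1",
    "state": "HELD_FOR_INSPECTION"
  }
}
```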

One possibility is not to return application/business level "errors" as errors at all. In this scenario the response payload would include the application specific response code, plus any asset, etc. essentially layering another level of error handling on top. It would mean that the failure would be ordered and included on the blockchain for any transactions that are submitted instead of evaluated. That might be desirable for audit purposes in some situations, although it's worth noting that as far as I know there is no way to force transactions to be submitted for ordering. (Potentially client applications could decide whether or not to submit a transaction for ordering based on the response payload?)

Any comments, suggestions, or recommendations?!

See related core Fabric issue: hyperledger/fabric#3154

./network.sh up createChannel runs createChannel before the network is up, so the command fails

example output

./network.sh up createChannel
Creating channel 'mychannel'.
If network is not up, starting nodes with CLI timeout of '5' tries and CLI delay of '3' seconds and using database 'leveldb
Generating channel genesis block 'mychannel.block'
/home/dave/fabric-samples/test-network/../bin/configtxgen
+ configtxgen -profile TwoOrgsApplicationGenesis -outputBlock ./channel-artifacts/mychannel.block -channelID mychannel
2022-01-27 17:51:25.262 GMT 0001 INFO [common.tools.configtxgen] main -> Loading configuration
2022-01-27 17:51:25.266 GMT 0002 INFO [common.tools.configtxgen.localconfig] completeInitialization -> orderer type: etcdraft
2022-01-27 17:51:25.267 GMT 0003 INFO [common.tools.configtxgen.localconfig] completeInitialization -> Orderer.EtcdRaft.Options unset, setting to tick_interval:"500ms" election_tick:10 heartbeat_tick:1 max_inflight_blocks:5 snapshot_interval_size:16777216
2022-01-27 17:51:25.267 GMT 0004 INFO [common.tools.configtxgen.localconfig] Load -> Loaded configuration: /home/dave/fabric-samples/test-network/configtx/configtx.yaml
2022-01-27 17:51:25.269 GMT 0005 INFO [common.tools.configtxgen] doOutputBlock -> Generating genesis block
2022-01-27 17:51:25.269 GMT 0006 INFO [common.tools.configtxgen] doOutputBlock -> Creating application channel genesis block
2022-01-27 17:51:25.269 GMT 0007 INFO [common.tools.configtxgen] doOutputBlock -> Writing genesis block
+ res=0
Creating channel mychannel
Using organization 1
+ osnadmin channel join --channelID mychannel --config-block ./channel-artifacts/mychannel.block -o localhost:7053 --ca-file /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --client-cert /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt --client-key /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
+ res=1
+ osnadmin channel join --channelID mychannel --config-block ./channel-artifacts/mychannel.block -o localhost:7053 --ca-file /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --client-cert /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt --client-key /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
+ res=1
+ osnadmin channel join --channelID mychannel --config-block ./channel-artifacts/mychannel.block -o localhost:7053 --ca-file /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --client-cert /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt --client-key /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
+ res=1
+ osnadmin channel join --channelID mychannel --config-block ./channel-artifacts/mychannel.block -o localhost:7053 --ca-file /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --client-cert /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt --client-key /home/dave/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
Error: Post "https://localhost:7053/participation/v1/channels": dial tcp 127.0.0.1:7053: connect: connection refused

Channel creation failed

The reason is that createChannel checks whether the Docker network is running by testing for the presence of crypto material. There are situations where the material is present but the network is not running; it would be good if network.sh detected this and brought the network up regardless.

function createChannel() {
  # Bring up the network if it is not already up.

  if [ ! -d "organizations/peerOrganizations" ]; then
    infoln "Bringing up network"
    networkUp
  fi

This is the code that would need better detection of whether the network is actually up.
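A hedged sketch of stronger detection (network_is_up is a hypothetical helper; the container name pattern matches the test network's compose files):

```shell
# Consider the network "up" only if the crypto material exists AND at
# least one expected container is actually running.
network_is_up() {
  [ -d "organizations/peerOrganizations" ] || return 1
  [ -n "$(docker ps --filter 'name=peer0.org1.example.com' --quiet 2>/dev/null)" ]
}

createChannel() {
  if ! network_is_up; then
    infoln "Bringing up network"
    networkUp
  fi
  # ... create the channel as before
}
```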

test-network-in-a-box

@mbwhite showed an amazing demonstration / prototype by embedding a "zero-to-blockchain-in-one-command" example with the hlfsupport-in-a-box project. HLF-in-a-box sets up all of the project dependencies, ansible playbooks, and scripting necessary to spin up a network on a remote OCP with a single command.

This Issue / Feature describes an analogous command, but targeting the core Fabric test-network and test-network-k8s projects.

To implement this feature, bundle all of the prerequisites, scripts, manifests, and so on necessary to spin up a test network, and package everything together in a runnable Docker image. In a Kubernetes cluster, the image can just "run" with a kubectl run (or Job) in the cluster, inheriting the default service account and kube context. It's not immediately clear how the mechanics will work for the Compose-based network, but it certainly would be convenient to run the setup with a single docker run command.

The expected usage is:

kubectl apply \
  -n my-namespace \
  -f https://raw.githubusercontent.com/hyperledger/fabric-samples/main/test-network-k8s/kube/install-test-network-job.yaml

Some really convenient extension ideas:

  • After the user runs the "in-a-box" installer, leave the MSP residue and certificates in a convenient location for local dev.

  • Have the installer set up Nginx Ingress resources for the Fabric nodes. (We could use the *.nip.io or *.vcap.me DNS host aliases to set up stable host names binding to the Nginx ingress controller on the host NIC.)

  • ???

java.lang.IllegalThreadStateException at java.lang.ThreadGroup.destroy (ThreadGroup.java:776)

Every Java transaction method produces this exception.

[WARNING] thread Thread[pool-3-thread-1,5,org.magnetocorp.Issue] was interrupted but is still alive after waiting at least 13987msecs
[WARNING] thread Thread[pool-3-thread-1,5,org.magnetocorp.Issue] will linger despite being asked to die via interruption
[WARNING] thread Thread[pool-1-thread-1,5,org.magnetocorp.Issue] will linger despite being asked to die via interruption
[WARNING] thread Thread[pool-1-thread-2,5,org.magnetocorp.Issue] will linger despite being asked to die via interruption
[WARNING] thread Thread[pool-1-thread-3,5,org.magnetocorp.Issue] will linger despite being asked to die via interruption
[WARNING] thread Thread[pool-1-thread-4,5,org.magnetocorp.Issue] will linger despite being asked to die via interruption
[WARNING] thread Thread[pool-1-thread-5,5,org.magnetocorp.Issue] will linger despite being asked to die via interruption
[WARNING] thread Thread[pool-1-thread-6,5,org.magnetocorp.Issue] will linger despite being asked to die via interruption
[WARNING] thread Thread[pool-1-thread-7,5,org.magnetocorp.Issue] will linger despite being asked to die via interruption
[WARNING] NOTE: 8 thread(s) did not finish despite being asked to via interruption. This is not a problem with exec:java, it is a problem with the running code. Although not serious, it should be remedied.
[WARNING] Couldn't destroy threadgroup org.codehaus.mojo.exec.ExecJavaMojo$IsolatedThreadGroup[name=org.magnetocorp.Issue,maxpri=10]
java.lang.IllegalThreadStateException
    at java.lang.ThreadGroup.destroy (ThreadGroup.java:776)
    at org.codehaus.mojo.exec.ExecJavaMojo.execute (ExecJavaMojo.java:293)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)

Additional REST sample updates

Some follow on work for the new REST sample:

  • More description of what the sample does and why in the readme
  • Move signposting of sample structure from index.ts to readme
  • Convert function comments to tsdoc to help IDEs
  • Add REST sample to fabric-samples build (already runs lint)
  • Publish docker images
  • Update instructions to start Redis with a password
  • Archive hyperledgendary/fabric-rest-sample project

Any more?

docker tag related issues when running the network.sh script from latest MAIN branch

Hi,

When I take the latest version from the git repository (MAIN branch) and try to run the network, I get the following error:

docker: Error response from daemon: manifest for hyperledger/fabric-tools:latest not found: manifest unknown: manifest unknown.

and this warning:
Local fabric-ca binaries and docker images are out of sync. This may cause problems.


Is there something missing in the documentation, or do I need to change something in the config files?

It works when I remove the latest tag and use a specific version instead, for example hyperledger/fabric-tools:2.3.3.
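A hedged sketch of that workaround: the main branch tracks unreleased images, so 'latest' may not exist on Docker Hub; pinning to a published release avoids the missing manifest. The version and image list below are examples, and pin_images only prints the commands so they can be reviewed before running:

```shell
FABRIC_VERSION=2.3.3

# Print 'docker pull' + 'docker tag' commands that fetch a released
# version and re-tag it as 'latest' for scripts that expect that tag.
pin_images() {
  for img in fabric-tools fabric-peer fabric-orderer fabric-ccenv; do
    echo "docker pull hyperledger/${img}:${FABRIC_VERSION}"
    echo "docker tag hyperledger/${img}:${FABRIC_VERSION} hyperledger/${img}:latest"
  done
}

pin_images
```

Pipe pin_images to sh to actually execute the commands. Alternatively, recent copies of network.sh accept an explicit image tag flag; check ./network.sh -h for the flags your copy supports.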

test-network-nano-bash still requires docker

test-network-nano-bash is a great way to run Fabric natively on Mac and Linux, but it still requires Docker in order to run chaincode.

It would be great if that Docker dependency could be removed completely, by including a simple external chaincode builder and launcher that supports both building chaincode and running chaincode as a server.
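To make the shape concrete, here is a hedged sketch of the detect step of such an external builder (Fabric invokes detect with the chaincode source and metadata directories as arguments, and treats exit 0 as claiming the package; the "ccaas" type string is illustrative and should match whatever type your packages declare):

```shell
# External-builder 'detect' logic: claim only chaincode packages whose
# metadata.json declares the chaincode-as-a-service type.
detect() {
  # $1 = chaincode source dir (unused here), $2 = metadata dir
  metadata_dir=$2
  grep -qi '"type"[[:space:]]*:[[:space:]]*"ccaas"' \
    "${metadata_dir}/metadata.json"
}
```

The companion build and release scripts would then copy the connection information into place, so the peer addresses the externally running chaincode server instead of launching a Docker container.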

Kube test network : Deploy the OSS Fabric Console

The feature/fabric-operations-console branch has some progress on an integration with the OSS Fabric Console. This work is most of the way there but it needs a little bit of work to see it out the door. Three factors complicating a landing into the Kube test network are:

  • OSS Console still has a dependency on the system channel. The test networks, on the other hand, rely on the Channel Administration SDKs, and do not bootstrap the network with a system channel / genesis block. The branch above runs through the integration by downgrading the network to bootstrap from a system channel, which is ... not ideal.

  • The Console requires a sequence of association actions to be performed immediately upon importing the test network into the GUI. While this is manageable for the "test-network" (1x orderer + 2x peers), the typing burden necessary to associate identities for 3x orderers and 4x peers on the test-network-k8s is ... not ideal.

  • Fabric console makes extensive use of a Kubernetes Ingress, made available at the CONSOLE_URL, and tunneling through an Nginx controller to route traffic into the cluster from the web GUI. For network routes coming "into" the console from the GUI, this works OK as traffic can be directed via an Ingress host alias to the appropriate service. However, in some cases the network traffic - between pods - is redirected out to the host network, and back into the Fabric network over the ingress controller / nginx. For systems that are dual-homed or have multiple NICs, this is not a problem as the ingress controller can be bound to the external NIC and resolved via DNS. For single-NIC systems, or in virtual Kubernetes environments such as KIND, it's virtually impossible to find a stable technique to identify a single DNS host entry that successfully resolves to the port binding the ingress. The test branch relies on either hacking system DNS with dnsmasq, or clever use of wildcard DNS domains (e.g. my-server.nip.io, *.my-net.vcap.me, etc.). While this works, it doesn't work "well enough" to provide a simple route forward on all environments. This is a "non-issue" in the target k8s environments, largely based on OCP Route resources exposed to a NIC / DNS on the Internet.

Some recent developments make an integration with the OSS console viable. The target user interaction is:

  1. bring up a local test network and console running on a dev k8s:
./network up
./network channel create
./network console up
  2. user logs in to https://user:[email protected]/console
  3. user imports a zip archive generated by "console up"
  4. user can immediately work with the test network - create channels, chaincode, query blocks, etc.

Regarding the "rough edges" above:

  • The network bootstrap script can support an optional code path to either bootstrap the orderers using a system channel, or use the channel admin APIs. When OSS console supports the migration from system channel, this route can be removed. It's fine if the user has to set something in the env, e.g. TEST_NETWORK_USE_SYSTEM_CHANNEL=true.

  • The console bulk import zip files have the ability to declare associations at import time. Track down the required yaml / etc. and include these in the import archive.

  • With the option to run the kube test network on Rancher Desktop (k3s / containerd), the networking stack does NOT rely on the odd hacks embedded into KIND that are required for access to a TCP port bound to the host OS/NIC at host.docker.internal. In other words, k3s and Rancher make it really easy to set a stable property for the CONSOLE_URL, routing traffic from within the cluster out to the ingress controller.

It's not a huge push but would be a nice way to highlight the ease of use and benefits from working with the OSS Console, all within the cozy confines of a local, single-node development workstation. Couple this with Gateway and Chaincode-as-a-Service running locally within an IDE/debugger, and it's a short hop to a production blockchain.

test-network-k8s: Create channel | error getting endorser client for channel

I'm trying to set up the test network locally by following: https://github.com/hyperledger/fabric-samples/tree/main/test-network-k8s
I'm running Docker and KIND in Ubuntu using VirtualBox + Vagrant on Windows 10.
I have successfully run the following commands:

./network kind
./network up

When I run the ./network channel create command, I get an error.
The following are the contents of network-debug.log:

+ mkdir -p /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/cacerts
+ cp /var/hyperledger/fabric-ca-client/org0-ecert-ca/rcaadmin/msp/cacerts/org0-ecert-ca.pem /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/cacerts
+ mkdir -p /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/tlscacerts
+ cp /var/hyperledger/fabric-ca-client/tls-ca/tlsadmin/msp/cacerts/org0-tls-ca.pem /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/tlscacerts
+ echo 'NodeOUs:
    Enable: true
    ClientOUIdentifier:
      Certificate: cacerts/org0-ecert-ca.pem
      OrganizationalUnitIdentifier: client
    PeerOUIdentifier:
      Certificate: cacerts/org0-ecert-ca.pem
      OrganizationalUnitIdentifier: peer
    AdminOUIdentifier:
      Certificate: cacerts/org0-ecert-ca.pem
      OrganizationalUnitIdentifier: admin
    OrdererOUIdentifier:
      Certificate: cacerts/org0-ecert-ca.pem
      OrganizationalUnitIdentifier: orderer '
+ mkdir -p /var/hyperledger/fabric/organizations/peerOrganizations/org1.example.com/msp/cacerts
+ cp /var/hyperledger/fabric-ca-client/org1-ecert-ca/rcaadmin/msp/cacerts/org1-ecert-ca.pem /var/hyperledger/fabric/organizations/peerOrganizations/org1.example.com/msp/cacerts
+ mkdir -p /var/hyperledger/fabric/organizations/peerOrganizations/org1.example.com/msp/tlscacerts
+ cp /var/hyperledger/fabric-ca-client/tls-ca/tlsadmin/msp/cacerts/org1-tls-ca.pem /var/hyperledger/fabric/organizations/peerOrganizations/org1.example.com/msp/tlscacerts
+ echo 'NodeOUs:
    Enable: true
    ClientOUIdentifier:
      Certificate: cacerts/org1-ecert-ca.pem
      OrganizationalUnitIdentifier: client
    PeerOUIdentifier:
      Certificate: cacerts/org1-ecert-ca.pem
      OrganizationalUnitIdentifier: peer
    AdminOUIdentifier:
      Certificate: cacerts/org1-ecert-ca.pem
      OrganizationalUnitIdentifier: admin
    OrdererOUIdentifier:
      Certificate: cacerts/org1-ecert-ca.pem
      OrganizationalUnitIdentifier: orderer '
+ mkdir -p /var/hyperledger/fabric/organizations/peerOrganizations/org2.example.com/msp/cacerts
+ cp /var/hyperledger/fabric-ca-client/org2-ecert-ca/rcaadmin/msp/cacerts/org2-ecert-ca.pem /var/hyperledger/fabric/organizations/peerOrganizations/org2.example.com/msp/cacerts
+ mkdir -p /var/hyperledger/fabric/organizations/peerOrganizations/org2.example.com/msp/tlscacerts
+ cp /var/hyperledger/fabric-ca-client/tls-ca/tlsadmin/msp/cacerts/org2-tls-ca.pem /var/hyperledger/fabric/organizations/peerOrganizations/org2.example.com/msp/tlscacerts
+ echo 'NodeOUs:
    Enable: true
    ClientOUIdentifier:
      Certificate: cacerts/org2-ecert-ca.pem
      OrganizationalUnitIdentifier: client
    PeerOUIdentifier:
      Certificate: cacerts/org2-ecert-ca.pem
      OrganizationalUnitIdentifier: peer
    AdminOUIdentifier:
      Certificate: cacerts/org2-ecert-ca.pem
      OrganizationalUnitIdentifier: admin
    OrdererOUIdentifier:
      Certificate: cacerts/org2-ecert-ca.pem
      OrganizationalUnitIdentifier: orderer '
organizations/ordererOrganizations/org0.example.com/msp/
organizations/ordererOrganizations/org0.example.com/msp/tlscacerts/
organizations/ordererOrganizations/org0.example.com/msp/tlscacerts/org0-tls-ca.pem
organizations/ordererOrganizations/org0.example.com/msp/config.yaml
organizations/ordererOrganizations/org0.example.com/msp/cacerts/
organizations/ordererOrganizations/org0.example.com/msp/cacerts/org0-ecert-ca.pem
organizations/peerOrganizations/org1.example.com/msp/
organizations/peerOrganizations/org1.example.com/msp/tlscacerts/
organizations/peerOrganizations/org1.example.com/msp/tlscacerts/org1-tls-ca.pem
organizations/peerOrganizations/org1.example.com/msp/config.yaml
organizations/peerOrganizations/org1.example.com/msp/cacerts/
organizations/peerOrganizations/org1.example.com/msp/cacerts/org1-ecert-ca.pem
organizations/peerOrganizations/org2.example.com/msp/
organizations/peerOrganizations/org2.example.com/msp/tlscacerts/
organizations/peerOrganizations/org2.example.com/msp/tlscacerts/org2-tls-ca.pem
organizations/peerOrganizations/org2.example.com/msp/config.yaml
organizations/peerOrganizations/org2.example.com/msp/cacerts/
organizations/peerOrganizations/org2.example.com/msp/cacerts/org2-ecert-ca.pem
Error from server (NotFound): configmaps "msp-config" not found
configmap/msp-config created
deployment.apps/org0-admin-cli created
deployment.apps/org1-admin-cli created
deployment.apps/org2-admin-cli created
Waiting for deployment "org0-admin-cli" rollout to finish: 0 of 1 updated replicas are available...
deployment "org0-admin-cli" successfully rolled out
Waiting for deployment "org1-admin-cli" rollout to finish: 0 of 1 updated replicas are available...
deployment "org1-admin-cli" successfully rolled out
deployment "org2-admin-cli" successfully rolled out
Defaulted container "main" out of: main, msp-unfurl (init)
+ configtxgen -profile TwoOrgsApplicationGenesis -channelID mychannel -outputBlock genesis_block.pb
2021-10-28 07:51:56.831 UTC [common.tools.configtxgen] main -> INFO 001 Loading configuration
2021-10-28 07:51:56.847 UTC [common.tools.configtxgen.localconfig] completeInitialization -> INFO 002 orderer type: etcdraft
2021-10-28 07:51:56.848 UTC [common.tools.configtxgen.localconfig] Load -> INFO 003 Loaded configuration: /var/hyperledger/fabric/config/configtx.yaml
2021-10-28 07:51:56.855 UTC [common.tools.configtxgen] doOutputBlock -> INFO 004 Generating genesis block
2021-10-28 07:51:56.855 UTC [common.tools.configtxgen] doOutputBlock -> INFO 005 Creating application channel genesis block
2021-10-28 07:51:56.856 UTC [common.tools.configtxgen] doOutputBlock -> INFO 006 Writing genesis block
+ osnadmin channel join --orderer-address org0-orderer1:9443 --channelID mychannel --config-block genesis_block.pb
Status: 201
{
	"name": "mychannel",
	"url": "/participation/v1/channels/mychannel",
	"consensusRelation": "consenter",
	"status": "active",
	"height": 1
}

+ osnadmin channel join --orderer-address org0-orderer2:9443 --channelID mychannel --config-block genesis_block.pb
Status: 201
{
	"name": "mychannel",
	"url": "/participation/v1/channels/mychannel",
	"consensusRelation": "consenter",
	"status": "active",
	"height": 1
}

+ osnadmin channel join --orderer-address org0-orderer3:9443 --channelID mychannel --config-block genesis_block.pb
Status: 201
{
	"name": "mychannel",
	"url": "/participation/v1/channels/mychannel",
	"consensusRelation": "consenter",
	"status": "active",
	"height": 1
}

Defaulted container "main" out of: main, msp-unfurl (init)
+ peer channel fetch oldest genesis_block.pb -c mychannel -o org0-orderer1:6050 --tls --cafile /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/tlscacerts/org0-tls-ca.pem
2021-10-28 07:52:07.278 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2021-10-28 07:52:07.285 UTC [cli.common] readBlock -> INFO 002 Received block: 0
+ CORE_PEER_ADDRESS=org1-peer1:7051
+ peer channel join -b genesis_block.pb -o org0-orderer1:6050 --tls --cafile /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/tlscacerts/org0-tls-ca.pem
Error: error getting endorser client for channel: endorser client failed to connect to org1-peer1:7051: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp 10.96.91.172:7051: connect: connection refused"
+ CORE_PEER_ADDRESS=org1-peer2:7051
+ peer channel join -b genesis_block.pb -o org0-orderer1:6050 --tls --cafile /var/hyperledger/fabric/organizations/ordererOrganizations/org0.example.com/msp/tlscacerts/org0-tls-ca.pem
Error: error getting endorser client for channel: endorser client failed to connect to org1-peer2:7051: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp 10.96.76.187:7051: connect: connection refused"
command terminated with exit code 1

Kube test network : Provide clear instructions for running Java chaincode on a development workstation

Hi @Nanra, I agree: the documentation is both confusing and wrong. One great aspect of using the external Docker build process (and chaincode-as-a-service) is that we can write smart contracts in any language, rather than the limited set supported by the internal chaincode compiler.

To fix this issue I think the following items need to be addressed:

  • Add a reference sample / external chaincode project for asset-transfer-basic, highlighting the use of the NettyChaincodeServer and main entry point.

  • I checked yesterday that the asset-transfer-basic/chaincode-java smart contract will receive messages through the NettyChaincodeServer, but did not actually build a Docker image to verify that the Dockerfile, gradle build, etc. are set up correctly. (There may be some additional work necessary to set up the project based on the example in the hyperledgendary git.)

  • Update the Fabric docs to include both a small Java snippet for both the Netty grpc router and a pointer to the basic asset transfer example. (And update the statement that Fabric only supports Go and Node - wrong!)

Maybe one last feature request on the Java front ... If we are going to build a reference sample for a Java chaincode, let's get the logging correct! Please could you update the code in a couple of places to use an SLF4J Logger, instead of System.out, and include a logback provider in the sample? This is not just a convenience for the smart contract programmers - when running the endpoints in a production context it will be very useful to direct the chaincode logging output to a log aggregator. This is MUCH easier when the routines are sending output to an SLF4J category.

Originally posted by @jkneubuh in hyperledger/fabric#2916 (comment)

Can someone share their complete steps? Sample code would be even better, so that I can learn from it.

Can anyone help?

Kube test network : Pods can not resolve external DNS host names

Pods running in the Kubernetes Test Network are no longer able to resolve external DNS entries. E.g.:

Setup:

Resolve a local DNS host / pod (OK):

root@hexapody1:~# kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
pod/dnsutils created
root@hexapody1:~# kubectl exec -i -t dnsutils -- nslookup kubernetes.default
Server:		10.96.0.10
Address:	10.96.0.10#53

Name:	kubernetes.default.svc.cluster.local
Address: 10.96.0.1

Resolve an external DNS host name (FAILS):

root@hexapody1:~# kubectl exec -i -t dnsutils -- nslookup www.ibm.com 
Server:		10.96.0.10
Address:	10.96.0.10#53

** server can't find www.ibm.com.fyre.ibm.com: SERVFAIL

command terminated with exit code 1
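The failing name in the output hints at the cause: the pod's resolv.conf search domain (fyre.ibm.com) is being appended before the upstream lookup. A small sketch of that resolver behavior (simplified; real resolvers use the ndots option and may try several candidates):

```shell
# Simplified model of search-list expansion: a name without a trailing
# dot gets the search domain appended first; a trailing dot marks it
# fully qualified and skips the search list.
qualify() {
  name=$1
  search=$2
  case $name in
    *.) printf '%s\n' "${name%?}" ;;         # FQDN: strip the dot, query as-is
    *)  printf '%s\n' "${name}.${search}" ;; # search domain appended
  esac
}

qualify www.ibm.com fyre.ibm.com   # the name that SERVFAILed above
qualify www.ibm.com. fyre.ibm.com  # a trailing dot bypasses the search list
```

So a quick check is nslookup www.ibm.com. (with the trailing dot) from the dnsutils pod; if that also fails, inspect the upstream forwarders in the CoreDNS configmap with kubectl -n kube-system get configmap coredns -o yaml.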

fabric2.4 "peer join -b $BLOCKFILE": ClientHandshake -> Client TLS handshake failed after 10.52275ms with error: tls: first record does not look like a TLS handshake remoteaddress=127.0.0.1:7051

The test-network succeeds, but after I increase the number of orderers and peers and run "peer channel join -b $BLOCKFILE", I get the TLS handshake error above. How can I resolve it? Is it an error in my configuration? The config follows:

---
Organizations:
  - &OrdererOrg
    Name: OrdererOrg
    ID: OrdererMSP
    MSPDir: crypto-config/ordererOrganizations/trace.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('OrdererMSP.admin')"

    OrdererEndpoints:
      - ordererOrg1.trace.com:7050
      - ordererOrg2.trace.com:7060
      - ordererOrg3.trace.com:7070

  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.trace.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')"
      Writers:
        Type: Signature
        Rule: "OR('Org1MSP.admin', 'Org1MSP.client')"
      Admins:
        Type: Signature
        Rule: "OR('Org1MSP.admin')"
      Endorsement:
        Type: Signature
        Rule: "OR('Org1MSP.peer')"
    # AnchorPeers:
    #   - Host: peer0.org1.trace.com
    #     Port: 7051

  - &Org2
    Name: Org2MSP
    ID: Org2MSP
    MSPDir: crypto-config/peerOrganizations/org2.trace.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('Org2MSP.admin', 'Org2MSP.peer', 'Org2MSP.client')"
      Writers:
        Type: Signature
        Rule: "OR('Org2MSP.admin', 'Org2MSP.client')"
      Admins:
        Type: Signature
        Rule: "OR('Org2MSP.admin')"
      Endorsement:
        Type: Signature
        Rule: "OR('Org2MSP.peer')"
    # AnchorPeers:
    #   - Host: peer0.org2.trace.com
    #     Port: 8051

  - &Org3
    Name: Org3MSP
    ID: Org3MSP
    MSPDir: crypto-config/peerOrganizations/org3.trace.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('Org3MSP.admin', 'Org3MSP.peer', 'Org3MSP.client')"
      Writers:
        Type: Signature
        Rule: "OR('Org3MSP.admin', 'Org3MSP.client')"
      Admins:
        Type: Signature
        Rule: "OR('Org3MSP.admin')"
      Endorsement:
        Type: Signature
        Rule: "OR('Org3MSP.peer')"
    # AnchorPeers:
    #   - Host: peer0.org3.trace.com
    #     Port: 9051

Capabilities:
  Channel: &ChannelCapabilities
    V2_0: true
  Orderer: &OrdererCapabilities
    V2_0: true
  Application: &ApplicationCapabilities
    V2_0: true

Application: &ApplicationDefaults

  Organizations:

  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
    LifecycleEndorsement:
      Type: ImplicitMeta
      Rule: "MAJORITY Endorsement"
    Endorsement:
      Type: ImplicitMeta
      Rule: "MAJORITY Endorsement"

  Capabilities:
    <<: *ApplicationCapabilities

Orderer: &OrdererDefaults

  OrdererType: etcdraft

  Addresses: # orderer
    - ordererOrg1.trace.com:7050
    - ordererOrg2.trace.com:7060
    - ordererOrg3.trace.com:7070

  EtcdRaft:
    Consenters:
      - Host: ordererOrg1.trace.com
        Port: 7050
        ClientTLSCert: crypto-config/ordererOrganizations/trace.com/orderers/ordererOrg1.trace.com/tls/server.crt
        ServerTLSCert: crypto-config/ordererOrganizations/trace.com/orderers/ordererOrg1.trace.com/tls/server.crt
      - Host: ordererOrg2.trace.com
        Port: 7060
        ClientTLSCert: crypto-config/ordererOrganizations/trace.com/orderers/ordererOrg2.trace.com/tls/server.crt
        ServerTLSCert: crypto-config/ordererOrganizations/trace.com/orderers/ordererOrg2.trace.com/tls/server.crt
      - Host: ordererOrg3.trace.com
        Port: 7070
        ClientTLSCert: crypto-config/ordererOrganizations/trace.com/orderers/ordererOrg3.trace.com/tls/server.crt
        ServerTLSCert: crypto-config/ordererOrganizations/trace.com/orderers/ordererOrg3.trace.com/tls/server.crt
  BatchTimeout: 2s

  BatchSize:

    MaxMessageCount: 1000

    AbsoluteMaxBytes: 256 MB

    PreferredMaxBytes: 512 KB

  Organizations:

  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
    BlockValidation:
      Type: ImplicitMeta
      Rule: "ANY Writers"

Channel: &ChannelDefaults

  Policies:
    # Who may invoke the 'Deliver' API
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    # Who may invoke the 'Broadcast' API
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    # By default, who may modify elements at this config level
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"

  Capabilities:
    <<: *ChannelCapabilities

Profiles:
  SupplyChannelGenesis: 
    <<: *ChannelDefaults
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
      Capabilities: *OrdererCapabilities
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
        - *Org2
      Capabilities: *ApplicationCapabilities
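
The profile above is consumed by configtxgen to produce the channel genesis block. A minimal sketch of the invocation (the output path and channel ID are illustrative, and FABRIC_CFG_PATH must point at the directory containing this configtx.yaml):

```shell
# Illustrative only: generate the genesis block for the SupplyChannelGenesis profile
export FABRIC_CFG_PATH=$PWD   # directory containing configtx.yaml
configtxgen -profile SupplyChannelGenesis \
  -outputBlock ./channel-artifacts/supplychannel.block \
  -channelID supplychannel
```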

couch.yaml

version: '3.7'

networks:
  mdtnet:
    name: Fabric_net

services:
  couchdb0:
    container_name: couchdb0
    image: couchdb:3.1.1
    labels:
      service: hyperledger-fabric
    environment:
      - COUCHDB_USER=admin
      - COUCHDB_PASSWORD=adminpw
    ports:
      - "5984:5984"
    networks:
      - mdtnet

  peer0.org1.trace.com:
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=adminpw
    depends_on:
      - couchdb0
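
If the network has a second peer, it typically gets its own CouchDB instance on a different host port. A sketch following the same pattern (service names and the host port are illustrative, not part of the original file):

```yaml
  couchdb1:
    container_name: couchdb1
    image: couchdb:3.1.1
    environment:
      - COUCHDB_USER=admin
      - COUCHDB_PASSWORD=adminpw
    ports:
      - "7984:5984"   # host port must not collide with couchdb0
    networks:
      - mdtnet

  peer0.org2.trace.com:
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=adminpw
    depends_on:
      - couchdb1
```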


fabric-samples needs a sample external builder implementation

  • this should support building/launching chaincode as well as external chaincode

  • ideally written in Go so that it is platform agnostic (so it can support test-network-nano-bash and a Windows-native equivalent)

  • support TLS and non-TLS

  • support all chaincode languages: go, node, java

    • Go supports CORE_TLS_CLIENT_CERT_FILE and CORE_TLS_CLIENT_KEY_FILE for standard PEM files, whereas node and java support CORE_TLS_CLIENT_KEY_PATH and CORE_TLS_CLIENT_CERT_PATH, which point to base64-encoded PEM files
  • we should then apply this to the test-network and test-network-nano-bash implementations, as we want to move away from an implicit docker dependency (running a local network with docker-compose and docker is still useful, but we could then also choose to use alternatives such as podman)
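
To make the requirements above concrete, here is a minimal sketch of the first stage of the external-builder contract: Fabric invokes a builder's detect program with the chaincode source and metadata directories, and an exit code of 0 claims the package. Everything below (the sample metadata contents, the grep-based check instead of jq) is illustrative, not the final sample implementation:

```shell
# Simulate the "detect" check against a sample chaincode metadata file.
set -eu

METADATA_DIR=$(mktemp -d)
cat > "$METADATA_DIR/metadata.json" <<'EOF'
{"type": "external", "path": "", "label": "basic_1.0"}
EOF

# A real builder would parse metadata.json with jq; grep keeps this
# sketch free of extra dependencies.
if grep -q '"type": *"external"' "$METADATA_DIR/metadata.json"; then
  DETECT_RESULT=claimed   # a real detect script would exit 0 here
else
  DETECT_RESULT=rejected  # non-zero exit: let another builder try
fi
echo "detect: $DETECT_RESULT"

rm -r "$METADATA_DIR"
```

A full builder adds build, release, and run programs alongside detect; the run stage is where the per-language TLS certificate handling noted above would live.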

test-network-k8s

./network chaincode deploy:

Deploying chaincode "asset-transfer-basic":
✅ - Packaging chaincode folder chaincode/asset-transfer-basic ...
✅ - Transferring chaincode archive to org1 ...
✅ - Installing chaincode for org org1 ...
⚠️ - Launching chaincode container "ghcr.io/hyperledgendary/fabric-ccaas-asset-transfer-basic" ...

error: [Transaction]: Error: No valid responses from any peers.

I followed the guide here: https://hyperledger-fabric.readthedocs.io/en/release-2.2/write_first_app.html
When I run the node app.js command, it gives me this error:

Loaded the network configuration located at /home/thangld/go/src/github.com/thangld322/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/connection-org1.json
Built a CA Client named ca-org1
Built a file system wallet at /home/thangld/go/src/github.com/thangld322/fabric-samples/asset-transfer-basic/application-javascript/wallet
Successfully enrolled admin user and imported it into the wallet
Successfully registered and enrolled user appUser and imported it into the wallet

--> Submit Transaction: InitLedger, function creates the initial set of assets on the ledger
2021-12-20T06:58:03.931Z - error: [Transaction]: Error: No valid responses from any peers. Errors:
    peer=peer0.org2.example.com:9051, status=500, message=error in simulation: failed to execute transaction 55cca2b4b112e8feaa128091a003dfafacb45b98a11bf3729e88e34610d1bb32: could not launch chaincode basic_1.0:b359a077730d7f44d6a437ad49d1da951f6a01c6d1eed4f85b8b1f5a08617fe7: error starting container: error starting container: API error (404): network _test not found
    peer=peer0.org1.example.com:7051, status=500, message=error in simulation: failed to execute transaction 55cca2b4b112e8feaa128091a003dfafacb45b98a11bf3729e88e34610d1bb32: could not launch chaincode basic_1.0:b359a077730d7f44d6a437ad49d1da951f6a01c6d1eed4f85b8b1f5a08617fe7: error starting container: error starting container: API error (404): network _test not found
    at newEndorsementError (/home/thangld/go/src/github.com/thangld322/fabric-samples/asset-transfer-basic/application-javascript/node_modules/fabric-network/lib/transaction.js:74:12)
    at getResponsePayload (/home/thangld/go/src/github.com/thangld322/fabric-samples/asset-transfer-basic/application-javascript/node_modules/fabric-network/lib/transaction.js:41:23)
    at Transaction.submit (/home/thangld/go/src/github.com/thangld322/fabric-samples/asset-transfer-basic/application-javascript/node_modules/fabric-network/lib/transaction.js:255:28)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async main (/home/thangld/go/src/github.com/thangld322/fabric-samples/asset-transfer-basic/application-javascript/app.js:118:4)
******** FAILED to run the application: Error: No valid responses from any peers. Errors:
    peer=peer0.org2.example.com:9051, status=500, message=error in simulation: failed to execute transaction 55cca2b4b112e8feaa128091a003dfafacb45b98a11bf3729e88e34610d1bb32: could not launch chaincode basic_1.0:b359a077730d7f44d6a437ad49d1da951f6a01c6d1eed4f85b8b1f5a08617fe7: error starting container: error starting container: API error (404): network _test not found
    peer=peer0.org1.example.com:7051, status=500, message=error in simulation: failed to execute transaction 55cca2b4b112e8feaa128091a003dfafacb45b98a11bf3729e88e34610d1bb32: could not launch chaincode basic_1.0:b359a077730d7f44d6a437ad49d1da951f6a01c6d1eed4f85b8b1f5a08617fe7: error starting container: error starting container: API error (404): network _test not found

Please help!

snapshot functionality

It would be useful to have a guide that explains the snapshot functionality (it could also cover operating snapshots from the SDKs).

HSM Mixin and Checkpointer missing

Hi,

I want to migrate a worker of mine from nodejs to go. I was trying the library, but I haven't been able to find the HSMMixin and Checkpointer classes.

Can you tell me where to find the implementations?

Thank you.

Error: endorsement failure during invoke. response: status:500

Hi - I am following the test network tutorial from the official page. I was able to run all the steps up to the peer command.

When I run this peer chaincode invoke, I run into the error below. Can someone please advise?
peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile ${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n basic --peerAddresses localhost:7051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses localhost:9051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"function":"InitLedger","Args":[]}'

Error

[ec2-user@ip-172-31-89-114 test-network]$ peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile ${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n basic --peerAddresses localhost:7051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses localhost:9051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"function":"InitLedger","Args":[]}'
Error: endorsement failure during invoke. response: status:500 message:"error in simulation: failed to execute transaction b40c29594e4462cb6d42e92b9df8b8db0ad7b10c4f08ad71fd52afafa0090ffd: could not launch chaincode basic_1.0:4ec191e793b27e953ff2ede5a8bcc63152cecb1e4c3f301a26e22692c61967ad: error starting container: error starting container: API error (404): network _test not found"

test-network docker-compose fails to read dotenv file

There is an issue with Docker Compose after v1.28: it fails to read the dotenv file unless the file is explicitly specified. This causes problems when invoking chaincode, such as:

Error: endorsement failure during invoke. 
response: status:500 message:"error in simulation: failed to execute transaction **HASH**: could not launch 
chaincode **CHAINCODE**: error starting container: error starting container:
API error (404): network _test not found" 

I managed to fix it by adding --env-file ./.env to the docker-compose calls inside network.sh.
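
For reference, the change amounts to passing the env file explicitly on each call; a hypothetical sketch of one adjusted invocation inside network.sh (the compose file path is illustrative):

```shell
# Compose v1.28+ does not always pick up ./.env implicitly
docker-compose --env-file ./.env -f docker/docker-compose-test-net.yaml up -d
```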

start first network failed

Testing on a server which cannot access the internet. The images (peer, orderer, ccenv, etc.) have been downloaded.

I then set up the first network, but it failed when instantiating the chaincode. The console prints:

=======================
Channel name : mychannel
Instantiating chaincode on peer0.org2...

  • peer chaincode instantiate -o orderer.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n mycc -l golang -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P 'AND ('''Org1MSP.peer''','''Org2MSP.peer''')'
  • res=1
  • set +x
    2021-08-30 01:39:13.416 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
    2021-08-30 01:39:13.416 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
    Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on [::1]:53: read udp [::1]:46796->[::1]:53: read: connection refused
    !!!!!!!!!!!!!!! Chaincode instantiation on peer0.org2 on channel 'mychannel' failed !!!!!!!!!!!!!!!!
    ========= ERROR !!! FAILED to execute End-2-End Scenario ===========

Why does it look up registry-1.docker.io? All the images have already been downloaded to the server via the local repository.
How can I fix it? Thanks a lot.
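
One plausible cause (not verified for this report): when instantiating, the peer builds the chaincode inside a fabric-ccenv container whose tag must match what the peer expects; if that exact tag is missing locally, Docker falls back to pulling from registry-1.docker.io. Re-tagging the locally loaded image may avoid the lookup (the version tags below are illustrative):

```shell
# Illustrative: give the local ccenv image the tag the peer asks for
docker tag hyperledger/fabric-ccenv:1.4.0 hyperledger/fabric-ccenv:latest
```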

New Feature proposal - Token-ERC-721 in GO

Hello,

I've translated the code of token-erc-721 from JS (the only version available now) to golang. I have a full implementation, along with unit tests. I kept the same comments as in the JS version, and, following the README.md instructions, anyone can choose whether to use JS or Go. I have the code ready and fully tested against those instructions.

I'd like to contribute this code, how can I proceed?
Thank you

peer chaincode list not showing the installed chaincode

using:

  1. ./network.sh up
  2. ./network.sh createChannel
  3. ./network.sh deployCC ....

The chaincode works properly when invoking functions (addasset, query, etc.), but chaincode list returns empty:
peer chaincode list --instantiated -C mychannel
Get instantiated chaincodes on channel mychannel:
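
A likely explanation (not confirmed in this thread): chaincode deployed with ./network.sh deployCC uses the Fabric 2.x chaincode lifecycle, and `peer chaincode list --instantiated` only reports chaincode instantiated with the legacy 1.x lifecycle. The 2.x equivalent query is:

```shell
peer lifecycle chaincode querycommitted -C mychannel
```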

Support rootless docker

I have no issues running the test network with regular docker.

On another machine I tried with rootless docker. Everything is fine until I run

./network.sh deployCC -ccn basic -ccp ../asset-transfer-basic/chaincode-java -ccl java

It gives the following error

BUILD SUCCESSFUL in 1m 7s
10 actionable tasks: 10 executed
~/project/fabric-samples/test-network
Finished compiling Java code
+ peer lifecycle chaincode package basic.tar.gz --path ../asset-transfer-basic/chaincode-java/build/install/basic --lang java --label basic_1.0
+ res=0
Chaincode is packaged
Installing chaincode on peer0.org1...
Using organization 1
+ peer lifecycle chaincode install basic.tar.gz
+ res=1
Error: chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not build chaincode: docker build failed: docker image inspection failed: Get "http://unix.sock/images/dev-peer0.org1.example.com-basic_1.0-6876d12ac53537c539236c077d467998cbf46c251cb6f7e040dc158f9c8f669b-2e904ec07b6bf158428a13844d5c88e8c4406124ba90c33894da4bcfae4751c5/json": dial unix /host/var/run/docker.sock: connect: permission denied
Chaincode installation on peer0.org1 has failed
Deploying chaincode failed
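
One possible workaround for this class of error (a guess, not verified for this exact setup): the peer is dialing the default root-owned Docker socket, while rootless Docker exposes its socket under the user's runtime directory. Pointing the Docker host variable at the rootless socket before bringing the network up may help:

```shell
# Illustrative: rootless Docker socket location on most systemd distros
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
./network.sh up
```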

Question

Maybe you can point me toward a solution.
When I bring up network.sh from the test-network, I realized that the ports that should be open in order to create a channel are not.
Maybe I am missing something...
I'd appreciate any tip.
Getting some logs from journalctl, I have the following:

Jan 01 17:02:04 VFIEVOX3.Router systemd[1]: docker-6145a913ddcc595378fbb4a69af72113ab1b47ed9c04f5db23fd84b52eb91653.scope: Deactivated successfully.
Jan 01 17:02:04 VFIEVOX3.Router 6145a913ddcc[12462]: 2022-01-01 17:02:04.603 UTC 0001 ERRO [main] InitCmd -> Cannot run peer because could not initialize BCCSP Factories: Failed initializing BCCSP: Could not initialize BCCSP SW [Failed to initialize software key store: open /etc/hyperledger/fabric/msp/keystore: perm>
Jan 01 17:02:04 VFIEVOX3.Router gnome-shell[8782]: Removing a network device that was not added
Jan 01 17:02:04 VFIEVOX3.Router audit[38205]: NETFILTER_CFG table=nat:491 family=2 entries=1 op=nft_unregister_rule pid=38205 subj=system_u:system_r:iptables_t:s0 comm="iptables"
Jan 01 17:02:04 VFIEVOX3.Router gnome-shell[8782]: Removing a network device that was not added
Jan 01 17:02:04 VFIEVOX3.Router NetworkManager[952]: <info>  [1641056524.5934] device (vethd0e43eb): released from master device br-fe1f0078fe48
Jan 01 17:02:04 VFIEVOX3.Router kernel: br-fe1f0078fe48: port 1(vethd0e43eb) entered disabled state
Jan 01 17:02:04 VFIEVOX3.Router kernel: device vethd0e43eb left promiscuous mode
Jan 01 17:02:04 VFIEVOX3.Router audit: ANOM_PROMISCUOUS dev=vethd0e43eb prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295
Jan 01 17:02:04 VFIEVOX3.Router kernel: br-fe1f0078fe48: port 1(vethd0e43eb) entered disabled state
Jan 01 17:02:04 VFIEVOX3.Router dockerd[12474]: time="2022-01-01T17:02:04.582393341Z" level=warning msg="cleanup warnings time=\"2022-01-01T17:02:04Z\" level=info msg=\"starting signal loop\" namespace=moby pid=38191\n"
Jan 01 17:02:04 VFIEVOX3.Router dockerd[12474]: time="2022-01-01T17:02:04.568648897Z" level=info msg="cleaning up dead shim"
Jan 01 17:02:04 VFIEVOX3.Router dockerd[12474]: time="2022-01-01T17:02:04.568639491Z" level=warning msg="cleaning up after shim disconnected" id=95a3e54eb173b3e5fec3ff94a5f6dd826472cdbc2ae0fe4e9e9544dbc6f9f2e6 namespace=moby
Jan 01 17:02:04 VFIEVOX3.Router dockerd[12474]: time="2022-01-01T17:02:04.568453458Z" level=info msg="shim disconnected" id=95a3e54eb173b3e5fec3ff94a5f6dd826472cdbc2ae0fe4e9e9544dbc6f9f2e6
Jan 01 17:02:04 VFIEVOX3.Router dockerd[12462]: time="2022-01-01T17:02:04.568337061Z" level=info msg="ignoring event" container=95a3e54eb173b3e5fec3ff94a5f6dd826472cdbc2ae0fe4e9e9544dbc6f9f2e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 01 17:02:04 VFIEVOX3.Router NetworkManager[952]: <info>  [1641056524.5462] manager: (veth0d2e1ea): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Jan 01 17:02:04 VFIEVOX3.Router audit: BPF prog-id=0 op=UNLOAD
Jan 01 17:02:04 VFIEVOX3.Router systemd[1]: docker-95a3e54eb173b3e5fec3ff94a5f6dd826472cdbc2ae0fe4e9e9544dbc6f9f2e6.scope: Deactivated successfully.
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         /go/src/github.com/hyperledger/fabric/cmd/orderer/main.go:15 +0x25
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: main.main()
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         /go/src/github.com/hyperledger/fabric/orderer/common/server/main.go:93 +0x267
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: github.com/hyperledger/fabric/orderer/common/server.Main()
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         /go/src/github.com/hyperledger/fabric/orderer/common/server/main.go:763 +0x111
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: github.com/hyperledger/fabric/orderer/common/server.loadLocalMSP(0xc0005c8000, 0xc000590870, 0x0)
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         /go/src/github.com/hyperledger/fabric/common/flogging/zap.go:74
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(...)
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         /go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: go.uber.org/zap.(*SugaredLogger).Panicf(...)
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         /go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: go.uber.org/zap.(*SugaredLogger).log(0xc000574140, 0x4, 0x108c7f5, 0x22, 0xc00017f228, 0x1, 0x1, 0x0, 0x0, 0x0)
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         /go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:234 +0x58d
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000595bc0, 0x0, 0x0, 0x0)
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: goroutine 1 [running]:
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: 
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: panic: Failed to get local msp config: could not initialize BCCSP Factories: Failed initializing BCCSP: Could not initialize BCCSP SW [Failed to initialize software key store: open /var/hyperledger/orderer/msp/keystore: permission denied]
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]: 2022-01-01 17:02:04.534 UTC 0003 PANI [orderer.common.server] loadLocalMSP -> Failed to get local msp config: could not initialize BCCSP Factories: Failed initializing BCCSP: Could not initialize BCCSP SW [Failed to initialize software key store: open /var/hyperle>
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Admin.TLS.TLSHandshakeTimeShift = 0s
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Admin.TLS.ClientRootCAs = [/var/hyperledger/orderer/tls/ca.crt]
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Admin.TLS.ClientAuthRequired = true
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Admin.TLS.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Admin.TLS.Certificate = "/var/hyperledger/orderer/tls/server.crt"
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Admin.TLS.PrivateKey = "/var/hyperledger/orderer/tls/server.key"
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Admin.TLS.Enabled = true
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Admin.ListenAddress = "0.0.0.0:7053"
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         ChannelParticipation.MaxRequestBodySize = 1048576
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         ChannelParticipation.Enabled = true
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Metrics.Statsd.Prefix = ""
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Metrics.Statsd.WriteInterval = 30s
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Metrics.Statsd.Address = "127.0.0.1:8125"
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Metrics.Statsd.Network = "udp"
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Metrics.Provider = "prometheus"
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Operations.TLS.TLSHandshakeTimeShift = 0s
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Operations.TLS.ClientRootCAs = []
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Operations.TLS.ClientAuthRequired = false
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Operations.TLS.RootCAs = []
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Operations.TLS.Certificate = ""
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Operations.TLS.PrivateKey = ""
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Operations.TLS.Enabled = false
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Operations.ListenAddress = "orderer.example.com:9443"
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Debug.DeliverTraceDir = ""
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Debug.BroadcastTraceDir = ""
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.Topic.ReplicationFactor = 3
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.SASLPlain.Password = ""
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.SASLPlain.User = ""
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.SASLPlain.Enabled = false
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.TLS.TLSHandshakeTimeShift = 0s
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.TLS.ClientRootCAs = []
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.TLS.ClientAuthRequired = false
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.TLS.RootCAs = []
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.TLS.Certificate = ""
Jan 01 17:02:04 VFIEVOX3.Router 95a3e54eb173[12462]:         Kafka.TLS.PrivateKey = ""

error checking context: 'no permission to read from /'<my app path>/test-network/organizations/fabric-ca/ordererOrg/msp/keystore/<key>_sk''.

Using 2.2.4/1.5.2 on Ubuntu 18.04 LTS host. I have a containerized API server (similar to asset-transfer-events sample app) that will perform transactions with the 2 peers/chaincode and 1 orderer. The API Server creates a gateway to the peers and wallets for the orgs, and tries to enroll admin and user if not in the wallet.

Step 1 - ./network.sh up createChannel -c mychannel -ca --> successful
Step 2 - ./network.sh deployCC -ccs 1 -ccv 1 -ccep "OR('Org1MSP.peer','Org2MSP.peer')" -ccl javascript -ccp ./../chaincode/myChainCode/ -ccn events --> successful
Step 3: cd to my containerized API Server
Step 4: npm start -->

docker-compose up
Building api
error checking context: 'no permission to read from 'test-network/organizations/fabric-ca/ordererOrg/msp/keystore/_sk''.

The ACLs on the key: owned by root, read and write only by root.

Question: how would I find out what is checking these privileges, since the API server image has not yet been created?
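
A hedged reading of the error: it is the Docker build context scan that reads these files, and it runs before the image exists. `docker-compose up` builds the api image, and if the crypto material sits inside the build context, the client (running as the invoking user) must be able to read every file in it. One possible fix, assuming the keys may safely be owned by the invoking user:

```shell
# Illustrative: make the CA-generated keystore readable by the build context scan
sudo chown -R "$USER:$USER" test-network/organizations/fabric-ca
```

Alternatively, a .dockerignore entry excluding the keystore avoids copying keys into the image at all.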

test-network-k8s pv/pvc mismatch on Docker Desktop k8s on Mac

yuanyi@192 OpenSource % kubectl get pv -n test-network
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
fabric-org0   2Gi        RWO            Retain           Available           standard                11m
fabric-org1   2Gi        RWO            Retain           Available           standard                11m
fabric-org2   2Gi        RWO            Retain           Available           standard                11m

yuanyi@192 OpenSource % kubectl get pvc -n test-network
NAME          STATUS    VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
fabric-org0   Pending   fabric-org0   0                         hostpath       12m
fabric-org1   Pending   fabric-org1   0                         hostpath       12m
fabric-org2   Pending   fabric-org2   0                         hostpath       12m

We can see there is a mismatch between the PVC and PV storage classes (STORAGECLASS standard vs. hostpath).
This causes the network start to fail and hang at the CA node creation phase.
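
If that diagnosis is right, the fix is to make the PVCs request the same storage class the PVs advertise (or vice versa). A sketch, with names taken from the output above but the class choice depending on your cluster:

```yaml
# Illustrative PVC matching the pre-provisioned PV's class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fabric-org1
  namespace: test-network
spec:
  storageClassName: standard   # must match the PV's STORAGECLASS
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 2Gi
```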

Run an asset-transfer-basic test on the Azure/CI pipeline using the Kubernetes test network

Kube test network provides a good opportunity to showcase and promote the best practices of building cloud-native Fabric applications using the new Gateway and Chaincode-as-a-Service SDKs. This can be further improved by setting up an automated CI pipeline for an initial test and validation of basic-asset-transfer on Kubernetes.

The scope for this issue is to set up ONE CI test suite, using it as an opportunity to build up the framework and tooling such that it has a good chance of applying to all of the Fabric samples, without forcing a refactoring of all of the sample code. The long-term vision is to establish a CI flow supporting a mix of remote Kubernetes (AKS, EKS, GKE, IKS, etc.), local Kubernetes (KIND, Rancher, minikube, etc.) and legacy Docker Compose test networks. This issue is NOT an opportunity to refactor the entire samples projects to align with Gateway and Kube platforms - it's just working through the mechanics of getting ONE test suite up and running, exercising the parts, and setting up for a long-term alignment with Fabric 3.

The scope of work in this issue involves:

  • Ensuring an Azure image includes necessary prerequisites for a test run (kind, kubectl, docker, jq, etc.)

  • Creating a ci/scripts/run-k8s-test-network-basic.sh script and linking into the CI / merge pipeline.

  • set up / tear down an ephemeral KIND cluster for the scope of a suite; set up / tear down a Fabric network for the scope of a test;

  • compile, build, and tag a Docker image using /asset-transfer-basic/chaincode-external (or some suitable CC dialect).

  • deploy the chaincode to Kubernetes using the "Chaincode-as-a-Service" pattern. The connection and metadata json files should be refactored from the test-network-k8s/chaincode folder over to the external chaincode folder. Each "externally built" chaincode project should contain a fully-self describing environment for building, deploying, and testing the CC in Kubernetes (or within a local IDE/debugger.)

  • deploy an ingress controller OR port-forward to expose a gateway peer to a host-local port. Some consideration may be necessary to align with DNS, or ensuring that the peer TLS certificate CSRs include a localhost, *.vcap.me, or *.nip.io host alias for the gateway peer.

  • Extract the gateway client and/or Admin certificates in a manner that is amenable to running the application on the local host OS. One challenge in this area is that ALL of the Fabric samples hard-code a path structure within /test-network/organizations/*, assuming that certificates were created using cryptogen and the Compose test-network. The plan for this issue is to overlay certificate structures from the Kube test network into the target folder structure of the test-network. While this is a bizarre technique, it postpones the need to refactor any of the client application logic as part of this PR / issue. (We will need to address this as part of a touch-up of the fabric samples when making a concerted effort to update everything to the Gateway SDKs.)

  • run a gateway client application, validating exit code and/or system output. Use @sapthasurendran 's new application-gateway-typescript as the reference gateway application. Note that the gateway client application will build and run locally on the host OS, tunnel through the k8s ingress, and connect to remote CC running adjacent to the peer as a pod in Kubernetes.

Start with just this one sample application - get it working, and review with the team on outcomes and next steps. If it pans out well then consider adding a supplemental path to the newbie app developer guide.

cc: @denyeart @bestbeforetoday @mbwhite

fabric 2.2 ./startFabric.sh java error Undefined contract called

$ ./startFabric.sh java
~/go/src/github.com/hyperledger/fabric-samples/test-network ~/go/src/github.com/hyperledger/fabric-samples/fabcar
Stopping network
[+] Running 13/13
⠿ Container ca_org1 Removed 1.0s
⠿ Container ca_orderer Removed 1.5s
⠿ Container ca_org2 Removed 1.2s
⠿ Container cli Removed 0.6s
⠿ Container orderer.example.com Removed 1.4s
⠿ Container peer0.org1.example.com Removed 1.0s
⠿ Container peer0.org2.example.com Removed 1.3s
⠿ Container couchdb0 Removed 2.9s
⠿ Container couchdb1 Removed 2.9s
⠿ Volume fabric_orderer.example.com Removed 0.0s
⠿ Volume fabric_peer0.org2.example.com Removed 0.0s
⠿ Network fabric_test Removed 0.1s
⠿ Volume fabric_peer0.org1.example.com Removed 0.0s
[+] Running 1/0
⠿ fabric Warning: No resource found to remove 0.0s
No containers available for deletion
No images available for deletion
Creating channel 'mychannel'.
If network is not up, starting nodes with CLI timeout of '5' tries and CLI delay of '3' seconds and using database 'couchdb with crypto from 'Certificate Authorities'
Bringing up network
LOCAL_VERSION=2.2.0
DOCKER_IMAGE_VERSION=2.2.0
CA_LOCAL_VERSION=1.5.3
CA_DOCKER_IMAGE_VERSION=1.5.2
Local fabric-ca binaries and docker images are out of sync. This may cause problems.
Generating certificates using Fabric CA
[+] Running 4/4
⠿ Network fabric_test Created 0.1s
⠿ Container ca_orderer Started 1.4s
⠿ Container ca_org1 Started 1.7s
⠿ Container ca_org2 Started 1.6s
Creating Org1 Identities
Enrolling the CA admin

  • fabric-ca-client enroll -u https://admin:adminpw@localhost:7054 --caname ca-org1 --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org1/tls-cert.pem
    2021/12/13 11:40:04 [INFO] Created a default configuration file at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:04 [INFO] TLS Enabled
    2021/12/13 11:40:04 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:04 [INFO] encoded CSR
    2021/12/13 11:40:04 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/msp/signcerts/cert.pem
    2021/12/13 11:40:04 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/msp/cacerts/localhost-7054-ca-org1.pem
    2021/12/13 11:40:04 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/msp/IssuerPublicKey
    2021/12/13 11:40:04 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/msp/IssuerRevocationPublicKey
    Registering peer0
  • fabric-ca-client register --caname ca-org1 --id.name peer0 --id.secret peer0pw --id.type peer --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org1/tls-cert.pem
    2021/12/13 11:40:04 [INFO] Configuration file location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:04 [INFO] TLS Enabled
    2021/12/13 11:40:04 [INFO] TLS Enabled
    Password: peer0pw
    Registering user
  • fabric-ca-client register --caname ca-org1 --id.name user1 --id.secret user1pw --id.type client --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org1/tls-cert.pem
    2021/12/13 11:40:04 [INFO] Configuration file location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:04 [INFO] TLS Enabled
    2021/12/13 11:40:04 [INFO] TLS Enabled
    Password: user1pw
    Registering the org admin
  • fabric-ca-client register --caname ca-org1 --id.name org1admin --id.secret org1adminpw --id.type admin --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org1/tls-cert.pem
    2021/12/13 11:40:04 [INFO] Configuration file location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:04 [INFO] TLS Enabled
    2021/12/13 11:40:04 [INFO] TLS Enabled
    Password: org1adminpw
    Generating the peer0 msp
  • fabric-ca-client enroll -u https://peer0:peer0pw@localhost:7054 --caname ca-org1 -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp --csr.hosts peer0.org1.example.com --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org1/tls-cert.pem
    2021/12/13 11:40:05 [INFO] TLS Enabled
    2021/12/13 11:40:05 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:05 [INFO] encoded CSR
    2021/12/13 11:40:05 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp/signcerts/cert.pem
    2021/12/13 11:40:05 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp/cacerts/localhost-7054-ca-org1.pem
    2021/12/13 11:40:05 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp/IssuerPublicKey
    2021/12/13 11:40:05 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp/IssuerRevocationPublicKey
    Generating the peer0-tls certificates
  • fabric-ca-client enroll -u https://peer0:peer0pw@localhost:7054 --caname ca-org1 -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls --enrollment.profile tls --csr.hosts peer0.org1.example.com --csr.hosts localhost --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org1/tls-cert.pem
    2021/12/13 11:40:05 [INFO] TLS Enabled
    2021/12/13 11:40:05 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:05 [INFO] encoded CSR
    2021/12/13 11:40:05 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/signcerts/cert.pem
    2021/12/13 11:40:05 [INFO] Stored TLS root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/tlscacerts/tls-localhost-7054-ca-org1.pem
    2021/12/13 11:40:05 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/IssuerPublicKey
    2021/12/13 11:40:05 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/IssuerRevocationPublicKey
    Generating the user msp
  • fabric-ca-client enroll -u https://user1:user1pw@localhost:7054 --caname ca-org1 -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org1/tls-cert.pem
    2021/12/13 11:40:05 [INFO] TLS Enabled
    2021/12/13 11:40:05 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:05 [INFO] encoded CSR
    2021/12/13 11:40:05 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp/signcerts/cert.pem
    2021/12/13 11:40:05 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp/cacerts/localhost-7054-ca-org1.pem
    2021/12/13 11:40:05 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp/IssuerPublicKey
    2021/12/13 11:40:05 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp/IssuerRevocationPublicKey
    Generating the org admin msp
  • fabric-ca-client enroll -u https://org1admin:org1adminpw@localhost:7054 --caname ca-org1 -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org1/tls-cert.pem
    2021/12/13 11:40:05 [INFO] TLS Enabled
    2021/12/13 11:40:05 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:05 [INFO] encoded CSR
    2021/12/13 11:40:06 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp/signcerts/cert.pem
    2021/12/13 11:40:06 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp/cacerts/localhost-7054-ca-org1.pem
    2021/12/13 11:40:06 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp/IssuerPublicKey
    2021/12/13 11:40:06 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp/IssuerRevocationPublicKey
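Every Org1 identity above follows the same two-step Fabric CA pattern: register the identity using the CA admin's enrollment context, then enroll it to generate its keys and certificates under the `-M` directory. A condensed sketch of that cycle for `peer0` (paths abbreviated; `$FABRIC_CA_TLS_CERT` stands in for the full `tls-cert.pem` path shown in the log):

```shell
# Step 1: register the identity with the CA (requires the admin's
# prior enrollment, which established the client configuration).
fabric-ca-client register --caname ca-org1 \
  --id.name peer0 --id.secret peer0pw --id.type peer \
  --tls.certfiles "$FABRIC_CA_TLS_CERT"

# Step 2: enroll as that identity to generate its MSP material.
fabric-ca-client enroll -u https://peer0:peer0pw@localhost:7054 \
  --caname ca-org1 \
  -M "$PWD/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp" \
  --csr.hosts peer0.org1.example.com \
  --tls.certfiles "$FABRIC_CA_TLS_CERT"
```

The TLS certificates use the same enroll command with `--enrollment.profile tls` and an extra `--csr.hosts localhost`, as the log shows.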
    Creating Org2 Identities
    Enrolling the CA admin
  • fabric-ca-client enroll -u https://admin:adminpw@localhost:8054 --caname ca-org2 --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org2/tls-cert.pem
    2021/12/13 11:40:06 [INFO] Created a default configuration file at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:06 [INFO] TLS Enabled
    2021/12/13 11:40:06 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:06 [INFO] encoded CSR
    2021/12/13 11:40:06 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/msp/signcerts/cert.pem
    2021/12/13 11:40:06 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/msp/cacerts/localhost-8054-ca-org2.pem
    2021/12/13 11:40:06 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/msp/IssuerPublicKey
    2021/12/13 11:40:06 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/msp/IssuerRevocationPublicKey
    Registering peer0
  • fabric-ca-client register --caname ca-org2 --id.name peer0 --id.secret peer0pw --id.type peer --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org2/tls-cert.pem
    2021/12/13 11:40:06 [INFO] Configuration file location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:06 [INFO] TLS Enabled
    2021/12/13 11:40:06 [INFO] TLS Enabled
    Password: peer0pw
    Registering user
  • fabric-ca-client register --caname ca-org2 --id.name user1 --id.secret user1pw --id.type client --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org2/tls-cert.pem
    2021/12/13 11:40:06 [INFO] Configuration file location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:06 [INFO] TLS Enabled
    2021/12/13 11:40:06 [INFO] TLS Enabled
    Password: user1pw
    Registering the org admin
  • fabric-ca-client register --caname ca-org2 --id.name org2admin --id.secret org2adminpw --id.type admin --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org2/tls-cert.pem
    2021/12/13 11:40:07 [INFO] Configuration file location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:07 [INFO] TLS Enabled
    2021/12/13 11:40:07 [INFO] TLS Enabled
    Password: org2adminpw
    Generating the peer0 msp
  • fabric-ca-client enroll -u https://peer0:peer0pw@localhost:8054 --caname ca-org2 -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp --csr.hosts peer0.org2.example.com --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org2/tls-cert.pem
    2021/12/13 11:40:07 [INFO] TLS Enabled
    2021/12/13 11:40:07 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:07 [INFO] encoded CSR
    2021/12/13 11:40:07 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp/signcerts/cert.pem
    2021/12/13 11:40:07 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp/cacerts/localhost-8054-ca-org2.pem
    2021/12/13 11:40:07 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp/IssuerPublicKey
    2021/12/13 11:40:07 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp/IssuerRevocationPublicKey
    Generating the peer0-tls certificates
  • fabric-ca-client enroll -u https://peer0:peer0pw@localhost:8054 --caname ca-org2 -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls --enrollment.profile tls --csr.hosts peer0.org2.example.com --csr.hosts localhost --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org2/tls-cert.pem
    2021/12/13 11:40:07 [INFO] TLS Enabled
    2021/12/13 11:40:07 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:07 [INFO] encoded CSR
    2021/12/13 11:40:07 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/signcerts/cert.pem
    2021/12/13 11:40:07 [INFO] Stored TLS root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/tlscacerts/tls-localhost-8054-ca-org2.pem
    2021/12/13 11:40:07 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/IssuerPublicKey
    2021/12/13 11:40:07 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/IssuerRevocationPublicKey
    Generating the user msp
  • fabric-ca-client enroll -u https://user1:user1pw@localhost:8054 --caname ca-org2 -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org2/tls-cert.pem
    2021/12/13 11:40:07 [INFO] TLS Enabled
    2021/12/13 11:40:07 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:07 [INFO] encoded CSR
    2021/12/13 11:40:07 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp/signcerts/cert.pem
    2021/12/13 11:40:07 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp/cacerts/localhost-8054-ca-org2.pem
    2021/12/13 11:40:07 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp/IssuerPublicKey
    2021/12/13 11:40:07 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp/IssuerRevocationPublicKey
    Generating the org admin msp
  • fabric-ca-client enroll -u https://org2admin:org2adminpw@localhost:8054 --caname ca-org2 -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/org2/tls-cert.pem
    2021/12/13 11:40:08 [INFO] TLS Enabled
    2021/12/13 11:40:08 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:08 [INFO] encoded CSR
    2021/12/13 11:40:08 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp/signcerts/cert.pem
    2021/12/13 11:40:08 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp/cacerts/localhost-8054-ca-org2.pem
    2021/12/13 11:40:08 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp/IssuerPublicKey
    2021/12/13 11:40:08 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/[email protected]/msp/IssuerRevocationPublicKey
    Creating Orderer Org Identities
    Enrolling the CA admin
  • fabric-ca-client enroll -u https://admin:adminpw@localhost:9054 --caname ca-orderer --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/ordererOrg/tls-cert.pem
    2021/12/13 11:40:08 [INFO] Created a default configuration file at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:08 [INFO] TLS Enabled
    2021/12/13 11:40:08 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:08 [INFO] encoded CSR
    2021/12/13 11:40:08 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/msp/signcerts/cert.pem
    2021/12/13 11:40:08 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/msp/cacerts/localhost-9054-ca-orderer.pem
    2021/12/13 11:40:08 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/msp/IssuerPublicKey
    2021/12/13 11:40:08 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/msp/IssuerRevocationPublicKey
    Registering orderer
  • fabric-ca-client register --caname ca-orderer --id.name orderer --id.secret ordererpw --id.type orderer --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/ordererOrg/tls-cert.pem
    2021/12/13 11:40:08 [INFO] Configuration file location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:08 [INFO] TLS Enabled
    2021/12/13 11:40:08 [INFO] TLS Enabled
    Password: ordererpw
    Registering the orderer admin
  • fabric-ca-client register --caname ca-orderer --id.name ordererAdmin --id.secret ordererAdminpw --id.type admin --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/ordererOrg/tls-cert.pem
    2021/12/13 11:40:08 [INFO] Configuration file location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/fabric-ca-client-config.yaml
    2021/12/13 11:40:08 [INFO] TLS Enabled
    2021/12/13 11:40:08 [INFO] TLS Enabled
    Password: ordererAdminpw
    Generating the orderer msp
  • fabric-ca-client enroll -u https://orderer:ordererpw@localhost:9054 --caname ca-orderer -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp --csr.hosts orderer.example.com --csr.hosts localhost --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/ordererOrg/tls-cert.pem
    2021/12/13 11:40:09 [INFO] TLS Enabled
    2021/12/13 11:40:09 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:09 [INFO] encoded CSR
    2021/12/13 11:40:09 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/signcerts/cert.pem
    2021/12/13 11:40:09 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/cacerts/localhost-9054-ca-orderer.pem
    2021/12/13 11:40:09 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/IssuerPublicKey
    2021/12/13 11:40:09 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/IssuerRevocationPublicKey
    Generating the orderer-tls certificates
  • fabric-ca-client enroll -u https://orderer:ordererpw@localhost:9054 --caname ca-orderer -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls --enrollment.profile tls --csr.hosts orderer.example.com --csr.hosts localhost --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/ordererOrg/tls-cert.pem
    2021/12/13 11:40:09 [INFO] TLS Enabled
    2021/12/13 11:40:09 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:09 [INFO] encoded CSR
    2021/12/13 11:40:09 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/signcerts/cert.pem
    2021/12/13 11:40:09 [INFO] Stored TLS root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/tlscacerts/tls-localhost-9054-ca-orderer.pem
    2021/12/13 11:40:09 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/IssuerPublicKey
    2021/12/13 11:40:09 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/IssuerRevocationPublicKey
    Generating the admin msp
  • fabric-ca-client enroll -u https://ordererAdmin:ordererAdminpw@localhost:9054 --caname ca-orderer -M /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/users/[email protected]/msp --tls.certfiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/fabric-ca/ordererOrg/tls-cert.pem
    2021/12/13 11:40:09 [INFO] TLS Enabled
    2021/12/13 11:40:09 [INFO] generating key: &{A:ecdsa S:256}
    2021/12/13 11:40:09 [INFO] encoded CSR
    2021/12/13 11:40:09 [INFO] Stored client certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/users/[email protected]/msp/signcerts/cert.pem
    2021/12/13 11:40:09 [INFO] Stored root CA certificate at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/users/[email protected]/msp/cacerts/localhost-9054-ca-orderer.pem
    2021/12/13 11:40:09 [INFO] Stored Issuer public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/users/[email protected]/msp/IssuerPublicKey
    2021/12/13 11:40:09 [INFO] Stored Issuer revocation public key at /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/users/[email protected]/msp/IssuerRevocationPublicKey
    Generating CCP files for Org1 and Org2
    /Users/binny/myscripts/configtxgen
    Generating Orderer Genesis block
  • configtxgen -profile TwoOrgsOrdererGenesis -channelID system-channel -outputBlock ./system-genesis-block/genesis.block
    2021-12-13 11:40:09.918 CST [common.tools.configtxgen] main -> INFO 001 Loading configuration
    2021-12-13 11:40:09.936 CST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 002 orderer type: etcdraft
    2021-12-13 11:40:09.936 CST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 003 Orderer.EtcdRaft.Options unset, setting to tick_interval:"500ms" election_tick:10 heartbeat_tick:1 max_inflight_blocks:5 snapshot_interval_size:16777216
    2021-12-13 11:40:09.936 CST [common.tools.configtxgen.localconfig] Load -> INFO 004 Loaded configuration: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/configtx/configtx.yaml
    2021-12-13 11:40:09.938 CST [common.tools.configtxgen] doOutputBlock -> INFO 005 Generating genesis block
    2021-12-13 11:40:09.939 CST [common.tools.configtxgen] doOutputBlock -> INFO 006 Writing genesis block
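If you want to sanity-check what configtxgen wrote, the genesis block can be decoded back to JSON with configtxlator; this inspection step is not part of the script, just a quick way to see the channel configuration it contains:

```shell
# Decode the genesis block to JSON and peek at the embedded config
# (run from the test-network directory after the step above).
configtxlator proto_decode \
  --input ./system-genesis-block/genesis.block \
  --type common.Block | jq '.data.data[0].payload.data.config' | head -40
```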
  • res=0
    WARN[0000] Found orphan containers ([ca_org1 ca_org2 ca_orderer]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
    [+] Running 9/9
    ⠿ Volume "fabric_orderer.example.com" Created 0.0s
    ⠿ Volume "fabric_peer0.org1.example.com" Created 0.0s
    ⠿ Volume "fabric_peer0.org2.example.com" Created 0.0s
    ⠿ Container couchdb0 Started 2.0s
    ⠿ Container couchdb1 Started 1.8s
    ⠿ Container orderer.example.com Started 2.0s
    ⠿ Container peer0.org1.example.com Started 3.5s
    ⠿ Container peer0.org2.example.com Started 3.2s
    ⠿ Container cli Started 4.8s
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    41bde207e47c hyperledger/fabric-tools:latest "/bin/bash" 5 seconds ago Up Less than a second cli
    38d199a5c1e0 hyperledger/fabric-peer:latest "peer node start" 5 seconds ago Up 1 second 0.0.0.0:7051->7051/tcp peer0.org1.example.com
    60a308a2f71c hyperledger/fabric-peer:latest "peer node start" 5 seconds ago Up 2 seconds 7051/tcp, 0.0.0.0:9051->9051/tcp peer0.org2.example.com
    ccb526589f7b hyperledger/fabric-orderer:latest "orderer" 5 seconds ago Up 3 seconds 0.0.0.0:7050->7050/tcp orderer.example.com
    40ae50a9efd8 couchdb:3.1.1 "tini -- /docker-ent…" 5 seconds ago Up 3 seconds 4369/tcp, 9100/tcp, 0.0.0.0:7984->5984/tcp couchdb1
    9f90817ce123 couchdb:3.1.1 "tini -- /docker-ent…" 5 seconds ago Up 3 seconds 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb0
    eb37ba9c2000 hyperledger/fabric-ca:latest "sh -c 'fabric-ca-se…" 15 seconds ago Up 13 seconds 0.0.0.0:7054->7054/tcp ca_org1
    52a646207329 hyperledger/fabric-ca:latest "sh -c 'fabric-ca-se…" 15 seconds ago Up 14 seconds 7054/tcp, 0.0.0.0:8054->8054/tcp ca_org2
    8a43157ed39c hyperledger/fabric-ca:latest "sh -c 'fabric-ca-se…" 15 seconds ago Up 14 seconds 7054/tcp, 0.0.0.0:9054->9054/tcp ca_orderer
    efe4075b0c80 busybox "sh -c 'cd /data && …" 2 days ago Created stoic_keldysh
    Current location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network
    ORDERER_CA : /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    Generating channel create transaction 'mychannel.tx'
  • configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
    2021-12-13 11:40:16.065 CST [common.tools.configtxgen] main -> INFO 001 Loading configuration
    2021-12-13 11:40:16.090 CST [common.tools.configtxgen.localconfig] Load -> INFO 002 Loaded configuration: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/configtx/configtx.yaml
    2021-12-13 11:40:16.090 CST [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 003 Generating new channel configtx
    2021-12-13 11:40:16.096 CST [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 004 Writing new channel tx
  • res=0
    Creating channel mychannel
    Using organization 1
  • peer channel create -o localhost:7050 -c mychannel --ordererTLSHostnameOverride orderer.example.com -f ./channel-artifacts/mychannel.tx --outputBlock ./channel-artifacts/mychannel.block --tls --cafile /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
  • res=0
    2021-12-13 11:40:19.180 CST [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
    2021-12-13 11:40:19.234 CST [cli.common] readBlock -> INFO 002 Expect block, but got status: &{NOT_FOUND}
    2021-12-13 11:40:19.245 CST [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
    2021-12-13 11:40:19.450 CST [cli.common] readBlock -> INFO 004 Expect block, but got status: &{SERVICE_UNAVAILABLE}
    2021-12-13 11:40:19.457 CST [channelCmd] InitCmdFactory -> INFO 005 Endorser and orderer connections initialized
    2021-12-13 11:40:19.670 CST [cli.common] readBlock -> INFO 006 Expect block, but got status: &{SERVICE_UNAVAILABLE}
    2021-12-13 11:40:19.681 CST [channelCmd] InitCmdFactory -> INFO 007 Endorser and orderer connections initialized
    2021-12-13 11:40:19.888 CST [cli.common] readBlock -> INFO 008 Expect block, but got status: &{SERVICE_UNAVAILABLE}
    2021-12-13 11:40:19.899 CST [channelCmd] InitCmdFactory -> INFO 009 Endorser and orderer connections initialized
    2021-12-13 11:40:20.109 CST [cli.common] readBlock -> INFO 00a Expect block, but got status: &{SERVICE_UNAVAILABLE}
    2021-12-13 11:40:20.122 CST [channelCmd] InitCmdFactory -> INFO 00b Endorser and orderer connections initialized
    2021-12-13 11:40:20.343 CST [cli.common] readBlock -> INFO 00c Received block: 0
    Channel 'mychannel' created
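The repeated SERVICE_UNAVAILABLE lines above are expected: the Raft ordering service needs a moment to elect a leader, so the script retries channel creation. The loop is roughly the following sketch, where MAX_RETRY and DELAY correspond to the '5' tries and '3' seconds reported at the top of the log:

```shell
# Sketch of the retry loop used by the test network scripts.
createChannel() {
  local rc=1 COUNTER=1
  while [ $rc -ne 0 ] && [ $COUNTER -lt $MAX_RETRY ]; do
    sleep $DELAY
    peer channel create -o localhost:7050 -c $CHANNEL_NAME \
      --ordererTLSHostnameOverride orderer.example.com \
      -f ./channel-artifacts/${CHANNEL_NAME}.tx \
      --outputBlock ./channel-artifacts/${CHANNEL_NAME}.block \
      --tls --cafile "$ORDERER_CA"
    rc=$?
    COUNTER=$((COUNTER + 1))
  done
}
```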
    Joining org1 peer to the channel...
    Using organization 1
  • peer channel join -b ./channel-artifacts/mychannel.block
  • res=0
    2021-12-13 11:40:23.427 CST [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
    2021-12-13 11:40:23.745 CST [channelCmd] executeJoin -> INFO 002 Successfully submitted proposal to join channel
    Joining org2 peer to the channel...
    Using organization 2
  • peer channel join -b ./channel-artifacts/mychannel.block
  • res=0
    2021-12-13 11:40:26.820 CST [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
    2021-12-13 11:40:27.120 CST [channelCmd] executeJoin -> INFO 002 Successfully submitted proposal to join channel
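With both peers joined, membership can be confirmed from either peer's CLI context (assuming the CORE_PEER_* environment variables are set for the organization you want to query, as the script's helper functions do):

```shell
# List the channels this peer has joined, then check the current
# block height of mychannel.
peer channel list
peer channel getinfo -c mychannel
```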
    Setting anchor peer for org1...
    Current location: /opt/gopath/src/github.com/hyperledger/fabric/peer
    ORDERER_CA : /opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    Current location: /opt/gopath/src/github.com/hyperledger/fabric/peer
    ORDERER_CA : /opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    Using organization 1
    Fetching channel config for channel mychannel
    Using organization 1
    Fetching the most recent configuration block for the channel
  • peer channel fetch config config_block.pb -o orderer.example.com:7050 --ordererTLSHostnameOverride orderer.example.com -c mychannel --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    2021-12-13 03:40:27.661 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
    2021-12-13 03:40:27.673 UTC [cli.common] readBlock -> INFO 002 Received block: 0
    2021-12-13 03:40:27.673 UTC [channelCmd] fetch -> INFO 003 Retrieving last config block: 0
    2021-12-13 03:40:27.680 UTC [cli.common] readBlock -> INFO 004 Received block: 0
    Decoding config block to JSON and isolating config to Org1MSPconfig.json
  • jq '.data.data[0].payload.data.config'
  • configtxlator proto_decode --input config_block.pb --type common.Block
    Generating anchor peer update transaction for Org1 on channel mychannel
  • jq '.channel_group.groups.Application.groups.Org1MSP.values += {"AnchorPeers":{"mod_policy": "Admins","value":{"anchor_peers": [{"host": "peer0.org1.example.com","port": 7051}]},"version": "0"}}' Org1MSPconfig.json
  • configtxlator proto_encode --input Org1MSPconfig.json --type common.Config
  • configtxlator proto_encode --input Org1MSPmodified_config.json --type common.Config
  • configtxlator compute_update --channel_id mychannel --original original_config.pb --updated modified_config.pb
  • configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate
  • jq .
    ++ cat config_update.json
  • echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel", "type":2}},"data":{"config_update":{' '"channel_id":' '"mychannel",' '"isolated_data":' '{},' '"read_set":' '{' '"groups":' '{' '"Application":' '{' '"groups":' '{' '"Org1MSP":' '{' '"groups":' '{},' '"mod_policy":' '"",' '"policies":' '{' '"Admins":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Endorsement":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Readers":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Writers":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '}' '},' '"values":' '{' '"MSP":' '{' '"mod_policy":' '"",' '"value":' null, '"version":' '"0"' '}' '},' '"version":' '"0"' '}' '},' '"mod_policy":' '"",' '"policies":' '{},' '"values":' '{},' '"version":' '"1"' '}' '},' '"mod_policy":' '"",' '"policies":' '{},' '"values":' '{},' '"version":' '"0"' '},' '"write_set":' '{' '"groups":' '{' '"Application":' '{' '"groups":' '{' '"Org1MSP":' '{' '"groups":' '{},' '"mod_policy":' '"Admins",' '"policies":' '{' '"Admins":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Endorsement":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Readers":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Writers":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '}' '},' '"values":' '{' '"AnchorPeers":' '{' '"mod_policy":' '"Admins",' '"value":' '{' '"anchor_peers":' '[' '{' '"host":' '"peer0.org1.example.com",' '"port":' 7051 '}' ']' '},' '"version":' '"0"' '},' '"MSP":' '{' '"mod_policy":' '"",' '"value":' null, '"version":' '"0"' '}' '},' '"version":' '"1"' '}' '},' '"mod_policy":' '"",' '"policies":' '{},' '"values":' '{},' '"version":' '"1"' '}' '},' '"mod_policy":' '"",' '"policies":' '{},' '"values":' '{},' '"version":' '"0"' '}' '}}}}'
  • configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope
    2021-12-13 03:40:28.066 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
    2021-12-13 03:40:28.102 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
    Anchor peer set for org 'Org1MSP' on channel 'mychannel'
    Setting anchor peer for org2...
    Current location: /opt/gopath/src/github.com/hyperledger/fabric/peer
    ORDERER_CA : /opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    Current location: /opt/gopath/src/github.com/hyperledger/fabric/peer
    ORDERER_CA : /opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    Using organization 2
    Fetching channel config for channel mychannel
    Using organization 2
    Fetching the most recent configuration block for the channel
  • peer channel fetch config config_block.pb -o orderer.example.com:7050 --ordererTLSHostnameOverride orderer.example.com -c mychannel --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    2021-12-13 03:40:28.666 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
    2021-12-13 03:40:28.674 UTC [cli.common] readBlock -> INFO 002 Received block: 1
    2021-12-13 03:40:28.675 UTC [channelCmd] fetch -> INFO 003 Retrieving last config block: 1
    2021-12-13 03:40:28.694 UTC [cli.common] readBlock -> INFO 004 Received block: 1
    Decoding config block to JSON and isolating config to Org2MSPconfig.json
  • jq '.data.data[0].payload.data.config'
  • configtxlator proto_decode --input config_block.pb --type common.Block
  • jq '.channel_group.groups.Application.groups.Org2MSP.values += {"AnchorPeers":{"mod_policy": "Admins","value":{"anchor_peers": [{"host": "peer0.org2.example.com","port": 9051}]},"version": "0"}}' Org2MSPconfig.json
    Generating anchor peer update transaction for Org2 on channel mychannel
  • configtxlator proto_encode --input Org2MSPconfig.json --type common.Config
  • configtxlator proto_encode --input Org2MSPmodified_config.json --type common.Config
  • configtxlator compute_update --channel_id mychannel --original original_config.pb --updated modified_config.pb
  • configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate
  • jq .
    ++ cat config_update.json
  • echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel", "type":2}},"data":{"config_update":{' '"channel_id":' '"mychannel",' '"isolated_data":' '{},' '"read_set":' '{' '"groups":' '{' '"Application":' '{' '"groups":' '{' '"Org2MSP":' '{' '"groups":' '{},' '"mod_policy":' '"",' '"policies":' '{' '"Admins":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Endorsement":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Readers":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Writers":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '}' '},' '"values":' '{' '"MSP":' '{' '"mod_policy":' '"",' '"value":' null, '"version":' '"0"' '}' '},' '"version":' '"0"' '}' '},' '"mod_policy":' '"",' '"policies":' '{},' '"values":' '{},' '"version":' '"1"' '}' '},' '"mod_policy":' '"",' '"policies":' '{},' '"values":' '{},' '"version":' '"0"' '},' '"write_set":' '{' '"groups":' '{' '"Application":' '{' '"groups":' '{' '"Org2MSP":' '{' '"groups":' '{},' '"mod_policy":' '"Admins",' '"policies":' '{' '"Admins":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Endorsement":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Readers":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '},' '"Writers":' '{' '"mod_policy":' '"",' '"policy":' null, '"version":' '"0"' '}' '},' '"values":' '{' '"AnchorPeers":' '{' '"mod_policy":' '"Admins",' '"value":' '{' '"anchor_peers":' '[' '{' '"host":' '"peer0.org2.example.com",' '"port":' 9051 '}' ']' '},' '"version":' '"0"' '},' '"MSP":' '{' '"mod_policy":' '"",' '"value":' null, '"version":' '"0"' '}' '},' '"version":' '"1"' '}' '},' '"mod_policy":' '"",' '"policies":' '{},' '"values":' '{},' '"version":' '"1"' '}' '},' '"mod_policy":' '"",' '"policies":' '{},' '"values":' '{},' '"version":' '"0"' '}' '}}}}'
  • configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope
    2021-12-13 03:40:29.075 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
    2021-12-13 03:40:29.118 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
    Anchor peer set for org 'Org2MSP' on channel 'mychannel'
    Channel 'mychannel' joined
    deploying chaincode on channel 'mychannel'
    executing with the following
  • CHANNEL_NAME: mychannel
  • CC_NAME: fabcar
  • CC_SRC_PATH: ../chaincode/fabcar/java/
  • CC_SRC_LANGUAGE: java
  • CC_VERSION: 1
  • CC_SEQUENCE: 1
  • CC_END_POLICY: NA
  • CC_COLL_CONFIG: NA
  • CC_INIT_FCN: initLedger
  • DELAY: 3
  • MAX_RETRY: 5
  • VERBOSE: false
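The `DELAY` and `MAX_RETRY` parameters above drive the deploy script's retry loop around peer commands (the "Retry after 3 seconds" messages later in the log). A minimal sketch of that loop, with a hypothetical `with_retry` helper:

```python
import time

DELAY = 3       # seconds between attempts (matches the script output above)
MAX_RETRY = 5   # give up after this many failed attempts

def with_retry(cmd, delay=DELAY, max_retry=MAX_RETRY, sleep=time.sleep):
    """Run cmd() until it returns exit code 0; raise after max_retry failures."""
    for attempt in range(1, max_retry + 1):
        if cmd() == 0:
            return attempt
        if attempt < max_retry:
            sleep(delay)
    raise RuntimeError(f"command failed after {max_retry} attempts")

# Usage: simulate a peer command that succeeds on the third try.
results = iter([1, 1, 0])
attempts = with_retry(lambda: next(results), sleep=lambda _: None)
```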
    Compiling Java code...
    Source path: ../chaincode/fabcar/java/
    ~/go/src/github.com/hyperledger/fabric-samples/chaincode/fabcar/java ~/go/src/github.com/hyperledger/fabric-samples/test-network

BUILD SUCCESSFUL in 1s
10 actionable tasks: 1 executed, 9 up-to-date
~/go/src/github.com/hyperledger/fabric-samples/test-network
Finished compiling Java code
Current location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network
Importing environment variables
Current location: /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network
ORDERER_CA : /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
Packaging chaincode

  • peer lifecycle chaincode package fabcar.tar.gz --path ../chaincode/fabcar/java/build/install/fabcar --lang java --label fabcar_1
  • res=0
    Chaincode is packaged
    Installing chaincode on peer0.org1...
    installChaincode PEER ORG
    Using organization 1
  • peer lifecycle chaincode install fabcar.tar.gz
  • res=0
    2021-12-13 11:40:33.626 CST [cli.lifecycle.chaincode] submitInstallProposal -> INFO 001 Installed remotely: response:<status:200 payload:"\nIfabcar_1:533fca92a26ff2840de9f034b73ea905c6514249e85646f352ae01eb0031273b\022\010fabcar_1" >
    2021-12-13 11:40:33.626 CST [cli.lifecycle.chaincode] submitInstallProposal -> INFO 002 Chaincode code package identifier: fabcar_1:533fca92a26ff2840de9f034b73ea905c6514249e85646f352ae01eb0031273b
    Chaincode is installed on peer0.org1
    Installing chaincode on peer0.org2...
    installChaincode PEER ORG
    Using organization 2
  • peer lifecycle chaincode install fabcar.tar.gz
  • res=0
    2021-12-13 11:40:35.557 CST [cli.lifecycle.chaincode] submitInstallProposal -> INFO 001 Installed remotely: response:<status:200 payload:"\nIfabcar_1:533fca92a26ff2840de9f034b73ea905c6514249e85646f352ae01eb0031273b\022\010fabcar_1" >
    2021-12-13 11:40:35.557 CST [cli.lifecycle.chaincode] submitInstallProposal -> INFO 002 Chaincode code package identifier: fabcar_1:533fca92a26ff2840de9f034b73ea905c6514249e85646f352ae01eb0031273b
    Chaincode is installed on peer0.org2

Querying installed chaincodes

Using organization 1

  • peer lifecycle chaincode queryinstalled
  • res=0
    Installed chaincodes on peer:
    Package ID: fabcar_1:533fca92a26ff2840de9f034b73ea905c6514249e85646f352ae01eb0031273b, Label: fabcar_1
    Query installed successful on peer0.org1 on channel
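The package ID reported by `queryinstalled` is the package label joined to the SHA-256 digest of the chaincode package file, which is why `fabcar_1:533fca…` appears identically on both peers. A sketch with a hypothetical payload standing in for `fabcar.tar.gz`:

```python
import hashlib

def package_id(label: str, package_bytes: bytes) -> str:
    """Fabric derives the package ID as <label>:<sha256 hex of the .tar.gz>."""
    return f"{label}:{hashlib.sha256(package_bytes).hexdigest()}"

# Hypothetical package contents for illustration only.
pid = package_id("fabcar_1", b"example package bytes")
```

This is also why the same package installed on different peers yields the same ID: it depends only on the label and the package bytes.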
    Using organization 1
  • peer lifecycle chaincode approveformyorg -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --version 1 --package-id fabcar_1:533fca92a26ff2840de9f034b73ea905c6514249e85646f352ae01eb0031273b --sequence 1 --init-required
  • res=0
    2021-12-13 11:40:38.146 CST [chaincodeCmd] ClientWait -> INFO 001 txid [4dab80548009604bf09b2c2372d27597837e57d4b01076700a4798c40c36f0e1] committed with status (VALID) at
    Chaincode definition approved on peer0.org1 on channel 'mychannel'
    Using organization 1
    Checking the commit readiness of the chaincode definition on peer0.org1 on channel 'mychannel'...
    Attempting to check the commit readiness of the chaincode definition on peer0.org1, Retry after 3 seconds.
  • peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1 --init-required --output json
  • res=0
    {
      "approvals": {
        "Org1MSP": true,
        "Org2MSP": false
      }
    }
    Checking the commit readiness of the chaincode definition successful on peer0.org1 on channel 'mychannel'
    Using organization 2
    Checking the commit readiness of the chaincode definition on peer0.org2 on channel 'mychannel'...
    Attempting to check the commit readiness of the chaincode definition on peer0.org2, Retry after 3 seconds.
  • peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1 --init-required --output json
  • res=0
    {
      "approvals": {
        "Org1MSP": true,
        "Org2MSP": false
      }
    }
    Checking the commit readiness of the chaincode definition successful on peer0.org2 on channel 'mychannel'
    Using organization 2
  • peer lifecycle chaincode approveformyorg -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --version 1 --package-id fabcar_1:533fca92a26ff2840de9f034b73ea905c6514249e85646f352ae01eb0031273b --sequence 1 --init-required
  • res=0
    2021-12-13 11:40:46.902 CST [chaincodeCmd] ClientWait -> INFO 001 txid [b23d89381fa38f5e180f4fce198268bda9acb842ae0c5344fd9dc69d281010af] committed with status (VALID) at
    Chaincode definition approved on peer0.org2 on channel 'mychannel'
    Using organization 1
    Checking the commit readiness of the chaincode definition on peer0.org1 on channel 'mychannel'...
    Attempting to check the commit readiness of the chaincode definition on peer0.org1, Retry after 3 seconds.
  • peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1 --init-required --output json
  • res=0
    {
      "approvals": {
        "Org1MSP": true,
        "Org2MSP": true
      }
    }
    Checking the commit readiness of the chaincode definition successful on peer0.org1 on channel 'mychannel'
    Using organization 2
    Checking the commit readiness of the chaincode definition on peer0.org2 on channel 'mychannel'...
    Attempting to check the commit readiness of the chaincode definition on peer0.org2, Retry after 3 seconds.
  • peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1 --init-required --output json
  • res=0
    {
      "approvals": {
        "Org1MSP": true,
        "Org2MSP": true
      }
    }
    Checking the commit readiness of the chaincode definition successful on peer0.org2 on channel 'mychannel'
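The script only proceeds to `peer lifecycle chaincode commit` once `checkcommitreadiness` shows every org approving, as in the two JSON blocks above. That gate can be sketched as a check over the parsed JSON:

```python
import json

def ready_to_commit(checkcommitreadiness_output: str) -> bool:
    """True only when every org listed under 'approvals' has approved."""
    approvals = json.loads(checkcommitreadiness_output)["approvals"]
    return all(approvals.values())

# The two states seen in the log above.
partial = '{"approvals": {"Org1MSP": true, "Org2MSP": false}}'
full = '{"approvals": {"Org1MSP": true, "Org2MSP": true}}'
```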
    Using organization 1
    Using organization 2
  • peer lifecycle chaincode commit -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --peerAddresses localhost:7051 --tlsRootCertFiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses localhost:9051 --tlsRootCertFiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt --version 1 --sequence 1 --init-required
  • res=0
    2021-12-13 11:40:55.691 CST [chaincodeCmd] ClientWait -> INFO 001 txid [b415c39071beab197531beb75ed68917a6be8cc6cd93fa34ba0cf4e20865fd03] committed with status (VALID) at localhost:7051
    2021-12-13 11:40:55.714 CST [chaincodeCmd] ClientWait -> INFO 002 txid [b415c39071beab197531beb75ed68917a6be8cc6cd93fa34ba0cf4e20865fd03] committed with status (VALID) at localhost:9051
    Chaincode definition committed on channel 'mychannel'
    Using organization 1
    Querying chaincode definition on peer0.org1 on channel 'mychannel'...
    Attempting to Query committed status on peer0.org1, Retry after 3 seconds.
  • peer lifecycle chaincode querycommitted --channelID mychannel --name fabcar
  • res=0
    Committed chaincode definition for chaincode 'fabcar' on channel 'mychannel':
    Version: 1, Sequence: 1, Endorsement Plugin: escc, Validation Plugin: vscc, Approvals: [Org1MSP: true, Org2MSP: true]
    Query chaincode definition successful on peer0.org1 on channel 'mychannel'
    Using organization 2
    Querying chaincode definition on peer0.org2 on channel 'mychannel'...
    Attempting to Query committed status on peer0.org2, Retry after 3 seconds.
  • peer lifecycle chaincode querycommitted --channelID mychannel --name fabcar
  • res=0
    Committed chaincode definition for chaincode 'fabcar' on channel 'mychannel':
    Version: 1, Sequence: 1, Endorsement Plugin: escc, Validation Plugin: vscc, Approvals: [Org1MSP: true, Org2MSP: true]
    Query chaincode definition successful on peer0.org2 on channel 'mychannel'
    chaincodeInvokeInit....
    Using organization 1
    Using organization 2
  • fcn_call='{"function":"initLedger","Args":[]}'
  • infoln 'invoke fcn call:{"function":"initLedger","Args":[]}'
  • println '\033[0;34minvoke fcn call:{"function":"initLedger","Args":[]}\033[0m'
  • echo -e '\033[0;34minvoke fcn call:{"function":"initLedger","Args":[]}\033[0m'
    invoke fcn call:{"function":"initLedger","Args":[]}
  • peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n fabcar --peerAddresses localhost:7051 --tlsRootCertFiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses localhost:9051 --tlsRootCertFiles /Users/binny/go/src/github.com/hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt --isInit -c '{"function":"initLedger","Args":[]}'
  • res=1
    Error: endorsement failure during invoke. response: status:500 message:"error in simulation: transaction returned with failure: Undefined contract called"
    Invoke execution on peer0.org1 peer0.org2 failed
    Deploying chaincode failed
