
Falcon's Introduction

Falcon: Hyperledger Fabric Deployment Helper for Kubernetes


Falcon, the Hyperledger Fabric deployment helper for Kubernetes, is an open-source project designed to streamline the deployment and management of Hyperledger Fabric-based blockchain networks on Kubernetes clusters. It simplifies the complex process of setting up, configuring, and maintaining Fabric CAs, peers, orderers, and channels within a Kubernetes environment. With templatised Helm charts and customizable configuration options, the project empowers developers and administrators to deploy and scale secure, robust Hyperledger Fabric networks, leveraging the flexibility and scalability of Kubernetes orchestration.

Whether you're a blockchain enthusiast, developer, or enterprise seeking to harness the power of distributed ledger technology using Hyperledger Fabric, the deployment helper for Kubernetes is your go-to solution for efficient, reliable, and automated Fabric network deployment.

We are open to contributions, upgrades, and new features; please feel free to pick up any "good first issues" or propose new features to make the utility more powerful.

Features

  • CA Management (Root CA, TLS CA & Intermediate CAs)
  • Peer Creation, Cert renewal
  • Orderer Creation, Addition, Cert renewal
  • Channel Management
  • Chaincode Lifecycle Management (Install, Approve, Commit and CC Upgrades)
  • Cryptographic operations support and certification management
  • Domain Name support and SNI Based Routing
  • Ingress resource provisioning
  • File Registry support for centralised config files
  • Support for Hyperledger Fabric 2.3+
  • Multi-zone, Multi-DC, Private Network (On-prem DCs) deployment support
  • Multi-channel support

Roadmap

  • Automatic certificate renewal
  • GUI based deployment support
  • Optional Fabric Explorer
  • Optional Fabric Operations Console
  • Observability stack
  • Key Management using HSM / Vault

Releases

Samples

Please refer to our examples for running a complete blockchain network using the deployment helper.

Production Readiness

Falcon is utilised across multiple blockchain projects within NPCI.

Disclaimer

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.

License

Falcon source code files are made available under the GNU General Public License, Version 3.0 (GPL-3.0), located in the LICENSE file.

Falcon's People

Contributors

jithindevasia avatar runitmisra avatar tittuvarghese avatar


Falcon's Issues

Integration with cert-manager for CAs

Current Status

  1. Certificates are currently issued using the native mechanisms provided by fabric-ca.

Expected Solution

  1. Integrate with cert-manager and issue certificates through it
  2. Automatically renew the certificate resources
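One way the expected solution could look: a cert-manager Certificate resource driving issuance and renewal. A minimal sketch, assuming a pre-configured Issuer; the issuer name, DNS names, and durations below are placeholders, not anything Falcon currently ships:

```yaml
# Hypothetical cert-manager Certificate for a peer's TLS identity.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: peer0-tls
spec:
  secretName: peer0-tls-secret   # where the issued keypair lands
  duration: 2160h                # 90 days
  renewBefore: 360h              # renew 15 days before expiry
  dnsNames:
    - peer0.org1.example.com     # placeholder host
  issuerRef:
    name: fabric-ca-issuer       # placeholder Issuer backed by the org CA
    kind: Issuer
```

cert-manager would then rotate the Secret automatically, covering the second point above without a custom renewal job.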

Orderer data pvc global variable name mismatch

The variable name that handles the orderer data storage PVC size (dataStorageSize) and the corresponding global variable are different. Both variables should be identical.

Current one:
storage: {{ .dataStorageSize | default ($.Values.global.storageSize | default "5Gi") }}
Should be changed to:
storage: {{ .dataStorageSize | default ($.Values.global.dataStorageSize | default "5Gi") }}

Multi-channel support

Right now Falcon supports the creation of only one application channel during the initial network setup. If a new channel needs to be added, the channel creation job has to be deployed as a new Helm release, which ends up creating many channel creation releases for anyone who wants to manage many channels. A new feature should be added that is capable of the following:

  • The channel creation job should accept a channel list from an array of Helm values. Whenever the user wants to create a new channel, it should be doable with a simple helm upgrade.
  • All fabric-ops jobs should support multi-channel functionality where possible, e.g. peer join, chaincode operations, orderer operations.
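A minimal sketch of what such a values array might look like; the channels key and its fields are assumptions for illustration, not the current chart schema:

```yaml
# Hypothetical multi-channel values layout; key names are illustrative only.
channels:
  - name: mychannel
    organizations:
      - Org1MSP
      - Org2MSP
  - name: settlements
    organizations:
      - Org1MSP
```

Adding a new entry to the array and running helm upgrade would then trigger creation of only the channels that do not yet exist.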

Add Job for new peers to join channels with the help of ledger snapshots

If administrators decide to add new peers to established channels, it can take a long time for a peer to catch up to the current ledger height, since it has to fetch every block since genesis and rebuild the world state. To make this process easier, we can use snapshots to capture the minimum data required for the peer to join the channel without going through that long process.

Detailed info of this feature can be found here: https://hyperledger-fabric.readthedocs.io/en/latest/peer_ledger_snapshot.html

Create a Job that can take a snapshot of the existing ledger in a channel, and a Job that can add a peer to the channel using that snapshot (this can be a separate job).
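As a rough sketch, the join side could be a Kubernetes Job wrapping peer channel joinbysnapshot (available since Fabric 2.4); the image tag, peer endpoint, snapshot path, and PVC name below are all placeholders:

```yaml
# Hypothetical Job: joins a peer to a channel from a previously taken snapshot.
apiVersion: batch/v1
kind: Job
metadata:
  name: join-by-snapshot
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: join
          image: hyperledger/fabric-tools:2.5          # placeholder tag
          env:
            - name: CORE_PEER_ADDRESS
              value: peer0.org1:7051                   # placeholder peer endpoint
          command: ["peer", "channel", "joinbysnapshot"]
          args: ["--snapshotpath", "/snapshots/completed/mychannel/1000"]
          volumeMounts:
            - name: snapshots
              mountPath: /snapshots
      volumes:
        - name: snapshots
          persistentVolumeClaim:
            claimName: snapshots-pvc                   # placeholder PVC
```

The snapshot itself would be requested earlier on an existing peer with peer snapshot submitrequest, as described in the linked Fabric documentation.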

Registry for Chaincode Lifecycle Management

It is hard to manage the chaincode lifecycle in a purely distributed environment. It would be good to have a central registry and automated jobs that pull the latest version and perform installation and approval of the chaincode at every peer (installation on every peer, approval at the org level).

The following features should be considered:

  1. Central registry / can leverage filestore itself
  2. Dashboard to manage lifecycle and upload chaincodes (Falcon ops dashboard)
  3. Job associated with every peer to pull and install new chaincode
  4. Job to approve chaincode if new version is installed. (Based on the version)
  5. Feedback mechanism to central dashboard for job status. (For tracking)
  6. Chaincode commit and rollback mechanism from dashboard (Falcon ops dashboard)

Sample payload which can be pulled by the job to perform the chaincode lifecycle management tasks:

{
    "myproject": {
        "version": "1",
        "cc_name": "chaincode-test",
        "channel_name": "mychannel",
        "cc_file_name": "chaincode_10.tar.gz",
        "cc_checksum": "e7cd90cfac934c0f6f44a5ee5db98c10da7fdafe5e9ed969d1f60a9b93809e15",
        "cc_collection_config": "true",
        "cc_collection_config_file_name": "collection-config-sample.json",
        "cc_collection_config_checksum": "7bc74976fde72bcbed113eeebcdb173f3e33a4df014589b7ed5ec20b89a1f61b",
        "registry": "http://cc-central-registry.myhlfdomain.com/registry",
        "base_path": "myproject",
        "package_id": "chaincode_10:e7cd90cfac934c0f6f44a5ee5db98c10da7fdafe5e9ed969d1f60a9b93809e15"
    }
}

Add option to take credentials from kubernetes secrets

Many credentials are used in the deployments, like usernames and passwords/identity secrets for peers, orderers, CAs, etc. In addition to passing these values as plaintext in the Helm values file, there should be an option to read them from Kubernetes secrets (as many vendor Helm charts offer).

The creation of these secrets is up to the user, whether they create them manually or via some operator like External Secrets Operator.

This will directly help towards the "Manage secrets with secrets management services like Vault, AWS Secrets manager, etc" feature goal.

Example:
The Values file will look something like this:

identity_name: xyz
identity_secret: abc
# Name of the Secret resource to take the above two values from. The keys have to be exactly the same as above.
identityCredsFromSecret: identity-creds # <--- the name of the Secret resource where identity_name and identity_secret are stored
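For reference, the corresponding Secret could be created like this. The name identity-creds is illustrative (note that underscores are not valid in Kubernetes resource names, so the Secret's name must differ from the key names):

```yaml
# Hypothetical Secret holding the peer identity credentials.
apiVersion: v1
kind: Secret
metadata:
  name: identity-creds
type: Opaque
stringData:
  identity_name: xyz
  identity_secret: abc
```

The chart would then read both keys from this Secret instead of the plaintext values.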

Add support to use default service dns instead of custom dns and ingress controller

Users should have the option not to pass traffic through ingress, utilizing the default service DNS instead of a custom domain. This would remove the dependency on routing traffic through an ingress controller, eliminating a possible bottleneck in single-cluster environments.

  • Add a host <service name>.<namespace> in the CSR sent to CAs on every enroll so that the default DNS name can be used for TLS.
  • Wherever possible, add an option for the user to provide the default DNS name instead of the HLF domain, e.g. CORE_PEER_ADDRESS_EXTERNALENDPOINT, in the cryptogen/configtx files, etc. Basically, the HLF domain should be optional wherever possible.
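One possible shape for this in the values file; the useServiceDns flag and the endpoint override below are hypothetical illustrations, not existing chart options:

```yaml
# Hypothetical values toggle; key names are illustrative only.
global:
  useServiceDns: true          # skip ingress/custom-domain wiring
  # hlf_domain: example.com    # would become optional
peer:
  # <service name>.<namespace> form, resolvable by cluster DNS
  externalEndpoint: peer0.blockchain-ns:7051
```

With this set, the charts would emit service-DNS hosts into CSRs and endpoint variables instead of the HLF domain.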

Fabric-orderer feature request

  • An option to override the command and args, in case some troubleshooting is needed.
  • An option to choose any hlf_domain for the orderer in the same release. If not provided, it should default to Values.hlf_domain.

Affected version: fabric-orderer 1.1.0

Optimize Config Org job

Revisit and optimise the new-org-addition job, and check whether the new org's credentials are actually required when the channel admin adds the org.

CC Container or pod service not launching (dind)

After following the documentation, the chaincode container was not able to start after the commit procedure, and I am stuck troubleshooting the issue.
Could you share/upload the Dockerfile for the dind image that is used in the peers?
Do I need to modify core.yaml for the external chaincode container instantiation?

Ref error:

Error: endorsement failure during invoke. response: status:500 message:"error in simulation: failed to execute transaction 70fbfded9b3aab3c5d9fff59b489375e8c31cd0050f6f33a1bc592832bc95f4c: could not launch chaincode mycc:15aeafc10114a909dcfb5963d50b5a5b371fdf2f9a71afbd966f7896dc92e770: error starting container: error starting container: API error (400): failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: \"chaincode\": executable file not found in $PATH: unknown"

No PVC is getting provisioned for peer certs when require_certs_dir_persistence is true

When .Values.global.require_certs_dir_persistence is true, the peer StatefulSet is supposed to come up with a volumeClaimTemplate. But the chart renders both the PVC claim and an emptyDir volume; when deployed, only the emptyDir is created and the cert PVC is ignored.

Expected:

When require_certs_dir_persistence is set to true, the chart should not create an emptyDir; instead, a PVC should be provisioned.
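A minimal sketch of the either/or conditional the template presumably needs (the volume name, sizes, and surrounding indentation are illustrative, not the actual chart source):

```yaml
# Hypothetical excerpt from the peer StatefulSet template: render either the
# volumeClaimTemplate or the emptyDir, never both.
{{- if .Values.global.require_certs_dir_persistence }}
volumeClaimTemplates:
  - metadata:
      name: certs
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
{{- else }}
volumes:
  - name: certs
    emptyDir: {}
{{- end }}
```

The key point is that the two volume sources are mutually exclusive branches, so the emptyDir can no longer shadow the PVC.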

[DX] Adding automated releases and release notes

Description

DX (Developer Experience)

Publishing releases (bumping versions) and release notes are super important for any OSS project, and we can do that automatically based on the pull requests that are merged.
Here's what I propose: we can use Release Drafter, a popular open-source action that automatically creates draft releases for you, along with proper semantic versioning.


How does it work?

  • We can add a .github/workflows/release-drafter.yml which will be responsible for drafting your releases based on the branches.
  • Then, if you want well-formatted releases, you can add a .github/release-drafter.yml containing a template file; it decides the versioning based on labels such as bug, feature, and so on.
  • It also helps format the labels with custom titles of your choice.

Workflow and Demo

If you wish for a demo, head over to Milan's Releases where we draft a release along with proper release notes as soon as we squash-merge a change to the beta branch. We publish a release every week when we merge our beta branch to the main branch.

It is fully based on the labels I add to the PR: if I add bug, it is considered a patch; feature goes under minor changes, and so on. Learn more about semantic versioning here.

Screenshots

This is what draft releases look like:

image

This is how published releases look:

image

Checklist

  • I have checked the existing issues

Orderer addition job issues

  • The job should exit immediately if any step fails.
  • A new check should be added at the very beginning of the job to fetch the config blocks for all listed channels and check whether an entry for this new orderer already exists. If an entry exists, the job should not attempt anything; it should instead print a message asking the user to manually remove the orderer entry from the channel and re-run.
  • Add strict validation and error handling for the TLS cert tar generation and the genesis block upload to the filestore, because if the TLS certs are lost there is no way to recover them.
  • While adding an orderer to multiple channels, one channel may succeed and another may fail. In that case, re-running the same job will create another set of key pairs and update the second channel, so the same orderer would end up with two different sets of TLS cert entries in two channels, which SHOULD NOT happen. The job should handle this as well.

Affected version: Fabric-ops 1.1.0

[Bug] Orderer servicemonitor adds multiple targets to Prometheus

The orderer ServiceMonitor adds multiple targets to Prometheus because the port name is not specified in the ServiceMonitor, so it considers both orderer services (the operations service and the orderer service) as targets. As the orderer service doesn't expose any metrics, Prometheus perpetually alerts that the target is down.
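The usual fix is to name the metrics port explicitly in the ServiceMonitor endpoint. A sketch, where the selector label and the port name "operations" are assumptions about the chart's service definitions:

```yaml
# Hypothetical ServiceMonitor scraping only the named operations port.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: orderer
spec:
  selector:
    matchLabels:
      app: orderer         # placeholder label
  endpoints:
    - port: operations     # match the service's named metrics port only
      path: /metrics
```

Because the endpoint references a named port, the plain orderer gRPC service is no longer picked up as a scrape target.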

NodePort service

Do I need to specify a new service to enable the NodePorts? When I tried to connect to the endpoint with the curl command, it couldn't connect to the pod, although my ingress server is up and running. Do I need to change the NodePort service in the services.yaml file? For establishing this connection, the ingress service should be exposed by two NodePorts for ports 80/TCP and 443/TCP, e.g. NodePort 30000 => 443/TCP (ingress service).

If I don't establish the NodePort, it does not connect to the domain using the curl command as described in the Falcon docs.
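For reference, a NodePort service exposing an ingress controller on fixed node ports might look like this; the selector and port numbers are placeholders for whatever ingress controller is deployed:

```yaml
# Hypothetical NodePort service in front of the ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nodeport
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # placeholder selector
  ports:
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30000   # e.g. curl https://mydomain:30000 from outside
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30001
```

With such a service in place, curl against <node-ip>:30000 reaches the ingress controller, which then routes by SNI/host to the Fabric components.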
