
von's Introduction

License: CC BY 4.0

VON

The Verifiable Organizations Network (VON)

The goal of the Verifiable Organizations Network (VON) project is to explore the design of an open, unified and trusted network of organizational data for use by people and services across British Columbia. The network can reduce the effort required of people using services that need data about their organization, since that data is readily available from trusted sources.

VON offers government digital service providers the possibility of dramatically improving their users' service experience. This is accomplished through open and simple access to authenticated organizational data, so users don't need to find and re-enter data the government already holds. Data is simply shared when it's open, or shared with authorized consent.

The most up-to-date information is at the project website, vonx.io.

Objectives

The overall goal of the project is to understand, design and de-risk the business and technology elements of the proposed Verifiable Organizations Network concept. One key objective of the work is to design and deliver a proof of concept reference Verifiable Organizations Network implementation.

Success includes creating the business and technological conditions for the creation of a unified, open and trusted network of organizational data. Such a network can offer the following kinds of benefits to its members.

Organizational data will be

  • available to users so they won't need to find and re-enter it
  • shared when it's open, or otherwise shared with consent
  • available as a data platform to foster innovation

Government digital services will have

  • the ability to simplify their users' experience through access to a network of unified data
  • decreased time to market and decreased costs as they won't have to re-create costly capabilities related to data collection and management

People representing organizations will

  • experience less friction as they interact with government digital services
  • be in control of their digital relationship with government

Government will gain

  • experience-based knowledge of distributed ledger, graph database and related technologies
  • improved sophistication as a consumer of technology
  • improved capacity as a regulator of the digital economy
  • insight into the operations of the economy through data

Documentation and Other Works in Progress

As we build out our understanding of what the Verifiable Organizations Network could and should be, we are sharing the work products via a Google Drive (https://drive.google.com/open?id=0B4DUXk_qFFhvb0otNlNHMVNvUzg). You are free to browse the documents and contact us if you have comments, contributions or questions. A GitHub Issue would be fine, as would GitHub pull requests to the VON repo, or comments in the Google Docs referenced above.

There is also another GitHub repo where we are working on a specific component of the VON codenamed "TheOrgBook" (now called "OrgBook"). You can read about it in the Google Docs above and check out its repo: https://github.com/bcgov/TheOrgBook.

About BC DevEx and Devops

The BC Developers' Exchange and DevOps Branch (BC DevEx and DevOps) is growing an ecosystem of co-creation, commercialization and rapid adoption of innovation between British Columbia's technology industry and the B.C. public sector.

It aims to foster growth in the British Columbia (B.C.) technology industry and the commercialization of its products and services by enabling:

  • BC's technology industry to use the digital resources (code, data, application programming interfaces) of the B.C. public sector in their own commercial developments.
  • BC's public sector organizations to become 'first customers' of B.C.'s technology industry by rapidly adopting industry-developed innovations for service and operational improvement.
  • BC's technology industry and B.C.'s public sector to collaborate, co-create and co-develop new products and services, yielding export-based market opportunities for industry and service improvements for the public sector.

The BC DevEx and DevOps team is creating a lightweight set of enabling policies, processes and technical artifacts. The current releases of these are available at:

The BC DevEx and DevOps team uses lean-startup theory and has adopted the agile methodology. Much of the team's work is being done in the open, on the Internet. As the BC DevEx and DevOps team gains exposure to the needs of the BC Public Service, it is recognizing the opportunity to assist in the design and delivery of reusable technical artifacts. These could take a variety of forms, including shared open code, shared libraries, RESTful microservices or similar reusable technical artifacts. The BC DevEx and DevOps technical artifacts, as well as the tools and techniques the team is using, are being evolved in a dynamic, learn-as-you-go approach.

In line with the BC DevEx and DevOps goals, the Verifiable Organizations Network is a concept aimed at exploring the business and technology approaches needed to solve the long-standing problem of simplifying business users' digital experience with government services.

von's People

Contributors

andrewwhitehead, carolhoward, darrellodonnell, esune, ianco, infominer33, jljordan42, jtcchan, lmullane, nrempel, peacekeeper, repo-mountie[bot], swcurran, wadebarnes


von's Issues

indy.error.IndyError: ErrorCode.CommonInvalidStructure

This issue was encountered during load testing; see #20.

Summary from @ianco:

ErrorLog3.txt - the error is thrown because the sdk can't de-serialize the claim into json. Possibilities are:

  • the claim is getting over-written in memory (a Python threading problem)
  • something is getting over-written in the wallet (the wallet can't handle multiple concurrent claim requests for the same issuer)

WARNING 2018-04-25 21:54:26,184 libindy 27 140010019198720 _indy_loop_callback: Function returned error 113
--- Logging error ---
Traceback (most recent call last):
  File "/opt/app-root/src/api/claimProcesser.py", line 284, in __StoreClaim
    await holder.store_claim(claim)
  File "/opt/app-root/src/.local/lib/python3.5/site-packages/von_agent/agents.py", line 1139, in store_claim
    None)  # rev_reg_json - TODO: revocation
  File "/opt/app-root/src/.local/lib/python3.5/site-packages/indy/anoncreds.py", line 472, in prover_store_claim
    prover_store_claim.cb)
  File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
    future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
indy.error.IndyError: ErrorCode.CommonInvalidStructure

Note for the above - the claim offer and claim request meta-data are stored in the holder's wallet (not a virtual wallet, because we can't determine the legal entity id), so there is only one per issuer. However, the error is thrown on the de-serialization of the claim (to json), not during any crypto validation.
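If the first possibility (a Python threading problem) is the cause, one mitigation is to serialize claim storage. A minimal sketch, assuming a shared holder agent and asyncio (the names are illustrative, not existing code):

    import asyncio

    # Hedged sketch: serialize claim storage behind a lock so concurrent
    # requests cannot interleave writes to the shared wallet. The `holder`
    # agent and `claim` payload are assumed from the surrounding service.
    _store_lock = asyncio.Lock()

    async def store_claim_safely(holder, claim):
        async with _store_lock:
            # Only one coroutine at a time reaches the wallet here.
            await holder.store_claim(claim)

This trades throughput for safety; if the wallet itself is the bottleneck, the agent-pool idea discussed in the CommonInvalidState issue below may be preferable.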

Sprint 18.8 (17) DickyDoo Fish

dace fish

This is a class of fish that is not of interest to sport or commercial fisheries.

Here is an example of research being done on these fish:

2010-2011 - Instream flow needs for the federally endangered Nooksack Dace (Department of Fisheries and Oceans SARA Recovery Funding, Habitat Conservation Trust Fund; $80,000 over 2 years)

Nooksack dace are a federally listed species endemic to a small number of streams in the lower Fraser Valley. Although critical habitat has been defined for this species, summer low flows have been identified as a threat to species persistence and recovery. The objectives of this project are to quantify the relationship between flow and dace abundance, with the goal of identifying minimum instream flow needs and establishing guidelines for Nooksack dace habitat management and recovery.

An Ontario analogue of this type of fish might be http://www.dfo-mpo.gc.ca/species-especes/profiles-profils/sand-darter-dard-sable-o-eng.html

Automate `von-image` builds and publication into our official image repository

The von-image source repository is defined here: von-image

We have been, and still are, using Andrew's Docker Hub repository for some of the development images: andrewwhitehead/von-image

The main images have been migrated over to what is to become our official repository: bcgovimages/von-image

Options

1. Scripted pipeline process in OpenShift/Jenkins

At first this appeared to be the best approach: a scripted process dynamically building and publishing images based on the content of, and updates to, the image source repository. There is even a Docker Pipeline Plug-in that would make building and publishing the images easy.

This turned out to be infeasible. Although the docker client can easily be installed dynamically, the build process still requires access to a running instance of the docker daemon. In the container world this is typically done by mounting the host's /var/run/docker.sock into the running container. Unfortunately, this (apparently) requires privileged execution, and in OpenShift it requires a hostPath mount, a privileged operation that is blocked for user containers.

FYI - The OpenShift Docker build container uses the /var/run/docker.sock mounting approach and runs as a privileged container under the close scrutiny and control of OpenShift.

2. OpenShift Pipeline process using Build Configurations

Managing the builds with static build configurations alone is not desirable, as we have an ever-increasing set of images we're building and publishing. A more dynamic approach is required.

Build configurations allow you to define docker build arguments that get injected into the docker build process, similar to how make_image.py injects build arguments into the docker build to control which versions of the dependencies and libraries get included in the resulting image.

If it's possible to dynamically inject these arguments into the build via pipeline scripts (as it is for environment variables), this would allow us to have a single generic build configuration capable of building all of our various image versions. Top-level control would be via a Jenkins pipeline script containing the logic to inject the appropriate arguments into the builds to produce the desired images.

The pipeline would be triggered via web-hook. The process would require a set of Docker Hub credentials that allow images to be pushed into the repository.
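To illustrate the idea, here is a hedged sketch of a pipeline step shelling out to oc start-build with injected build arguments; the build configuration name ("von-image-build") and the argument names are hypothetical, not existing configuration:

    import subprocess

    # Hedged sketch of a pipeline step: trigger a generic BuildConfig and
    # inject docker build arguments to select dependency versions. The
    # BuildConfig name and argument names are illustrative only.
    def start_image_build(python_version, indy_sdk_version):
        subprocess.run(
            ["oc", "start-build", "von-image-build",
             "--build-arg", "python_version=" + python_version,
             "--build-arg", "indy_sdk_version=" + indy_sdk_version,
             "--follow"],
            check=True)

    start_image_build("3.6.3", "1.6.1")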

3. Docker Hub Automated Builds

These are exceptionally rudimentary builds. They require a static Dockerfile reference which is linked to a static output tag. Changes to the GitHub repository trigger all of the builds.
The downside to this approach is maintainability and scalability. A static Dockerfile must be defined for each image along with a build entry in the Docker Hub project. It's easy enough to generate the Dockerfiles (Andrew has already added such support to the scripts), but each build must be manually configured.

indy.error.IndyError: ErrorCode.CommonInvalidState

This issue was encountered during load testing; see #20.

Summary from @ianco:

Re: CommonInvalidState - this seems to be related to the _seed_to_did() calculation, where we create a throw-away wallet just to convert the provided seed to a did. (There is no public sdk api to fetch this value once it is saved, and no public method to do the conversion, other than to create the did in the wallet.)

I suggest we clean up the code to do the did calculation once, on startup, and then we can remove the "per request" calculation (a sketch follows the suggestions below).

At the same time I suggest making sure we create the wallet, master secret, etc. on startup and not per request. If we can reduce the redundant "per request" code we might reduce these kinds of errors.

Another observation - the hack I put in to fool indy-sdk into letting us open the same wallet multiple times is also fooling von-agent into trying to create the wallet multiple times. In the wallet this is a "no-op", but in von-agent it's triggering the seed2did code on every request.

I suggest either:

  • A single agent and/or wallet instance, shared by all the threads (we would need to verify that everything is thread safe).
  • Or a "pool" of agents and/or wallets, created on startup, to eliminate the per-request logic.

VON-Anchor: Add support for new "Get Credentials for Proof" process

@swcurran commented on Wed Jul 18 2018

In releases of Indy up through 1.5, the "get credentials for Proof" process took a Proof Request as an argument and returned a structure containing an array of candidate credentials per claim that could be used for constructing the proof.

In Indy 1.6 it appears that model has changed to give the caller more control. In particular, the initial call to get credentials for a proof request returns a search handle for the wallet, not the credentials; the caller then uses subsequent calls to fetch credentials from the search handle for use in the Proof. I think there is a final call that creates the proof from that.

This task is to figure out how that is supposed to work, and add support for that model in VON-Anchor. This will subsequently be used to solve the TOB "many credentials of the same type" issue.
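A rough sketch of the new Indy 1.6 flow using the indy-sdk Python wrapper (the referent name and batch size are illustrative):

    import json
    from indy import anoncreds

    async def fetch_candidates(wallet_handle, proof_request_json):
        # Open a wallet search for the proof request; this returns a
        # handle, not the credentials themselves (Indy 1.6 model).
        search_handle = await anoncreds.prover_search_credentials_for_proof_req(
            wallet_handle, proof_request_json, None)
        try:
            # Fetch candidate credentials for one referent in batches.
            creds_json = await anoncreds.prover_fetch_credentials_for_proof_req(
                search_handle, "attr1_referent", 10)
            return json.loads(creds_json)
        finally:
            await anoncreds.prover_close_credentials_search_for_proof_req(
                search_handle)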

Add documentation site and getting started with VON guide

Add a getting started site to provide a roadmap for different types of users coming to VON, including:

  • VON Developers
  • TheOrgBook operators, including foundational credential issuers
  • Permit and License Issuers - in the Verifiable Credentials world - Issuers/Verifiers

Consider user stories and approaches to bootstrapping Organizations controlling their own wallets

We need to think about how an Organization can bootstrap its own wallet based on the actions of a person acting on its behalf. Credentials issued to an Organization should be stored in a wallet controlled by the Organization, and there should be mechanisms that support a Service requesting Verifiable Credentials from the Organization itself - not from the person with whom it is currently interacting. How will the bootstrapping occur? How will the Organizational wallet know whether it should reply to a Proof Request?

For small business, that Organizational Wallet will probably be held by the Owner. Will the Organizational Wallet be separate from the Owner's wallet? Can that mechanism be used to enable a transition to an Organizational Wallet?

So many questions. Now for some tentative answers...

Add support for generating RTD-style documentation for repos

Add support for https://readthedocs.org/ style documentation using sphinx and extensions. Via experiments we are planning the following approach:

[ ] Dockerfile to build the docker image with the necessary/desired extensions - start from https://github.com/swcurran/docker-sphinx
[ ] Build process to support pushing the image to the bcgovimages home - currently hub.docker.com
[ ] Create an "rtdgen" command in the openshift tools repo - a "manage"-type script with various commands for invoking the docker image to run "typical" commands.
[ ] Add a how-to document to the openshift tools repo (??) or somewhere, describing how to get started.

Once completed (or in parallel), we'll add the pieces for RTD documentation for each repo and a build step to regenerate the documentation on each commit - perhaps hosting it on OpenShift, perhaps on readthedocs.org.
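For reference, a minimal sketch of the kind of Sphinx conf.py this could produce; the extension choices are assumptions, not decisions:

    # conf.py - minimal Sphinx configuration sketch for RTD-style docs.
    # The extension list is illustrative; actual choices are TBD.
    project = 'von'
    extensions = [
        'sphinx.ext.autodoc',    # pull docstrings from the code
        'sphinx.ext.viewcode',   # link documentation pages to source
        'recommonmark',          # allow Markdown sources alongside RST
    ]
    html_theme = 'sphinx_rtd_theme'  # the readthedocs.org look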

Create up-to-date infrastructure model diagram

With the number of people collaborating on these projects, it would be beneficial to have an up-to-date infrastructure diagram of how BC implements the various components in its infrastructure - especially relating to how BC Registries data is transformed and sent to von-x to enter the network.

What do you think @swcurran @jljordan42?

Update von-image to at least 1.4 and ideally simplify the process for update/test/releases

von-image is proving valuable to us and others, but it is not keeping pace with updates to the products upon which it is based - Indy-SDK, Indy-node and von-agent. Please update von-image to at least Indy-SDK 1.4 and related projects. As discussed, see if you can come up with a relatively easy way to test the embedded products in the image, so that future updates are made easier - ideally self-contained.

Define a simple backup strategy for the VON/Any Ghost-based Blog site.

It's probably sufficient to just make periodic (e.g. daily) backups so that we have a way to survive corruption of the database. Retain some number (30?) of copies of the backup on gluster storage, after confirming that ESIT treats the gluster storage as backed up under Tier 2 rules - e.g. incremental daily, weekly and monthly backups.

Worst case, if ESIT is not backing up the data (which we really need to know...), come up with a way to periodically export the database to some other place outside of OpenShift.
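A minimal sketch of the dump-and-prune logic; the paths and the dump command are assumptions, since Ghost's actual export mechanism here is TBD:

    import subprocess
    from datetime import datetime
    from pathlib import Path

    BACKUP_DIR = Path("/backups/ghost")  # hypothetical gluster mount
    RETAIN = 30                          # keep the most recent 30 copies

    def backup_and_prune():
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.utcnow().strftime("%Y%m%d%H%M%S")
        target = BACKUP_DIR / ("ghost-%s.sql" % stamp)
        # Hypothetical dump command; substitute the blog's actual DB export.
        with target.open("wb") as out:
            subprocess.run(["mysqldump", "ghost"], stdout=out, check=True)
        # Drop the oldest backups beyond the retention count; the timestamped
        # names sort lexicographically, so sorted order is chronological.
        backups = sorted(BACKUP_DIR.glob("ghost-*.sql"))
        for old in backups[:-RETAIN]:
            old.unlink()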

Develop a Docker Hub-hosted Indy-SDK/Node and VON image to use as a base for all projects

Create an optimized image that can be used as the basis for all images that need access to HL-Indy (SDK and Node) and VON (Agent and X) capabilities.

A separate pipeline will be created to drive the build of the image and pushing to Docker Hub (or some other public repository).

The existing pipelines that require Indy/VON will be updated to use this image as the base.

Add support in VON-Anchor for tagging Credentials and searching tags.

This is an epic covering Credential tagging and searching. This ticket describes the overall goal of the initial feature, based on features added to Indy-SDK for the 1.6 release. We need that support for the initial rollouts of TOB instances (BC, Ontario and SRI).

The code is in master (or at least the "master-625" branch), per this comment from JIRA IS-790:

    [ https://jira.hyperledger.org/browse/IS-790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=47447#comment-47447 ] 

Artem Ivanov edited comment on IS-790 at 7/18/18 5:37 AM:
----------------------------------------------------------

Implemented in PR: https://github.com/hyperledger/indy-sdk/pull/940
Build version: master-625
Changes:
  • `indy_prover_store_credential` API call was updated to create tags for a stored credential object.
  • Added two chains of APIs related to credential search that allow fetching records in batches:
    • Simple credentials search: `indy_prover_search_credentials` - `indy_prover_fetch_credentials` - `indy_prover_close_credentials_search`
    • Search credentials for proof request: `indy_prover_search_credentials_for_proof_req` - `indy_prover_fetch_credentials_for_proof_req` - `indy_prover_close_credentials_search_for_proof_req`
  • All search functions support WQL queries.



What we have tentatively planned for the use of tags and wallet filtering in TOB is as follows. Suggestions/improvements welcome:

Storing Credentials:

  1. TOB receives a credential to be stored in its wallet.
  2. VON-Anchor gives it a unique Credential ID (mechanism - TBD - perhaps there already?) and on storing the credential in the wallet, adds a CredentialID tag to the Credential in the wallet.
  3. VON-Anchor returns to its caller (TOB) the CredentialID. TOB stores the CredentialID in the search database.

For Proofs:

  1. A Verifier figures out the Credential ID(s) needed for a Proof Request using the TOB Search API.
  2. The Verifier inserts them into the Proof Request - format TBD.
  3. TOB/VON-X receives the Proof Request, extracts the Credential IDs (if any) and creates a copy of the Proof Request with those Credential IDs removed (i.e. it becomes a Proof Request as expected by anoncreds).
  4. TOB/VON-X submits to VON-Anchor the Proof Request plus Wallet Query Language (WQL) JSON for the exact Credential ID(s) to be used for the Proof. VON-Anchor needs to be updated to support this (a sketch follows this list).
  5. VON-Anchor uses the (newly extended) anoncreds searching with WQL support to create the Proof using the Proof Request JSON and WQL JSON.
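
To make step 4 concrete, here is a hedged sketch of the WQL that TOB/VON-X might pass alongside the Proof Request; the "credential_id" tag name is our proposed convention, not an existing Indy tag:

    import json

    # Hypothetical WQL filter selecting the exact credentials to use in
    # the proof. "credential_id" is the tag we propose VON-Anchor adds
    # when storing a credential; "$in" is standard WQL set membership.
    wql = {
        "credential_id": {"$in": ["cred-id-123", "cred-id-456"]},
    }
    extra_query_json = json.dumps(wql)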

The first thing that is needed is to look at what was implemented for IS-790 to make sure that:

  1. We can add the tags we want to a credential via VON-Anchor using the IS-790 implementation.
  2. We can submit WQL on a Proof via VON-Anchor using the IS-790 implementation.

Note that adding a "CredentialID" is a specific use that VON-X will make of VON-Anchor as directed by TOB. Other VON-X Overlords may want to put other tags on credentials as they are stored, and we'd like that supported as well.

Upgrade TEST environments

The time has come to promote our configurations and code to the TEST environment.

Affected:

VON | Review repos and OpenShift projects and reorganize as necessary and define homes for select repos

We'll start with a whiteboard (of some type) session that shows what we are doing today, and then move from there to future plans.

Google Doc about this is here, with action items that stem from this: https://drive.google.com/open?id=1ITKyjMpwzp5NdOON2xP_3xsF4AuyPXK1mOIaQMqeVuU

A meeting was held on April 19th to discuss and evolve the layout; for the most part, we have consensus. Defining the relationships is a bit tricky, but we're close.

Refactor the TOB Database and API

Redesign and implement a new design for the TOB Data Model, and revise the TOB API to align with the new Data Model. The goal of the new Data Model is to be less tied to Registries Credentials and more aligned with search - Names, Locations, Credential Types, People and Contacts.

VON-Anchor: Add support for WQL from caller to be used during the anoncreds call to get credentials for a proof request

Add the ability for the caller requesting the creation of a Proof to include WQL in the call, so that they can filter to the specific credentials they want used in constructing the Proof.

Some design may be needed to support the situation where multiple credentials will contribute to the proof. An evaluation of the Indy-SDK functionality is required to determine the best approach.
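One plausible shape for this, assuming the Indy-SDK 1.6 search API (the referent key and tag name are illustrative):

    import json
    from indy import anoncreds

    async def search_with_wql(wallet_handle, proof_request_json, caller_wql):
        # extra_query_json maps proof-request referents to WQL filters;
        # here we filter one referent by our proposed credential_id tag.
        extra_query_json = json.dumps({
            "attr1_referent": caller_wql,  # e.g. {"credential_id": "..."}
        })
        return await anoncreds.prover_search_credentials_for_proof_req(
            wallet_handle, proof_request_json, extra_query_json)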

Pass through some readmes in (at least) TOB, von-network, permitify, permitify-x to get them up to date

A bunch of the repo documentation for running things is now outdated as we've evolved. The repos contain cross-repo references that have changed, making steps obsolete. For example, the quick start guide in TOB talks about permitify, which is deprecated. We/I need to take a pass through those readmes and other MD files to make them reference the dependent pieces without going into them. E.g. von-network is only about itself. TOB is about itself and references von-network. Permitify is about itself, references TOB and von-network. BC-Reg is about itself, references TOB and von-network.

As well, a couple of the utilities might need to be updated - e.g. a script to create only the DIDs for a component. I suggest we also add documentation steps (echo, pause) in the scripts that remind users which dependencies should be running (e.g. TOB notifies the user that von-network should be running). Those checks could be automated, of course.

A good thing to keep in mind moving forward: guard against over-documenting, at the risk of documents becoming obsolete and taking people down the wrong rabbit hole. I think indy itself is suffering from that - way too many people struggling to get started because of too many options (vagrant, native, docker, etc.) and changing steps, versus having a single, maintained way to get started. It's a tricky challenge, made a little easier if we're always thinking that way.

indy.error.IndyError: ErrorCode.PoolLedgerInvalidPoolHandle

This issue was encountered during load testing; see #20.

Summary from @ianco:

Re: PoolLedgerInvalidPoolHandle - no idea, other than a potential threading issue. There is a potential race condition between creating and opening a wallet, if two threads are sharing the same object. The pool handle is created when the wallet is created, and then used when the wallet is opened (or when there are any sdk requests, such as saving a claim).

VON-Anchor: Add support for requesting tags be added to a credential to be stored

Enable a caller with a Credential to be stored in the wallet to specify what tags should be associated with the Credential. The tags might be of either type supported by the Indy-SDK wallet implementation: encrypted tags (supporting "=" and "IN" searching) and unencrypted tags (supporting richer searching).

The mechanism should support the caller specifying the tag name and value. In theory, the caller could instead specify a claim from the credential, and VON-Anchor could use that as the tag name and take the value from the credential, but that is not sufficient, so I'd say leave it all up to the caller to supply the name/value pair.
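For illustration, a tags structure of the kind the caller might pass; the tag names are hypothetical, and in the Indy-SDK wallet a "~" prefix marks a tag as unencrypted, enabling richer query operators:

    import json

    # Hypothetical caller-supplied tags for a credential being stored.
    # Plain names are stored encrypted (supporting "=" and "$in" searches);
    # names prefixed with "~" are stored unencrypted (supporting richer
    # operators such as "$like" and range comparisons).
    tags = {
        "credential_id": "cred-id-123",  # encrypted tag
        "~issue_date": "2018-07-18",     # unencrypted tag
    }
    tags_json = json.dumps(tags)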

VON-Anchor - Enable tagging credentials with a Credential ID

Provide a capability (perhaps optional, perhaps automatic - your thoughts?) to generate a CredentialID for all credentials put into a wallet, tag the Credential with the ID, and return the ID of the newly stored credential to the caller.
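Notably, the Indy-SDK's prover_store_credential already accepts an optional caller-supplied identifier, which could serve as the CredentialID. A minimal sketch (surrounding setup elided; the function name is illustrative):

    import uuid
    from indy import anoncreds

    async def store_with_credential_id(wallet_handle, cred_req_metadata_json,
                                       cred_json, cred_def_json):
        # Mint an ID up front and pass it as the optional cred_id argument;
        # prover_store_credential returns the identifier it stored under.
        cred_id = str(uuid.uuid4())
        stored_id = await anoncreds.prover_store_credential(
            wallet_handle, cred_id, cred_req_metadata_json,
            cred_json, cred_def_json, None)  # no revocation registry
        return stored_id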

Implement DID-Auth for Services accessing TOB

Use the approach from the @peacekeeper examples to implement DID-Auth between the VON-X services and TheOrgBook. Within TOB, use the Django administration capabilities to manage the authenticated users that represent the VON-X Services.

Once implemented, protect some of the TOB API endpoints using Django admin permissions associated with the Service's authenticated users.
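As a starting point, a minimal sketch of how that endpoint protection might look with Django REST Framework permissions; the class and group names are hypothetical, and the DID-Auth-to-user mapping is assumed to happen upstream:

    from rest_framework.permissions import BasePermission

    class IsVerifiedService(BasePermission):
        """Allow access only to DID-authenticated VON-X service users.

        Assumes the DID-Auth layer has already mapped the verified DID to
        a Django user placed in a hypothetical 'von-x-services' group.
        """

        def has_permission(self, request, view):
            user = request.user
            return (user.is_authenticated
                    and user.groups.filter(name="von-x-services").exists())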

Please create a list of the public endpoints/API of VON-X

We'd like to use that list in discussions with other organizations about what their agents are doing and how they communicate. We have real, live, working examples of code, and that's a big benefit to others who have ideas. Ideally the list could be referenced on readthedocs, so readers can find the docs for each endpoint.
