
bcgov / theorgbook

A public repository of verifiable claims about organizations. A key component of the Verifiable Organization Network.

Home Page: http://von.pathfinder.gov.bc.ca

License: Apache License 2.0

Python 41.26% Shell 10.45% HTML 14.63% TypeScript 26.55% JavaScript 2.09% Dockerfile 0.33% SCSS 4.69%
von verifiable-organizations-network hyperledger-indy hyperledger verifiable-credentials citz

theorgbook's Introduction


TheOrgBook is now Aries Verifiable Credential Registry

OrgBook BC is a deployment of an underlying software component called a Verifiable Credential Registry (VCR). A VCR is a more general component that can drive OrgBooks (repositories of information about registered organizations) and other repositories of verifiable information across a variety of use cases, including education, government services, public works projects, and many more. The first generation of OrgBook BC was built on top of the software whose source code is in this repository. The current iteration of OrgBook BC is powered by the Aries Verifiable Credential Registry (Aries VCR). Whereas TheOrgBook was implemented using custom protocols defined locally by the Verifiable Organizations Network (VON) team here in BC, Aries VCR is based on Hyperledger Aries protocols defined by a global community at the Linux Foundation.

If you are interested in deploying your own OrgBook (perhaps for another jurisdiction), or learning about the internals of Verifiable Credential Registries, please start with the latest and greatest code in the Aries VCR open source repository.

If you are just interested in running the Greenlight demo to get a feel for how OrgBooks work at the user interface level, feel free to use this repository; instructions are below. We recommend that you don't build on top of the code in this repo. Stick to Aries VCR.

TheOrgBook

TheOrgBook is a Credential Registry of verifiable credentials about entities. A public instance of TheOrgBook, such as BC's OrgBook, contains verifiable credentials about organizations (incorporations, professionals, etc.) issued by trusted public services such as Corporate Registries, regulatory agencies, permitting services, licencing services, procurement services, and the like.

The Verifiable Organizations Network (VON) envisions the possibility of a number of public repositories of Verifiable Claims as a way of bootstrapping a trusted digital ecosystem.

TheOrgBook is being developed as part of the Verifiable Organizations Network (VON). For more information on VON see https://vonx.io. Even better, join in with what we are doing and contribute to VON and the Indy community.

Quick Start Guide

The best way to get started with a new project is by working with a running instance. The VON Quick Start Guide will get you started with an Indy Network, an instance of TheOrgBook (this repo) and an instance of GreenLight running on your local machine in Docker. Give it a try!

OrgBook provides a set of RESTful web services you can use to query data from your third-party application; an introduction to the use of these APIs is available here.

Running TheOrgBook on OpenShift

To deploy TheOrgBook on a local instance of OpenShift, refer to Running TheOrgBook Locally on OpenShift. These instructions, apart from the steps that are specific to setting up your local environment, can also be used to get the project deployed to a production OpenShift environment.

Running TheOrgBook on Docker

The project can also be run locally using Docker and Docker Compose. Refer to Running TheOrgBook with Docker Compose for instructions.

Resetting the Ledger

For information on the process of resetting the ledger and wallets refer to the Resetting the Ledger and Wallets documentation.

theorgbook's People

Contributors

andrewwhitehead, asanchezr, dependabot[bot], esune, ianco, jljordan42, mitovskaol, nrempel, repo-mountie[bot], seanadipose, swcurran, wadebarnes, weiiv


theorgbook's Issues

Enhancements to Hyperledger Indy SDK (wallet) and implementation of a large scale claims store

Value: $50,000.00 · Closes: Wednesday, January 31, 2018 · Location: Victoria (in-person work NOT required)

Opportunity Description

This is part of an effort to understand the applicability of emerging technologies to solving some challenge problems in the public sector. We are working in the open source community, in particular the Hyperledger Indy community, to explore challenges in the service-to-business domain. We want to find ways to improve the experience of BC businesses in their interactions with government. This work falls into a new area known in the industry as "self-sovereign identity" or decentralized identity.

In the self-sovereign identity (SSI) world, "wallet" is the go-to term for the digital equivalent of the physical place you keep your data. The type of data held in the wallet depends somewhat on the role of the identity in a self-sovereign ecosystem: a claims issuer, holder (aka prover), or verifier (aka inspector). Any particular identity might take on multiple roles and so hold any and all types of SSI data. We expect that for most "typical" wallet use cases, the major implementations will be provided by third parties in the market. For example, there are software vendors working on mobile wallets targeting consumers, whose primary role is as a holder of claims about themselves (they are the subject of the claims). As well, we understand software vendors are working on "Enterprise Agents": applications with wallets that operate primarily as claims issuers, claims verifiers, or (more commonly, we think) both.

TheOrgBook is somewhat different from both of those "typical" models. It is primarily a claims holder, but quite a different one from a consumer-type claims holder: it will only be holding public claims about organizations. Notably, it will hold data at large scale, and the claims that it holds are about many subjects (organizational entities), not about itself. As such, the requirements of its wallet are quite different from those of a typical consumer wallet.

A claims holder wallet (consumer or TheOrgBook) would be expected to contain the following:

  • DIDs (references to decentralized IDs) that make up the wallet owner's identity, and context of those DIDs - e.g. for pair-wise DIDs, the connection to which that DID applies.
  • The DIDs of the owner’s connections (e.g. banks, stores, government services and so on)
  • Verifiable credentials issued to the owner
  • Cryptographic materials - public and private keys associated with your DIDs, and possibly other materials such as, in the Hyperledger Indy implementation, your Master Secret for requesting claims (from issuers) and providing proofs (to verifiers)

Note that the private keys may or may not (depending on the implementation) go in the wallet; they may deliberately be kept separate so that theft of a wallet does not give the thief access to the use of, and all the information in, the wallet.
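The wallet contents listed above can be sketched as a simple data structure. This is an illustrative model only; the names `WalletRecord`, `Wallet`, and their fields are our assumptions, not the Indy SDK's:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class WalletRecord:
    """One verifiable credential held by the wallet, keyed by subject."""
    schema_id: str
    issuer_did: str
    subject_id: str            # unique identifier of the claim subject
    attributes: Dict[str, str]

@dataclass
class Wallet:
    """Sketch of the data a holder wallet is expected to contain."""
    owner_dids: List[str] = field(default_factory=list)       # pair-wise DIDs and context
    connection_dids: List[str] = field(default_factory=list)  # DIDs of the owner's connections
    credentials: List[WalletRecord] = field(default_factory=list)
    master_secret: Optional[str] = None  # may deliberately be stored elsewhere

w = Wallet(owner_dids=["did:sov:abc123"])
w.credentials.append(
    WalletRecord("bcreg:inc:1.0", "did:sov:issuer1", "BC1234567",
                 {"legal_name": "Acme Ltd."})
)
```

For TheOrgBook, the `credentials` list is the part that departs from the consumer model: millions of records, each about a different `subject_id`.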

While we expect the TheOrgBook wallet to hold those same pieces of data for the same operations as a consumer wallet, there are several requirements that are quite different:

  • The volume of claims will be much higher than a typical consumer holder wallet - on the order of millions.
  • The number of claims associated with specific Schemas/Issuers will be much higher than in a typical consumer wallet. For example, in loading the full history of the BC Registry set of "Certificate of Incorporation" and related claims (annual reports, name changes, amalgamations, etc.), there will be millions of claims loaded. In a typical consumer/small-business scenario there would likely be one, or at most a small number of, "Certificate of Incorporation" claims.
  • Claims are about different subjects, with the unique identifier of the claim subject embedded in a field in the claim.

Large Scale Persistence

TheOrgBook currently uses the default Hyperledger Indy wallet implementation based on an encrypted version of SQLite (SQLCipher). To both handle the volume of claims that TheOrgBook will need to support and provide more robust database administration, we want to update the wallet implementation to use (likely) PostgreSQL for persistence. TheOrgBook runs on the BC Government's private Red Hat OpenShift Platform-as-a-Service implementation, and for relational databases PostgreSQL is the preferred choice. Note that if the developers feel a NoSQL-based wallet solution would be better, we would likely go to MongoDB (although Redis is available out of the box with OpenShift).
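As a rough illustration of what a server-side relational wallet store might look like, the sketch below creates a minimal encrypted-item table of the kind a PostgreSQL-backed wallet could use. The table shape and column names are our assumptions, not the Indy wallet's actual schema, and the stdlib sqlite3 driver stands in for PostgreSQL so the sketch runs anywhere:

```python
import sqlite3

# Illustrative wallet item table. SQLCipher encrypts the whole database file;
# a server-side store would instead encrypt each value before insertion.
ddl = """
CREATE TABLE wallet_items (
    id    INTEGER PRIMARY KEY,
    type  TEXT NOT NULL,   -- e.g. 'credential', 'did', 'key'
    name  TEXT NOT NULL,   -- (encrypted) item identifier
    value BLOB NOT NULL,   -- encrypted payload
    UNIQUE (type, name)
);
CREATE INDEX ix_wallet_items_type ON wallet_items (type);
"""

conn = sqlite3.connect(":memory:")  # stand-in for a PostgreSQL connection
conn.executescript(ddl)
conn.execute(
    "INSERT INTO wallet_items (type, name, value) VALUES (?, ?, ?)",
    ("credential", "cred-1", b"\x00encrypted-bytes"),
)
row = conn.execute(
    "SELECT count(*) FROM wallet_items WHERE type = 'credential'"
).fetchone()
```

The index on `type` hints at the scaling concern: with millions of credential rows, every lookup path needs to be resolvable by the database rather than by scanning decrypted items in application code.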

Getting Claims for Proof Requests

The current Hyperledger Indy wallet interface has a call that, given a Proof Request (an array of claim names, each from a possibly different credential associated with one or more schemas and/or issuers), returns all of the credentials in the wallet that could satisfy each claim in the Proof Request. Since, as noted above, TheOrgBook's wallet holds the same claim for (literally) millions of subjects, the default wallet API call would return millions of credentials per Proof Request, likely causing a significant performance issue. To prevent this, we have proposed that a Proof Request be able to include an optional filter condition that lets the call to the wallet filter the credentials of interest based on claim values. In the case of TheOrgBook, the filter condition will usually simply be the unique identifier of the Organization of interest: the subject of the claim. Our proposal for an update to the Hyperledger Indy Proof Request format to support this functionality can be found in the Hyperledger Indy JIRA system (IS-486).

While we think that change could be used to resolve this issue, we are open to other proposals. In addition, we think (but again, could be wrong) that because of the data volumes in TheOrgBook, the wallet implementation will need to do this credential filtering at as low a level as possible - ideally at the database level - to prevent the manipulation of large volumes of data for each Proof Request. There may be challenges with this depending on how the existing wallet implementation stores the data.
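To illustrate the intent of the proposal (this is not the final IS-486 syntax), here is a hypothetical Proof Request carrying a filter, with the filtering shown in Python; a real wallet implementation would push the same predicate down into the database query rather than materializing every credential first:

```python
# Hypothetical Proof Request shape; the "filter" key and its contents are
# illustrative only, standing in for the proposed IS-486 extension.
proof_request = {
    "name": "registration-proof",
    "version": "1.0",
    "requested_attrs": {
        "legal_name": {
            "schema_key": {"name": "incorporation.bc_registries"},
            "filter": {"corp_num": "BC1234567"},  # the subject of the claim
        }
    },
}

def matches(credential: dict, req_attr: dict) -> bool:
    """Apply the filter in Python; a real wallet would do this in SQL."""
    return all(credential.get(k) == v
               for k, v in req_attr.get("filter", {}).items())

credentials = [
    {"corp_num": "BC1234567", "legal_name": "Acme Ltd."},
    {"corp_num": "BC7654321", "legal_name": "Other Inc."},
]
hits = [c for c in credentials
        if matches(c, proof_request["requested_attrs"]["legal_name"])]
```

With the filter applied at the database level, the call returns one credential instead of millions.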

Opportunity

This is an opportunity to work with the Verifiable Organizations Network team to deliver enhancements to both a Hyperledger Project and BC Government projects.

The fixed-price reward is for a potential total of $CDN 50,000* for satisfaction of the Acceptance Criteria below, per this payment schedule:

  • Completion of Phase 1 - $10,000
  • Completion of Phase 2 - $20,000
  • Completion of Phase 3 - $20,000

* The full amount is dependent on an evaluation of outputs at the end of Phase 1, during which the Verifiable Organizations Network team will determine whether or not work will continue into Phases 2 and 3.

Acceptance Criteria

  • Phase 1 - Agree upon the database backend technology, technical approach, implementation details, testing plan, and interaction flow diagrams (for all phases). Develop the materials necessary to share the implementation proposal with the Hyperledger Indy SDK community, share them with the community, and incorporate feedback to increase the potential for the implementation to be accepted as a pull request to the Hyperledger Indy SDK. Outcomes of this phase could impact the activities of Phases 2 and 3.
  • Phase 2 - Implement a large scale wallet solution using the agreed upon technology in Phase 1. The wallet implementation should have the same features as the existing HL-Indy wallet implementation and include a thorough test suite. Test suite should exercise implementation for load and volume of claims issued and stored. Test scenario should exceed 10 million claims of a variety of claim types to be stored. Implementation must also test proof request/response throughputs. Existing users should be able to easily switch to the new implementation by providing a database connection string or similar. Implementation will conform with coding standards of the Hyperledger Indy SDK to minimize effort of integration into main implementation of Indy-SDK. This phase will not be gated by acceptance into main Indy-SDK GitHub repo but must be merged into a forked copy of the main Indy-SDK repo in the BCGov GitHub Organization.
  • Phase 3 - Based on the agreed upon approach in phase 1, implement the ability to filter claims returned by the holder’s wallet at the database level. This should be implemented for the new implementation and the existing implementation. The implemented feature must include a thorough test suite. Test suite should exercise implementation for load and volume of claims proven. More information regarding the requirement is available here: https://jira.hyperledger.org/projects/IS/issues/IS-486. Implementation will conform with coding standards of the Hyperledger Indy SDK to minimize effort of integration into main implementation of Indy-SDK. This phase will not be gated by acceptance into main Indy-SDK GitHub repo but must be merged into a forked copy of the main Indy-SDK repo in the BCGov GitHub Organization.

How to Apply

Go to the Opportunity Page, click the Apply button above and submit your proposal by 16:00 PST on Wednesday, January 31, 2018.

We plan to assign this opportunity by Friday, February 2, 2018 with work to start on Monday, February 5, 2018.

Proposal Evaluation Criteria

  1. Your approach to completing the Acceptance Criteria in a short proposal (1-2 pages) which includes evidence to support the criteria outlined in items 2-6 below. Evidence can include GitHub IDs and projects, for example. (20 points)
  2. Your prior experience contributing to community-based open source projects. (10 points)
  3. Your experience in identity and self-sovereign identity. (20 points)
  4. Your experience in verifiable claims, decentralized identifiers, distributed ledger/blockchain, or other relevant technologies. (30 points)
  5. Your expertise in large-scale services design and implementation. (20 points)
  6. Your ability to satisfy the Acceptance Criteria on or before 31 March 2018. (10 points)

Develop Framework for Event Processing Stages

Event processing framework should:

  • Provide an overall framework and approach to event processing
  • Allow implementor to add stages to the event processing queue
  • Provide management functions, including a UI to monitor event execution

Currently reviewing https://github.com/mara/data-integration. This provides the basic functionality; however, the UI is based on Flask and the Mara application, which is inconsistent with the rest of the VON applications (the majority are Django-based).
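The stage-queue idea can be sketched independently of Mara or Django. All names below are illustrative, not part of any existing VON component:

```python
from typing import Callable, List

class Pipeline:
    """Minimal sketch of a pluggable event-processing framework: each stage
    is a callable that transforms the event and passes it on. A real
    implementation would add queueing, error handling, and a monitoring UI."""

    def __init__(self) -> None:
        self.stages: List[Callable[[dict], dict]] = []

    def add_stage(self, stage: Callable[[dict], dict]) -> "Pipeline":
        """Implementors register stages in processing order."""
        self.stages.append(stage)
        return self

    def process(self, event: dict) -> dict:
        for stage in self.stages:
            event = stage(event)
        return event

pipeline = Pipeline()
pipeline.add_stage(lambda e: {**e, "validated": True})
pipeline.add_stage(lambda e: {**e, "stored": e["validated"]})
result = pipeline.process({"type": "credential-offer"})
```

The management/monitoring requirement would sit on top of this: each stage invocation is a natural unit to record and surface in a UI.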

Issues/Questions/Concerns with recent update to Scripts

Scanning the recent pull request updates to the scripts. I was thinking of asking questions in Slack, but that seemed too limited, so I'm adding them here.

Looking good!

General concern: I'm a little worried there are too many levels of indirection in places on the variables/files involved. I think there is a balance between being DRY and being so abstract as to be confusing. An example: the inclusion of commonFunctions calling loadComponentSettings, which includes settings.sh. It requires the user to dig to figure out what is going on. It would be more obvious just to have the check and load visible in the script. A second example is mentioned below: the genParams code. It's too hard to remember what variable has what value, when, and where.

That aside, assuming these are to be used by anyone other than you and me, there needs to be a map as part of the documentation of the scripts, and especially of the files that can be customized at the project and at the local (.gitignored) level. I was going to do that, but I think with the changes you have made, you need to do it; I'm not sure what is where now.

Code Comments

  • Prefer the consistent ".local" (or some other term) for everything that is .gitignored by these script families; e.g. don't add the _DeploymentConfig.json convention (or, if you do add it, include .local).
  • Remove the message to console when loading .settings and, in general, don't overdo the messages output by the running scripts, especially when underlying actions also generate output. Add/prune messages in testing. Since we can always use the -x (debug) option, they're not needed as much. We still have to know what's going on, but no more than is needed.
  • Personal preference: put the usage function at the very top of every script so it's easy to see with "more" (i.e. nothing before it).
  • Numerous examples of error messages with "!" - tsk, tsk :-)
  • Personal concern: the amount of redirection in genParms is really hard to follow. It's OK if it works, but I'd hate to have to debug it. I struggled with that in the per-component "Config*" scripts.
  • The Postgres "POSTGRESQL_PASSWORD" value should not have any special characters; please remove them until the Schema Spy issue is fixed. It should only be: [a-zA-Z0-9_]{16}

Required modification of new TOB data model

After some discussion this morning, it looks like the originally proposed data model will not be flexible enough to accommodate all issuers.

For example, bcreg-x (https://github.com/ianco/von-bc-registries-agent/blob/master/bcreg-x/config/services.yml) is configured to send 3 credential types:

  • Incorporation (name)
  • Doing Business As (name)
  • Corporate Address (address)

Since the address comes across in a separate credential, we currently have no way of creating a link between the two credentials. We must be able to traverse the relationships in order to look up an address from a name. Currently, they are linked through the credential entity, which, in this case, has separate records for name and address.

The current proposed solution is to add an additional entity called "Topic" which adds some additional hierarchy to the current model.

(Diagram attached: IMG_6649.png)

In addition to source_claim (likely renamed for clarity in this case), the issuer will need to specify topic_claim. The issuer will then send a 'topic' id in every credential. This will allow the issuer to form a relationship between address and name across different credentials.

All data previously stored in the name entity will now exist in topic and name will be removed. This removes the need for the is_legal flag since there is no ambiguity. Additionally, language makes more sense at this level since all credentials for a topic will have the same language.

There are still some questions that remain. Should a topic have an end_date? What happens to all children in the case of an end-dated topic?
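The proposed hierarchy can be sketched with Topic as the linking entity. Names here are illustrative, not the actual TOB model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Credential:
    credential_type: str  # e.g. "incorporation", "doing_business_as", "address"
    attributes: dict

@dataclass
class Topic:
    """Proposed linking entity: every credential carries a topic id, so the
    address credential and the name credential resolve to the same Topic."""
    topic_id: str  # the 'topic' id the issuer sends in every credential
    credentials: List[Credential] = field(default_factory=list)

topic = Topic("BC1234567")
topic.credentials.append(Credential("incorporation", {"legal_name": "Acme Ltd."}))
topic.credentials.append(Credential("address", {"city": "Victoria"}))

def address_for(t: Topic) -> Optional[dict]:
    """Traverse name -> topic -> address: the lookup the flat model could not do."""
    for c in t.credentials:
        if c.credential_type == "address":
            return c.attributes
    return None
```

Because both credentials hang off one Topic, looking up an address from a name becomes a traversal rather than a join through unrelated credential records.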

@swcurran @CyWolf @ianco Can you try and poke holes in this approach? Will it work? Can you think of something better?

TOB V2

As the data model is revised (Issue #25) revise the API.

As with the TOB DB Model change, care must be taken not to break the existing Test, nor to have a long-lived dev branch while this is done. Perhaps a v2 API could be used?

A big part of this API revision is the elimination of many of the current API endpoints; there are WAY too many. A nice-to-have would be a monitor for API usage so that we could see current traffic on all endpoints, easily see the ones not being used at all (and just eliminate them), and see the ones that we want to eliminate but that are being used (and get the callers to change). If we could add this as a permanent feature, future API updates could be made much easier.
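The usage monitor could be as simple as a counter keyed by endpoint. In Django it would naturally live in middleware keyed on the request path; the idea is sketched here with a plain decorator, and all names are hypothetical:

```python
from collections import Counter

endpoint_hits = Counter()

def track_usage(view):
    """Hypothetical usage tracker: count calls per endpoint so unused
    endpoints can be identified and safely removed."""
    def wrapped(*args, **kwargs):
        endpoint_hits[view.__name__] += 1
        return view(*args, **kwargs)
    wrapped.__name__ = view.__name__
    return wrapped

@track_usage
def list_organizations():
    return ["Acme Ltd."]

list_organizations()
list_organizations()

# Endpoints with zero recorded traffic are candidates for elimination.
unused = [name for name in ("legacy_search",) if endpoint_hits[name] == 0]
```

Persisting the counter (rather than keeping it in memory) would turn this into the permanent feature described above.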

We do want to retain a Swagger definition of the API.

Refactor the TOB Database Design

Update the TOB Postgres database to focus on Search and Credential Processing:

  • Legal Entities - subjects
  • Names and their type and source
  • Locations and their type and source
  • Claim Types held by Entities
  • Currency of data - current vs historical records

A mechanism should also support declaratively processing Credentials: mapping fields in a claim to the search data in TOB.

Leave the Credentials (VON-X) API in place (or at least, evolving as defined by the VON-X Devs) and add Basic Auth for any non-public APIs.

Proposed design: https://docs.google.com/document/d/10mlNNiP1hHl82kPhKW007cB_B-acAGl6-sT7so5j0SM/edit?usp=sharing

Create an extensible mechanism for managing Configuration Flags for tuning different instances of TOB

Enable a "permissions"-like, extensible set of configuration flags for TOB that allows different underlying behaviour for different instances of TOB - for example, BC and Ontario instances.

Example requirements:

  • Configure the Web API such that in some cases only the Municipal component of the address is returned (city or town) and in other cases, the full address is returned. In parallel, enable the frontend to present the data received appropriately - e.g. hide the labels for the Address fields not included.
  • Configure Web API search such that, for some TOB instances, when a Name search is done, Legal Names and DBAs are returned as separate, unlinked entities, while in other instances, Legal Names and DBAs are returned as a single entity.

Make sure that settings are available to all components of TOB - especially the server and frontend - so that the desired behaviour is applied consistently across all components. For example, the settings could be instantiated in the Server and made available to the frontend via an API call.
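One way to sketch such an extensible flag set: defaults overridden per instance, with the resolved flags exposed to the frontend as an API payload so behaviour stays consistent across components. Flag names here are illustrative, not actual TOB settings:

```python
# Illustrative defaults; each instance (e.g. BC, Ontario) overrides as needed.
DEFAULT_FLAGS = {
    "address.return_full": True,       # False -> return the municipal component only
    "search.link_dba_to_legal": True,  # False -> DBAs returned as separate entities
}

class InstanceConfig:
    """Extensible, "permissions"-like configuration for one TOB instance."""

    def __init__(self, overrides=None):
        self.flags = {**DEFAULT_FLAGS, **(overrides or {})}

    def as_api_payload(self) -> dict:
        """What the server would expose so the frontend can adapt
        (e.g. hide address labels when only the city is returned)."""
        return dict(self.flags)

bc = InstanceConfig()
ontario = InstanceConfig({"address.return_full": False})
```

Adding a new behaviour then means adding one default entry, rather than touching each instance's deployment.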

Refactor the TOB Database and API

@swcurran commented on Tue Apr 24 2018

Redesign and implement a new design for the TOB Data Model and revise the TOB API to align with the new Data Model. The goal of the new Data Model is to be less tied to Registries Credentials and more aligned with search: Names, Locations, Credential Types, People, and Contacts.

Plan for Multilingual support from Data model up

From @CyWolf:

There are at least 3 aspects to this that I can think of:

A table of simple labels (label ID, language code, label value; primary key: label ID, language) for field names and general text. This could be served in its entirety, or one language at a time, to web clients.

A table, or multiple tables, for record short and long names: translations of things like location type names and organization type names. If one table, then the columns might be (record type, record ID, language code, name, description). This would probably be joined against when the related types are requested. Records may still store their 'primary' (i.e. English) names.

Search - I'm not sure if this would actually be impacted, as most of the interesting fields for searching wouldn't have translations.
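The first two aspects can be sketched as schema. The DDL below follows the column suggestions above and is exercised with stdlib sqlite3 so it is runnable; in TOB this would be PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Simple labels for field names and general text.
CREATE TABLE label (
    label_id      TEXT NOT NULL,
    language_code TEXT NOT NULL,
    label_value   TEXT NOT NULL,
    PRIMARY KEY (label_id, language_code)
);
-- Translations of record short/long names (location types, org types, ...).
CREATE TABLE record_name (
    record_type   TEXT NOT NULL,
    record_id     INTEGER NOT NULL,
    language_code TEXT NOT NULL,
    name          TEXT NOT NULL,
    description   TEXT,
    PRIMARY KEY (record_type, record_id, language_code)
);
""")
conn.executemany(
    "INSERT INTO label VALUES (?, ?, ?)",
    [("search.button", "en", "Search"), ("search.button", "fr", "Rechercher")],
)
# Serve one language at a time to a web client:
fr = dict(conn.execute(
    "SELECT label_id, label_value FROM label WHERE language_code = 'fr'"
).fetchall())
```

The `record_name` table is the one that would be joined against when typed records (locations, organization types) are requested.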

Initial Reference Implementation of Decentralized Authentication (DID-Auth) and Authorization Mechanisms

Value: $50,000.00 · Closes: Wednesday, January 31, 2018 · Location: Victoria (in-person work NOT required)

Opportunity Description

One of the key promises of decentralized identifiers (DIDs) and self-sovereign identity (SSI) is the potential for eliminating password-based authentication, particularly on websites, and the resulting improvement in online security. The human element tends to be the weakest point of any online security system and, to put it bluntly, passwords are the worst. As such, improving the human components of the online security infrastructure will have the greatest impact on improving overall online security for our citizens.

A working group in the Decentralized Identity Foundation (DIF) has been discussing the use of DIDs for authentication. A preliminary concept paper, stemming from discussions at the Rebooting the Web of Trust 2017 Conference, has been created as the starting point for the discussion amongst the DIF group. It appears, based on the ideas outlined by the members of the DIF group, progress on the DID Spec, and progress on the Universal Resolver for DIDs, that we are at a point where a working implementation of DID-Auth would be useful.

We propose an implementation of Decentralized Identifier Authentication (DID-Auth) which we can use with the TheOrgBook website (described here, development instance here). The current and primary features of TheOrgBook are public, including searching for/discovery of organizations, displaying organizational information, and viewing the details of an organization's verifiable claims. We also propose some initial work on Authorization using Verifiable Claims. There are several use cases related to TheOrgBook for DID-Auth and Verifiable Claims Authorization:

Authentication Scenarios

  1. System to System: Services that generate verifiable claims about organizations and write them to TheOrgBook need to be authenticated. Such Services must have a DID known to TheOrgBook and use that DID to access the Issuer API for writing verifiable claims to TheOrgBook. This would be a generic authentication method between two services at the API level.
  2. Administrative: There is an administrative element of TheOrgBook that should be limited to authorized users who would have a DID.

Authorization Scenarios

  1. Claim Your Claims: There is a long term desire to support verifying Organization Owners using some (likely offline) process, providing them with a Verifiable Claim that enables them to "Claim your Claims" on TheOrgBook, and thus, giving them the ability to extend their organization's TheOrgBook page (add accreditations such as the BBB, ratings, product / service information, etc.).
  2. Delegation: Further, since an Organization's owner is not likely the only person associated with an organization that will need access to TheOrgBook, there will be a need to extend the authentication process to support organizational owners delegating access to their TheOrgBook page as they see fit.

We would like to explore implementing at least the first three of these scenarios using DID-Auth, and ideally all four.

For the system-to-system and administrative use cases, no verifiable claims are necessary for authorization; the API and website manage the authorizations of users granted access to specific capabilities. The website would maintain a user table of registered DIDs that are permitted to access the verifiable-claims-writing and administrative features of TheOrgBook, and a process is executed to authenticate access requests to the site. The administrative implementation must consider performance and usability for both the initial authentication process (is the process as fast/easy as passwords?) and ongoing verification (e.g. session renewal/expiration).

The "Claim your Claims" use case extends the administrative use case by adding authorization via Verifiable Claims. DID-Auth will be executed and, on success, a Verifiable Claim proof process executed to determine the resources (Organizational Page and related data) to which the user will be granted access. Again, performance and usability of the process will be paramount: the user must find the process no harder than using traditional approaches. Further, the overhead on the website side (the limited per-user information to be maintained) will be of interest. A goal will be demonstrating how websites can operate effectively without collecting and maintaining private information about users.

The delegation-of-authority use case extends the "Claim your Claims" authorization functionality by providing the Organizational Owner with the ability to delegate their access to others under their control - e.g. without having to log into TheOrgBook to record the delegation or to revoke that access. Although this use case could get into a lot of client-side functionality (for example, how the Owner manages their delegations), our focus will be on the DID-Auth part of the process; we assume that the client-side challenges will be handled by others.
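The common shape across these scenarios is a challenge/response against a registered DID. The sketch below shows only that flow: a real DID-Auth implementation would verify a signature against the public key resolved from the DID Document (e.g. via the Universal Resolver), whereas HMAC with a shared secret stands in here so the example runs with the standard library alone, and all names are illustrative:

```python
import hashlib
import hmac
import secrets

# DIDs permitted to write claims (the "user table of registered DIDs").
# The key material stands in for a resolved public key.
REGISTERED = {"did:sov:issuer1": b"issuer1-key-material"}

def issue_challenge() -> bytes:
    """The site issues a fresh random challenge per authentication attempt."""
    return secrets.token_bytes(32)

def respond(did: str, key: bytes, challenge: bytes) -> bytes:
    """The client proves control of the DID by keying a MAC over the challenge;
    real DID-Auth would sign the challenge with the DID's private key."""
    return hmac.new(key, did.encode() + challenge, hashlib.sha256).digest()

def verify(did: str, challenge: bytes, response: bytes) -> bool:
    key = REGISTERED.get(did)
    if key is None:
        return False  # unknown DID: no access to the Issuer API
    expected = hmac.new(key, did.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
ok = verify("did:sov:issuer1", challenge,
            respond("did:sov:issuer1", b"issuer1-key-material", challenge))
```

The "Claim your Claims" extension would follow a successful `verify` with a Verifiable Claim proof process to decide which resources the authenticated DID may access.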

Additional Considerations

We know that this implementation will not be the last word in creating DID-Auth, but rather an initial set of steps. We would like the following considered for this implementation:

  • The flows described in the docs linked in the introduction match what we are assuming will be used here. While these will be the starting point, we expect some collaboration with the community to evolve these flows, balancing the need for creating working code against the goal of creating an “ideal implementation”.
  • The Universal Resolver will ideally be used in going from a DID to the authentication services associated with that DID.
  • Ideally, a Universal Authenticator capability would be an outcome of this effort, allowing (at least for non-Verifiable-Claim-based authentication) authentication to work across DID method implementations.
  • The API/website functionality should have a minimum impact to current authentication mechanisms so as to reduce adoption effort. Ideally, the solution would be a compatible enhancement to commonly used authentication mechanisms. For example, is there a common JS library used for authentication that could be built upon to add DID-Auth functionality?
    • Note: This is just a consideration for this effort. There will not be a requirement for submitting an update to an existing library/component for this need.

Assumptions

  • The implementation need not be concerned with non-Auth-related capabilities of the components. For example, although some sort of DID-capable client will be needed for demonstrating the login process, issues like general usability, scalability, and key management are not a part of this effort.
  • We have full control over the code and deployment of TheOrgBook and so can be expedient in making changes to the application as necessary to support the work defined here.
  • Nice to have, but not required, is an implementation of an effective “old and new” user experience. That is, the creation of a user experience that supports both password-style and DID-Auth login to the site. It is sufficient for the purposes of this implementation that TheOrgBook support only DID-Auth.

Opportunity

This is an opportunity to work with the Verifiable Organizations Network team to deliver enhancements to both a BC Government project and, potentially, an open standard.

The fixed-price reward is for a potential total of $CDN 50,000* for satisfaction of the Acceptance Criteria below, per this payment schedule:

  • Completion of Phase 1 - $10,000
  • Completion of Phase 2 - $5,000
  • Completion of Phase 3 - $10,000
  • Completion of Phase 4 - $5,000
  • Completion of Phase 5 - $10,000
  • Completion of Phase 6 - $10,000

* The full amount is dependent on an evaluation of outputs at the end of Phase 1, during which the Verifiable Organizations Network team will determine whether or not work will continue into the subsequent phases.

Acceptance Criteria

  1. Agreed-upon technical approach, implementation components, and interaction flow diagrams for the authentication-only use cases: System to System and Administrative. This work needs to be shared with relevant communities, which could include the W3C, the Decentralized Identity Foundation, Rebooting the Web of Trust, or others as appropriate.
  2. Working code implementing the System to System use case to protect the TheOrgBook API endpoints related to Services issuing claims to TheOrgBook. Requires code for both TheOrgBook and the VON-Connector code that Services use for issuing claims to TheOrgBook.
  3. Working code implementing the Administrative use case supporting a browser-based user logging into a web session on TheOrgBook as an identified user. Requires code for both TheOrgBook and a Web App accessing TheOrgBook.
  4. An agreed upon technical approach and interaction flow diagrams extending the authentication-only use cases to include authorization via Verifiable Claims. This work needs to be shared with relevant communities, which could include W3C, Decentralized Identity Foundation, Rebooting the Web of Trust, or others as appropriate.
  5. Working code implementing the “Claim your Claims” use case supporting a browser-based user logging into a web session with access to specific resources based on a Verifiable Claim held by the user. Requires code for both TheOrgBook and a Web App accessing TheOrgBook.
  6. A final write up summarizing the work done, including two key elements:
    1. Post-implementation recommendations for changes/improvements to the delivered code (e.g. “what could we have done better?”).
    2. A proposed technical approach and interaction flow diagrams for the Delegation Use Case.

How to Apply

Go to the Opportunity Page, click the Apply button above and submit your proposal by 16:00 PST on Wednesday, January 31, 2018.

We plan to assign this opportunity by Friday, February 2, 2018 with work to start on Monday, February 5, 2018.

Proposal Evaluation Criteria

  1. Your approach to completing the Acceptance Criteria in a short proposal (1-2 pages) which includes evidence to support the criteria outlined in items 2-5 below. Evidence can include, for example, GitHub IDs and projects. (20 points)
  2. Your prior experience with open source projects in identity, self-sovereign identity, verifiable claims, decentralized identifiers, distributed ledger/blockchain or other relevant technologies. (30 points)
  3. Your expertise in authentication and authorization protocol design and implementation. (30 points)
  4. Your experience in contributing to open standards and/or open source communities. (20 points)
  5. Your ability to satisfy the Acceptance Criteria on or before 31 March 2018. (10 points)

Update the TOB Data Model based on agreed to design

Update the TOB Data Model based on the design in this Google Doc. As necessary, evolve the design, and notify/consult the team if the changes are significant.

Note that the "set of credentials" field needs to be expanded into a one-to-many join table so that (for example) a "Name" row can be linked to many rows in the "Credential" table.
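The one-to-many relationship can be illustrated with a minimal sketch. This is plain Python rather than the actual Django models, and the names (`Name`, `Credential`, `NameCredential`) are hypothetical illustrations of the join-table idea, not code from this repo:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Credential:
    credential_id: int


@dataclass(frozen=True)
class Name:
    name_id: int
    text: str


# Hypothetical join-table rows linking one Name to many Credentials,
# replacing a single "set of credentials" column on the Name table.
@dataclass(frozen=True)
class NameCredential:
    name_id: int
    credential_id: int


def credentials_for_name(name: Name, joins: list[NameCredential]) -> list[int]:
    """Resolve all Credential ids linked to a Name via the join table."""
    return [j.credential_id for j in joins if j.name_id == name.name_id]
```

In Django terms this would likely become a `ManyToManyField` (or an explicit through model) rather than a hand-rolled join.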

Care should be taken in rolling this out to prevent breaking the Test instance of TOB. Perhaps a feature flag could be used to keep the branches short-lived so that we can run the old and new models at the same time. Obviously, we can manage the transition - we don't have a 24x7 uptime requirement - but we don't want this to live in an unmerged branch for a month.

Clean up logging

TheOrgBook makes heavy use of logging, but the logging approach is currently inconsistent.

I think we should do a pass to clean up logging, using the warn/debug/trace log levels more appropriately, so that the default volume of logs is more usable.
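A sketch of what consistent level usage might look like; the logger name and function are hypothetical, not code from this repo:

```python
import logging

logger = logging.getLogger("tob_api")  # hypothetical logger name


def process_credential(raw: dict) -> bool:
    # DEBUG: high-volume detail, off by default in production
    logger.debug("Raw credential payload: %s", raw)
    if "schema" not in raw:
        # WARNING: recoverable problems an operator should see by default
        logger.warning("Credential missing schema field; skipping")
        return False
    # INFO: one concise line per significant event
    logger.info("Processing credential for schema %s", raw["schema"])
    return True
```

The point is the convention, not the specific function: detail goes to debug, normal flow to info, and anomalies to warning, so the default log level yields a readable stream.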

Location Based Search

X (in/near/around/by) Y: Location based search

Be able to support search with a "type" and a "location" ... for example, "water taking permits near 100 Mile House".

Type would typically be a claim type (e.g. liquor licence, business licence).
Location could be any means of identifying a physical location: street address, long/lat, city, postal code, etc.

An example: a Google search for "restaurants in 100 mile house".
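One possible backing implementation, assuming the Solr index (tob-solr) gained a spatial `location` field: the function and field names below are hypothetical, but the `geofilt` parameters (`sfield`, `pt`, `d`) are standard Solr spatial-search syntax:

```python
def build_location_search(claim_type: str, lat: float, lon: float,
                          radius_km: float) -> dict:
    """Build hypothetical Solr query parameters for a 'type near location'
    search, assuming 'location' is indexed as a spatial point field."""
    return {
        "q": f'claim_type:"{claim_type}"',   # hypothetical claim-type field
        "fq": "{!geofilt}",                  # Solr geo-distance filter
        "sfield": "location",                # hypothetical spatial field
        "pt": f"{lat},{lon}",                # centre point of the search
        "d": str(radius_km),                 # radius in kilometres
    }
```

Resolving a human-friendly location ("100 Mile House", a postal code, a street address) to the `pt` coordinates would require a separate geocoding step.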

Odd link in Readme.md

Hi there - just wondering why the link to more information leads back to the BCGov GitHub org.

TheOrgBook Credentials Processor

Define and implement an (ideally) configurable mechanism for processing Credentials coming into TheOrgBook to update the TOB Search Database. The basic idea is that each Issuer should be able to declaratively define a configuration for processing the types of Credentials they issue, so that the TOB Search database is appropriately updated.
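A minimal sketch of what such a declarative configuration might look like; the credential type names and field mappings here are purely illustrative assumptions:

```python
# Hypothetical declarative processor config: each Issuer maps its credential
# types to the TOB search-database fields that should be updated, keyed by
# the claim attribute that supplies the value.
PROCESSOR_CONFIG = {
    "incorporation.registries.ca": {"name": "legal_name", "status": "status"},
    "license.liquor.ca": {"name": "establishment_name"},
}


def extract_search_fields(credential_type: str, claim: dict) -> dict:
    """Apply the declarative mapping to an incoming credential's claim,
    producing the search-field updates for the TOB Search database."""
    mapping = PROCESSOR_CONFIG.get(credential_type, {})
    return {field: claim[key] for field, key in mapping.items() if key in claim}
```

The appeal of the declarative form is that onboarding a new Issuer becomes a configuration change rather than new processing code.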

Refactor ToB User Interface

Following the refactoring of the ToB Data Model and API, the ToB User Interface will need some attention to bring it to Beta Launch quality.

Add authentication to TOB and protect some endpoints

Add authentication to TOB for connecting services that will publish/issue claims, using DID-Auth.

  • Eliminate the User/Roles/Permissions tables and related APIs added to the project at startup time via the Swagger process.
  • Use the Django user capabilities for managing users.
  • Add an "Issuer" group and related permissions so that Issuers' DIDs can be added to the system.
    • For now, these do not have to be protected. We'll add that protection later.
  • Create and use permissions associated with the "Issuer" group to protect the "Register Issuer" endpoints and the BCovrin endpoints, so that only whitelisted issuers - those that have been added to the Users table and authenticated via the pyDID-Auth process - can use those endpoints.
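The whitelist check described above might reduce to something like the following sketch. The DIDs and names are hypothetical, and a real implementation would hang this off Django's group/permission framework rather than a module-level set:

```python
# Hypothetical whitelist guarding the "Register Issuer" endpoints: only DIDs
# already added to the Users table (members of the "Issuer" group) that have
# completed DID-Auth may proceed.
REGISTERED_ISSUER_DIDS = {"did:sov:issuer1abc", "did:sov:issuer2def"}  # illustrative


def issuer_allowed(did: str, did_auth_verified: bool) -> bool:
    """Return True only for whitelisted, DID-Auth-verified issuers."""
    return did_auth_verified and did in REGISTERED_ISSUER_DIDS
```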

ON | Load test data into ON-TOB

All the pieces are in place for @olenamitovska1 to load the data into DEV. Once that's all working we should promote and load data there too.

./genBuilds.sh throws errors

nbrempel@Nicholass-MacBook-Pro ~/P/T/openshift> ./genBuilds.sh

Deploying build configuration for tob-db into the devex-von-tools project ...


Loading component specific settings from /Users/nbrempel/Projects/TheOrgBook/tob-db/openshift/settings.sh ...

../../openshift/compBuilds.sh: line 29: realpath: command not found
No Jenkinsfile (../Jenkinsfile) found for ., so no pipeline created.


Deploying build configuration for tob-solr into the devex-von-tools project ...

../../openshift/compBuilds.sh: line 29: realpath: command not found
Processing build configuration; solr-build...

imagestream "solr" created
buildconfig "solr" created
Processing build configuration; solr-base-build...

imagestream "solr-base" created
buildconfig "solr-base" created
Generating Jenkins Pipeline for component .
buildconfig "solr-pipeline" created

Deploying build configuration for tob-api into the devex-von-tools project ...

../../openshift/compBuilds.sh: line 29: realpath: command not found
Processing build configuration; schema-spy/schema-spy-build...

imagestream "schema-spy" created
buildconfig "schema-spy" created
Processing build configuration; lib-indy/lib-indy-build...

imagestream "lib-indy" created
buildconfig "lib-indy" created
Processing build configuration; django/django-build...

imagestream "django" created
buildconfig "django" created
Generating Jenkins Pipeline for component .
buildconfig "django-pipeline" created

Deploying build configuration for tob-web into the devex-von-tools project ...

../../openshift/compBuilds.sh: line 29: realpath: command not found
Processing build configuration; angular-on-nginx/angular-on-nginx-build...

imagestream "angular-on-nginx" created
buildconfig "angular-on-nginx-build" created
Processing build configuration; nginx-runtime/nginx-runtime...

imagestream "nginx-runtime" created
buildconfig "nginx-runtime" created
Processing build configuration; angular-builder/angular-builder...

imagestream "angular-builder" created
buildconfig "angular-builder" created
Generating Jenkins Pipeline for component .
buildconfig "angular-pipeline" created

Builds created. Use the OpenShift Console to monitor the progress in the devex-von-tools project.

Pause here until the auto triggered builds complete, and then hit a key to continue the script.
Press a key to continue...
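The `realpath: command not found` lines occur because macOS does not ship GNU `realpath`. One workaround (a suggestion, not a change already in this repo) is to install coreutils via Homebrew (`brew install coreutils`), or to define a POSIX fallback before running the OpenShift scripts:

```shell
# Define realpath only if the system doesn't already provide it.
if ! command -v realpath >/dev/null 2>&1; then
  # Subshell body so the cd doesn't affect the caller's working directory.
  realpath() (
    cd "$(dirname "$1")" || return 1
    printf '%s/%s\n' "${PWD%/}" "$(basename "$1")"
  )
fi
```

With the fallback sourced into the shell (or added near the top of compBuilds.sh), the `realpath` calls resolve and the Jenkinsfile lookup should behave as intended.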

Liquor License Fails / Proof requests don't work as expected when there are multiple credentials of the same type

When an entity requests a Proof Request and there are multiple credentials of the same type for the same subject, the Proof Request process fails. This is manifest in two places right now:

  1. A Proof Request from Permitify to support getting a subsequent permit.
  2. When a Proof Request is processed by the "Verify Credential" button.

Likely, we need to add a way to invoke one of three variations on the Proof Request within TOB:

  • getting the latest active instance of a Credential (default)
  • specifying a specific instance of a Credential
  • getting all of the active Credentials of a given type

At minimum, the default should be implemented as soon as possible to fix our hanging issues in demoing TOB.
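The three variations might be sketched as a single selection function. The data shape and function name here are hypothetical, not the actual TOB code:

```python
def select_credentials(creds, mode="latest", credential_id=None):
    """Three hypothetical variations on which credentials a Proof Request
    uses: 'latest' active (default), 'specific' by id, or 'all' active."""
    active = [c for c in creds if c["active"]]
    if mode == "latest":
        # ISO-format date strings compare correctly as strings.
        return [max(active, key=lambda c: c["issued"])] if active else []
    if mode == "specific":
        return [c for c in creds if c["id"] == credential_id]
    if mode == "all":
        return active
    raise ValueError(f"unknown mode: {mode}")
```

Defaulting to the latest active instance would unblock the failing Permitify and "Verify Credential" flows; the other two modes can follow.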
