
Introduction


DID Method Rubric v1.0

This is the repository for the W3C's Note on the DID Method Rubric v1.0, developed by the DID Working Group. The editors' draft of the specification can also be read directly.

Contributing to the Repository

Use the standard fork, branch, and pull request workflow to propose changes to the specification. Please make branch names informative, for example by including the issue or bug number.

Editorial changes that improve the readability of the spec or correct spelling or grammatical mistakes are welcome.

Please read CONTRIBUTING.md for information about licensing contributions.

Code of Conduct

W3C functions under a code of conduct.

DID Working Group Repositories

People

Contributors

brentzundel, cihanss, decentralgabe, dhh1128, iherman, jandrieu, rhiaro, rxgrant, tallted


Issues

Consider removing Criteria-26 Future Proofing

https://www.w3.org/TR/did-rubric/#criteria-26

I'm not sure there is any useful distinction here. I don't know of any system that can satisfy answer (A):

A. Any user of the system can easily upgrade their crypto at any time

Also, I'm not convinced the two examples are legit. did:ipid does need an implementation upgrade, as @dhh1128 mentions in the notes. did:peer doesn't define any particular hash algorithm, but all members of the peer group would, in fact, have to update their implementations. So, while a peer group would be easier to update than an entire network like BTC or ETH, the code-free idea of answers A/B makes no sense for any system we know of today.

I'm not sure this is a legitimate criterion.

Clean up format / structure

@iherman suggested in #49 (review)

Wouldn't it be cleaner if all current entries in §3 followed exactly the same structure (in terms of subtitles) as the entries listed in §2.2? This is mostly done, but the name, id, and version, are never made explicit in §3 (I realize those items are all trivially there, but it would probably be better to make it 100% explicit).

Agreed. We'll revisit this as an issue to see if we can get a more consistent format.
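
As a sketch of what making name, id, and version explicit might look like (assuming §3 entries keep the <section>/<h5> pattern used in the criterion markup quoted in the next issue; the labels and values below are illustrative, with the name and id borrowed from the Future Proofing criterion elsewhere in this list, not the spec's actual markup):

        <section>
          <h5>Name</h5>
          <p>Future Proofing</p>
        </section>
        <section>
          <h5>Id</h5>
          <p>criteria-26</p>
        </section>
        <section>
          <h5>Version</h5>
          <p>1.0</p>
        </section>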

Reconsider deletability criteria

Criteria 36 was rewritten in PR #38, which adds a nice distinction but loses the point of the original, which was addressing how the desire for actual deletion is handled, not just regulatory compliance.

        <section>
          <h5>Question</h5>
          <p>
            How are mistakes corrected, and how is the right to be forgotten
            addressed? (Note how this creates a tension with immutability,
            which may be valuable in 2.7.1 and elsewhere.)
          </p>
        </section>
        <section>
          <h5>Responses</h5>
          <ol type="A">
            <li>
              Provable deletion of data is fully supported. Once something is
              deleted, nothing internal to the system allows it to be recovered.
            </li>
            <li>
              True, provable deletion is not supported, but deleted data can be
              suppressed in jurisdictions where this is required.
            </li>
            <li>
              The VDR is a permanent, immutable record with no support for deletion of any kind.
            </li>
          </ol>
        </section>
        <section>
          <h5>Relevance</h5>
          <p>
            ?
          </p>
        </section>

Make evaluation references more open

Currently, the Rubric reads as if all of the examples form a single evaluation. However, that is already out of date, as @dhh1128 has also added several example evaluations based on his own judgment.

We need to add @dhh1128 to the evaluators list, add an entry in an evaluation sources list, and add citations to the examples from that evaluation.

This should also make it easier for new PRs to come in from other evaluators, without losing the origin of said evaluations.
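
Purely as an illustration, the evaluation sources list and citations described above might look like the sketch below; the section name, ids, and "[E.n]" labels are hypothetical, not anything the spec currently defines.

        <section id="evaluations-cited">
          <h4>Evaluations Cited</h4>
          <ol>
            <li id="evaluation-1">[E.1] Evaluation by the editors.</li>
            <li id="evaluation-2">[E.2] Example evaluations contributed by @dhh1128.</li>
          </ol>
        </section>

        <!-- each example would then cite its source evaluation, e.g.: -->
        <p>Example: response B. [<a href="#evaluation-2">E.2</a>]</p>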

rename master branch

Summary

We should rename the branch master to main and use that going forward for our work.

From Problematic Terminology in Open-Source on master/slave terminology in software:

Use of this term is problematic. It references slavery to convey meaning about the relationship between two entities.

Removing this terminology from our workflow is a small gesture to include more people from marginalized groups in this project.

(I’m open to names other than main)

Technical Steps

  • create main branch from master
  • make main the default GitHub branch
  • modify github/central to use main for release notes reloading
  • redirect PRs to main in w3c/did-rubric
  • move branch protections from master to main
  • modify docs to reference main instead of master
  • delete master branch to avoid confusion?

Feedback?

each criterion should link to its acceptance-discussion thread

All rubric criteria should have public discussion threads available as a historical record, where subject matter experts can offer the best references to support the best arguments.

These issues are expected to be active discussions while a criterion is new, and closed when the criterion's pull request is approved.

This issue will be complete when:

  • the WG has actually accepted this idea;
  • instructions to criteria authors are clear;
  • a template for linking to the issues is available (a sketch follows this list); and
  • all existing criteria have such links, even if the issues are closed (but probably leave them open until the Rubric reaches 1.0).
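
A hypothetical template for such a link, reusing the <section>/<h5> pattern from the criterion markup quoted earlier; the issue number is a placeholder, not a real issue:

        <section>
          <h5>Discussion</h5>
          <p>
            The acceptance discussion for this criterion is recorded in
            <a href="https://github.com/w3c/did-rubric/issues/NN">w3c/did-rubric#NN</a>.
          </p>
        </section>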

Ways forward for this rubric

Hi! I'm just wondering what the way forward for this rubric is.
There are a lot of new DID methods now, and it might be worth pointing out how open they are based on these criteria.
How does this work progress?

New light has been shed on this work, as DIF has launched a new FAQ with the question "How to choose a DID method", in which this spec is referenced.

So I wonder: how can this work progress?

Update abstract

The abstract, unfortunately, still reads as if we are only addressing decentralization. Update to clarify extended scope.

More decentralization to pull

In section 1.2

Finally, note that this particular rubric is about decentralization. It doesn't cover all of the other criteria that might be relevant to evaluating a given DID Method. There are security, privacy, and economic concerns that should be considered. We look forward to working with the community to develop additional rubrics for these other areas and encourage Evaluators to use this rubric as a starting point for their own work rather than the final say in the merit of any given Method.

In short, use this rubric to help understand if a given DID Method is decentralized enough for your needs.

address all TAG/EWP

This issue will be complete once all TAG/EWP principles are treated equally in the Rubric. This is required because the principles are not force-rankable while seeking consensus in this working group, as discovered here.

[pull request forthcoming]

No section on Environmental criteria

There are no environmental criteria specified nor a section for such criteria in the rubric.

The initial PR to add such criteria to the "other" section was #51, which was closed due to the impending restructure and removal of the "other" section.

PR #59 adds a section to which those criteria can be added, in conformance with the updated approach for criteria.

Need to update incomplete criteria

Several of the most recent criteria are incomplete. For example, Section 2.7.5 is missing several elements: it lacks any responses, has no relevance section, and had a fake Example.

We editors should do a quality control pass and make a call about which of these are worth updating, which should simply be marked "provisional" and which should just be deleted.

Example Evaluations don't meet registry process spec

Currently, we specify that example evaluations MUST include a use case link.
https://w3c.github.io/did-rubric/#criteria-examples-notes

The label text itself must be a hyperlink to the entry in the Use Cases Referenced section, for example [UC.1] where “UC.1” links to “#useCase-1” in the current document.

It probably makes more sense to require that evaluations listed in the Evaluation Cited section list the use case there. That would be the simpler update.
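
For concreteness, the requirement quoted above corresponds to markup roughly like the first line below, and the proposed simpler alternative to something like the second; only the "#useCase-1" fragment and "[UC.1]" label come from the note, the rest is illustrative.

        <!-- current requirement: every example labels and links its use case -->
        <p>Example [<a href="#useCase-1">UC.1</a>]: ...</p>

        <!-- proposed alternative: name the use case once, in the Evaluations Cited entry -->
        <li>[E.2] Evaluation by @dhh1128, against use case <a href="#useCase-1">[UC.1]</a>.</li>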

Number existing criteria

Add canonical numbering to all existing criteria as preparation for potential registry-ization.
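
The published note already exposes fragment identifiers such as #criteria-26 (see the URL quoted in the Future Proofing issue above). One sketch of canonical numbering, assuming that id is simply carried onto each criterion's section element (the heading text here is a placeholder):

        <section id="criteria-26">
          <h4>Criteria 26: Future Proofing</h4>
          ...
        </section>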

Restructure with added category "Design"

The evaluation for Veres One split the Rulemaking section to add a Design category. Before adding the V1-inspired criteria, let's reorganize the existing criteria.

Prepare for TR

Create a static publication by downloading the respec rendering as HTML and placing it in a clearly addressable location within the repo, e.g., FPWD/2021-08-09 where the date is the anticipated publication date. Note that a Thursday publication date makes the most sense for Ivan, AND, the WG will need time to sign off on publication (which is likely a Tuesday call).

Other criteria section

@iherman suggested in #49 (review)

I would think that §3.9 should either be removed or, rather, refer to the registration process.

Agreed. We should also review the "additional criteria" section as it no longer fits the registry paradigm.

the Rubric has moved beyond decentralization

While this rubric allows the evaluation of many aspects of decentralization of a DID Method, it is not exhaustive, and does not cover other factors that may affect selection or adoption of a particular Method, such as privacy, security, reliability, or efficiency.

This issue asks the editors/WG to acknowledge an expansion of scope.

The Rubric is a collection of criteria for creating Evaluation Reports that assist end users in choosing DID Methods. The criteria are refereed by the Rubric editor, but evaluation invites opinions from anyone.

It definitely covers privacy and security, while reliability is contemplated, and efficiency is in the works due to the discovery of more intractable differences.

In order for Evaluation Reports to work well, the criteria will have to grow to be fairly exhaustive.

see pull #55
