
chip-bcmr's People

Contributors

bitjson


chip-bcmr's Issues

drop the ipfs-centric BCMR OP_RETURN form

It introduces a dependency. We can get rid of the dependency at the cost of some redundancy, and the benefit is a consistent method of validating the .json.

I propose only one form:

OP_RETURN <'BCMR'> <sha256(content)> <content_uri>

If there's no URI scheme prefix, infer https://. This way we can support any current or future content delivery protocol.

Examples:

OP_RETURN <'BCMR'> <sha256(content)> <'ipfs://bafybeibdkyv3mly2rxxeqiydl5ntbmvqlbghd2cwjgyjmeiuo73ctjgf2y'>

OP_RETURN <'BCMR'> <sha256(content)> <'https://my.registry/registry.json'>

OP_RETURN <'BCMR'> <sha256(content)> <'magnet:?xt=urn:btih:ABC123...321CBA'>

If wallets don't want IPFS dependencies, they could just use some HTTPS gateway, and they'd know it isn't lying to them because they also have the sha256. Wallets that do want IPFS could fetch the content directly, because the ipfs:// URI encodes the CID, whatever its version.

This way, clients can obtain the content from anywhere, or consistently query other clients for it by sha256(content). The last push can be thought of as a hint for where to find one copy; the content could actually be mirrored in many places, and the sha256 is a consistent way of checking that you got the right content.
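To illustrate, a sketch (not normative; the function names are mine) of how a client could resolve this single form: infer the scheme when the hint has none, then verify whatever was fetched against the committed hash:

```python
import hashlib

def resolve_content_uri(uri):
    # If the hint has no '<scheme>:' prefix, infer https:// as proposed.
    # (A bare 'host:port' hint would be misread as having a scheme; the
    # CHIP would need to pin down that edge case.)
    return uri if ":" in uri.split("/")[0] else "https://" + uri

def content_matches(expected_sha256, content):
    # Content may come from any mirror or peer; only the 32-byte hash
    # commitment in the OP_RETURN decides whether it is authentic.
    return hashlib.sha256(content).digest() == expected_sha256
```

So `resolve_content_uri("my.registry/registry.json")` yields `https://my.registry/registry.json`, while `ipfs://` and `magnet:` hints pass through untouched.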

Make the 'fields' key of nfts optional in the bcmr schema

This was discussed in private, and it was concluded that requiring the 'fields' key of nfts was a small oversight.
The schema becomes easier to digest when you can filter out all the optional fields, which is why this change is relevant: it simplifies the standard further.
Documenting it here so it isn't forgotten.
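For concreteness, a sketch of what a minimal nfts entry could look like once 'fields' is optional (illustrative shape only, not the authoritative schema; key names as I understand the current BCMR draft):

```json
{
  "nfts": {
    "description": "Example sequential NFT collection",
    "parse": {
      "types": {
        "": { "name": "Example NFT" }
      }
    }
  }
}
```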

idea: split this CHIP into "what happens on-chain" and the rest.

The CHIP is a bit hard to grasp because it moves between layers: at the bottom are the on-chain details, outputs, etc.;
a layer up is the metadata as an identity itself;
a layer up from that is distribution through registries.

I'm even simplifying here.

So at some point I started wondering which layer statements (like snapshots) belong to: is a statement a JSON document, or an OP_RETURN? You see the problem?

So my thinking is towards publishing separate technical specifications, at least one dedicated solely to the on-chain formats and their validation. That document would naturally avoid referring to what those formats are used for at higher levels; marrying the different layers should happen outside the technical specs.

Thoughts?

DNS-resolved metadata registries

I think the commonly accepted term for this is "domain name".

The use of DNS here is implied and hidden by HTTP. Using that term at this level is confusing and may lead people to think you mean IP addresses somehow (which is what DNS resolves).

Connect p2pkh key to website-login / payment-protocol

Follow-up from the tech talk where I also went over this idea.

Since the authentication chain uses normal Bitcoin Cash payments, of which P2PKH is the most common, it is not a stretch for a service to use the auth-head to extract the public key hash and use it completely separately from the Bitcoin Cash primitives. A PKH is, after all, based on basic cryptographic primitives.

So imagine the first (or second) output of your auth-head pushing a public key hash, which a login service uses to let you prove you are you, simply by signing a message from them.

A second use case of this same concept: when I issue a payment via a new payment protocol, I'm excited that I can include an authentication ID (and optionally the entire proof). But anyone could do that with a payment, so you need to prove you actually own the identity as part of that payment request.
Signing a core part of the payment request with the public key matching the public key hash in a standardized part of the auth-head would accomplish that.

Identity holders could trivially revoke such a key by switching to a new private key; I'd suggest wallets that hold an identity use HD-wallet-style derivation so that a new key is used with every single update.
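The login flow above can be sketched roughly as follows. This is illustrative only: all names are hypothetical, signature verification is injected as a callable (secp256k1 ECDSA isn't in the Python standard library), and `hash160_placeholder` stands in for the real P2PKH hash, RIPEMD160(SHA256(pubkey)), which the stdlib can't reliably compute.

```python
import hashlib
import secrets

def hash160_placeholder(public_key):
    # NOT the real P2PKH hash; a truncated double-SHA256 is used only so
    # the sketch runs without external dependencies.
    return hashlib.sha256(hashlib.sha256(public_key).digest()).digest()[:20]

def issue_challenge():
    # The service generates a fresh nonce for the user to sign, so a
    # captured signature can't be replayed for a later login.
    return secrets.token_bytes(32)

def verify_login(auth_head_pkh, public_key, challenge, signature, verify_sig):
    # 1. The presented public key must hash to the PKH published in the
    #    standardized auth-head output.
    if hash160_placeholder(public_key) != auth_head_pkh:
        return False
    # 2. The signature over the challenge must verify under that key.
    return verify_sig(public_key, challenge, signature)
```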

zeroth-Descendant Transaction Chain and BIP69

Normal wallets should basically always apply bip69 for transactions they create.

The downside here is that an OP_RETURN output would typically sort to the zeroth position (BIP69 sorts by value first, and OP_RETURN outputs carry zero value), and you'd burn your identity.

I suggest special-casing this to avoid accidentally losing your identity: when the zeroth output is an OP_RETURN, use output 1 instead.
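A sketch of that special case (names illustrative): the identity is tracked at output 0 unless output 0 is a data carrier, in which case output 1 is used. Output scripts are raw bytes, and an OP_RETURN output is detected by its leading 0x6a opcode.

```python
OP_RETURN = 0x6A  # opcode that marks a provably unspendable data output

def identity_output_index(output_scripts):
    first = output_scripts[0]
    # Special case: skip a leading OP_RETURN so the identity isn't burned
    # by BIP69-style output sorting.
    if first and first[0] == OP_RETURN:
        return 1
    return 0
```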

extend the standard with ability to publish the json directly

There was some talk in the Telegram channel, initiated by Calin, asking why not extend the standard with the ability to just put the data for the simplest tokens directly on-chain, something like:

Current:

  • OP_RETURN <'BCMR'> <hash> [url]

Proposed:

  • OP_RETURN <'BCMR'> <<hash> [url] | <blob>>

So one could just do OP_RETURN <'BCMR'> <raw content of some file.json>, and a small registry could maybe fit.

Rationale: if people want the data on-chain, they will find a way to have it on-chain. If I really wanted to dump my registry on-chain, I could go the roundabout way using the current specification:

I publish a TX with the blob in an OP_RETURN, then I publish a BCMR TX with OP_RETURN <'BCMR'> <hash> <'myopreturnsite.com?txid=123123...123&output_index=1'>, then I make a site that pulls from the chain and serves the data stored in the referenced txid:vout.

Or, store it in the same TX's input script: OP_RETURN <'BCMR'> <hash> <'myopreturnsite.com?input_index=1'>, mirroring the Ordinals inscription approach; wallets could just be clever, skip the site entirely, and read the input directly.
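One possible way to parse the proposed combined form (hypothetical; `pushes` is the list of decoded data pushes following OP_RETURN): a 32-byte first payload push followed by a URI push is the <hash> [url] form, while any other single push is a raw on-chain blob. A lone 32-byte push would stay ambiguous with a hash-only form, so this sketch rejects it; the spec would need to settle that case.

```python
import hashlib

def parse_bcmr_pushes(pushes):
    if not pushes or pushes[0] != b"BCMR":
        return None
    rest = pushes[1:]
    if len(rest) == 2 and len(rest[0]) == 32:
        # <hash> [url] form: content is fetched from the URI and must
        # hash to the committed value.
        return {"hash": rest[0], "uri": rest[1].decode()}
    if len(rest) == 1 and len(rest[0]) != 32:
        # <blob> form: the registry JSON itself; derive its hash locally.
        return {"content": rest[0], "hash": hashlib.sha256(rest[0]).digest()}
    return None  # ambiguous or malformed
```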

Add DNSLink/DNSAddress spec to optional domain.

Hey Jason, thank you for taking the time and effort to work out this CHIP! I was wondering who would put a proposal up first along these lines 👍

At the moment, as far as I can tell, the spec has two modes of fetching the registry:

  1. IPFS HTTP Gateway.
  2. Optional domain + well-known URI.

Unfortunately, these two methods do not mesh well, and the latter is faster and more reliable if correctly configured. So in practice, I expect the true default to be the optional domain + well-known URI, despite IPFS being simpler to publish to (no server, reverse proxy, SSL cert, etc.).

Instead, I'd like to put forward a suggestion: resolve via IPFS using a fast, direct, and reliable method that avoids the DHT (similar to the domain + well-known URI), falling back on the DHT when that isn't available, thus reducing the likelihood that centralization would be preferred for its speed/reliability (when design may have been the only bottleneck).

There are two concepts I know of already defined in the IPFS spec for this exact purpose, DNSLink and DNSAddr. These specs were built to allow IPFS to make direct connections when possible and avoid using the DHT except when necessary.

By preferring DNSLink and DNSAddr TXT records (using the same optional domain notation in the CHIP), we would in practice turn the two options into four, transparently to the end user, since the two methods are not "competing" (in a sense):

  1. Cached locally by HTTP Gateway.
  2. Cached by an already connected peer of an HTTP Gateway.
  3. Optional bootstrap (using the optionally defined domain + TXT records fetched over DNS-over-HTTPS), similar to domain + well-known URI.
  4. DHT Fallback

The above wouldn't require a server, reverse proxy, SSL cert, or anything except a domain and TXT records. But to get literally the same effect as the domain + well-known URI, the publisher could:

  • run IPFS on their server (equivalent to an HTTP server)
  • put the IP address and public key (multiaddr) in the DNS TXT record (equivalent to a domain name)

One subtle added benefit is that HTTPS isn't required except for browser access, since the DNSLink method is end-to-end encrypted under the hood regardless, using a public key defined in the TXT record. The well-known URI method, on the other hand, requires HTTPS even for backend use, which hurts decentralization long-term.
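For illustration, the DNSLink half of this could be resolved by fetching TXT records at `_dnslink.<domain>` (e.g. over DNS-over-HTTPS, since the Python stdlib has no TXT resolver) and parsing them per the DNSLink convention. The fetch itself is out of scope here; a minimal parser sketch:

```python
def parse_dnslink(txt_records):
    # Per the DNSLink convention, records at _dnslink.<domain> look like
    # 'dnslink=/ipfs/<cid>'; return the first matching content path.
    for record in txt_records:
        if record.startswith("dnslink="):
            return record[len("dnslink="):]  # e.g. '/ipfs/bafybei...'
    return None
```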

That's my 2c, Jason. Hope you will consider it! Let me know if you have any questions.

Thanks again!

drop 'mutable NFT' marker for reserved supply on authHead

I'd like to suggest dropping the 'mutable NFT' marker requirement for keeping the reserved supply on authHead.

If additional fungible tokens of a category may be needed in the future, token issuers should initially mint an excess supply (e.g. the maximum supply of 9223372036854775807) and hold the unissued tokens in the identity output with a mutable token (using any commitment value) to indicate they are part of the Unissued/Reserved Supply.

The idea of keeping the reserved supply at the authHead is great; the additional mutable NFT marker, however, is unnecessary.
The authHead needs to be under the issuer's control anyway to allow future metadata updates, and by being the authHead it already serves as a marker.

This is already how the current CashTokens ecosystem follows the BCMR standard, as implemented by the CashTokens market-cap website TokenStork, with multiple token projects following this convention: FURU, DOGECASH, XRBF & KOK.

Complex multithreaded applications could still keep a marker at the authHead, or alternatively use mutable NFTs as markers in addition to the authHead.

It is great that the spec has thought of advanced multithreaded covenant use cases, but that should not come at the cost of the simple, straightforward use case of centrally managed token projects.

This change to the reserved supply standard is also what I suggest in my AuthGuard standard 😄 👍
