
dip's Introduction

Note to readers: Silvergate Capital Corporation announced in January 2022 that it acquired intellectual property and other technology assets related to running a blockchain-based payment network from Diem, further investing in its platform and enhancing its existing stablecoin infrastructure.

dip's People

Contributors

aching, capcap, dahliamalkhi, davidiw, davidldill, dependabot[bot], dimroc, dpim, emmazzz, ericnakagawa, gdanezis, gregnazario, joelmarcey, junkil-park, kphfb, mickayz, n4ss, njess, oriooctopus, prasincs, sblackshear, sunmilee, tb-silvergate, valerini, viktorbunin, xli, yinonfirstdag, zekun000


dip's Issues

Support nested directories in the `lip` directory

Right now, we cannot have nested directories in the lip directory.

8:54:41 AM: > node scripts/createDocsDir.js && docusaurus build
8:54:43 AM: Creating an optimized production build...
8:54:43 AM: Error: EISDIR: illegal operation on a directory, read
8:54:43 AM: npm ERR! code ELIFECYCLE
8:54:43 AM: npm ERR! errno 1
8:54:43 AM: npm ERR! [email protected] build: `node scripts/createDocsDir.js && docusaurus build`
8:54:43 AM: npm ERR! Exit status 1

See commits of #7 for an example.

P2M DIP

This should describe the following advance payment flows to support P2M scenarios:

  • Charge
  • Auth/Capture
  • Void

DIP: Multi-agent Transactions

Currently in the Diem Framework, a transaction acts on behalf of a single on-chain account, and there is no mechanism for multiple on-chain accounts to agree on a single atomic transaction. This DIP presents a new scheme of transactions, multi-agent transactions, which act on behalf of multiple on-chain accounts. Multi-agent transactions leverage Move's signer type to allow arbitrary atomic actions involving multiple on-chain accounts in one transaction.

DIP PR here: #159

Proposal: On-chain Peer to Merchant (P2M) Configuration

As described in DIP 158, the P2M flow involves redirecting a customer from the merchant site to their customer VASP for payment approval. Recap:

The redirect URL is likely to be used in the Web Redirect flow which means that the customer will be redirected from the merchant's checkout page to the wallet's website to review and approve the payment.

The table below specifies the fields that should be encoded into a series of URL parameters appended to the query string.

Field Type Required? Description
vasp_address str Y Address of the receiving VASP, encoded using bech32. For the Diem address format, refer to the "account identifiers" section in DIP-5.
reference_id str Y A unique identifier of this payment. It should be an RFC 4122 UUID, with hyphens included.
redirect_url str N Encoded URL used by the wallet to redirect the customer back to the merchant
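As an illustration of the table above, here is a minimal Python sketch that encodes these fields as URL query parameters. The bech32 address and merchant return URL are invented for illustration only:

```python
# Illustrative sketch only: encode the P2M fields above as URL query
# parameters. The address and redirect URL below are hypothetical examples.
import uuid
from typing import Optional
from urllib.parse import urlencode, parse_qs

def build_redirect_query(vasp_address: str, reference_id: str,
                         redirect_url: Optional[str] = None) -> str:
    """Encode vasp_address, reference_id and (optionally) redirect_url."""
    params = {"vasp_address": vasp_address, "reference_id": reference_id}
    if redirect_url is not None:
        # urlencode percent-encodes the URL so it survives as a parameter.
        params["redirect_url"] = redirect_url
    return urlencode(params)

query = build_redirect_query(
    "dm1p7ujcndcl7nudzwt8fglhx6wxn08kgs5tm6mz4us",   # example bech32 address
    str(uuid.uuid4()),                               # RFC 4122 UUID, hyphens kept
    "https://merchant.example.com/checkout/return",
)
```

A wallet would append `query` to its approval-page URL; decoding it back out on the wallet side is the mirror operation with `parse_qs`.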

DIP-158 Appendix A

Customer VASP configuration (like redirect_url) does not exist yet

The redirect_url, and other Customer VASP configuration, is never set and never stored. Onboarding Customer VASPs via a registration step is required for any future P2M flows.

Solution

A DIP describing a Customer VASP registration step that sets its configuration on-chain within Move, so that Merchant VASPs can retrieve this P2M configuration data for the Web Redirect Flow. The configuration includes, but is not limited to, redirect_url and image_url.

Should the HTTP endpoint URL contain local/remote addresses (for LIP-1 and beyond)?

The HTTP endpoint in LIP-1 specifies that HTTP URLs have the following format:

https://<hostname>:<port>/<protocol_version>/<LocalVASPAddress>/<RemoteVASPAddress>/command

This has caused some confusion about what LocalVASPAddress and RemoteVASPAddress mean, and it is easy to mix them up. As this URL will also be used in other off-chain LIPs, it is worth discussing whether we want to keep them in the URL.

Pros:

  1. easier load balancing for users (VASPs/DDs): users can forward the request to the right tiers according to the from address and/or to address. Without this, users need to parse the JWS message to get this information.

Cons:

  1. easy to mistake one for the other
  2. makes the URL longer

I also think it's worth discussing whether we put protocol_version at the front or the rear (i.e., after command). As we add protocols in the future, more keywords will be introduced (e.g., pay in Libra ID), and logically we want each of them to have its own version (easier for iterating).
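To make the load-balancing argument concrete, a sketch of how a front tier might route on the path alone, without opening the JWS payload. The helper and the example hostname/addresses are hypothetical, not part of LIP-1:

```python
# Illustrative sketch: parse the LIP-1 URL layout
#   /<protocol_version>/<LocalVASPAddress>/<RemoteVASPAddress>/command
# so a front tier can route on the VASP addresses without parsing the JWS body.
from urllib.parse import urlsplit

def parse_offchain_path(url: str) -> dict:
    parts = urlsplit(url).path.strip("/").split("/")
    if len(parts) != 4 or parts[3] != "command":
        raise ValueError(f"unexpected off-chain URL path: {url}")
    version, local_vasp, remote_vasp, _ = parts
    return {
        "protocol_version": version,
        "local_vasp_address": local_vasp,    # receiving VASP's own address
        "remote_vasp_address": remote_vasp,  # the counterparty's address
    }

# Hypothetical example URL; addresses are placeholders.
route = parse_offchain_path(
    "https://vasp.example.com:8443/v1/dm1localaddr/dm1remoteaddr/command")
```

This also illustrates the con above: nothing in the path itself says which address is "local" and which is "remote", so the convention must be documented and enforced.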

cc @kphfb , @gdanezis , @xli , @sunmilee , @andll , @bmaurer

Off-chain API: ping command

Author: Xiao Li (@xli)
Status: Idea (for discussion)

To improve the ability of an off-chain service to handle network problems and protocol errors, we'd like to introduce a ping command for a client to:

  1. detect dead connections to a remote off-chain service.
  2. measure the latency to a remote off-chain service.
  3. detect off-chain protocol errors with a remote off-chain service. For example, when a remote off-chain service rotates its compliance key and the client uses the old compliance public key to verify the response message, the ping command can be used to debug the error and verify the fix.

Specification

The PingCommand object definition:

Field Type Required? Description
_ObjectType str Y The fixed string PingCommand

An example CommandRequestObject JSON message with PingCommand:

{
    "_ObjectType": "CommandRequestObject",
    "command_type": "PingCommand",
    "command": {
        "_ObjectType": "PingCommand"
    },
    "cid": "12ce83f6-6d18-0d6e-08b6-c00fdbbf085a"
}

And the response of the PingCommand:

{
    "_ObjectType": "CommandResponseObject",
    "status": "success",
    "cid": "12ce83f6-6d18-0d6e-08b6-c00fdbbf085a"
}

Note that we don't need a random message body in PingCommand for the server to echo back, because the off-chain API protocol already defines a cid field in the CommandRequestObject and CommandResponseObject that serves to distinguish different command instances.

There is no command_error for this command. The server may respond with a protocol_error (with CommandResponseObject.status == "failure") as defined in DIP-1 when processing invalid HTTP headers, an invalid JWS message, or an invalid CommandRequestObject payload.
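Putting the two objects above together, a minimal sketch of how a client might construct a ping and pair it with a response by cid. Transport, JWS signing and the endpoint URL would follow DIP-1 and are out of scope here; the helper names are invented:

```python
# Sketch only: build a PingCommand request and match a response by cid,
# per the object definitions above. No transport or JWS signing shown.
import uuid

def make_ping_request() -> dict:
    return {
        "_ObjectType": "CommandRequestObject",
        "command_type": "PingCommand",
        "command": {"_ObjectType": "PingCommand"},
        "cid": str(uuid.uuid4()),
    }

def is_matching_pong(request: dict, response: dict) -> bool:
    # The echoed cid is what lets a client pair pings with pongs,
    # which is why no random message body is needed.
    return (response.get("_ObjectType") == "CommandResponseObject"
            and response.get("status") == "success"
            and response.get("cid") == request["cid"])

req = make_ping_request()
pong = {"_ObjectType": "CommandResponseObject", "status": "success",
        "cid": req["cid"]}
```

Latency measurement (goal 2 above) would then just be the wall-clock time between sending the request and receiving a matching pong.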

State Sync V2

This issue is to track the development of the DIP for state sync v2. A pull request with a draft DIP will follow shortly. The TL;DR for those interested:

  • Problem: State sync v1.1 [1] only offers a single primitive strategy for keeping nodes up-to-date. This strategy is slow (i.e., it requires syncing and executing all transactions from genesis), expensive (i.e., it requires downloading, executing and storing the entire transaction history), inflexible (i.e., it is unable to adapt to different use-cases and requirements) and brittle (i.e., the implementation tightly couples Diem components and Diem nodes).
  • Solution: We propose a new state sync framework (state sync v2) that supports multiple state sync strategies based on use-cases. The strategies trade-off security, performance and resources. Strategies include: syncing from genesis, syncing directly from a waypoint and syncing without executing. The new framework decouples intra-node components (e.g., state sync and storage) and removes coupling between Diem nodes (e.g., allowing additional strategies to be introduced without breaking backward compatibility).

[1] diem/diem#6795

LIP-1 - Error handling

  1. "Note: Additional information detailing all errors will be added." - I saw protocol & command errors do exist in reference implementation. Maybe they should be added to the document as well?
  2. It is not clear from the description of how error handling should be handled at the HTTP level. Are protocol errors should be returned with 200OK?

A Minimal Trusted Computing Base (TCB)

Authors: Joshua Lind (@JoshLind), David Wong (@mimoo)
Status: Rough draft (for discussion)

1. Goals of this Document:

  • The goals of this document are:
    • To reason (informally) about the security of the TCB and analyze the trade-offs in design of different approaches.
    • To highlight the challenges one needs to address when designing a TCB for Diem.
    • To propose a set of improvements for the TCB today and identify potential areas for exploration in the future.
  • The non-goals of this document are:
    • To perform an in-depth survey of TCBs in the context of blockchains. This requires its own document.
    • To discuss specific hardware and software implementations of isolated execution environments (e.g., TPMs, TEEs, VMs, cloud isolation mechanisms, etc.). This also requires its own document.

2. Preliminary Reading:

TCB Overview

  • What is a TCB?
    • At a high-level, a trusted computing base (TCB) is a set of components (e.g., hardware and software) that together ensure a set of security properties for a system. For an attacker to violate security properties, one or more components in the TCB need to be subverted.
  • Do we need a TCB?
    • Every system that ensures some security property has a TCB (explicit or not). In the worst case, the TCB comprises the entire system. Well designed systems, however, will make the TCB explicit and as small as possible (e.g., by separating it out as its own sub-system and/or running it in an isolated execution environment). This makes it easier to reason about security. Moreover, it allows the system to better adhere to well known security principles (e.g., the principle of least privilege and defense in depth).

Securing TCBs

  • What needs to be considered when securing a TCB?
    • Size: The size of the TCB (in lines of code, number of components, dependencies and whatever else you deem a good metric). In general, the more that goes into a TCB, the greater the likelihood of bugs, which could lead to compromises. Simplicity/minimality are key.
    • Engineering quality: The engineering quality of the code and components in the TCB. This includes: (i) the maturity, readability, simplicity and auditability of the code/components; (ii) the programming languages used and classes of bugs possible; and (iii) any formal or informal efforts to reason about or verify the security of the code/components, e.g., security audits, static/dynamic analysis, verification, proofs, hardening etc. Efforts also need to be made to ensure that quality is maintained going forward (e.g., a high bar for code reviews and third-party dependency audits).
    • Interface: The interfaces exposed by the TCB, e.g., the interface with the outside world (i.e., the non-TCB world) and the interfaces between components. These could be function calls, RPC calls, explicit APIs or even hardware interfaces.
    • Execution isolation/protection: As the TCB is essentially the “security kernel” of the system, it makes sense for it to execute in an isolated/protected environment. This includes trusted hardware (e.g., TPMs and HSMs), trusted execution environments (e.g., Intel SGX, Arm Trustzone, AMD SEV), containers and on-premises deployments.
    • Layering (defense in depth): TCBs themselves provide an additional layer of defense for a system. If an adversary compromises the outside world, but not the TCB, they gain less than if they compromised the TCB. This is defense in depth. Along the same lines then, the TCB itself might be layered, so that partial compromises of the TCB can still provide some security guarantees.

3. Assumptions and Validator Component Abstraction (VCA)

To reason about the Diem TCB, we first make several assumptions about validators and their components in a blockchain.

Assumptions about validators:
Note: We consider it future work to challenge these assumptions (see the bottom of this document).

  • Failures/compromises are independent across validators. If one validator crashes or is compromised due to a bug, that same bug can’t be used to target all validators.
    • This assumption can be naive when the validators share exactly the same code and deployment architectures. For this reason, it makes sense to explore alternate implementations of validators in the future.
  • Failures/compromises are independent across isolated execution environments. If two isolated execution environments (e.g., containers, HSMs, TEEs) are used by the TCB, their failures should be independent.
    • This assumption can also be naive (e.g., if two containers are running on the same machine, or if two HSMs share the same power source). For this reason, it makes sense to explore alternate environments in the future.
  • Validators are queried by/communicate with different types of clients. These clients include:
    • Non-verifying clients: these clients blindly trust whatever is returned by the validator (e.g., JSON RPC).
    • Verifying clients: these clients verify both validator signatures and re-execute transactions to ensure that the data returned by the validator is correct (i.e., for both consensus and execution). This is what validator fullnodes do today.
    • Verifying but non-executing clients: these clients verify only validator signatures but do not re-execute transactions themselves (i.e., they trust the validators to execute transactions correctly). This is what fullnodes will likely do in the future.

Assumed components in a validator:
Next, we assume a simple validator component abstraction (VCA):

  • Consensus: Consensus is responsible for reaching agreement on a sequence of values between a set of validators.
    • Consensus is assumed to operate using unique cryptographic keys held by each validator, which integrity-protect and authenticate votes in the protocol (i.e., consensus keys). We assume validators know the public keys of other validators' consensus keys via the blockchain. The specific consensus protocol is immaterial.
  • Execution: Execution is responsible for calculating the next consensus value to agree upon between validators.
    • The consensus value is assumed to be the new state (S’) computed based on a previous state (S) and a transaction to execute (T). Each validator will independently calculate S’ using S and T.
  • Storage: Storage is used to persist data held by each validator; this includes cryptographic keys (e.g., the consensus key), blockchain states (e.g., S and S’) and transactions (Ts). We assume that storage is:
    • Persistent for ease of validator operation as computers often crash, reboot, need to be upgraded, and restarted. Solutions for non-persistent storage may exist (e.g., in-memory only), but this is often impractical.
    • Readily accessible by the validator (i.e., storage cannot be “cold”, e.g., a physical safe that requires human intervention to access), as cold storage is impractical from a performance/operation perspective.

4. Security Formalization

In order to analyze the security benefits of a TCB, we propose the following (informal) security definitions:

Types of compromise:

  • Shallow compromise: When an adversary compromises all components in the system, excluding the TCB.
  • Deep compromise: When an adversary performs a shallow compromise as well as compromises at least one component in the TCB (but not all components in the TCB).
  • Complete compromise: When an adversary has managed to compromise all components in the system, including the TCB.

Types of security impact:

  • Local impact: The impact of an attack on a single validator or fullnode. This generally affects the local clients and other nodes connected to that validator or fullnode (e.g., by deceiving them or performing eclipse attacks).
  • Global impact: The impact of an attack on the blockchain as a whole, e.g., a global attack could lead to an invalid commit being performed by all validators (for some definition of invalid, see below).

The Adversary model:
Consensus assumes that f validators are byzantine and colluding (i.e., completely compromised). We therefore consider the TCB interesting if it can still provide security properties when h additional compromises occur (shallow or deep). We consider two adversary models:

  • Non-Quorum. This means: f byzantine validators and h<=f shallow or deep compromises.
  • Quorum. This means: f byzantine validators and h>f shallow or deep compromises.
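The two adversary models above reduce to a simple threshold check, which the following sketch makes explicit (illustrative only):

```python
# Minimal sketch of the adversary models above: given f byzantine validators
# and h additional shallow or deep compromises, classify the model.
def adversary_model(f: int, h: int) -> str:
    if h <= f:
        return "Non-Quorum"  # f byzantine validators + h <= f compromises
    return "Quorum"          # f byzantine validators + h > f compromises
```

The boundary matters because, as the incremental TCB analysis below shows, some designs defend correctness only in the Non-Quorum model.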

Types of Attacks:
We consider three high-level types of attacks:

  • Safety. Safety attacks mean that a fork can happen (i.e. two contradicting states can be committed across different validators).
  • Correctness. Correctness attacks extend the blockchain in a way that violates the semantics of the protocol (i.e., the correct execution of transactions and extension of the state). We define semantic extension as: given a committed state S, we get the next committed state via execution(S, transaction); this is the property we expect from the blockchain. Using this definition, correctness attacks can be divided into two categories:
    • Non-semantic extension, where malicious validators extend the chain by committing an arbitrary state. One example of non-semantic extension is: given a committed state S, we can extend it to a state S', where S' is not the result of execution(S, transaction), for any S and transaction. In this attack, honest verifying clients (verifying full nodes and validators) will become stuck and unable to reach the arbitrary state.
    • Non-semantic fork, where malicious validators commit a non-semantic extension but continue to behave honestly from the point of view of honest validators. In this attack, honest verifying clients will not be aware that a fork (committing to an arbitrary state) has been created.
    • Note: A correctness attack can only impact verifying but non-executing clients, as well as non-verifying clients.
  • Liveness. We consider liveness attacks out of scope, e.g., if f validators are completely compromised, any further compromises will violate liveness globally.

5. The Incremental TCB:

To begin reasoning about the TCB in Diem, we take a step by step approach to building a TCB based on the VCA above. For each step, we reason about the security guarantees of the design.

Step 1: TCB = { Consensus key }

To begin, we move only the consensus key into the TCB and propose that consensus asks the TCB to sign data (e.g., votes). Reasoning about security, we see:

  • Shallow compromise: Moving the consensus key into the TCB doesn’t provide much under shallow compromise: it is almost identical to a consensus key compromise as the attacker can ask the TCB to sign anything. In the non-quorum and quorum adversary models, safety can be violated, as there are more than f byzantine nodes. In the quorum adversary model, semantic correctness can also be violated (an attacker can arbitrarily extend the state).
  • Deep compromise: Deep compromise is similar to shallow compromise, except that the consensus key has also been leaked, which might enable undetected attacks like long-range attacks.
  • Complete compromise: Identical to deep compromise (as there is only one component in the TCB).

Step 2: TCB = { Consensus key + Safety Rules }

To improve on step 1, we focus on hardening the validator against safety attacks. To do this, we partition consensus and move a subset of the consensus module into the TCB, labelled safety rules. Safety rules contains a set of verification constraints that when enforced by enough validators (>= 2f+1) prevent forks in the consensus protocol (see the Voting Rules in the Consensus specification). Reasoning about security, we now see:

  • Shallow compromise:
    • Non-Quorum model: the attacker cannot violate safety (i.e., cannot fork), by the definition of safety rules. Moreover, the attacker cannot violate correctness (cannot extend the state arbitrarily) as this requires 2f+1 validators to certify and commit a non-semantic extension.
    • Quorum model: the attacker cannot violate safety (i.e., cannot fork), by the definition of safety rules. However, the attacker can violate correctness (they can extend the state arbitrarily) as 2f+1 validators can certify and commit a non-semantic extension. This is because a compromised safety rules will agree to vote on any execution state.
  • Deep compromise:
    • Consensus key: This is equivalent to a complete compromise in step 1.
    • Safety rules: This is equivalent to a shallow compromise in step 1.
  • Complete compromise: This is equivalent to complete compromise in step 1.

Step 3: TCB = { Consensus Keys + Safety Rules + Execution }

To prevent attacks on correctness (as seen in step 2 above), we need to ensure that shallow compromises cannot enable voting on proposals that arbitrarily extend state. To achieve this, we observe that one can simply move the execution logic (including the Move VM) into the TCB. This will enforce correct execution of transactions. However, one still needs to ensure that execution extends the correct state. Here, one could move storage into the TCB. However, this is naive as it bloats the TCB. Instead, we observe that it is more beneficial to treat storage as untrusted and instead have execution keep track of valid state root hashes and update them within the TCB. We call this approach execution correctness.

We now reason about the security of this approach:

  • Security Analysis
    • Shallow compromise:
      • Non-Quorum model: The attacker cannot violate safety (cannot fork) or violate correctness (cannot extend the chain arbitrarily).
      • Quorum model: The attacker cannot violate safety (cannot fork) or violate correctness (cannot extend the chain arbitrarily).
    • Deep compromise:
      • Consensus key: This is equivalent to a complete compromise in step 2.
      • Safety rules: This is equivalent to a deep compromise of safety rules in step 2.
      • Execution: This is equivalent to a shallow compromise in step 2.
    • Complete compromise: This is equivalent to complete compromise in step 2.

6. The Existing TCB (v1)

Today, execution correctness is still a work in progress and not part of the TCB. As such, shallow compromises defend against everything but correctness attacks (see step 2 of the incremental TCB). In this section, we take a look at various implementation details of the TCB as it stands today:

  • Storage is partitioned into two subsystems, secure storage and storage:
    • Secure storage holds security-sensitive data, e.g., cryptographic keys and state used by Safety Rules. Secure storage is considered part of the TCB.
    • Storage is used to store everything else, e.g., transactions and execution state. Storage is outside the TCB and untrusted by the TCB.
  • Isolated execution environments: The Diem TCB separates Safety Rules, Execution Correctness and secure storage into separate execution environments. This aims to prevent deep compromises from becoming complete compromises.
  • Automatic consensus key rotations: To prevent long range attacks and recover from consensus key compromises, Diem introduces a Key Manager which supports automatic consensus key rotations.
    • Note: To achieve this, Diem introduces another cryptographic key (the operator key), which is held in secure storage and used to endorse (i.e., sign) the consensus key on the blockchain.
  • Delegated signatures: To avoid exposing keys to components in other isolated execution environments (see the analysis of deep compromises above), Diem uses delegated signatures:
    • Safety rules asks secure storage to sign proposals and votes using the consensus key.
    • Key manager asks secure storage to sign the rotation transaction using the operator key.

7. Proposal & Path Forward (TCB v2)

Based on the observations above, we outline the following design and implementation improvements for the TCB (v2):

Design Improvements:

  • Move execution correctness into the TCB (or otherwise verify execution):
    • Reasoning: Diem is moving towards fullnode/client models that verify consensus signatures but do not verify execution. When this happens, validators will be the only line of defense responsible for correctly executing transactions.
    • Advantages: Makes it easier to reason about security, i.e., it will prevent quorum shallow compromises from violating semantic correctness. It will also become increasingly important if fullnodes don’t re-execute transactions in the future. It also makes sense if/when we move to TEEs/secure hardware.
    • Disadvantages: Performance & operational complexity (this will need to be moved into an existing TCB container, or a new container). Execution correctness is also very large, which leads one to ask whether this is even possible/practical without really bloating the TCB.
  • Always export the consensus key to safety rules:
    • Advantages: Performance (no more delegated signatures).
    • Disadvantages: Safety rules compromises will leak the consensus key itself rather than just leak control of the key. However, in practice, there is little difference.
  • Introduce a smart secure storage layer and remove the key manager:
    • Reasoning: Secure storage is closely tied to Vault today. In the future, we'd like to support heterogeneous backends (e.g., as required by other operators). Moreover, if the key manager is compromised today, the consensus and operator keys can be swapped out for adversary-controlled values. This is a non-ideal implementation artifact of Vault permissions (e.g., the key manager has permission to sign anything using the operator key, such as a transaction that rotates the operator key).
    • Advantages: Removes the key manager and replaces it with a simple key rotator thread outside the TCB. Allows us to easily move away from Vault and positions us towards heterogeneous solutions.
    • Disadvantages: This would require a lot more investigation as well as changes to both Diem code and deployment code.

Implementation Improvements:

  • Update the on-chain configs to allow single field updates:
    • Reasoning: To rotate the consensus key, the key manager must fetch the on-chain config of the node from the blockchain, copy all fields, override the consensus key and set the entire config again. This opens attack vectors. Instead, the key manager should just be able to set a single field.
    • Advantages: Removes the need for the TCB to read/communicate with the blockchain. Now, the TCB can simply create and sign a new transaction that changes the consensus key field.
    • Disadvantages: Requires on-chain config modifications.
  • Remove the execution correctness key and allow safety-rules & execution to communicate directly:
    • Advantages: Simplifies the deployment architecture and allows consensus and execution correctness to operate in the same isolated execution environment (possible performance gain?)
    • Disadvantages: Requires investigation to see if this is possible.
  • Implement detection mechanisms for TCB & key compromises:
    • Ideas:
      • Set alerts for key rotations so that operators are notified whenever keys are changed on-chain.
      • Monitor consensus behavior to identify when nodes are acting Byzantine.
    • Advantages: Prevents an adversary from taking control of validators without being noticed.
    • Disadvantages: Requires a lot more investigation to identify and propose practical solutions.
  • Analyze/Audit the interface between execution correctness & storage (ensure it is untrusted):
    • Reasoning: It seems that the interface between execution correctness and storage makes some odd assumptions (e.g., storage doesn't trust execution correctness and thus recomputes results?).

8. Future Explorations for the TCB

The list below contains future explorations for the TCB. Each of these requires additional thought and analysis.

  • Multiple/Alternate Diem implementations: This will help to ensure failures are indeed independent between validators. Right now, we can’t really make that claim.
  • Further reducing the size of each TCB component (e.g., lines of code, dependencies, unsafe code, etc.): Right now, each TCB component contains logic that doesn’t really need to be in the TCB, e.g., the key manager contains logic for serializing json, communicating with the blockchain, handling JSON RPC calls, etc. This is likely the same for others.
  • Exploring isolated execution environments, e.g., TEEs, TPMs, HSMs, on-premises deployments, etc. This seems like an important next step given that we’re currently only using containers.
  • Reasoning about and hardening the interfaces of each TCB component: e.g., reasoning about the security guarantees of each API/interface and thinking about the possible attacks that can occur through misuse.
  • Formal code verification and analysis: e.g., dynamic and static analysis tools to ensure the TCB code is indeed doing what we think it’s doing.

LIP-1 Ability to resync channel state

What happens if one of the parties loses synchronization?

Scenario 1 - Server out of sync?
The protocol server sends a command to the protocol client and asks it to mutate an object.
The command reads object version 1 in an attempt to write version 2, while the protocol client possesses version 5.
In such cases, what should the client do? Fall back to the server's version?

Scenario 2 - Client out of sync?
The protocol client sends a command that reads object version 1 and intends to mutate it into version 2.
On the server, the last version is already version 5. In that case, since the server is the source of truth, how is the client supposed
to fetch the diff (versions 2-5) and resync its channel state?

Scenario 3 - One of the parties lost channel state completely (disaster recovery)

  1. When it is back up, how can it know it even had a channel with another VASP? Maybe a channel health-check mechanism is required here.
  2. If it knows about the channel, how can it ask the other side to help it reconstruct its state?

[LIP10, LIP1] Libra ID incorporation into payment API

Starting this issue to discuss options for incorporating Libra ID into the payment API.

There are two general options available

1. Use Libra ID in PaymentActorObject

Libra ID instead of subaddress

Currently PaymentActorObject is used to specify sender/receiver of the payment.

PaymentActorObject has a field address that currently contains address and subaddress of sender/recipient.

We can modify PaymentActorObject so that the address field can contain a Libra ID instead of an address+subaddress pair.

Adding Rich data

Similarly, we can update PaymentActorObject to allow for sharing of the rich data. This can be done by adding a new info field to the PaymentActorObject. The info field would have type LIP10::UserObject (we would move this type definition to LIP-1).

Pros/cons

(+) Single request for a p2p payment. This approach seems to overlay nicely with the current LIP-1 design.
(-) This approach requires triggering the complex payment flow for any p2p transfer that involves Libra ID. This means that a VASP that wants to use Libra ID has to implement the KYC endpoint.
(-) With this design it becomes impossible to offload Libra ID to a 3rd-party service (for example, a service run by the association).
(-) It is not clear what the info endpoint will look like. Do we still have a separate information endpoint just for that, as LIP-10 currently suggests? Or do we create a dummy payment just to exchange rich data and reuse the payments endpoint from LIP-1?


2. Keep Libra ID endpoints separate, use reference_id to map Libra ID<->KYC

This option means that the sender/receiver fields of the PaymentObject become optional (even for payment creation).

In this flow, the Libra ID pay endpoint is used to negotiate a reference_id. The negotiated reference_id can either be used to submit a payment on the chain right away, or it can be passed as PaymentObject::reference_id to the KYC endpoint from LIP-1 to negotiate KYC data.

Pros/cons

(+) Libra ID becomes separate from KYC. The Libra ID pay endpoint is a simple request-response endpoint and returns a reference_id that can be used immediately for posting the payment on the chain.
(+) VASPs do not have to implement the KYC endpoint to use Libra ID for sending money below the KYC threshold
(+) Libra ID can be separated into a 3rd-party service
(-) For payments that require KYC, this design introduces an extra 'hop' where the reference_id first needs to be negotiated via Libra ID, and only then can the KYC endpoint be used to continue the process
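The two-step flow in option 2 might look like the sketch below. The service objects, method names, and threshold are all assumptions for illustration, not defined endpoints:

```python
KYC_THRESHOLD = 1_000  # illustrative threshold, in the payment currency's units

def pay_with_libra_id(libra_id_service, kyc_service, chain,
                      sender_id, receiver_id, amount):
    """Sketch: negotiate a reference_id first; only run the LIP-1 KYC
    exchange when the amount requires it, then submit on-chain."""
    reference_id = libra_id_service.negotiate_reference_id(sender_id, receiver_id)
    if amount >= KYC_THRESHOLD:
        # The extra "hop": reuse the reference_id with the LIP-1 KYC endpoint.
        kyc_service.exchange_kyc(reference_id)
    return chain.submit(reference_id, amount)
```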

Off-Chain Protocol: Optional Result Field


dip: 161
title: Off-Chain Protocol: Optional Result Field
author: Xiao Li (@xli), SunMi Lee (@sunmilee), David Wolinsky (@davidiw)
type: Informational
discussion-to: #161
created: 2021-04-07
updated: 2021-04-07


Overview

Both the Off-Chain Protocol and Travel Rule Exchange (or PaymentCommand) are defined in DIP-1. The Off-Chain Protocol was defined in such a way as to support the minimal requirements for these exchanges, specifically, in this case, asynchronous responses to requests. Many types of communication benefit from synchronous responses, as they provide both protocol simplicity and reduced latency. In order to support synchronous commands, the CommandResponseObject can leverage an optional field: result. The result field, if defined, must contain an _ObjectType that uniquely defines the shape of the other fields within the result.

Here's an example from DIP-10:

{
    "_ObjectType": "CommandResponseObject",
    "status": "success",
    "result": {
        "_ObjectType": "ReferenceIDCommandResponse",
        "recipient_address": "c5ab123458df0003415689adbb47326d"
    },
    "cid": "12ce83f6-6d18-0d6e-08b6-c00fdbbf085a"
}

Cleanup transaction on-chain / off-chain dips

  • DIP-1 should be made off-chain API only; bring in the PingCommand as both a health check for these endpoints and a simple way to describe an e2e flow. This would replace DIP-1
  • Move TR to DIP-13
  • Make DIP-4 just about refunds
  • Make DIP-5 about Subaddresses, PaymentURIs, and subaddress transactions (note intent identifier becomes PaymentURI)
  • DIP-8 is on pause waiting for follow up from FirstDAG
  • DIP-10 is in progress in another thread -- basically simplify it so that it is purely about identifier exchange off-chain resulting in a PII-free on-chain exchange

DIP-4 requires unclear out of band communication

DIP-4 indicates that the parties explicitly define a "to" and a "from" subaddress in the metadata. For this to be useful, the parties would need to assign pseudonyms internally and exchange them externally, which requires an unmentioned protocol. I would propose instead that the recipient simply indicate a unique subaddress for each "from". Using the flows in the doc:

NC => NC
No need

C => NC
Recipient NC would share a distinct subaddress so that it can track the submitter

NC => C
No need

C => C
Recipient VASP would share a distinct subaddress so that it can track the submitter

This information can then be stored easily within a one-time-use QR code or bech32-encoded identifier, requiring minimal interaction to generate a verifiable payment for both payer and payee.
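The per-sender subaddress assignment described above could be sketched as follows. This is a minimal illustration, not part of the DIP:

```python
import secrets

class SubaddressDirectory:
    """Assign each counterparty a distinct 8-byte subaddress so the
    recipient can attribute incoming payments without any shared protocol."""
    def __init__(self):
        self._by_sender = {}

    def subaddress_for(self, sender_id):
        # Reuse the same subaddress for a known sender; mint one otherwise.
        if sender_id not in self._by_sender:
            self._by_sender[sender_id] = secrets.token_bytes(8)
        return self._by_sender[sender_id]
```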

In any case (as proposed in LIP-4 or mentioned here), there is no way to prevent either party from leaking information or to ensure that they choose an identifier dynamically.

Proposal: Conflict-Resistant Sequence Numbers

Sequence numbers today provide a simple mechanism to prevent transactions from being replayed — only a single transaction can ever be processed for a given sequence number/account combination. Additionally, the sequence number provides a way of enforcing sequentiality; a sender can send transactions with sequence numbers 0, 1, 2, and 3 to the network in any order since the sender knows that 1 can only be processed after 0, 2 after 1, etc.

However, sequence numbers today present issues due to sequence number lockup: once a transaction at a given sequence number n is chosen, the sending account will block on being able to process other transactions with sequence numbers greater than n until that transaction has been processed. This issue has cropped up in multiple places so far, and in general will be an issue when any off-chain activity related to signing a full transaction is being considered.

In general, we have observed a theme with sequence numbers: they prevent replays and ensure strict sequentiality; however, this "strict sequentiality" presents serious usability issues in off-chain protocols. This proposal introduces the idea of a Conflict-Resistant Sequence Number (CRSN), which relaxes strict sequentiality to allow concurrent processing of transactions while still allowing some level of dependency between transactions sent by the same account to be expressed.

CRSNs would work at the account level and would be an opt-in feature. No breaking changes would be needed to Diem data structures as the transaction's sequence_number field can be reused.
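A minimal sketch of how CRSN acceptance might work. This illustrates the idea only (a sliding window of acceptable, unused sequence numbers) and is not the actual Diem implementation:

```python
class CRSNAccount:
    """Accept any unused sequence number inside a sliding window, so
    transactions can be processed concurrently while replays are rejected."""
    def __init__(self, window=128):
        self.min_nonce = 0        # lowest still-acceptable sequence number
        self.window = window
        self.used = set()

    def try_accept(self, seq):
        if seq < self.min_nonce or seq >= self.min_nonce + self.window:
            return False          # outside the window
        if seq in self.used:
            return False          # replay
        self.used.add(seq)
        # Slide the window past contiguously used numbers.
        while self.min_nonce in self.used:
            self.used.remove(self.min_nonce)
            self.min_nonce += 1
        return True
```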

Support different date formats in lip headers

The following format in the header front matter of a lip causes a Docusaurus build error, I think due to the plugin that renders the LIPs:

05-29-2020

You have to use

05/29/2020

Error: Minified React error #31; visit https://reactjs.org/docs/error-decoder.html?invariant=31&args[]=Thu%20May%2028%202020%2017%3A00%3A00%20GMT-0700%20(Pacific%20Daylight%20Time)&args[]= for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
(undefined) Error: Minified React error #31; visit https://reactjs.org/docs/error-decoder.html?invariant=31&args[]=Thu%20May%2028%202020%2017%3A00%3A00%20GMT-0700%20(Pacific%20Daylight%20Time)&args[]= for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
    at T (main:25138:94)
    at V (main:25138:229)
    at X (main:25139:321)
    at toArray (main:25140:398)
    at a.b.renderDOM (main:25216:460)
    at a.b.render (main:25209:326)
    at a.b.read (main:25208:18)
    at Object.renderToString (main:25218:364)
    at render (main:51820:252)
Error: Failed to compile with errors.
    at /Users/joelm/dev/lip-primary/node_modules/@docusaurus/core/lib/commands/build.js:37:24
    at finalCallback (/Users/joelm/dev/lip-primary/node_modules/webpack/lib/MultiCompiler.js:254:12)
    at /Users/joelm/dev/lip-primary/node_modules/webpack/lib/MultiCompiler.js:277:6
    at done (/Users/joelm/dev/lip-primary/node_modules/neo-async/async.js:2931:13)
    at runCompilers (/Users/joelm/dev/lip-primary/node_modules/webpack/lib/MultiCompiler.js:181:48)
    at /Users/joelm/dev/lip-primary/node_modules/webpack/lib/MultiCompiler.js:188:7
    at /Users/joelm/dev/lip-primary/node_modules/webpack/lib/MultiCompiler.js:270:7
    at finalCallback (/Users/joelm/dev/lip-primary/node_modules/webpack/lib/Compiler.js:257:39)
    at /Users/joelm/dev/lip-primary/node_modules/webpack/lib/Compiler.js:273:13
    at AsyncSeriesHook.eval [as callAsync] (eval at create (/Users/joelm/dev/lip-primary/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:40:1)
    at AsyncSeriesHook.lazyCompileHook (/Users/joelm/dev/lip-primary/node_modules/tapable/lib/Hook.js:154:20)
    at onCompiled (/Users/joelm/dev/lip-primary/node_modules/webpack/lib/Compiler.js:271:21)
    at /Users/joelm/dev/lip-primary/node_modules/webpack/lib/Compiler.js:681:15
    at AsyncSeriesHook.eval [as callAsync] (eval at create (/Users/joelm/dev/lip-primary/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:4:1)
    at AsyncSeriesHook.lazyCompileHook (/Users/joelm/dev/lip-primary/node_modules/tapable/lib/Hook.js:154:20)
    at /Users/joelm/dev/lip-primary/node_modules/webpack/lib/Compiler.js:678:31
error Command failed with exit code 1.

See #7 for an example.
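A workaround sketch: normalize the front-matter date before Docusaurus parses it. The helper below is hypothetical, not part of the existing build scripts:

```python
from datetime import datetime

def normalize_date(value):
    """Convert 'MM-DD-YYYY' front-matter dates to the 'MM/DD/YYYY'
    form that builds cleanly."""
    return datetime.strptime(value, "%m-%d-%Y").strftime("%m/%d/%Y")

print(normalize_date("05-29-2020"))  # 05/29/2020
```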

LIP-8: Recipient VASP can't send a pre-approval request to a sender VASP it is not aware of

In the recurring payments scenario, the recipient VASP (some merchant acquirer) should ask the sending VASP (some consumer wallet) for a FundPullPreApprovalCommand (a request for the consumer's consent to a recurring charge by the merchant). After such a request has been submitted, the recipient VASP can use "out-of-band methods" (some interaction with the wallet user) to approve/reject the pull pre-approval request.

As I understand the process, we may encounter a problem.
Say a merchant's consumer wants to add Libra as a payment method (e.g., to his favorite transportation app, which wants to charge him for every trip).

The merchant app will ask its Libra acquirer service to generate a request for recurring-charge consent. The problem is that,
at that point in time, neither the merchant nor the acquirer can know who the consumer VASP is, or which specific user (subaddress?) inside the VASP the pre-approval request should be sent to.

An alternative flow might be:

  1. The acquirer provides a QR code/link at the checkout page with the scope of the consent request (amount, currency, etc.), the acquirer VASP address, and the merchant user id.
  2. The consumer scans the QR code / opens the link with his Libra wallet.
  3. The Libra consumer wallet asks for the consumer's consent (approve/reject).
  4. The consumer wallet opens a channel to the requesting VASP (the merchant acquirer) and creates the FundPullPreApprovalObject with the consumer's decision.
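Step 4 above could be sketched as the wallet building the pre-approval object from the scanned data. The field names are assumptions loosely based on LIP-8's FundPullPreApprovalCommand, not an exact schema:

```python
def build_fund_pull_pre_approval(scope, biller_address, decision):
    """Hypothetical sketch: the consumer wallet records the user's decision
    on a pre-approval request received via QR code/link."""
    return {
        "_ObjectType": "FundPullPreApprovalObject",
        "biller_address": biller_address,  # acquirer VASP address from the QR code
        "scope": scope,                    # e.g. max amount, currency, expiration
        "status": "valid" if decision == "approve" else "rejected",
    }
```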

Thoughts?
