
Introduction

Penumbra logo

Penumbra is a fully shielded zone for the Cosmos ecosystem, allowing anyone to securely transact, stake, swap, or marketmake without broadcasting their personal information to the world.

Getting involved

The primary communication hub is our Discord; click the link to join the discussion there.

The guide to using the Penumbra software and interacting with the testnets can be found at guide.penumbra.zone.

The (evolving) protocol spec is rendered at protocol.penumbra.zone.

The (evolving) API documentation is rendered at rustdoc.penumbra.zone.

The (evolving) protobuf documentation is rendered at buf.build/penumbra-zone/penumbra.

To participate in our test network, use the Penumbra command-line client, pcli.

To join the test network as a full node, follow the setup instructions for the Penumbra node implementation, pd.

Current work and roadmap

For a high-level view of current work-in-progress and future items, check out our roadmap.

Security

If you believe you've found a security-related issue with Penumbra, please disclose responsibly by contacting the Penumbra Labs team at [email protected].

License

By contributing to penumbra you agree that your contributions will be licensed under the terms of both the LICENSE-Apache-2.0 and the LICENSE-MIT files in the root of this source tree.

If you're using penumbra, you are free to choose one of the provided licenses:

SPDX-License-Identifier: MIT OR Apache-2.0


Issues

Add notion of epochs

Penumbra's validator set changes should only happen at epoch boundaries. Adding a notion of epochs involves:

  • adding an epoch duration parameter (presumably this goes in the genesis file #31 #17; it's probably not necessary to change during a deployment, but it's useful to be able to adjust, so that, e.g., testnets can run with faster epochs)
  • adding a spot in the block processing logic for validator changes to be propagated back to Tendermint at the end of an epoch (this can be a no-op at first; a sketch of the epoch bookkeeping follows this list)
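
A minimal sketch of that bookkeeping, assuming epochs are defined purely in terms of block height (all names illustrative):

```rust
/// Chain parameter, set in the genesis file (hypothetical name).
pub struct ChainParams {
    /// Number of blocks per epoch; small values let testnets run faster epochs.
    pub epoch_duration: u64,
}

/// An epoch is just an index derived from the block height.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Epoch {
    pub index: u64,
}

impl Epoch {
    pub fn of_height(height: u64, params: &ChainParams) -> Self {
        Epoch { index: height / params.epoch_duration }
    }

    /// True at epoch boundaries, where validator set changes should be
    /// propagated back to Tendermint (a no-op at first).
    pub fn is_boundary(height: u64, params: &ChainParams) -> bool {
        (height + 1) % params.epoch_duration == 0
    }
}
```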

Define initial ABCI query interface

ABCI offers a Query method to request data from the ABCI application. This will be the main point of access to the Penumbra chain state, and will probably be called via the Tendermint /abci_query RPC endpoint.

The ABCI Query method takes data as bytes and path as a string, but the RPC method takes both as strings, because the RPC speaks JSON. Binary data therefore has to be encoded (e.g., as hex) at the RPC layer and decoded before being passed through to ABCI.

For the actual protocol, Protobufs seem like a good choice, because they come with an actual specification of the data interchange format.

  • Declare a .proto file with at least one ABCI query
  • Use prost to generate Rust protobuf types (with Serde support)
  • Implement the ABCI query in pd
  • Implement the ABCI query in pcli
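
An illustrative sketch of the last two tasks (the message type and path are hypothetical), decoding the query's data bytes with prost and dispatching on path:

```rust
use prost::Message;

// A prost-generated request type would normally come from a .proto file;
// this hand-written equivalent is purely illustrative.
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct NoteCommitmentTreeRequest {
    #[prost(uint64, tag = "1")]
    pub at_height: u64,
}

/// Dispatch an ABCI query by `path`, decoding `data` as protobuf.
fn handle_query(path: &str, data: &[u8]) -> Result<Vec<u8>, prost::DecodeError> {
    match path {
        "state/note_commitment_tree" => {
            let req = NoteCommitmentTreeRequest::decode(data)?;
            // ... read application state at req.at_height and encode a
            // protobuf response message here ...
            let _ = req;
            Ok(Vec::new())
        }
        _ => Ok(Vec::new()),
    }
}
```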

Basic metrics support

Metrics are invaluable for debugging and observability. It's much easier to add them as the software is being developed, rather than to go back and add them into all of the existing code. So, at the outset:

  • Add support for metrics using the metrics crate (at the outset, this could just be a Prometheus endpoint and a single metric);
  • Write down a set of standards for metrics names, aiming for consistency in metrics as they're added.
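
A minimal sketch of the first task; exact macro and builder signatures differ across versions of the metrics and metrics-exporter-prometheus crates, and the metric name here is hypothetical:

```rust
use metrics::counter;
use metrics_exporter_prometheus::PrometheusBuilder;

fn main() {
    // Expose a Prometheus scrape endpoint; depending on the crate version,
    // this may need to run inside a Tokio runtime.
    PrometheusBuilder::new()
        .install()
        .expect("failed to install Prometheus metrics exporter");

    // Record a single metric, following a consistent naming standard such
    // as `<binary>_<subsystem>_<event>_total`.
    counter!("pd_abci_deliver_tx_total").increment(1);
}
```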

Allow `pcli` to show account balances in both bonded / unbonded amounts.

Using #49, extend pcli so that the interface displays balances in the staking token and shielded staking tokens in terms of either unbonded or bonded stake, as in this section of the protocol docs:

It also provides an alternate perspective on the debate between fixed-supply and inflation-based rewards. Choosing the unbonded token as the numéraire, delegators are rewarded by inflation for taking on the risk of validator misbehavior, and the token supply grows over time. Choosing the bonded token as the numéraire, non-delegators are punished by depreciation for not taking on any risk of misbehavior, and the token supply is fixed.
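
Mechanically, the two displays are related by the validator's exchange rate; a minimal sketch, assuming the rate is tracked as a fixed-point integer (the scale and names are hypothetical):

```rust
/// Exchange rate between a validator's bonded (delegation) token and the
/// unbonded staking token, as a fixed-point integer scaled by 1e8 (assumed).
pub struct ExchangeRate(pub u64);

impl ExchangeRate {
    /// Display value of `bonded` delegation tokens in unbonded stake.
    pub fn unbonded_value(&self, bonded: u64) -> u64 {
        ((bonded as u128 * self.0 as u128) / 100_000_000) as u64
    }
}
```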

Create `docker-compose` configs for building testnets

From #14:

Tendermint has a nifty docker-compose config for running a local testnet of several nodes in Docker; we might want something like that as well for dev purposes?

  • Fix Prometheus configs to read from each server
  • Create DB + pd + tendermint instance for each node
  • Modify setup_validator.py script to enable --populate-persistent-peers flag for tendermint/localnode invocation

Create tooling for shielded genesis

Augment the operationally-focused (setting up nodes, etc) tooling from #17 with tooling for creating genesis files compatible with shielded transactions. Genesis only happens once (even if that once happens many times during development), so it seems preferable not to have a special genesis input type for transactions, and instead have a way to have an initial state with shielded notes.

This should happen only after nailing down details and getting transaction functionality implemented with transparent proofs.

Implement client scanning / sync

Using #33 and #34, implement synchronization logic that does client-side transaction scanning.

This functionality should be exposed with a new pcli sync command, and happen automatically on every client action that requires access to the chain state.
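
A sketch of the scanning core, assuming ranges of blocks are fetched and each note ciphertext is trial-decrypted with every viewing key (all types hypothetical):

```rust
// Hypothetical stand-ins for the real client types.
struct Note;
struct ViewingKey;

impl ViewingKey {
    fn trial_decrypt(&self, _ciphertext: &[u8]) -> Option<Note> {
        None // the real implementation attempts note decryption here
    }
}

/// Scan one fetched batch of note ciphertexts, returning recovered notes.
fn scan_batch(keys: &[ViewingKey], ciphertexts: &[Vec<u8>]) -> Vec<Note> {
    let mut notes = Vec::new();
    for ct in ciphertexts {
        for key in keys {
            if let Some(note) = key.trial_decrypt(ct) {
                notes.push(note);
            }
        }
    }
    notes
}
```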

Determine whether Penumbra could integrate Fuzzy Message Detection

Notes on FMD added in 38c921b and (currently) rendered here: https://penumbra.zone/crypto/primitives/fmd.html

That page has a number of questions, copied below:

  • How should the false positive rate be determined? In some epoch, let $p$
    be the false positive rate, $N$ be the total number of messages, $M$ be the
    number of true positives for some detection key, and $D$ be the number of
    detections for that detection key. Then
    $$
    E[D] = M + p(N-M) = pN + M(1-p),
    $$
    and ideally $p$ should be chosen so that:

    1. $E[D]$ is bounded above;
    2. When $M$ is within the range of "normal use", $E[D]$ is close enough to $pN$ that it's difficult for a detector to distinguish (what does this mean exactly?). A worked example of these quantities follows this list.
  • The notion of detection ambiguity only requires that true and false
    positives be ambiguous in isolation. In practice, however, a detector has
    additional context: the total number of messages, the number of detected
    messages, and the false positive probability. What's the right notion in this
    context?

  • What happens when an adversary manipulates $N$ (diluting the global
    message stream) or $M$ (by sending extra messages to a target address)? There
    is some analogy here to flashlight attacks, although with the
    critical difference that flashlight attacks on decoy systems degrade privacy of
    the transactions themselves, whereas here the scope is limited to transaction
    detection.

  • If a detector has detection keys for both the sender and receiver of a
    transaction, they will detect the corresponding message with both keys with
    probability $1$, relative to a base rate of probability $p^2$. How does this
    affect their information gain? How does this change as the detector has not
    just two keys, but some proportion of all detection keys? How much more of the
    transaction graph could they infer?

  • How are detection keys derived and/or shared, so that they can actually be
    used by participants in the protocol?
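
Worked example for the first question, with hypothetical numbers: take $p = 10^{-2}$, $N = 10^6$ total messages, and $M = 100$ true positives for some detection key. Then

$$
E[D] = pN + M(1-p) = 10000 + 99 = 10099 \approx pN,
$$

so the expected detection set is only marginally above the noise floor $pN$, which is the regime where true positives are well hidden.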

Track base reward rate, validator-specific rates, and voting power.

The application needs to track the base reward rate for staking, the base exchange rate, and the derived validator-specific exchange rates. The voting power for each validator is calculated from the size of their delegation pool, and should also be recorded.

It's important that the implementation treats the reward rate as a time series, not as a constant value, since eventually the reward rate should adjust to steer staking incentives. We're not immediately ready to implement that, but if we use a constant staking rate, we might write code that assumes the staking rate is constant, and we wouldn't notice, because it would be accidentally correct. As an alternative, we could choose the staking rate based on the leading byte of the hash of the last block in each epoch. This is economically meaningless, but it's easy to implement, and it means we're forced to build code around the idea that the staking rate is variable.

The details are in the staking section of the protocol spec, but the high-level picture is:

base reward rate (time series) ===> base exchange rate (time series)

base reward rate (time series)
   + validator funding streams (can change, so also time series) ===> validator-specific exchange rates (time series)

base exchange rate + validator exchange rate ===> validator voting power (time series)
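
A sketch of how these time series might be recorded, one entry per epoch (illustrative names, not the actual pd schema):

```rust
/// Chain-wide rates, recorded once per epoch.
pub struct BaseRateData {
    pub epoch_index: u64,
    pub base_reward_rate: u64,
    pub base_exchange_rate: u64,
}

/// Per-validator rates derived from the base rates and the validator's
/// funding streams, also recorded once per epoch.
pub struct ValidatorRateData {
    pub epoch_index: u64,
    pub validator_exchange_rate: u64,
    /// Voting power, derived from the delegation pool size and the rates.
    pub voting_power: u64,
}
```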

Subtasks:

Add queries for staking-related rates

Extend the protocol created in #22 to allow querying the application state for:

  • historical base reward rates
  • historical commission percentages for each validator
  • historical exchange rates for shielded staking tokens

Choose a minimum denomination of the staking token

This is the analogue of satoshi for BTC, wei for ETH, uATOM for ATOM, etc.

  • BTC: 1e-8 BTC
  • ETH: 1e-18 ETH
  • ATOM: 1e-6 ATOM

Choosing 1e-6 fits with the rest of the Cosmos ecosystem and is probably good enough. 1e-18 seems really excessive.
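
Assuming the 1e-6 choice, display formatting reduces to a fixed scale factor (a sketch; the constant name is hypothetical):

```rust
/// Smallest unit of the staking token, assuming the 1e-6 choice:
/// one staking token = 1_000_000 base units.
pub const STAKING_TOKEN_SCALE: u64 = 1_000_000;

/// Format an amount of the smallest denomination for display.
pub fn display_amount(base_units: u64) -> String {
    format!(
        "{}.{:06}",
        base_units / STAKING_TOKEN_SCALE,
        base_units % STAKING_TOKEN_SCALE
    )
}
```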

Protocol notes don't actually deploy multiple versions

#5 changes the protocol notes so that there's a version field included in the URL paths. This way, in the future, there can be a separately-rendered set of docs for each tag or branch.

However, the current deploy pipeline uses the firebase cli, which only does atomic deployments, so there's no way to add a set of files to the existing site, and this means that only the most recently deployed version will actually appear. At the moment this isn't a big deal because there's just one version anyways.

Specify note contents

Fill in this section of the protocol spec; settle on (and write up) a choice of leadByte method.

Create stub version of wallet protocol

[Avoiding use of the phrase "light client" in the issue name to avoid confusion; this is analogous to the Zcash light wallet functionality, but with the relationship to the Tendermint light client protocol left undetermined]

As explained in the first part of #28, the state on a shielded blockchain like Penumbra is shaped differently than on a public one:

On a shielded blockchain, however, the state is fragmented across all users of the application, as each user has a view only of their "local" portion of the application state. Transactions update a user's state privately, and use a zero-knowledge proof to prove to all other participants that the update was allowed by the application rules.

Even when using transparent proofs to stub out the system, it's important to get the data flow right, so it would be good to build a client protocol from the beginning. This avoids the problem in Zcash where, historically, client functionality was integrated into the node software, elevating fullnode clients as the default and leaving non-fullnode clients as future work.

There are two relevant trust axes for a client protocol:

  1. Trust in the node to report the chain state correctly (can be engineered away using a light client protocol, etc);
  2. Trust in the node not to monitor the client's activity (can be engineered away using fuzzy detection, private retrieval, etc).

At the outset, the initial client protocol should have both trust (1) and trust (2), fetching full transactions from the Penumbra application and scanning them locally. Eventually, this should evolve to reduce trust (1) [easier] and reduce trust (2) [harder].

Because the protocol should evolve, it's probably better to treat it as its own protocol tunneled through Tendermint via ABCI, rather than using the generic Tendermint RPC methods that allow fetching blocks and transactions.

Because Tendermint has short block times, it's likely better to make the protocol oriented around ranges of blocks, rather than per-block queries, since many blocks will be empty and doing a lot of round trips is sad.

  • Define a .proto specification of the client protocol
  • Add prost support for generating Rust proto types
  • Add support inside of pd for serving transactions
  • Use the client protocol inside pcli
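
To illustrate the range-oriented shape of the protocol (message and field names are hypothetical; the real definitions would live in the .proto specification):

```rust
// Hand-written equivalent of a prost-generated type, for illustration only.
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct CompactBlockRangeRequest {
    /// First block height to fetch, inclusive.
    #[prost(uint64, tag = "1")]
    pub start_height: u64,
    /// Last block height to fetch, inclusive; 0 could mean "chain tip".
    #[prost(uint64, tag = "2")]
    pub end_height: u64,
}
```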

Reorganize notes into new sections

The notes should have a telescoping structure, starting with a basic overview and then zooming in to detailed specifications of all of the components. It would be helpful to pull out a high-level summary of all of the concepts into one subsection (maybe "Concepts") with subsections giving an overview of the different components (the staking mechanism, notes, transactions, threshold crypto, etc), and then create one section for each of those components with a detailed, spec-level description of them.

Restructure `/crypto/` section in protocol spec

The current structure of the protocol spec has one big Cryptography section, with a separate Primitives subsection.

I think this ends up being a bit unwieldy, since there's a lot of content in the Primitives section that gets hidden.

It would be better to divide the content based on whether or not it's Penumbra-specific, and create two sections, Cryptographic Primitives (containing all content that could be useful independently) and Protocol (containing how those legos are assembled into Penumbra).

Specify delegation transactions

Delegation transactions involve a specialized trade of one particular asset (unbonded stake) to another (the staking token of a particular validator).

These need to:

  • reveal the asset type of the spend and ensure it is unbonded stake;
  • create a new note with an appropriate amount of the validator's staking token;
  • "transparently encrypt" the new amount of the staking token to the validator set (similarly to #28 , this is meant as a placeholder for a value that should eventually be encrypted).

The exact details should be specified and added to the protocol spec.
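
A sketch of what the action's data might look like, derived from the requirements above (field names hypothetical):

```rust
/// A delegation action: consumes unbonded stake, produces delegation tokens.
pub struct Delegate {
    /// Identity of the validator being delegated to.
    pub validator_identity: [u8; 32],
    /// Amount of unbonded stake consumed (asset type revealed transparently).
    pub unbonded_amount: u64,
    /// Amount of the validator's staking token created; "transparently
    /// encrypted" to the validator set as a placeholder for real encryption.
    pub delegation_amount: u64,
}
```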

Improve `docker-compose` setup

From #14:

  • > the correct way to volume mount the Tendermint dir so that it can be affected by running tendermint CLI commands locally on the host (for dev purposes) remains to be figured out
  • > docker-compose configuration should probably make a private network for tendermint -> pd RPCs so that nothing else on the host is allowed to talk to the pd process' ABCI RPCs

Specify undelegations

Undelegation transactions involve a specialized trade of one particular asset (the staking token of a particular validator) back into another (unbonded stake).

This is slightly more complicated than delegation, because there are additional possibilities: the validator could be slashed, or it could have fallen out of consensus.

  • If the validator is still part of the consensus set, undelegations go into an undelegation queue. Otherwise, they take effect immediately. (This could happen if the validator was slashed, or if the validator was bumped out of the consensus set).

  • If the validator was slashed, the exchange rate needs to account for the slashing penalty.

Undelegations need to:

  • reveal the asset type of the spend and ensure it is bonded stake;
  • reveal the amount of bonded stake to unbond;
  • create a new note with an appropriate amount of the staking token.

The exact details should be specified and added to the protocol spec.

Choose a data store for Penumbra's application state

Some candidates:

  • RocksDB: Pros: fast, mature, fairly simple KVstore interface. Cons: big C++ dependency, platform issues, need to decide on queries upfront
  • SQLite: Pros: lightweight, simple, embeddable. Cons: possible scaling issues (?)
  • Postgres: Pros: very powerful "real database", flexible queries. Cons: cannot be embedded in the application.

One point to consider is that the setup of having tendermint drive pd using ABCI already requires bundling two programs and running them together. If this is done with docker, the marginal cost of having another program (postgres) is smaller, and instead of having to build the database, it can just be pulled as a docker image. On the other hand, leaning into docker in this way probably increases the pain of deployment for any non-Docker users.

Avoid full expansion of clue keys in `decaf377_fmd`

The FMD variant we use for Penumbra derives the child keypairs used for each bit of detection precision from a single root keypair (the compact clue key / compact detection key). In the current implementation of decaf377_fmd, all possible child keypairs are derived when a clue key is expanded, regardless of whether or not they will actually be used.

Two alternatives:

  • the child points could be derived each time the address is used, slowing down the case where the same address is used multiple times, but aiding simplicity;
  • the child points could be stored in a RefCell allowing interior mutability, and transparently cached inside the ExpandedClueKey as needed, at the cost of removing the Sync bound from ExpandedClueKey.

This is an optimization, so it's not important to do it now.
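
For reference, a sketch of the second alternative (type internals hypothetical); note that RefCell is !Sync, which is exactly the trade-off mentioned above:

```rust
use std::cell::RefCell;

// Stand-ins for the real decaf377_fmd types, for illustration only.
pub struct ClueKey(pub [u8; 32]);
pub struct ChildPoint(pub [u8; 32]);

pub struct ExpandedClueKey {
    root: ClueKey,
    /// Lazily-derived child points, cached via interior mutability.
    /// RefCell makes this type !Sync.
    children: RefCell<Vec<ChildPoint>>,
}

impl ExpandedClueKey {
    /// Return the child point for precision bit `i`, deriving (and caching)
    /// any children that haven't been computed yet.
    pub fn child(&self, i: usize) -> ChildPoint {
        let mut children = self.children.borrow_mut();
        while children.len() <= i {
            let n = children.len();
            children.push(derive_child(&self.root, n));
        }
        ChildPoint(children[i].0)
    }
}

fn derive_child(_root: &ClueKey, _i: usize) -> ChildPoint {
    ChildPoint([0; 32]) // placeholder for the real derivation
}
```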

Track validator-associated data

There are at least three important pieces of data to associate to a validator:

  • a shielded address for the validator's commission (#39);
  • the validator's commission percentage;
  • the validator's unclaimed reward balance.

To start, these can probably be defined at genesis, but eventually, validators should be able to update them. Determining how that should happen is out-of-scope for this issue.

Decide on a proof system and primitives

Penumbra needs SNARK proofs. Because the choice of proving system and proving curve can't really be cleanly separated from the rest of the system choices (e.g., the native field of the proving system informs what embedded curve is available, and how circuit programming is done), large parts of the rest of the system design block on making a choice of proving system.

Goals

  1. Near-term implementation availability. Creating fast and high-quality implementations of elliptic curves and proof systems is fun, but I don't want the project to block on it.
  2. High performance for fixed functionality. Penumbra currently intends to support fixed functionality; programmability is a good future goal but isn't a near-term objective. The fixed functionality should be as performant as possible.
  3. Longer-term flexibility. The choice should ideally not preclude many future choices for later functionality. More precisely, it should not impose high switching costs on future choices.
  4. Recursion capability. Penumbra doesn't currently make use of recursion, but there are a lot of cool things recursion could enable in the future, so the choice shouldn't rule it out.

Avoiding a setup ceremony is beneficial for operational reasons, but not required for security: a decentralized setup procedure is sufficient for security.

Options

Proof systems:

  • Groth16:
    • Pros: high performance, very small proofs, mature system
    • Cons: requires a setup for each proof statement
  • PLONK:
    • Pros: universal setup, still fairly compact proofs, seems to be a point of convergence with useful extensions (plookup, SHPLONK, etc)
    • Cons: bigger proofs, worse constants than Groth16
  • Halo 2
    • Pros: no setup, arbitrary depth recursion
    • Cons: bigger proof sizes, only usable with the Pallas/Vesta curves, which don't support pairings

Curve choices:

  • BLS12-381:

    • Pros: very mature, used by Sapling already
    • Cons: no easy recursion
  • BLS12-377:

    • Pros: constructed as part of Zexe to support depth 1 recursion using a bigger parent curve, deployed in Celo, to be deployed in Zexe
    • Cons: ?
  • Pallas/Vesta:

    • Pros: none other than support for Halo 2's arbitrary recursion
    • Cons: no pairings mean they cannot be used for any pairing-based SNARK

Thoughts

Although the choice of proof system (Groth16, Plonk, Halo, Pickles, ...) is not completely separable from the choice of proving curve (e.g., pairing-based SNARKs require pairing-friendly curves), to the extent that it is, the choice of the proof system is relatively less important than the choice of proving curve, because it is easier to encapsulate.

The choice of proving curve determines the scalar field of the arithmetic circuit, which determines which curves are efficient to implement in the circuit, which determines which cryptographic constructions can be performed in the circuit, which determines what kind of key material the system uses, which propagates all the way upwards to user-visible details like the address format. While swapping out a proof system using the same proving curve can be encapsulated within an update to a client library, swapping out the proving curve is extremely disruptive and essentially requires all users to generate new addresses and migrate funds.

This means that, in terms of proof system flexibility, the Pallas/Vesta curves are relatively disadvantaged compared to pairing-friendly curves like BLS12-381 or BLS12-377, because they cannot be used with any pairing-based SNARK, or any other pairing-based construction. Realistically, choosing them is committing to using Halo 2.

Choosing BLS12-377 instead of BLS12-381 opens the possibility to do depth-1 recursion later, without meaningfully restricting the near-term proving choices. For this reason, BLS12-377 seems like the best choice of proving curve.

Penumbra's approach is to first create a useful set of fixed functionality, and generalize to custom, programmable functionality only later. Compared to Sapling, there is more functionality (not just Spend and Output but Delegate, Undelegate, Vote, ...), meaning that there are more proof statements. Using Groth16 means that each of these statements needs to have its own proving and verification key, generated through a decentralized setup.

So the advantage of a universal setup (as in PLONK) over per-statement setup (as in Groth16) would be:

  1. The setup can be used for additional fixed functionality later;
  2. Client software does not need to maintain distinct proving/verification keys for each statement.

Not having (2) is a definite downside, but the impact is a little unclear. As a point of reference, the Sapling spend and output parameters are 48MB and 3.5MB respectively. The size of the spend circuit could be improved using a SNARK-friendly hash function.

With regard to (1), if functionality were being developed in many independent pieces, doing many setups would impose a large operational cost. But doing a decentralized setup for a dozen proof statements simultaneously does not seem substantially worse than doing a decentralized setup for a single proof statement. So the operational concern is related to the frequency of groups of new statements, not the number of statements in a group. Adding a later group of functionality is easy if the first group used a universal setup. But if it didn't, the choice of per-statement setup initially doesn't prevent the use of a universal setup later, as long as the new proof system can be implemented using the same curve.

Because Penumbra plans to have an initial set of fixed functionality, and performance is a concern, Groth16 seems like a good choice, and leaves the door open for a future universal SNARK. Using BLS12-377 opens the door to future recursion, albeit only of depth 1.

Update protocol spec to match `decaf377-fmd` work

The decaf377-fmd implementation added in #79 should be reconciled with the existing protocol spec contents:

  • the terminology in the spec should be updated to match the terminology in the code (clue, address, detection key);
  • the section on compact keys should be cut out and moved to its own subsection;
  • the section on diversified detection should be cut out and moved to a separate subsection, where it's clear that it's not actually used in Penumbra;
  • some content should be added explaining the context that FMD will be used, hinting at how we can determine parameter choices (since a fair amount of existing protocol analysis doesn't apply to our setting).

Also:

  • the decaf377-fmd docs should have a brief overview of the purpose of the crate.

Specify `decaf377`, Decaf-for-Edwards-over-BLS12-377.

Unless there's a dramatic change to the analysis in #2, it's very likely that Penumbra will use BLS12-377. Penumbra needs a cryptographic group that can be used inside of an arithmetic circuit. This group would play the role Jubjub plays in Sapling, and be used for other purposes.

The Zexe paper, which defined BLS12-377, also defined (but did not name) a cofactor-4 Edwards curve defined over the BLS12-377 scalar field. It would be possible to use the Edwards curve directly, but doing so would provide a leaky abstraction, forcing all of the downstream constructions to pay attention to cofactors. Instead, it would be much better to specify (and name) Decaf for this curve, so that all protocols using BLS12-377 (including Penumbra) can have a clean abstraction to work with.

In the "machine" cost model for elliptic curve implementations that execute machine instructions, Decaf (and Ristretto) impose negligible additional cost compared to the underlying Edwards curve, because the dominant cost of encoding and decoding is an inverse square root operation. However, in the "circuit" cost model for an elliptic curve inside of an arithmetic circuit, this is no longer the case, because arithmetic circuits certify computation rather than perform it, and certifying an inverse is exactly as expensive as certifying a multiplication.

This means that unlike the machine case, using Decaf in a circuit does impose additional costs. These costs should be quantified precisely, but they are almost certainly worthwhile for general-purpose applications. Decaf provides a single, unified abstraction that works the same way inside and outside of a circuit and the same way for all applications, making it a good default choice.

Some special cases, such as Pedersen hashes, may benefit from using the underlying curve directly, but this functionality can be encapsulated as, e.g., a hash function.

The notes should include:

  • the above problem description;
  • a name for Decaf-for-Edwards-over-BLS12-377 (and Edwards-over-BLS12-377, which is missing one);
  • a specification of the encoding and decoding functions with test vectors;
  • a specification of hash-to-group -- this could usefully have two, one single-width and one double-width (hash twice and add), depending on the cost of the hash-to-group method;
  • a survey of other methods for handling cofactors inside a circuit and a comparison of the costs.

Create asset identifier for each validator's staking token

Penumbra's staking design treats all delegations to a particular validator as fungible, and provides native liquid staking tokens associated to each validator. To record these in the shielded pool (#27), we need a way to create an asset identifier linked to the validator's identity.
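
One plausible shape for this (a sketch only; the domain separator and hash choice are hypothetical, not the actual derivation): hash the validator's identity key into a fixed-size identifier.

```rust
use blake2b_simd::Params;

/// Derive an asset identifier for a validator's staking token from the
/// validator's identity key (hypothetical domain separator and scheme).
pub fn delegation_token_id(validator_identity: &[u8; 32]) -> [u8; 32] {
    let hash = Params::new()
        .hash_length(32)
        .personal(b"penumbra_delegat") // 16-byte personalization (assumed)
        .hash(validator_identity);
    let mut id = [0u8; 32];
    id.copy_from_slice(hash.as_bytes());
    id
}
```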

Specify transaction format

This is actually a prerequisite to #29; we need a transaction format.

The Cosmos format is defined in ADR20; it uses protobufs. The main downside of protobufs in this context is the lack of canonical serialization, but the rest of the ecosystem seems to do OK.

We should decide how we want to encode the container for the transaction's action descriptions, and how much of the ADR20 format it makes sense to adopt.

Add a big scary warning text to `pcli`

could be workshopped a bit, want something that says "fun" but also "don't use this for anything real"

Warning: you are about to lose money!

This message ... is part of a system of messages...
... we considered ourselves to be a fault-tolerant distributed system...
This message is a warning about danger.
The danger is in a particular location... the center of danger is the pcli binary...
The danger is to your funds, and it can destroy them.
The danger is unleashed only if you execute this software.

USE AT YOUR OWN RISK

Transparent proofs

Transparent blockchains operate as follows: all participants maintain a copy of the (consensus-determined) application state. Transactions modify the application state directly, and participants check that the state changes are allowed by the application rules before coming to consensus on them.

On a shielded blockchain, however, the state is fragmented across all users of the application, as each user has a view only of their "local" portion of the application state. Transactions update a user's state privately, and use a zero-knowledge proof to prove to all other participants that the update was allowed by the application rules.

There are then two main challenges involved in creating a shielded blockchain:

  1. Design of the application state and data flows to be compatible with per-user views of local portions of the chain state, and isolated updates to those portions that can be made private;
  2. Design of the cryptography that allows proving correctness of private updates.

Of these challenges, the first is more foundational for the system design, because it changes the entire architecture and assumptions about data availability. However, the second requires a large amount of detailed cryptographic design work (e.g., working through all of the details of a state update circuit).

Frontloading the cryptographic design work makes it difficult to develop the application iteratively and get rapid feedback on what aspects work well, but deferring it entirely risks creating a situation where the application state is designed incompatibly with private functionality. How do we balance this tension?

Frog put the cookies in a box. "There," he said. "Now we will not eat any more cookies."

"But we can open the box," said Toad.

"That is true," said Frog.

One idea to tread a middle ground between these extremes is to try to separate (1) and (2) by designing the application state around (1) -- i.e., designing a fragmented application state with update proofs -- but replacing the zero-knowledge proofs with "transparent proofs" that work as trivial proofs-of-knowledge. These transparent proofs are just a packet containing the witness data, and the verification algorithm takes the public inputs and uses them to check the desired relation against the witness data directly.

While these transparent proofs do not provide privacy, because they have the same interface as a zero-knowledge proof, they do ensure that the data flows in the system are compatible with an actually private implementation, and they can be gradually replaced by real ZK proofs as the system design solidifies.
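
A sketch of the idea, using a toy "value balance" relation (all names hypothetical): the transparent proof is just a packet of witness data, verification re-checks the relation directly, and a real ZK proof could later implement the same interface.

```rust
/// Interface shared by transparent and (future) zero-knowledge proofs.
pub trait Proof: Sized {
    type Public;
    type Witness;

    fn prove(public: &Self::Public, witness: Self::Witness) -> Self;
    fn verify(&self, public: &Self::Public) -> bool;
}

/// Toy relation: the witness values must sum to the public total.
pub struct BalancePublic {
    pub total: u64,
}

/// A transparent proof is just a packet containing the witness data.
pub struct TransparentBalanceProof {
    witness_values: Vec<u64>,
}

impl Proof for TransparentBalanceProof {
    type Public = BalancePublic;
    type Witness = Vec<u64>;

    fn prove(_public: &BalancePublic, witness: Vec<u64>) -> Self {
        TransparentBalanceProof { witness_values: witness }
    }

    /// Verification checks the relation against the witness directly.
    fn verify(&self, public: &BalancePublic) -> bool {
        self.witness_values.iter().sum::<u64>() == public.total
    }
}
```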

Implement slashing

Some implementation of slashing is necessary to be able to implement undelegation, although it could possibly be a stub that's not (yet) connected to Tendermint events.

Slashing should:

  • Remove the validator from consensus immediately
  • Mark the validator as having been slashed
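
A minimal sketch of the state these two steps imply (names illustrative):

```rust
/// Validator consensus status as tracked by pd (hypothetical).
pub enum ValidatorStatus {
    /// In the consensus set.
    Active,
    /// Removed from consensus and permanently marked as slashed; undelegation
    /// exchange rates must then account for the slashing penalty.
    Slashed,
}
```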

Implement client state

As explained in the first part of #28, the state on a shielded blockchain like Penumbra is shaped differently than on a public one:

On a shielded blockchain, however, the state is fragmented across all users of the application, as each user has a view only of their "local" portion of the application state. Transactions update a user's state privately, and use a zero-knowledge proof to prove to all other participants that the update was allowed by the application rules.

This means that the client library has to maintain client state. This probably looks like:

  • a set of viewing keys
  • a record of how much of the chain has been scanned with each viewing key
  • a set of live notes under the user's control
  • a record of transactions they have visibility into

Some questions:

  • what should back this data store? ideally, it'd be something that's storage-agnostic, so it could just as easily work for pcli as any other environment that allows blob storage.
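
One storage-agnostic shape (a sketch with hypothetical names): persist the client state through a minimal blob-store interface, so pcli can use a file on disk while other environments use whatever blob storage they have.

```rust
/// Minimal blob-store interface the client state could be persisted through.
pub trait BlobStore {
    fn put(&mut self, key: &str, value: &[u8]) -> std::io::Result<()>;
    fn get(&self, key: &str) -> std::io::Result<Option<Vec<u8>>>;
}

/// Client-side state, as enumerated above (raw bytes for illustration).
pub struct ClientState {
    /// The user's viewing keys.
    pub viewing_keys: Vec<Vec<u8>>,
    /// Highest block height scanned with each viewing key.
    pub scanned_heights: Vec<u64>,
    /// Live notes under the user's control.
    pub notes: Vec<Vec<u8>>,
    /// Transactions the user has visibility into.
    pub transactions: Vec<Vec<u8>>,
}
```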

Implement undelegations

Implement #47:

  • add implementations of undelegation transaction creation to client libraries and expose in pcli;
  • add implementation of undelegation queue to pd;
  • add implementation of undelegation checks to pd;

Implement rewards for funding streams

Following #41 and #40, the application should use the formulas in the staking description to implement payouts for validators' funding streams.

Along the lines of #39 (comment), this should be done by creating additional notes while processing the epoch transition and adding them into the pending block. These notes should be controlled by the addresses specified in the funding streams.

Specify how validators claim staking rewards.

In Penumbra's shielded delegation system, delegators simply hold shares of the validator's delegation pool, so they don't have rewards to claim, but validators' commissions are treated as claimable rewards.

Validators need to periodically sweep their rewards from an accumulator into normal shielded notes. This involves creating transactions with special claim inputs, signed by a key the validator controls. Ideally, this key would be online, so validators can continuously sweep funds into the shielded pool.

Discussions with some validators indicated that it would be useful to have a separate key for this purpose, rather than reusing the validator's identity key. This would reduce the operational risk of having an online key to continuously sweep funds, since the key would only have control over rewards as they accumulate, not over any other funds or activity.

(See below: we can instead record the commissions as additional notes that are included in the note commitment tree.)

  • Remove references to sweeping commissions from the protocol spec
  • Think about a general mechanism for how to track notes that are not created from specific transactions
