
schnorr-sig's People

Contributors

baumbata, nashtare


Forkers

mfaulk

schnorr-sig's Issues

Rely on another algebraic hash function

We currently rely on rescue_63_14_7 for hashing the transaction in our modified Schnorr signatures.
It currently accounts for about 25% of the cost, and could be brought down to roughly 11–12% by using Poseidon instead, at the price of a slight increase in the AIR program (we could still use Rescue for the Merkle authentication paths and replace only the signature hashing).

Risk of panic! in message hashing

The hash of the message before signing is currently computed as

    for chunk in message.chunks(8) {
        message_fp.push(Fp::from_bytes(&chunk.try_into().unwrap()).unwrap());
    }
    data.extend_from_slice(&message_fp);

This implies that any non-canonical chunk (i.e. an 8-byte value that is not smaller than the field modulus) will make `Fp::from_bytes()` panic; a trailing chunk shorter than 8 bytes additionally makes the `try_into()` conversion fail. We should take chunks of 7 bytes instead, similarly to how binary data slices are hashed, to ensure a canonical representation once converted to field elements.
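A minimal sketch of the proposed fix, standing in for the real `Fp` type with plain `u64` arithmetic (the 64-bit modulus, the little-endian byte order, and the function name below are assumptions, not the actual codebase): zero-padding 7-byte chunks to 8 bytes guarantees every value is below $2^{56}$, hence below the modulus, so the conversion can never hit a non-canonical representation.

```rust
// Assumed 64-bit field modulus (Winterfell's f64 "Goldilocks" prime);
// the real `Fp::from_bytes` is stood in for by `u64::from_le_bytes`.
const P: u64 = 0xFFFF_FFFF_0000_0001;

/// Convert a message into field elements by taking 7-byte chunks,
/// zero-padded to 8 bytes. Any 7-byte chunk encodes a value below
/// 2^56 < P, so it is always canonical and the conversion cannot panic.
fn message_to_field_elements(message: &[u8]) -> Vec<u64> {
    message
        .chunks(7)
        .map(|chunk| {
            let mut buf = [0u8; 8];
            buf[..chunk.len()].copy_from_slice(chunk);
            let value = u64::from_le_bytes(buf);
            debug_assert!(value < P); // canonical by construction
            value
        })
        .collect()
}
```

With the original `chunks(8)` approach, an all-`0xFF` chunk would exceed the modulus and panic; here it is simply split across two elements.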

Choose better instantiation of Rescue-Prime

Somewhat related to #3, if we decide to stick with Rescue-Prime for the underlying hash function.
Unlike regular binary hash functions, we are hashing entire field elements here (there exists a method for hashing a sequence of bytes, but in the context of an AIR program we would stick to hashing entire field elements). Because of that, the "indicator" for the y-coordinate is heavy, namely one entire field element of 64 bits.

Dropping this extra information, and instead deserializing public keys solely from x-coordinates by always choosing the lexicographically largest corresponding y-coordinate, would save 2 field elements for signatures of regular account transfers (1 sender / 1 receiver / extra data). If we take the extra data to be the amount to be sent (128 bits) and the sender's nonce (64 bits), this accounts for a total of $6 + 6 + 2 + 1 = 15$ field elements. In addition, we have the x-coordinate of the random point, for another 6 elements. In total, this means hashing a sequence of 21 field elements (23 without reducing the public key representation).
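As a sanity check on the counts above, a small sketch (the constant names are illustrative, not taken from the codebase):

```rust
// Illustrative element counts, assuming x-coordinates take 6 base-field
// elements (sextic extension) as described above.
const X_COORD: usize = 6;     // x-coordinate of a curve point
const Y_INDICATOR: usize = 1; // extra element indicating the y-coordinate
const AMOUNT: usize = 2;      // 128-bit amount -> 2 field elements
const NONCE: usize = 1;       // 64-bit nonce -> 1 field element

/// Length of the hashed sequence for a regular transfer:
/// sender pk + receiver pk + extra data + x-coordinate of the random point.
fn sequence_len(prune_y: bool) -> usize {
    let pk = X_COORD + if prune_y { 0 } else { Y_INDICATOR };
    2 * pk + AMOUNT + NONCE + X_COORD
}
```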

Let $c, r, d$ be respectively the hash capacity, rate and digest size. Let $N$ be the number of rounds. Let $t$ be the size of the whole sequence to be hashed (here $t = 21$ or $23$). If we count all the operations happening on a hash state cell during a round as 1, i.e. $op_{round}$, then the goal is to minimize the number of calls to $op_{round}$ over the whole hashing sequence, which is $(c + r) \cdot N \cdot x$, where $x = \lceil t/r \rceil$, i.e. minimizing $(c + r) \cdot N \cdot \frac{T}{r}$ with $T$ the smallest integer greater than or equal to $t$ and divisible by $r$.

The current instantiation is parameterized as RP-8-4, i.e. $c = 4, r = 4, d = 4, N = 7, t = 21, x = 6$. This yields $336$ calls to $op_{round}$.

We also know that with Rescue-Prime, to maintain a security level of 128 bits, we need a capacity $c$ of at least 4 field elements (given that our field size is 64 bits), and our digest $d$ (or rate $r$, if not truncated) to also be at least 4. This gives us a lower bound on $c, d, r$. We can also assume $N$ to be constant and equal to 7 (with the current 40% overestimate for safety).

The other existing instantiation, RP-14-7, would need only 3 iterations for $t = 21$, for a cost of $294$. As data goes into the rate portion during absorption, it makes sense to increase the rate/capacity ratio. Keeping the capacity at its minimum for security, i.e. $c = 4$, we end up aiming to minimize $(4 + r) \cdot N \cdot \frac{T}{r}$. The minimum is obviously reached at $r = T$, but we would like to keep the state size in a practical range (i.e. no more than 16 field elements?). The best contenders in this range are:

  • $r = 11$ -> cost of $210$ (reduction of 37.5% from current one)
  • $r = 12$ -> cost of $224$ (reduction of 33.3% from current one)
  • $r = 7$ -> cost of $231$ (reduction of 31.25% from current one)
  • $r = 8$ -> cost of $252$ (reduction of 25% from current one)
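The figures above follow directly from the cost formula; a quick sketch to reproduce them (a direct transcription of the formula, with $N = 7$ throughout):

```rust
/// Number of `op_round` calls needed to absorb `t` field elements with
/// capacity `c`, rate `r` and `n` rounds: (c + r) * n * ceil(t / r).
fn cost(c: u64, r: u64, n: u64, t: u64) -> u64 {
    let iterations = (t + r - 1) / r; // ceil(t / r)
    (c + r) * n * iterations
}
```

For instance, `cost(4, 4, 7, 21)` reproduces the current RP-8-4 figure of $336$, `cost(7, 7, 7, 21)` the RP-14-7 figure of $294$, and `cost(4, 8, 7, 21)` the proposed figure of $252$.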

We are also concerned about the number of constrained cells at a given frame, which is directly linked to the size of the hash state ($r + c$). This may make us favor the last two contenders, which have state sizes of $11$ and $12$ field elements respectively, compared to $15$ and $16$ for the first two. In addition, merging two hash digests is cleaner with the last instantiation, because $2 \cdot d = 8$ elements fit perfectly into the rate portion. This is also the instantiation currently implemented in Winterfell over the same f64 field. Hence I would tend to suggest going for this one. It also has the advantage of yielding the same number of $op_{round}$ calls whether we prune public keys of their $y$-coordinate or not ($\lceil 21/8 \rceil = \lceil 23/8 \rceil = 3$), unlike the other one.

This would require:

  • integrating RP-12-8 instantiation in Toposware/hash
  • using this hashing instantiation here
  • (optional) changing the way PublicKeys are handled to reduce their encoding to 48 bytes (cleaner)
