A list of items to revisit before the testnet release:

- A `t` scalar satisfying some proof-of-work requirement can be provided instead of having the coordinator generate it. This is desirable because null credential registration is permissionless but appends to the nullifier set. Estimate the cost of a DoS in this way, both per round and long term.
- Fee credentials (previously I've always assumed the long-term pool would not have them, but maybe that is not a big deal actually).
- `Generators.Gh := Generators.G`, `Generators.Gg := H(Generators.Gh)`
- Make Pedersen commitments compatible with Confidential Transaction ones.

After we release the draft on the cryptographic primitives, here's a list of ideas on what questions we should research and settle to come up with the final protocol.
Our notation for attributes is cumbersome and confusing:

- `G_h` and `G_g` are reversed from their usual roles due to a confusion on my part.
- Subindices for multiple attributes, when often that can be made implicit and isn't necessary or helpful.
- `G_V` and `G_v` are different generators; lowercase `v` stands for "value" but could be replaced with `a` for amount?

The big advantage of this new scheme is that it allows sending an arbitrary amount to a third party. However, due to several denial-of-service vectors (either malicious or unintentional) a CoinJoin round can fail after output registration. This means that the coordinator and all participants have seen all addresses and associated receiving amounts. For the next attempted round, the old address should be marked as burned and not used again, and a new address should be generated for this round. For mix2self this is not an issue, as we are using HD wallets and can thus simply derive a new unused address, which is still contained in the static backup.
The issue is how to derive a new address for the receiver.
I'm going to list a couple of methods I know of, with their pros and cons. Please continue the conversation with more techniques and trade-offs.
The transaction has never been broadcast to the network, so ONLY the coordinator and round participants know about the address reuse, not an outside observer. It might be OK to trust that these participants are not malicious, and then just ignore the problem and reuse the address.
As the CoinJoin should be trustless, this is not a solution.
Instead of asking for a single address, the sender can ask the receiver for a child xpub. Now the sender's wallet can derive multiple addresses for the receiver, which are all in his static backup - and this is as good as what we do currently for the sender's wallet. The receiver does not need to be online to communicate a new address.
The big downside is that it breaks the current payment request flow, as not a single address but an xpub needs to be provided. Thus, only receivers who "update" their wallets and workflows to give out xpubs (a new one for every sender) can be part of the CoinJoin. Further, the sender's wallet needs to sync the chain to find out which addresses are used / not used. This can take time.
The sender will know multiple addresses of the receiver, and can thus potentially follow a transaction history. It is therefore important for the receiver to CoinJoin out of the child xpub into a different unleaked xpub. This is doable for Wasabi users, but not for other wallet users.
Stealth addresses are static strings that the receiver generates, which then can be used by any sender in a Diffie-Hellman key exchange. The sender will add a random number to the stealth address, which generates a new public key. The receiver can generate a private key corresponding to that public key, if he gets the random number too. The receiver does not need to be online.
This might work, but it still requires adoption of a new workflow, and there is no beautiful implementation so far.
If a public key of the receiver is known by the sender, then it can be tweaked with a random number so that a new public key is generated, and when the receiver knows the random number too, he can derive the corresponding private key. With Taproot, public keys are exposed in the address, but we cannot wait for Taproot to be activated and widely used. The crypto magic is also possible with ECDSA; however, current Bitcoin addresses do not expose the public key. So the sender needs to figure out a public key of the receiver, which can be done if the sender knows a spent output of the receiver (which exposes the public key in the witness script). The receiver does not have to be online.
This either requires a new address type, or the receiver to give out the public key directly to the sender, or some arbitrary guessing of which spent coin belongs to the receiver.
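The tweak described above can be sketched numerically. The following is a toy, non-constant-time secp256k1 implementation for illustration only; all variable names and the flow are assumptions, not code from this repository. The sender computes `P + t*G` from the receiver's public key `P` and a random tweak `t`; the receiver, once told `t`, computes the matching private key `(p + t) mod n`.

```python
P = 2**256 - 2**32 - 977  # secp256k1 field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(A, B):
    """Add two curve points (None is the point at infinity)."""
    if A is None:
        return B
    if B is None:
        return A
    if A[0] == B[0] and (A[1] + B[1]) % P == 0:
        return None
    if A == B:
        lam = 3 * A[0] * A[0] * pow(2 * A[1], P - 2, P) % P
    else:
        lam = (B[1] - A[1]) * pow(B[0] - A[0], P - 2, P) % P
    x = (lam * lam - A[0] - B[0]) % P
    return (x, (lam * (A[0] - x) - A[1]) % P)

def scalar_mult(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

# Receiver's long-term key pair (the sender knows only the public part).
receiver_priv = 0x1111111111111111111111111111111111111111111111111111111111111111
receiver_pub = scalar_mult(receiver_priv, G)

# Sender picks a random tweak t and derives a fresh public key P + t*G.
tweak = 0x2222222222222222222222222222222222222222222222222222222222222222
tweaked_pub = point_add(receiver_pub, scalar_mult(tweak, G))

# Receiver, once told t, derives the matching private key (p + t) mod n.
tweaked_priv = (receiver_priv + tweak) % N
assert scalar_mult(tweaked_priv, G) == tweaked_pub
```

The final assertion is the whole point: both parties arrive at the same fresh key pair without the receiver being online.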
I don't mean the mini CoinJoin, but rather, that the receiver gives out an IP address or tor hidden service, that the sender can ping and get a new unused address from.
This breaks the workflow again, as now an endpoint, not an address, needs to be communicated up front. The receiver also needs to be online at the time the sender makes the CoinJoin.
Essentially, what we are trying to achieve boils down to a single question: what is the appropriate blind signature scheme for our specific application scenario that supports these two added functionalities:
I claim that BLS blind signatures with a little tweak can achieve what we aim for. For the sake of self-containment I enclose the definition of BLS blind signatures.
Let $e: \mathbb{G} \times \mathbb{G} \to \mathbb{G}_T$ be a pairing, with $\mathbb{G}, \mathbb{G}_T$ groups of prime order $p$, let $H: \{0,1\}^* \to \mathbb{G}$ be a hash function, and let $g$ be a generator of $\mathbb{G}$.
(I use multiplicative notation and describe here the BLS signature for symmetric pairings, but they can also be defined for asymmetric Type 2 and Type 3 pairings, see for instance here.)
In order to generate a BLS signature, the signer with private key $x$ ($y = g^x$ is the public key)
computes $\sigma = H(m)^x$ and the verifier checks whether $e(\sigma, g) = e(H(m), y)$ holds. The correctness is obvious, since
$e(\sigma, g) = e(H(m)^x, g) = e(H(m), g)^x = e(H(m), g^x) = e(H(m), y)$.
Now the blind signature version is as follows: let $h$ be a second generator of $\mathbb{G}$ and let the signer also publish $z = h^x$. The user picks a random $r$ and sends the blinded message $M' = H(m) \cdot h^r$; the signer returns $\sigma' = (M')^x$; the user unblinds by computing $\sigma = \sigma' \cdot z^{-r}$.
It is easy to verify that this is correct, since $\sigma' = (H(m) \cdot h^r)^x = H(m)^x \cdot (h^x)^r = H(m)^x \cdot z^r$, hence $\sigma = \sigma' \cdot z^{-r} = H(m)^x$.
How shall we instantiate the hash function? In our case the hash function might be modular exponentiation, namely $H(m) = g^m$. However, this is not a second-preimage resistant hash function, as $g^m = g^{m+q}$ (where $q$ is the order of $g$) by Fermat's little theorem. This can be fixed, if we restrict the message space to $\{0, \dots, q-1\}$. This can be enforced by using some range proof system.
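The collision is easy to demonstrate with toy parameters (chosen here purely for illustration): any message $m$ and $m + q$ hash to the same group element, which is exactly why the message space must be restricted to $[0, q)$.

```python
# Toy demonstration that "hashing" by exponentiation, H(m) = g^m mod p,
# is not second-preimage resistant: m and m + q collide, where q is the
# order of g. Parameters are tiny and for illustration only.

p = 23   # field modulus
q = 11   # order of the subgroup generated by g
g = 2    # 2 generates the subgroup of order 11 mod 23

def H(m):
    return pow(g, m, p)

m = 7
assert H(m) == H(m + q)   # g^m == g^(m+q): a trivial second preimage
assert 0 <= m < q         # restricting messages to [0, q) removes the collision
```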
After settling on the blind signature scheme and the applied hash function how can we prove our desired properties?
Suppose a user registers a public input UTXO with value $v$ and wants to split it into $v_1 + v_2 = v$, but without telling the server either $v_1$ or $v_2$. If the hash function is modular exponentiation, then one can look at the blinded message $g^{v_i} h^{r_i}$ as a Pedersen commitment to the message. For that we know how to check simple relations due to the homomorphicity of the commitment scheme. If a user has $v_1$ and $v_2$ with $v_1 + v_2 = v$, then they can easily convince the server that $v = v_1 + v_2$ by opening the product of the two commitments, i.e. $(g^{v_1} h^{r_1})(g^{v_2} h^{r_2}) = g^{v_1+v_2} h^{r_1+r_2}$, by sending $r_1 + r_2$ to the signer. This naturally generalizes to more than two split UTXOs.
Similarly, a user could prove that two values "in a signature" add up to a public value $v$ (UTXO merging) by this verification equality: $e(\sigma_1 \sigma_2, g) = e(g^v, y)$.
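The homomorphic splitting check can be sketched concretely. The following uses toy group parameters (illustrative, not production-sized) and assumes two subgroup generators `g`, `h` whose discrete log relation is unknown:

```python
# Sketch of the splitting check: the product of two Pedersen commitments
# is a commitment to the sum of the values under the sum of the randomness.

p = 2039   # safe prime, p = 2q + 1
q = 1019   # order of the quadratic-residue subgroup
g, h = 4, 9   # two subgroup generators (dlog relation assumed unknown)

def commit(v, r):
    return pow(g, v, p) * pow(h, r, p) % p

v1, r1 = 300, 17
v2, r2 = 450, 29

C1, C2 = commit(v1, r1), commit(v2, r2)

# Opening the product with r1 + r2 convinces the server that the committed
# values sum to v1 + v2, without revealing v1 or v2 individually.
assert C1 * C2 % p == commit(v1 + v2, r1 + r2)
```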
In summary, I think the real question is whether we want/find better blind signatures supporting our needs or not.
We have a few gaps in the security proofs that should ideally be closed:

- MAC_GGM security is based on stronger assumptions than standard ones, and precludes adaptive adversaries; see @AdamISZ's blog post for more discussion.
- Is any other formalism needed for privacy?
It could be useful to list our considered alternative approaches in the final paper. Here I collect them in this issue.
Authors. István András Seres, nopara73, nothingmuch
Acknowledgement. We thank the numerous contributors to this scheme. We will update the full list of contributors after the paper passes the work in progress status.
Status. Work in progress.
Abstract. Generalization of Chaumian CoinJoins. WabiSabi enables the construction of trustless coinjoins with arbitrary input and output values.
A Bitcoin transaction consists of inputs and outputs. CoinJoins are Bitcoin transactions of multiple collaborating participants.
CoinJoins are inherently secure, because if a participant does not see its desired outputs in the resulting CoinJoin transaction, it refuses to sign. A CoinJoin round consists of Input Registration, Output Registration and Transaction Signing phases.
To guarantee network-level unlinkability with regard to input-input, output-output and input-output relations, participants must use different anonymity network identities to register each and every one of their inputs and outputs with a trustless coordinator. This technique leverages an already existing anonymity network infrastructure. Another approach could be what CoinShuffle, CoinShuffle++ and ValueShuffle took, where a custom anonymity network is created to facilitate the mixing. The anonymity set of this anonymity network is ideally the number of participants in a round. However, in order to avoid cross-mixing round correlation, the use of an existing anonymity network infrastructure is still desirable.
Our proposed scheme generalizes the trustless construction of arbitrary CoinJoins. This enables numerous things that were not possible before.
While a network-level unlinkability guarantee is easy to achieve, it is difficult to achieve in a way that also protects against Denial of Service (DoS) attacks. For this reason Chaumian CoinJoins were proposed. However, they only work because the blind signatures provided by the coordinator also correspond to pre-defined denominations. This greatly limits numerous aspects of how a resulting CoinJoin transaction can look, which has consequences for user experience and privacy guarantees.
WabiSabi provides a general and straightforward solution to the problem of trustlessly constructing and coordinating CoinJoins with arbitrary inputs and arbitrary outputs.
The protocol consists of epochs, rounds and phases.
Input Registration Phase, Output Registration Phase and Transaction Signing Phase follow each other and together they are called the Main Round.
Input Registration cannot fail, because inputs can be refused or kicked out of the round without penalty before moving on to Output Registration. Regardless if Output Registration succeeds or fails, the round must progress to Transaction Signing, because penalties can only be imposed on malicious inputs and not on malicious outputs. If Transaction Signing fails, the protocol progresses to Blame Round, which repeats the Main Round without the malicious inputs.
The Main Round and the potential Blame Rounds are called an Epoch.
Every input must be registered independently over a different anonymity network identity. An input can be registered if it is confirmed, mature and unspent. An input must be registered along with a proof of ownership.
(Note: confirmation is not a necessary requirement, but relaxing this requirement would require working out a specification of a long list of sometimes dynamic conditions, like whether the transaction used RBF, its minimum fee rate compared to the current network, the size and length of the unconfirmed transaction chain it comes from, how this affects the DoS parameters, and so on. So for the sake of simplicity, confirmation will be required.)
(ToDo: describe cryptographic operations here.)
Every desired output must be registered over a different anonymity network identity.
(ToDo: describe cryptographic operations here.)
Participants sign the final transaction.
For construction of coinjoin transactions with a coordinator we require some sort of tokens issued at input registration and redeemed at output registration.
This issue is for systematically comparing the various alternatives, with separate issues.
There are 4 approaches we've considered for issuing tokens representing amounts at input registration, presented in order of strictly increasing flexibility/capability (but also complexity)
More complicated building blocks allow simplification of the privacy considerations at the higher levels of the protocol (specifically how clients use tokens to combine/split amounts, see below).
Both the fixed denomination and the overkill approaches are solved already, so can be considered fallbacks. They represent extreme ends of the tradeoff spectrum, simplicity on the one hand vs. optimal unlinkability on the other.
The fixed denomination approach presents the smallest change from the current protocol, but also presents more difficulties with regards to improving the resulting transaction structure and adding flexibility (e.g. payments).
Arbitrary splitting or merging requires some novel construction, since our use case is a bit different from what is typically found in the e-cash literature: our tokens are short lived, the interaction is online, the issuer is also the verifier, and there is no risk of theft, only denial of service (since that is secured by the final signature phase).
Merging would simplify certain aspects, but with very marginal benefits compared to the advantage that arbitrary splitting into blinded amounts provides over fixed denominations. However, signature homomorphism also presents some challenges to double spending prevention, so it may turn out to be significantly more complex.
Consequently, at least for now we will focus on option 2, arbitrary splitting into blinded amounts, which is a more modest goal and sufficient for our needs, but still pursue client-side merging as a nice-to-have stretch goal if we can realize it.
First, we consider 3 levels of potential privacy leaks:
Cryptographic unlinkability is a hard requirement, necessary but not sufficient, but depending on how it is instantiated does not guarantee privacy at the higher levels.
Privacy is the most important property, but the following evaluation criteria also apply:
This section attempts to organize the different constructions we've considered for review. Individual approaches that have been studied in more detail have their own issues referenced here.
For the most part we've discussed different blind signature schemes where the messages are amounts. The e-cash literature contains several constructions, with divisible and compact e-cash being the closest to what we need, but typically address the offline setting. Single use anonymous credentials also apply.
Requires bilinear groups.
Uses a randomizable, partially blind signature scheme: http://www.cs.pwr.edu.pl/hanzlik/preludium/wyniki/paper2.pdf
Requires bilinear groups. Supports double spending prevention.
Anonymous credential system that supports single use credentials and does not require pairing: https://cs.brown.edu/people/fbaldimt/papers/CCS13.pdf
See also Wasabi Research Club meeting about this paper: https://www.youtube.com/watch?v=pgErjQSQQsg
Notably, this seems to be the simplest anonymous credential scheme that can be used "off the shelf" for this case, albeit inefficiently (a single attribute credential per amount), and it only requires standard cryptographic assumptions (no pairing).
Generalization of previous redactable signature schemes that permits restricted transformations on signed values: https://www.microsoft.com/en-us/research/wp-content/uploads/2014/07/CSF-CKLM14-.pdf
Double spending prevention unclear.
Similar to malleable signatures, but also randomizes the keys. Has been used to construct delegatable anonymous credentials: https://eprint.iacr.org/2018/923.pdf
Double spending prevention unclear.
Simpler construction based on standard groups specifically designed for the issuer-is-verifier setting: http://www0.cs.ucl.ac.uk/staff/S.Meiklejohn/files/ccs14.pdf
A more recent construction also allows MACs over group elements, similar to the ACL scheme: https://signal.org/blog/pdfs/signal_private_group_system.pdf
Since these schemes support unlinkable multi-show, double spending prevention would likely require some serial number based approach.
https://eprint.iacr.org/2015/626.pdf
@seresistvanandras - can you provide an overview?
@seresistvanandras suggested we may be able to recover some of the attractive properties of the BLS based signature scheme described in #5 without the overspending issues arising from signature homomorphism.
Blinded amounts with splitting/merging using Groth-Kohlweiss proofs
Several related approaches described in the divisible e-cash literature.
There are some non-trivial limitations and complexity in most schemes, but perhaps building blocks can be factored out and simplified for our needs, since divisible e-cash generally implies offline setting, and usually only support dividing some fixed denomination to arbitrary sub-amounts.
Typically these build a tree of commitments to serial numbers on fixed denominations, where spending a single node on the tree invalidates its children and parents but not its siblings or their descendants.
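The ancestor/descendant invalidation rule on such a tree reduces to a prefix check if nodes are labeled by their path from the root. The sketch below is illustrative (the labeling convention is an assumption, not from any specific divisible e-cash paper):

```python
# Illustrative sketch of the divisible e-cash tree rule: nodes are labeled
# by their path from the root ("" = root, "0"/"1" = children, etc.).
# Spending a node invalidates its ancestors and descendants, but leaves
# sibling subtrees spendable.

def conflicts(spent_path, candidate_path):
    """True if one path is a prefix of the other (ancestor/descendant)."""
    return (spent_path.startswith(candidate_path)
            or candidate_path.startswith(spent_path))

spent = "0"                       # spend the left child of the root
assert conflicts(spent, "")       # the root (parent) is invalidated
assert conflicts(spent, "01")     # descendants are invalidated
assert not conflicts(spent, "1")  # the sibling subtree remains spendable
```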
https://eprint.iacr.org/2012/298.pdf
@seresistvanandras just want to note that there's a Blinded Pedersen Commitment Scheme described in Section 2.7 of Anonymous Credentials Light paper.
| | Blind Pedersen Commitments | Blind BLS Signatures |
|---|---|---|
| Crypto | relatively simple | complex |
| Communication Rounds (Synchronous + Parallel) | 3+1 | 3+2 |
| Coin Dividing | no privacy loss | no privacy loss |
| Coin Merging | some privacy loss | no privacy loss |
| Protocol Cleanness | introducing denominations is somewhat dirty; Blind Pedersen Commitments rely on knowledge assumptions, which are pretty new and considered non-standard | solving malleability of BLS with accumulators is somewhat dirty |
At this part https://github.com/zkSNACKs/WabiSabi/blob/master/protocol.md#round-structure

> Alices prove ownership of inputs to the coordinator, which issues credentials with zero value for use in mandatory credential presentations.

This seems to be wrong or confusing.
I had a brain fart and for some reason I put down algebraic MACs in #13 as relying on bilinear groups, and consequently we did not look at them in much detail. This is not the case. Again I have @jonasnick to thank for prompting me to look again at Keyed-Verification Anonymous Credentials (KVACs).
The gist of KVAC is that it is designed for when the issuer is also the verifier, like in our use case.
A newer scheme, The Signal Private Group System and Anonymous Credentials Supporting Efficient Verifiable Encryption, allows attributes to be group elements: for our use case the attributes can be Pedersen commitments to amounts.
Since this scheme allows unlinkable multi-show, this reintroduces the double spending problem. However, this should be easily solvable by introducing serial numbers which are revealed as part of credential showing.
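The serial-number mechanism for double-spend prevention is simple enough to sketch. The names below are hypothetical, for illustration only: the coordinator keeps a set of serials seen so far and rejects any show that reuses one.

```python
# Minimal sketch of serial-number-based double-spend prevention on the
# coordinator side: each credential show reveals a serial number, and a
# serial may be accepted at most once.

class Coordinator:
    def __init__(self):
        self.seen_serials = set()

    def redeem(self, serial, proof_ok=True):
        """Accept a credential show only if the proof verifies and the serial is fresh."""
        if not proof_ok or serial in self.seen_serials:
            return False
        self.seen_serials.add(serial)
        return True

coordinator = Coordinator()
assert coordinator.redeem("serial-1") is True
assert coordinator.redeem("serial-1") is False  # second show is rejected
assert coordinator.redeem("serial-2") is True
```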
Note that by only using Pedersen commitments as group attributes we don't require the blind issuance protocol of either KVAC scheme, which relies on ElGamal encryption of the requested attributes. This has the nice benefit of perfect hiding of the sub-amounts and serial numbers, so the coordinator can't retrospectively de-anonymize users even if there is a crypto break.
The user submits a request for N credentials, with either 1 or 2 group attributes (not clear which is simpler yet):
Along with a range proof for each amount commitment, and a proof that the sum of all the amount commitments matches the input amount.
The coordinator verifies the proofs, and responds with N MACs on the requested attributes, along with a proof of knowledge of the secret key.
The user executes the `Show` protocol for the combination of credentials they want to use, which produces as output randomized commitments to the attributes and a proof of the MAC verification.
The user also proves that the serial number commitments can be opened to their corresponding serial numbers. These serial numbers are revealed for double spending protection, but the commitment opening should be done in zero knowledge to avoid revealing the randomness of the original commitment in the input registration phase or the randomization added in the `Show` protocol.
The user also proves that the sum of the amount attributes matches the output amount, analogously to input registration (but there is no need for range proofs at this phase).
The user submits these proofs, the randomized attributes, and the serial numbers.
Note that the serial numbers should not be "revealed attributes" as I previously stated, since those would be trivially linkable to input registration.
Note that since all proofs are created with sigma protocols, the 2n+1 proofs should be composable into a proof of their conjunction. I believe this also holds if using bulletproofs for the range proofs at input registration.
The (sub-)proof of MAC validity could be modified so that a credential is considered valid if it has a valid MAC OR the amount it certifies is exactly 0.
Since users could then generate such credentials without ever contacting the server, this makes it possible to require valid credentials to be shown at input registration, which would allow users to merge an arbitrary number of inputs together into a single credential without requiring the output registration phase to accept as many credentials as there are inputs.
As few as one spent and one newly requested credential per input/output registration operation are needed if output registrations also produce newly signed credentials for the remaining amount, but ideally a larger number, on the order of log(participants), should be used to allow users to make parallel requests for improved privacy.
The balance proof in the section "Over-spending prevention by balance proof" doesn't make sense to me.
Suppose we don't need the unconditional hiding, in other words the randomness terms are zero.
The left hand side of the proof is:
The right hand side of the proof is:
The verifier cannot satisfy such an equation. The equation doesn't prove that the amounts in the commitments sum up to the stated total, anyway.
some citations are not really citations (i.e. references to a concept instead of a specific authoritative source, or the idea is only mentioned and not actually used) and would therefore be more appropriate to make into footnotes with links
In particular there are the different transaction structures listed in the intro as being possible:
(opened as an issue so that we don't forget, putting it off for now since we're working on the text itself)
[Obsoleted] See https://github.com/zkSNACKs/WabiSabi/blob/master/protocol.md
In this issue I would like to gain consensus slowly on the amount organization by establishing a stable base and expanding it with various ideas.
Change outputs can be considered as received coins.
From here on, until noted otherwise, `isStandard` checks are non-existent (dust limit, max tx size, max mempool chain, max input number, etc.).

Our objectives with maximum privacy could be achieved by
An immediate issue we must resolve is that this scheme would operate with millions of inputs and outputs.
There are numerous ways standard denominations could be introduced to resolve our issue with the cost. It's most reasonable to introduce standard denominations that can be decomposed into each other, so for example using prime numbers would be a bad idea.
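The decomposability point can be illustrated with a greedy split. Powers of two are used here as one possible denomination choice, purely for illustration; the function names are assumptions:

```python
# Hypothetical illustration of why denominations should decompose into each
# other: with powers of two, any amount splits greedily with no remainder,
# while prime denominations can leave unrepresentable remainders.

def decompose(amount, denominations):
    """Greedy decomposition of an amount into the given denominations."""
    outputs = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            outputs.append(d)
            amount -= d
    return outputs, amount  # a nonzero remainder means a non-standard leftover

powers_of_two = [2**i for i in range(20)]
outputs, remainder = decompose(1337, powers_of_two)
assert sum(outputs) == 1337 and remainder == 0

# Prime denominations leave amounts that cannot be represented at all:
outputs, remainder = decompose(1, [2, 3, 5, 7])
assert remainder == 1
```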
Design Decision to be made:
In this case, our scheme would look like this:
Mostly copypasta of #15, but with concrete signature scheme. This protocol is based on the paper: Anonymous Credentials Light (ACL) by Foteini Baldimtsi and Anna Lysyanskaya.
User has a UTXO with value $v$ and wants to split it into outputs with values $v_1$ and $v_2$ such that $v = v_1 + v_2$.
User creates Pedersen commitments $C_i = g^{v_i} h^{r_i}$ with random $r_i$. The terms correspond to a single attribute in an ACL signature with an empty message, with a range proof for each.
The user also needs to prove the sum of the commitments - I think this can just be done by revealing $r_1 + r_2$ and letting the coordinator ensure that the combined commitment is a commitment to the public input amount with this randomness, but if this is problematic for the ACL security proof this can probably be proved in other ways.
From this protocol the user obtains signatures for commitments in a way where the following properties hold:
User can register a desired output for by sending the coordinator
Notice that, the first subscript of is part of the variable name in the paper, while the second one denotes the index of the amount commitment.
The protocol is the same, except instead of only registering signed zero commitments at output registration, some of those commitments aren't zero, but maybe they were coming from different inputs.
Since the coordinator cannot recognize the unblinded signatures at output registration, it can only tell that they are valid; unlinkability is guaranteed.
Since the unblinded signatures are valid for single blinded commitments only, the coordinator can make sure one signature can only be used once.
Found another interesting paper: https://eprint.iacr.org/2019/373.pdf
> We achieve transaction anonymity with a Zerocoin setup which is implemented through 1-out-of-many proofs over generalized Pedersen commitments as is discussed in [4]. Each coin is associated with a unique serial number and a monetary value. The serial number is explicitly revealed during the spend operation in order to prevent the double-spending of the coin. The users will be able to merge, split and redeem multiple coins while also providing a balance proof to ensure that the transaction's input and output values add up and no value is generated out of thin air. This proof generation method leverages the unique design properties of the 1-out-of-many protocol, which is used to prove the validness of spent coins without revealing their origins and also encode coin value information necessary for generating a zero-knowledge balance proof.
In this protocol one would create blind Pedersen commitments to their desired outputs and prove the sum with rangeproofs. The underlying values would be blindly signed, thus they can be registered as outputs separately. However merging these values (eg. coin consolidating) is not figured out.
More information on Blind Signatures on messages committed in Pedersen Commitments: http://www.cs.pwr.edu.pl/hanzlik/preludium/wyniki/paper2.pdf
Another Pedersen commitment blinding scheme is described in ACL paper: https://eprint.iacr.org/2012/298.pdf
In the Input Registration Phase, the paper says "These inputs must be unspent, confirmed and mature."
However, in the current ZeroLink, because of the segwit malleability fix, previous CoinJoin outputs CAN be unconfirmed at the point when they are registered for re-mixing.
Is this possible in the new scheme too?
I sent here a few people who were interested in WabiSabi. However, it is very hard to figure out anything about this if you are not familiar with the topic. @nothingmuch's explanation could be a good welcome story as the README.md.
https://gist.github.com/nothingmuch/9427a27e29ab3c525a23ae4fd6f8a5ae
tl;dr - Using a pretty simple scheme, `getStatus` can be modified to allow auditing by clients, allowing them to detect partitioning or attempts to reveal the final transaction data in a way that targets any specific user.
Before registering outputs, clients must validate ownership proofs during connection confirmation for the full set of inputs. Similarly, before providing signatures the clients need to know the full list of outputs and the final order of the transaction.
In the current protocol description this is mitigated by providing this data in `getStatus` responses, encrypted under a random symmetric key that is given to clients in response to successful connection confirmation.
However, this still allows the coordinator to withhold or delay data on specific connections, only allowing a small fraction of users at a time to learn the transaction data in the hope of learning correlations between clustered inputs.
We can instead guarantee that no connection is favored by a simple application of erasure coding:
whenever a round is waiting for data from clients with dependencies, the required data (inputs and ownership proof, or outputs) is split into small fixed size chunks and broadcast in the following manner:
The chunks are assembled by decoding with a simple linear time algorithm, since the chunks selected into each message are calculated deterministically by each side the client knows which messages contain which chunks and therefore how to decode them by simply XORing.
The client can only verify the XOR is correct with respect to those chunks after decoding the entire payload. If the coordinator provides a signature of some kind, this can be used to prove malfeasance in case it attempts to withhold data.
Since the coordinator knows which connection received which chunks, but these are uniformly distributed between the different messages, clients should wait until redundant messages have been acquired to post the final signature, but can randomly schedule providing signatures from other inputs when the transaction has been decoded.
Since clients maintain multiple connections to the coordinator and messages are totally ordered, they can observe the message order between their connections and ensure that it is stable (consistent with round robin ordering). All connections should lie on the same circular buffer; if a new connection does not get interleaved with the existing ones, or two connections receive the same message, that may indicate a partition attack. (Is it a problem that the coordinator can inflate message numbers? This allows the coordinator to grind chunks in order to partition subsets of connections, mitigated by small chunk size and multiple connections.)
Since the messages contain the XOR of the selected chunks, and that selection is based on the client's randomness, the choice can't be adversarially controlled by the coordinator. Therefore, in combination with the connection ordering, so long as chunks are sufficiently small, clients should learn the full payload at a rate dependent only on the number of connections they have open.
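The decode step described above can be sketched as a "peeling" process: whenever a message covers exactly one still-unknown chunk, XOR out the known chunks to recover it. The message construction here is hand-picked for illustration; the real scheme would select chunk subsets from client-supplied randomness.

```python
# Minimal sketch of decoding XOR-combined chunks by peeling.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def decode(messages, num_chunks):
    """messages: list of (frozenset of chunk indices, XOR of those chunks)."""
    known = {}
    progress = True
    while progress and len(known) < num_chunks:
        progress = False
        for indices, payload in messages:
            unknown = indices - known.keys()
            if len(unknown) == 1:
                # XOR out every already-known chunk covered by this message;
                # what remains is the single unknown chunk.
                for i in indices & known.keys():
                    payload = xor(payload, known[i])
                known[next(iter(unknown))] = payload
                progress = True
    return [known.get(i) for i in range(num_chunks)]

chunks = [b"AAAA", b"BBBB", b"CCCC"]
messages = [
    (frozenset({0}), chunks[0]),
    (frozenset({0, 1}), xor(chunks[0], chunks[1])),
    (frozenset({1, 2}), xor(chunks[1], chunks[2])),
]
assert decode(messages, 3) == chunks
```

As the footnote notes, in practice the subset-size distribution must be chosen so that a degree-one message is always available and peeling does not stall.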
In short, this technique can be used to detect attempts to partition users or perform timing attacks from the client's perspective, and to ensure that clients maintain at least some number of circuits (based on the per-connection cap).
Clients can therefore be reasonably sure that the order in which they learn of the full payload compared to other clients is random, allowing them to audit the coordinator by publishing transcripts of status updates if they contain order violations or anomalous gaps that may indicate data withholding attempts.
As a side benefit, this means that clients that don't care about the round data can contribute to privacy with only a fraction of the bandwidth requirements (parameterized by the per payload/connection cap) of including all data in every `getStatus` response.
[1] the number of chunks has to be sampled from a correctly modeled distribution for decoding to work well
For a proper research paper we need quantified arguments about CoinJoin inefficiencies. Our goal is that WabiSabi will enable us to somewhat alleviate these inefficiencies.
These are the statistics we were discussing with @nopara73 and @nothingmuch:
The number of input UTXOs in a coinjoin that have less value than the minimum denomination. In current CoinJoin implementations, if these UTXOs are merged, the coordinator can link them. This is a privacy leak we will be able to eliminate with WabiSabi's input merging.
The number of output UTXOs that were redeemed for payment purposes right after the CoinJoin tx. We could use the heuristic that every tx with 2 output UTXOs is a payment. This might be a good approximation of the payment transactions using CoinJoin outputs. With WabiSabi these transactions could happen in the CoinJoin tx itself, therefore saving considerable blockspace for the whole community.
Calculating the ratio/number of unmixed change outputs in CoinJoin txs.
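The two-output payment heuristic is straightforward to compute over parsed transactions. The dict layout below is a hypothetical assumption for illustration, not an actual data source:

```python
# Hypothetical sketch of the two-output "payment" heuristic over a list
# of parsed transactions.

def payment_like_ratio(transactions):
    """Fraction of transactions with exactly two outputs (payment heuristic)."""
    if not transactions:
        return 0.0
    payments = sum(1 for tx in transactions if len(tx["outputs"]) == 2)
    return payments / len(transactions)

txs = [
    {"outputs": [50_000, 12_345]},          # payment + change
    {"outputs": [10_000, 10_000, 10_000]},  # CoinJoin-like
    {"outputs": [99_000, 777]},             # payment + change
]
assert abs(payment_like_ratio(txs) - 2 / 3) < 1e-9
```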
Please feel free to add below or edit this issue if you have any meaningful idea we should measure!
I tested the performance of the signature algorithm with 10,000 signatures. The first test simply signed a received random byte array (what the server does), and it took 20.29 ms to sign them all. The second test signed a random byte array using new keys every time; it took 2,443.44 ms for 10,000 signatures. Finally, I signed, unblinded (unblinding is normally done by the client, but I had no other way to complete verification) and verified the signatures, reusing the signing keys; that took 7,832.81 ms.
| Method | Toolchain | SignatureCount | Mean | Error | StdDev | Median | Rank |
|----------------------------------- |-------------- |--------------- |------------:|-----------:|-----------:|------------:|-----:|
| OnlySignMessage | netcoreapp3.1 | 10000 | 20.29 ms | 0.838 ms | 2.377 ms | 20.23 ms | 2 |
| SignRandomMessageUsingNewKeys | netcoreapp3.1 | 10000 | 2,443.44 ms | 48.717 ms | 45.570 ms | 2,449.43 ms | 3 |
| SignBlindedMessage_VerifySignature | netcoreapp3.1 | 10000 | 7,832.81 ms | 133.732 ms | 125.093 ms | 7,825.85 ms | 4 |
I can have better tests tomorrow.
Just in case, these are the tests:
```csharp
public class BlindSignatureTest
{
	[Params(10_000)]
	public int SignatureCount;

	[Benchmark]
	public void OnlySignMessage()
	{
		var r = new Key();
		var key = new Key();
		var signer = new SchnorrBlinding.Signer(key, r);
		foreach (var i in Enumerable.Range(0, SignatureCount))
		{
			var blindSignature = signer.Sign(new uint256(r.PubKey.ToBytes().Take(32).ToArray()));
		}
	}

	[Benchmark]
	public void SignRandomMessageUsingNewKeys()
	{
		foreach (var i in Enumerable.Range(0, SignatureCount))
		{
			var r = new Key();
			var key = new Key();
			var signer = new SchnorrBlinding.Signer(key, r);
			var blindSignature = signer.Sign(new uint256(r.PubKey.ToBytes().Take(32).ToArray()));
		}
	}

	[Benchmark]
	public void SignBlindedMessage_VerifySignature()
	{
		var r = new Key();
		var key = new Key();
		var signer = new SchnorrBlinding.Signer(key, r);
		var message = new uint256(Encoders.Hex.DecodeData("243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89"), false);
		var requester = new SchnorrBlinding.Requester();
		var blindedMessage = requester.BlindMessage(message, r.PubKey, key.PubKey);
		foreach (var i in Enumerable.Range(0, SignatureCount))
		{
			var blindSignature = signer.Sign(blindedMessage);
			var unblindedSignature = requester.UnblindSignature(blindSignature);
			SchnorrBlinding.VerifySignature(message, unblindedSignature, key.PubKey);
		}
	}
}
```
Let me draft the protocol we came up with so far!
We would like to have an enhanced Chaumian CoinJoin mixing protocol which allows mixing participants not only to mix to their own addresses (Mix2Self), but potentially to mix to an address belonging to another entity (Mix2Other). Put differently, mixing participants should be able to register an output UTXO with an arbitrary payment amount, different from the equal-value denomination. This would allow users to use CoinJoins not only for mixing but also for payments, which would increase the practicality of CoinJoins and hopefully attract more users, hence increasing our anonymity sets.
Which lower-level functionalities do we need to achieve our goals?
Input UTXO splitting
Whenever a user registers an input UTXO, say 5 BTC, they should be able to receive blind signatures on several output UTXO values, say two, for instance one with 2 BTC and one with 3 BTC, in a way that lets them convince the coordinator that the values "inside" the blinded messages add up to the input UTXO value (here 5 BTC), while still maintaining the confidentiality of the committed values from the coordinator.
Input UTXO merging
To make a payment users might need to consolidate (merge) their registered input UTXOs. Therefore, we would like to allow users to merge their registered input UTXOs without disclosing to the coordinator which input UTXOs were merged.
Note that since most of the protocol happens off-chain we have the freedom to freely use almost any kind of cryptography to achieve our goals.
In order to generate a BLS signature, the signer with private key $x$ ($X = g^x$ is the public key) computes the signature $\sigma = H(m)^x$, and the verifier checks whether $e(\sigma, g) = e(H(m), X)$ holds. Correctness is obvious, since $e(\sigma, g) = e(H(m)^x, g) = e(H(m), g)^x = e(H(m), g^x) = e(H(m), X)$.
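To make the verification equation concrete, here is a toy Python sketch in which group elements are represented directly by their discrete logarithms mod a prime $q$, so the pairing $e(g^a, g^b)$ is just $a \cdot b \bmod q$. This models only the algebra; a real implementation would use a pairing-friendly curve such as BLS12-381, and the log representation obviously provides no security:

```python
# Toy model: group elements are represented by their discrete logs mod q,
# so the "pairing" e(g^a, g^b) is just a*b mod q. Illustration only.
import hashlib, random

q = (1 << 61) - 1  # a Mersenne prime, standing in for the group order

def pairing(a_log, b_log):
    # e(g^a, g^b) = e(g, g)^(a*b); in log representation this is a*b mod q.
    return a_log * b_log % q

def hash_to_group(msg: bytes) -> int:
    # Stand-in for a hash-to-curve function; returns the log of H(msg).
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

rng = random.Random(0)
x = rng.randrange(1, q)   # private key
# public key X = g^x has log x; the generator g has log 1

m = b"hello"
sigma = hash_to_group(m) * x % q              # sigma = H(m)^x
assert pairing(sigma, 1) == pairing(hash_to_group(m), x)  # e(sigma,g)==e(H(m),X)
```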
The BLS blind signature scheme works as follows: to blind a message, the user picks a random $r$ and sends $M' = H(m) \cdot g^r$ to the signer, who returns $\sigma' = (M')^x$. The user unblinds by computing $\sigma = \sigma' \cdot X^{-r}$.
Blind BLS signature verification: check $e(\sigma, g) = e(H(m), X)$. It's easy to verify that this is correct, since $\sigma = (H(m) \cdot g^r)^x \cdot X^{-r} = H(m)^x \cdot g^{rx} \cdot g^{-xr} = H(m)^x$.
How shall we instantiate the hash function applied in the blind BLS signature scheme? We would like to retain nice algebraic properties, hence in our case the hash function might be modular exponentiation, namely $H(m) = h^m$ for another generator element $h$. This is a hard-to-invert function if the DLOG assumption holds in the group we're working in. However, it is not a second-preimage resistant hash function, as $h^m = h^{m+q}$ (where $q$ is the group order) by Fermat's little theorem. This can be fixed if we restrict the message space to $\mathbb{Z}_q$, which can be enforced by using some efficient range proof system.
Note that if we apply the aforementioned hash function, then at the blinding phase users essentially end up obtaining a Pedersen commitment to their messages, i.e. $M' = h^m \cdot g^r$.
For ease of exposition we omitted the range proofs, but naturally, whenever users send their blinded messages to the coordinator they also send range proofs to convince her that the values inside the blinded messages are less than the group order $q$. We need this to ensure the second-preimage resistance of the applied hash function. Note that for Pedersen commitments we have several efficient range proofs to choose from.
This is an example of merging two blindly signed input UTXOs, but this naturally generalizes to arbitrary number of blindly signed input UTXOs.
Input UTXO merging functionality
Providing the coordinator two hashes and two valid signatures, one can merge two input UTXOs into an output UTXO with value $v = m_1 + m_2$: a user can prove that the two values "in a signature" add up to a public value (UTXO merging) via the verification equality $e(\sigma_1 \cdot \sigma_2, g) = e(h^{m_1 + m_2}, X)$.
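The merging check can be illustrated in the same toy style as the signature scheme: representing group elements by their discrete logs mod a prime $q$, blind signing, unblinding, and the homomorphic merge all reduce to arithmetic on exponents. A hedged sketch (logs stand in for elements, so no secrecy is actually modeled here):

```python
# Toy model of blind BLS signing and signature merging with H(m) = h^m.
# Group elements are represented by their discrete logs mod q: raising an
# element with log a to the power e gives log a*e, and the pairing
# multiplies logs. Illustration of the algebra only.
import random

q = (1 << 61) - 1                 # prime, stand-in group order
rng = random.Random(0)
x = rng.randrange(1, q)           # signer's private key; public key X = g^x
h = rng.randrange(2, q)           # log of the second generator h

def pairing(a, b): return a * b % q

def blind(m, r):   return (h * m + r) % q   # h^m * g^r  (g has log 1)
def sign(blinded): return blinded * x % q   # raise to the signing key
def unblind(s, r): return (s - x * r) % q   # divide by X^r

m1, m2 = 2_00000000, 3_00000000   # 2 BTC and 3 BTC in satoshi
r1, r2 = rng.randrange(q), rng.randrange(q)

s1 = unblind(sign(blind(m1, r1)), r1)       # = H(m1)^x = h^(m1*x)
s2 = unblind(sign(blind(m2, r2)), r2)

v = m1 + m2                                 # publicly revealed merged value
merged = (s1 + s2) % q                      # sigma1 * sigma2 in the real group
assert pairing(merged, 1) == pairing(h * v % q, x)   # e(s1*s2,g)==e(h^v,X)
```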
Preventing double-spending
The coordinator will accept each valid signature once and only once.
Note that messages represent UTXO values. Since the coordinator knows that input UTXOs will never exceed 21,000,000 BTC, which amounts to $2.1 \times 10^{15}$ satoshi (roughly $2^{51}$), by seeing a hash value the coordinator could quite easily brute-force the exponent due to the small message space. This attack vector can be alleviated by padding each message with, say, 196 bits of random values corresponding to subsatoshi amounts. Such a padding scheme will thwart this attack by the coordinator.
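A small numeric illustration of the concern, over a toy group $\mathbb{Z}_p^*$ (all concrete values here are made up for the example):

```python
# Sketch of the brute-force concern: with H(m) = h^m and a small message
# space, the coordinator can recover m from the hash by exhaustive search.
p = 10007                      # toy prime modulus
h = 5                          # toy generator
secret_amount = 4321           # some satoshi value in a tiny message space

target = pow(h, secret_amount, p)
recovered = next(m for m in range(p) if pow(h, m, p) == target)
assert recovered == secret_amount   # coordinator wins without padding

# With ~196 bits of subsatoshi padding the exponent space grows to roughly
# 2^(51+196), far beyond exhaustive search of this kind.
```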
Do you see any problem with the proposed scheme? Any dangerous edge case? Any attack vector we do not address so far? Questions?
This is another blabbering issue where I document some of my thoughts on one aspect of WabiSabi, this is probably out of scope for the current state of research, also the nuances in implementation will of course differ greatly.
The user has three different options to choose from when he desires to send some bitcoin: low time preference WabiSabi, high time preference WabiSabi, and reckless regular. The trade-off matrix covers privacy, speed, and fees.
This is a CoinJoin that provides high privacy, because the tx has many users / a large k anonset. The speed however is slow, both in coordination [minimum k anonset / timeout triggers the round] and in on-chain confirmation [14/24 h fee target]. The coordinator fees can be very low when doing a high-privacy remix, or high with fresh bitcoin.
This CoinJoin is designed for sending to oneself, both CoinJoin remixes to self and sends to a different wallet like a hardware wallet, or for making a payment to a receiver who can wait both for the appearance of the tx in the mempool [after coordination] and for confirmation.
This is a CoinJoin that provides medium privacy, because the tx has only a low number of users / small k anonset [but above 1]. The speed is medium fast, as the coordination is quick [for example 10 users or 5 minutes], and the on-chain confirmation target is high priority too. The coordinator fees might even be free, considering the low anonset and the small value UTXO the coordinator would create.
This CoinJoin is designed for sending to others in a CoinJoin. The coordination is quick, so it can be expected that users leave their Wasabi open after starting to negotiate the tx. For these txs, all peers agree on a high-priority on-chain fee. The privacy provided is lower because there are fewer users, but still better than a non-CJ tx.
This is a non-CoinJoin transaction. The privacy is the worst, as no anonset is generated at all. The speed is the fastest, as there is no coordination, and the user can select his own custom on-chain fee preference. There is no coordination fee at all, so it is likely the cheapest.
This is for users who want to make a custom transaction, who do not want to use CoinJoin for this payment, or who require very fast confirmation. It can be argued that in a privacy-focused wallet this option should be hidden in the UI, or at least not be the default.
The automatic coin selection should favor high anonset coins for this tx, as then there are still some privacy guarantees.
Caution. I don't know if there is crypto for this, but I strongly suspect there is and conceptually it could be the ideal scheme we are looking for.
User has a UTXO with value v and wants to split it into outputs with values v1 and v2 such that v = v1 + v2.
User creates n double blinded Pedersen commitments: C(v1, r10, r11), C(v2, r20, r21), C(v3, r30, r31), C(v4, r40, r41), ... C(vn, rn0, rn1), where v3, v4, ... vn are equal to 0.
User also creates a Pedersen commitment to their sums: C(v1+v2+...+vn, r10+r20+...+rn0, r11+r21+...+rn1).
User tells the coordinator the double blinded Pedersen commitments, the commitment to their sums, v1+v2+...+vn, r10+r20+...+rn0 and r11+r21+...+rn1, and rangeproofs, preferably Bulletproofs.
The coordinator signs the double blinded commitments one by one, and the user unblinds these signatures such that they are valid for the single blinded commitments: C(v1, r10), C(v2, r20), ... C(vn, rn0).
User can register a desired output for v1 by sending the coordinator C(v1, r10) and some 0 commitments (for simplicity let's say it only sends C(v3, r30)), valid signatures for those commitments, and their sums: v1 + v3 = v1 and r10 + r30.
The protocol is the same, except instead of only registering signed zero commitments at output registration, some of those commitments aren't zero, but maybe they were coming from different inputs.
Since the coordinator cannot recognize the unblinded signatures at output registration, it can only tell that they are valid; unlinkability is therefore guaranteed.
Since the unblinded signatures are valid for single blinded commitments only, the coordinator can make sure one signature can only be used once.
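A minimal sketch of the homomorphic check underlying the splitting step, with single-blinded Pedersen commitments over a toy prime group. The double blinding, the signatures, and the range proofs are all omitted; g and h are simply assumed to be independent generators for this illustration:

```python
# Toy Pedersen commitment splitting: C(v, r) = g^v * h^r over Z_p^*.
# Real systems use elliptic curves plus range proofs; both omitted here.
import random

p = 2**127 - 1            # Mersenne prime; group order is p - 1
g, h = 3, 7               # assumed independent generators (toy choice)
rng = random.Random(0)

def commit(v, r):
    return pow(g, v, p) * pow(h, r, p) % p

v = 5_00000000                          # 5 BTC input, in satoshi
v1, v2, v3 = 2_00000000, 3_00000000, 0  # split: 2 BTC + 3 BTC + a zero commitment
rs = [rng.randrange(p - 1) for _ in range(3)]

cs = [commit(vi, ri) for vi, ri in zip((v1, v2, v3), rs)]

# The user reveals the sums of values and randomness; the coordinator
# multiplies the commitments and checks they open to the input value.
sum_v = v1 + v2 + v3
sum_r = sum(rs) % (p - 1)
product = 1
for c in cs:
    product = product * c % p
assert product == commit(sum_v, sum_r)
assert sum_v == v
```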
This might be too early to bring up, but I have a couple ideas on the coordinator fee, and am curious if the crypto magic of the current protocol would hold up for it.
unlinkable pre-payment of the coordinator fee across rounds.
In a self-spend, say the user has inputs worth 1.01 btc and the fee is 0.005 btc; then the user would receive a 1 btc anonset output, no on-chain change, but instead a fee pre-payment worth 0.005 btc, which can be used for the next round.
I have an intuition that this would make subset sum analysis on non-equal-value outputs even more difficult.
If a user has a small value coin, say 0.01 btc, then for most real payments he would have to consolidate it with another coin, thus revealing common ownership. Instead, he can register this small coin as an input of a CoinJoin without receiving any output; the value goes directly to the coordinator fee output. In exchange he gets a 0.01 btc fee pre-payment that can be used in a subsequent round to pay the fee.
The coordinator could gift fee pre-payment to users, as a reward for contributions, or as marketing, or for whatever reason. If the pre-payment can be transferred between users, then this could be a further opportunity to gift rounds to others.
I think this would mess with the assumptions of the CoinJoin quite a bit. The details of course depend on however we end up structuring the tx. But in general...
I spoke with Balint, and in general, having a fee pre-paid giftcard is of no legal concern; this is not custodianship. Rather, it is debt on the company's balance sheet, denominated in bitcoin, and it is settled by providing the coordination service. There is no payout of pre-paid giftcards to bitcoin directly. [That would be the case if the user had no inputs but had outputs, and provided the giftcards off-chain.]
Here I need your help clarifying whether something like this is possible. The idea is that the coordinator can give out a token with a certain value. This token can be used in input registration to prove that the fee is paid. If possible, without the coordinator finding out that such a gift card was used, or at least without him knowing which gift card was used.
Hey,
I read through d498f64 and I like the design. Great to see that KVACs + Pedersen commitments of value and serial number attributes + rangeproofs in fact lead to a working and simple system. I'm looking forward to seeing this in practice, in particular because it's not only applicable to Chaumian CoinJoin, but can also be used to implement ecash.
A few questions:
- Shouldn't `pi^sum` [1] be the sum of differences between the `M_vi` and `C_vi` `r`-values?
- What if a client reveals `sum(z_i)` to the coordinator instead of providing a zero knowledge proof of the `prod(M_vi/C_vi)` relation? This would be a problem, because after seeing `sum_z` the coordinator can test, for all subsets of `M_vi` he has previously seen, whether `G_v^sum_z prod(M_vi) ?= prod(C_vi)`, and therefore link `M_vi` issuances with the credential presentation.
- ... `t_i` is from the credential?

@RubenSomsen suggested a way to prevent double spending without needing a serial number attribute.
First, observe that instead of revealing the serial number s, with a new generator G_r, knowledge of s can be proven revealing only a curve point C_r = G_r^s. This commitment does not contain z, so it's fixed for a single attribute and can be used as a serial number/nullifier if the proof also relates the witness s to C_r.
Next, to eliminate the serial number attribute, we can use the r term from the amount commitment (M_v = G_h^v G_g^r) instead of s, to create C_r = G_r^r.
A proof of knowledge of r with the statement (C_r, C_v / (G_v^z G_h^v)) = (G_r^r, G_g^r), i.e. a DLEQ proof, is sufficient.
A distinct G_r is required because if G_g were used, the coordinator could brute force the small message space of values to deanonymize the scheme, as described in #40.
Like serial numbers, this does not provide unconditional hiding, but that can be addressed if M_v includes an additional randomness term with yet another distinct generator.
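The DLEQ proof mentioned above can be sketched as a standard Schnorr-style proof that the same exponent r links the two statements. Generator names follow the text; the toy-sized parameters and everything else below are hypothetical, and a real deployment would use a proper curve group:

```python
# Schnorr-style DLEQ proof: prove knowledge of r such that C_r = Gr^r and
# D = Gg^r, over a toy order-q subgroup of Z_p^* with Fiat-Shamir challenge.
import hashlib, random

q = 1019                 # prime subgroup order (toy-sized)
p = 2 * q + 1            # safe prime 2039
Gg, Gr = 4, 9            # squares mod p, hence generators of the order-q subgroup
rng = random.Random(0)

r = rng.randrange(q)     # the witness: randomness from the amount commitment
C_r = pow(Gr, r, p)      # the nullifier
D   = pow(Gg, r, p)      # the Gg side of the statement, C_v / (G_v^z * G_h^v)

# Prover: commit, derive challenge, respond.
k = rng.randrange(q)
A1, A2 = pow(Gr, k, p), pow(Gg, k, p)
c = int.from_bytes(hashlib.sha256(f"{C_r},{D},{A1},{A2}".encode()).digest(), "big") % q
s = (k + c * r) % q

# Verifier: Gr^s == A1 * C_r^c  and  Gg^s == A2 * D^c
assert pow(Gr, s, p) == A1 * pow(C_r, c, p) % p
assert pow(Gg, s, p) == A2 * pow(D, c, p) % p
```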
From the paper "Compact E-Cash": https://link.springer.com/chapter/10.1007/11426639_18