Comments (24)
@melaurent if the locktime height should be hidden until it is used (which TBH doesn't seem super useful to me), it is more verifier-efficient to just hide it behind a hash and reveal it explicitly. Then no EC math needs to be done.
Further, with your suggestion n would need to be stored forever so that historical validators could learn what multiple of K to subtract from the utxo set. By avoiding EC math, it's possible to design things so that historical validators never learn about locktimes. Strictly speaking this weakens the security model for locktimes, but I think the resulting security model is exactly what we want, because locktimes are so short-lived. That is, online validators enforce locktimes until the relevant blocks are long-buried and the locktimes long-expired, at which point violating them would require rewriting so many blocks that the violation would be pointless anyway, and after that nobody looks at them again.
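The hash-reveal scheme described above can be sketched as a simple commit/reveal pair. A minimal toy sketch, where DefaultHasher stands in for a real cryptographic hash (e.g. blake2) and all names and values are illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a cryptographic hash (a real design would use e.g. blake2).
fn toy_hash(lock_height: u64, salt: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    (lock_height, salt).hash(&mut hasher);
    hasher.finish()
}

// At creation time only the hash commitment goes on-chain.
fn commit_locktime(lock_height: u64, salt: u64) -> u64 {
    toy_hash(lock_height, salt)
}

// At spend time the spender reveals (lock_height, salt); an online validator
// checks the preimage and the current height. No EC math is involved, and
// historical validators never need to learn the locktime.
fn check_spend(commitment: u64, lock_height: u64, salt: u64, current_height: u64) -> bool {
    toy_hash(lock_height, salt) == commitment && current_height >= lock_height
}

fn main() {
    let c = commit_locktime(600, 42);
    assert!(check_spend(c, 600, 42, 650));  // lock expired: ok
    assert!(!check_spend(c, 600, 42, 500)); // too early: rejected
    assert!(!check_spend(c, 500, 42, 650)); // wrong preimage: rejected
}
```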
from grin.
This belongs in the transaction signature. Yeah, it'd have to be encoded in another field.
Given that we already sign the fee, it'd either have to get composed or added as another signature.
Been thinking a little about this.
We currently sign the fee on the transaction itself - so presumably we could include a height or time in the message being signed.
We do the following to create the message (with just the fee currently) -
fn u64_to_32bytes(n: u64) -> [u8; 32] {
    let mut bytes = [0; 32];
    // BigEndian is from the byteorder crate
    BigEndian::write_u64(&mut bytes[24..32], n);
    bytes
}
So there is space in there to include the timelock info.
But - this limits the timelock to the entire transaction, so all outputs would be affected by it, change outputs included.
Is this an acceptable approach?
Ideally the timelock would be per-output, but we have no way of doing this in MW - we don't sign anything limited to just a single output.
I guess the general approach would be to make the necessary transactions within the wallet to build a single output of the right amount, then send the timelocked transaction creating a single timelocked output (avoiding a potentially large change output also encumbered with the timelock).
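The message layout above could carry the timelock next to the fee. A dependency-free sketch (the byte positions are illustrative, not grin's actual wire format; the stdlib u64::to_be_bytes stands in for byteorder's BigEndian::write_u64):

```rust
// Sketch: put a lock_height next to the fee in the 32-byte signed message.
// Byte positions are illustrative, not grin's actual format.
fn kernel_msg(fee: u64, lock_height: u64) -> [u8; 32] {
    let mut bytes = [0u8; 32];
    bytes[16..24].copy_from_slice(&lock_height.to_be_bytes());
    bytes[24..32].copy_from_slice(&fee.to_be_bytes());
    bytes
}

fn main() {
    let msg = kernel_msg(7, 600);
    assert_eq!(&msg[24..32], &7u64.to_be_bytes()[..]);   // fee in the last 8 bytes
    assert_eq!(&msg[16..24], &600u64.to_be_bytes()[..]); // lock_height just before it
}
```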
I also saw some mention on the ML of keeping the TxKernel as small as possible and aiming to avoid needing to persist a timelock on every TxKernel.
We have the features bitflags in there - so I'm thinking we could use one of the bits in features to flag timelocked transactions - only serialize/deserialize the timelock if present.
This would mean the majority of TxKernels would remain the same size (assuming most are not encumbered by timelocks).
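A sketch of what the feature-gated serialization could look like (the flag value and byte layout here are illustrative, not grin's actual encoding):

```rust
use std::convert::TryInto;

// Illustrative feature bit; grin's actual flag values may differ.
const KERNEL_TIMELOCK: u8 = 0b0000_0001;

// Write the features byte, then the 8-byte lock_height only when the
// timelock bit is set.
fn serialize_kernel(features: u8, lock_height: u64) -> Vec<u8> {
    let mut out = vec![features];
    if features & KERNEL_TIMELOCK != 0 {
        out.extend_from_slice(&lock_height.to_be_bytes());
    }
    out
}

// Read the lock_height back out, if the features bit says one is present.
fn deserialize_lock_height(bytes: &[u8]) -> Option<u64> {
    if bytes[0] & KERNEL_TIMELOCK != 0 {
        Some(u64::from_be_bytes(bytes[1..9].try_into().unwrap()))
    } else {
        None
    }
}

fn main() {
    // No timelock: only the features byte is written.
    assert_eq!(serialize_kernel(0, 0).len(), 1);
    // Timelocked kernels pay the extra 8 bytes.
    assert_eq!(serialize_kernel(KERNEL_TIMELOCK, 600).len(), 9);
    assert_eq!(
        deserialize_lock_height(&serialize_kernel(KERNEL_TIMELOCK, 600)),
        Some(600)
    );
}
```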
Yes, the only thing MW can do that I'm aware of is put a timelock on a kernel, which effectively timelocks the whole transaction.
I'd prefer that every kernel have a timelock to make "real" ones inconspicuous, for privacy's sake. But it's not a strong preference.
@apoelstra I thought I saw you recommend on the ML somewhere to avoid putting a timelock on every kernel to minimize the storage cost?
But yeah I agree it makes sense to have it on all of them to keep them all similar.
Except - is it not going to be trivially easy to identify no-op timelocks or very short lock periods by just comparing the kernels to the height or timestamp of the blocks they originate from?
Say an output with a lock_height of 100 originating in a block of height 100 (so no effective locktime) is going to be readily distinguishable from an output in the same block with a lock_height of 600 (500 blocks later).
There isn't any way to do relative locktimes (that I know of) -- the way kernel locktimes work is that they make the kernel invalid until the specified block height is reached.
Relative locktimes would require validators learn when outputs were spent relative to when they were created, which isn't possible in general in MW because later validators never learn about old spent outputs. There may still be value in supporting this because typical relative locktimes are quite short-lived, but these locktimes would have a much weaker form of security than the rest of the system. They would need to be associated with outputs rather than kernels.
Yes, I did comment on how I wanted to save space, but Igno (I think) convinced me that "only four bytes" per kernel was fine :).
Oh interesting. I had not fully thought through the output validation vs kernel validation.
The way we're doing locktimes on coinbase outputs is based on validating the output, not the kernel.
I need to go read through that again and see if I can understand how to apply what you're saying here to coinbase locktimes.
Even with coinbase outputs, historical validators won't be able to validate that the funds were unmoved until maturity. I think this is acceptable: the maturity rule is only about reorg safety, and to "exploit" it you would have to rewrite far more blocks than you would to simply replace mature coinbases in the first place.
I hadn't noticed this before. Interesting.
Initial PR up for discussion (based on conversation above) - #167
Another idea. Assume we have hashed switch commitments as proposed by Andrew in [1]:
vH + rG, sha256(rJ)
By default we reuse the r blinding factor. How about time-locked transactions use the block height h instead? On spend, the input would have to reveal h.
[1] https://lists.launchpad.net/mimblewimble/msg00165.html
As discussed on Gitter, my proposal was for time-locked outputs, whereas the previous discussion mostly revolved around time-locked transactions (which can't be included in a block before h).
For time-locked outputs, couldn't we introduce a new NUMS base point K such that an output commits to vH + rG + hK, where h is the height at which the output becomes spendable? When someone wants to spend the output, the h value has to be revealed so that the verifier can subtract hK from the excess and check that the current block height is higher than h.
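The subtract-and-check step can be illustrated with scalars modulo a prime standing in for curve points (H, G, K below are arbitrary toy "generators"; a real scheme needs a genuine NUMS point, actual curve arithmetic, and an answer for how h interacts with the range proof):

```rust
// Toy group: scalars mod a prime stand in for EC points.
const P: u128 = 2_147_483_647;
const H: u128 = 3;
const G: u128 = 5;
const K: u128 = 7;

// Output commitment vH + rG + hK, with h the unlock height.
fn commit(v: u128, r: u128, h: u128) -> u128 {
    (v * H + r * G + h * K) % P
}

// On spend, h is revealed: the verifier subtracts hK and checks the height.
// The returned value is the ordinary commitment vH + rG.
fn verify_spend(output: u128, h: u128, current_height: u128) -> Option<u128> {
    if current_height < h {
        return None; // output is still locked
    }
    Some((output + P - (h * K) % P) % P)
}

fn main() {
    let out = commit(100, 12345, 600);
    // After height 600, subtracting 600*K recovers vH + rG.
    assert_eq!(verify_spend(out, 600, 650), Some(commit(100, 12345, 0)));
    // Before height 600 the spend is rejected.
    assert_eq!(verify_spend(out, 600, 500), None);
}
```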
Oops! I realize suddenly that there is a miner-collusion vulnerability with outputs that have relative locktimes. If somebody spends these before the tx is confirmed (so clearly way before the locktime is up), a miner can include both transactions. The timelocked output will get cut-through and nobody will know that any rule was violated.
Oh - so it's not valid to put the tx in the pool (this can be consensus enforced).
But the miner can skip the pool entirely and include the tx in the block?
A miner can include any transaction they want without telling anyone else until it's in the block, yes.
Regarding output lock times, as those are in the chain, presumably whoever relies on them would wait until they're mined and detected in the chain before proceeding with the contract? Also, right now we don't support multi-kernel transactions.
@antiochp I'm saying that these outputs are not secure at all until they've been deeply confirmed.
@ignopeverell I'm actually not sure what application unconditionally locked outputs have, so I don't know how protocols using them would likely work. But this seems like a pretty serious footgun.
(this is all assuming we can solve the issue with miner collusion and zero confirmations).
Switch commit proposal -
vH + rG, sha256(rJ)
The range proof signs the extra switch commitment rJ, so this cannot be modified.
@ignopeverell's proposal above is to use the switch commit to hide the lock_height h (by using it as the blinding factor in the switch commitment) -
vH + rG, sha256(hJ)
What if we used blake2 instead of sha256 as the hashing function?
Edit: I see we are actually using blake2 in the code (but without the secret key).
We could then pass h in as the secret key to the blake2 function -
vH + rG, blake2(h, rJ)
We could then preserve reuse of r across both the commitment and the switch commitment (assuming this is desirable).
To spend you would need to reveal h and rJ.
I'm sure I'm missing something really basic here as to why this falls down. Thoughts?
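A toy sketch of this keyed-hash variant, with DefaultHasher standing in for blake2 keyed with h, and plain u64 values standing in for the points (everything here is illustrative):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// DefaultHasher stands in for blake2 keyed with h; `rj` is a plain u64
// standing in for the point rJ.
fn keyed_hash(key: u64, data: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    (key, data).hash(&mut hasher);
    hasher.finish()
}

// An output is the pair (pedersen, switch): `pedersen` models the opaque
// vH + rG commitment and `switch` models blake2(h, rJ).
struct Output {
    pedersen: u64,
    switch: u64,
}

fn make_output(pedersen: u64, h: u64, rj: u64) -> Output {
    Output { pedersen, switch: keyed_hash(h, rj) }
}

// Spending reveals h and rJ; the verifier recomputes the switch commitment
// and checks the lock height, without ever learning r or v.
fn can_spend(out: &Output, h: u64, rj: u64, height: u64) -> bool {
    keyed_hash(h, rj) == out.switch && height >= h
}

fn main() {
    let out = make_output(111, 600, 2222);
    assert_eq!(out.pedersen, 111);
    assert!(can_spend(&out, 600, 2222, 650));
    assert!(!can_spend(&out, 600, 2222, 500)); // still locked
    assert!(!can_spend(&out, 600, 9999, 650)); // wrong preimage
}
```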
Just to circle back, miner collusion is indeed an issue in the general case. And actually even without miner collusion, if we start allowing multi-kernel transactions (we don't at the moment), it will become even more of a problem.
However in the particular case of time-locking miner rewards (which is what I believe you're after here), I don't see any immediate issue with this scheme. It would also be a nice way to try out switch commitment hash pre-image reveals.
Multi-kernel transactions will come into play when we start using Schnorr signatures, or is this something else? My understanding is we will have multiple kernels when multiple parties are involved in building parts of the tx independently?
Could you explain how this is involved as I don't think I understand the impact here?
Thanks!
Schnorr signatures are aggregates (they've actually been rebranded as aggregate signatures, or aggsigs), so that would still only generate a single kernel. In short, each party generates a partial signature and all of them can then be aggregated into a single final one.
Multiple kernels come into play for the current issue because if you want to do cut-through between 2 transactions, generating a single one afterward, you do need to be able to keep multiple kernels. Otherwise the resulting "merged" transaction would not sum correctly against a single kernel. Does that make more sense?
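The point that a merged transaction must keep both kernels can be illustrated with a toy model in which i128 scalars stand in for output commitments and kernel excesses, and fees are ignored:

```rust
// Toy model of the MW invariant:
// (sum of outputs) - (sum of inputs) = (sum of kernel excesses).
struct Tx {
    inputs: Vec<i128>,
    outputs: Vec<i128>,
    kernels: Vec<i128>, // kernel excesses
}

fn sums_ok(tx: &Tx) -> bool {
    let io: i128 = tx.outputs.iter().sum::<i128>() - tx.inputs.iter().sum::<i128>();
    io == tx.kernels.iter().sum::<i128>()
}

// Merge two transactions, applying cut-through: an output of one spent as an
// input of the other disappears entirely, but BOTH kernels must be kept or
// the merged transaction no longer sums.
fn merge(a: Tx, b: Tx) -> Tx {
    let mut inputs = a.inputs;
    inputs.extend(b.inputs);
    let mut outputs = a.outputs;
    outputs.extend(b.outputs);
    let mut i = 0;
    while i < outputs.len() {
        if let Some(j) = inputs.iter().position(|x| *x == outputs[i]) {
            inputs.remove(j); // cut-through
            outputs.remove(i);
        } else {
            i += 1;
        }
    }
    let mut kernels = a.kernels;
    kernels.extend(b.kernels);
    Tx { inputs, outputs, kernels }
}

fn main() {
    let a = Tx { inputs: vec![10], outputs: vec![17], kernels: vec![7] };
    let b = Tx { inputs: vec![17], outputs: vec![20], kernels: vec![3] };
    assert!(sums_ok(&a) && sums_ok(&b));
    let m = merge(a, b);
    // Output 17 was cut through; no later validator can tell it existed.
    assert!(m.inputs == vec![10] && m.outputs == vec![20]);
    assert_eq!(m.kernels, vec![7, 3]); // both kernels survive
    assert!(sums_ok(&m));
}
```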
Yeah ok I see how we'd end up with a tx with multiple kernels.
But I don't see how this affects outputs?
Is the problem that if we allow txs to be merged (by allowing multiple kernels) then anyone can effectively merge them and at that point we can no longer enforce any of this via consensus rules because inputs and outputs can effectively be removed before they are added to the chain?
Just thinking through this some more - can any two txs effectively be merged like this?
I was assuming they could only be merged if one spent an output from the other - but that's not actually the case. We wouldn't gain much by merging two completely unrelated txns - but it could still happen?
I think I answered my own question...
Sort of related - I know you mentioned coin-join previously.
I thought coin-join was only effective if all the outputs were of the same value, a bunch of interchangeable outputs with the same denomination.
But this is not applicable to Grin as the amounts are hidden? So can any outputs put together end up doing something similar (but we can effectively ignore amounts and treat all outputs as the same for this purpose)?
Ah I think it clicked for me - if we allow "coin-joined" txs with multiple kernels to be pushed onto the network, then we no longer know if any inputs/outputs have been cut through as a result of the merged txs. So it is not just miners that could be colluding - any party in the network could have done this.