laudiacay / barustenberg
barretenberg in rust (here we go)
License: Apache License 2.0
The following tasks need to get done:
PippengerRuntimeState::new()
PippengerRuntimeState::pippenger_unsafe()
PippengerRuntimeState::pippenger()
There is a lot of commented-out code in this file. It's not yet clear what needs to be done!
Make sure they're all reflected by rayon parallel iterators in the Rust. See some examples in PR #61.
We need to implement:
copy_placeholder()
add_polynomial_evaluations_to_transcript()
compute_batch_opening_polynomial() (not defined in barretenberg)
init_quotient_polynomial() (not defined in barretenberg)
compute_opening_elements()
get_circuit_size()
May need more help and comments here for the functions not implemented in barretenberg.
this is big :(
https://gist.github.com/shuklaayush/d3952ae12124082b28aa71834e3cafe1
thanks to @shuklaayush
NO cryptography primitive tests.
NO retesting of arkworks; we trust them.
CHECK in this lib for what we have implemented. Test THAT. :D
Describe the immediate problem.
What's the impact of this bug?
Describe the sort of fix that would solve the issue.
your todo list for success:
the kate_batch_open and test_kate_open tests in src/plonk/proof_system/commitment_scheme.rs
The kate_batch_open test will require getting transcripts working (and, therefore, Pedersen). As of Sept 30, Pedersen is unimplemented and transcripts are untested, so this may get half-done before transcripts and half-done after.
Now is a fine time to start, though; this should not be difficult at all once transcripts and Pedersen are working.
This file is all commented out, yet it seems fairly complete.
NB: Feature requests will only be considered if they solve a pain or present a useful refactoring of the code.
The Pedersen commitment tests are currently failing because they test against the hard-coded generator values from barretenberg. Due to limitations in arkworks, we use a slightly different canonical method for creating generators by hashing to the curve.
To fix these tests, trace through the hash-to-curve function and manually compute the resulting Pedersen hash values for Fq::one() from the generators.
Here are some tasks, my friends!
ComposerType::create_manifest()
ComposerBase::compute_wire_copy_cycles()
ComposerBase::compute_sigma_permutations()
Then we should add in the accompanying tests from here.
PolynomialManifest::len()
We need to implement the following:
ProvingKey::from_reader()
impl Serialize for ProvingKey
impl Deserialize for ProvingKey
One unimplemented function here:
Then we should add in the accompanying tests from here.
We need to implement:
add_opening_evaluations_to_transcript()
compute_opening_polynomial()
batch_open()
batch_verify()
Zac added precompute_miller_lines as a VERY small optimization to the pairing code.
We have about 99% of the code needed to make this work implemented in the srs
branch! However, last second we chose to use arkworks for this, because the last 1% of the work would have been annoying.
There's a missing constant in arkworks that's necessary to implement our own pairing from scratch: twist_coeff_b, from the Fq2 params in the original code. What is it? We (including Zac) think it's the (Fq2) b of the Weierstrass form of the Fq12 twist's automorphism curve, represented as a pair of x and y Fq elements.
It's constructed in a weird representation (some kind of limb form?) in Barretenberg. We don't know what this representation is yet. So far we've been unable to figure out how to turn the arkworks representation of the constants (just a decimal string representing x of the point) into the barretenberg constants that we do have. It doesn't seem to be an endianness issue. It might be a WNAF or Montgomery representation situation. We don't know. Zac recommended we get the C++ compiling and print things out...
tl;dr holding off on this! it should be a VERY small performance gain. We'll get back to it FAR in the future.
CC after discussion with @Autoparallel
Implement:
quotient_required_challenges()
update_required_challenges()
sum_linear_terms()
compute_non_linear_terms()
We need to get the following done to be able to implement other issues (e.g., #34)
reduced_ate_pairing_batch_precomputed()
conditionally_subtract_from_double_modulus()
tag_coset_generator()
coset_generator()
external_coset_generator()
NB: Feature requests will only be considered if they solve a pain or present a useful refactoring of the code.
Describe the pain that this feature will solve.
Describe the impact of not having this feature.
Describe the solution.
There is a single unimplemented!() here:
Then we should add in the accompanying tests from here.
this includes precompute_miller_lines
```rust
pub(crate) fn divide_by_pseudo_vanishing_polynomial(
    &self,
    _coeffs: &[&mut [&mut Fr]],
    _target: &EvaluationDomain<'a, Fr>,
    _num_roots_cut_out_of_vanishing_poly: usize,
);

pub(crate) fn compute_kate_opening_coefficients<Fr: Field>(src: Fr, dest: Fr, z: usize, n: usize) -> Fr;

pub(crate) fn get_lagrange_evaluation<Fr: Field>(zeta: &Fr, _domain: &EvaluationDomain<'a, Fr>, _num_roots_cut_out_of_vanishing_polynomial: usize);

pub(crate) fn compute_barycentric_evaluation<Fr: Field>(_coeffs: &[Fr], _num_coeffs: usize, z: &Fr, domain: &EvaluationDomain<'a, Fr>) -> Fr;
```
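For intuition on what compute_kate_opening_coefficients is meant to do, here is a hedged toy sketch (over F_101 with plain u64 arithmetic, not the real BN254 scalar field, and not the real barustenberg signature): the Kate opening quotient q(X) = (f(X) - f(z)) / (X - z) falls out of synthetic division, with the remainder equal to f(z).

```rust
// Toy sketch of Kate opening coefficients over F_101 (illustrative only).
// Given f's coefficients (low to high) and an opening point z, synthetic
// division yields q(X) = (f(X) - f(z)) / (X - z) and the evaluation f(z).
const P: u64 = 101;

fn kate_opening_coefficients(f: &[u64], z: u64) -> (Vec<u64>, u64) {
    let n = f.len();
    assert!(n >= 2);
    let mut q = vec![0u64; n - 1];
    // Top coefficient carries straight down.
    q[n - 2] = f[n - 1] % P;
    // Each lower quotient coefficient folds in the one above it.
    for i in (1..n - 1).rev() {
        q[i - 1] = (f[i] + z * q[i]) % P;
    }
    // The remainder of the division is exactly f(z).
    let f_at_z = (f[0] + z * q[0]) % P;
    (q, f_at_z)
}

fn main() {
    // f(X) = 3 + 2X + X^2 at z = 4: f(4) = 27 and q(X) = 6 + X.
    let (q, eval) = kate_opening_coefficients(&[3, 2, 1], 4);
    assert_eq!(q, vec![6, 1]);
    assert_eq!(eval, 27);
}
```

The real implementation would do the same recurrence over Fr, likely in place over the coefficient slice.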
Here are TODOs and implementations as tasks:
EvaluationDomain::compute_generator_table()
Then we should add in the accompanying tests from here.
A lot of tests, across a lot of the repo.
Mostly pretty fundamental things... mostly should be easy wins and very GPT-able... not assigned a bounty yet because I'm not done adding stuff here.
proving_key_from_serialized_key
buffer_serialization
basic_compression_equality
compression_inequality_circuit_type
compression_inequality_different_circuit_size
compression_inequality_different_num_public_inputs
compression_inequality_different_commitments
compression_inequality_different_num_commitments
compression_equality_different_contains_recursive_proof
compression_equality_different_recursive_proof_public_input_indices
To finish #57 we need to convert the constant twist_coeff_b from barretenberg into an Fp. This will involve building barretenberg and printing out the byte representation of the constant, writing a script to convert the value, and adding it as a hardcoded constant with the MontConfig macro.
We need the following to get done:
Verifier::new()
Verifier::validate_commitments()
Verifier::validate_scalars()
Here's a list of the TODOs as tasks:
Pippenger::get_num_points()
PippengerReferenceString::get_monomial_size()
PippengerReferenceString::get_monomial_points()
Then we should add in the accompanying tests from here.
Use the noir-lang grumpkin lib.
Write the prover test and get it compiling.
property of @JDLBeckman
Here are the two tasks:
ReferenceStringFactory::get_prover_crs()
ReferenceStringFactory::get_verifier_crs()
Then we should add in the accompanying tests from here.
We need to:
coset_generator()
external_coset_generator()
Implement:
compute_round_commitments()
compute_quotient_contribution()
Here is a list of functions as tasks to complete here:
PedersenBlake3s::hash()
PlookupPedersenBlake3s::hash()
Transcript::from_serialized()
Transcript::apply_fiat_shamir()
Transcript::export_transcript()
Then we should add in the accompanying tests from here.
take these tests:
https://github.com/AztecProtocol/aztec-packages/blob/af93a33fdd5d73648d31b4e4f7347d29b8892405/barretenberg/cpp/src/barretenberg/proof_system/circuit_builder/standard_circuit_builder.test.cpp
https://github.com/AztecProtocol/aztec-packages/blob/af93a33fdd5d73648d31b4e4f7347d29b8892405/barretenberg/cpp/src/barretenberg/proof_system/composer/composer_lib.test.cpp
https://github.com/AztecProtocol/aztec-packages/blob/af93a33fdd5d73648d31b4e4f7347d29b8892405/barretenberg/cpp/src/barretenberg/proof_system/composer/permutation_lib.test.cpp
Either implement them and get them passing, or justify why they shouldn't be implemented.
I think this is pretty trivial. If I'm wrong, flag me.
Here are some things we gotta do!
StandardSettings::compute_quotient_evaluation_contribution()
StandardSettings::append_scalar_multiplication_inputs()
I don't believe we need to get to TurboSettings, UltraSettings, etc.
EvaluationDomain::coset_fft_inplace()
EvaluationDomain::coset_fft_vec_inplace()
EvaluationDomain::coset_fft()
EvaluationDomain::coset_fft_with_generator_shift()
fmt::Display for Proof
Tagging as low priority since these seem non-essential.
The function is in polynomials/polynomial_arithmetic.rs
and takes the form:
```rust
pub(crate) fn divide_by_pseudo_vanishing_polynomial(
    &self,
    _coeffs: &[&mut [&mut Fr]],
    _target: &EvaluationDomain<'a, Fr>,
    _num_roots_cut_out_of_vanishing_poly: usize,
) {
    unimplemented!()
}
```
I am not sure exactly what this function should be doing mathematically.
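For whoever picks this up, a hedged reading of the math (based on how Plonk-style provers generally work, not on barretenberg's source): the "pseudo" vanishing polynomial is the subgroup vanishing polynomial X^n - 1 with the last num_roots_cut_out roots divided out, i.e. Z_H*(X) = (X^n - 1) / prod_j (X - w^(n-1-j)), and the division happens on a coset g*H where Z_H* never vanishes. A toy sketch over F_17 with k = 1:

```rust
// Toy sketch over F_17: evaluate Z_H*(x) = (x^n - 1) / (x - w^(n-1)),
// the vanishing polynomial of H = <w> with its last root cut out (k = 1).
// On a coset g*H (g not in H) this is never zero, so dividing quotient
// evaluations by it pointwise is well-defined.
const P: u64 = 17;

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

fn inv(a: u64) -> u64 { pow_mod(a, P - 2) }

fn pseudo_vanishing_eval(x: u64, n: u64, w: u64) -> u64 {
    let num = (pow_mod(x, n) + P - 1) % P;     // x^n - 1
    let den = (x + P - pow_mod(w, n - 1)) % P; // x - w^(n-1)
    num * inv(den) % P
}

fn main() {
    let (n, w, g) = (4, 4, 3); // w is a primitive 4th root of unity mod 17
    assert_eq!(pow_mod(w, n), 1);
    // Z_H* is nonzero everywhere on the coset g*H.
    for i in 0..n {
        let x = g * pow_mod(w, i) % P;
        assert_ne!(pseudo_vanishing_eval(x, n, w), 0);
    }
    assert_eq!(pseudo_vanishing_eval(3, 4, 4), 9);
}
```

The real function presumably does this batch-inverted over the 4n coset evaluation domain; the sketch only pins down the formula being divided by.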
There is a TODO here about "being super incorrect" inside of fft_inner_parallel.
Low priority given this is to do with parallelization.
Inside of compute_permutation_lagrange_base_single_helper()
we have a few TODOs
```rust
// ~ snip
if permutation[i].is_public_input {
    // TODO: Replace with correct external_coset_generator function
    output.coefficients[i] *= external_coset_generator::<Fr>();
} else if permutation[i].is_tag {
    // TODO: Replace with correct tag_coset_generator function
    output.coefficients[i] *= tag_coset_generator::<Fr>();
} else {
    let column_index = permutation[i].column_index;
    if column_index > 0 {
        // TODO: Replace with correct coset_generator function
        output.coefficients[i] *= coset_generator::<Fr>(column_index - 1);
    }
}
// ~ snip
```
Further help is needed here to clarify how this should change.
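For context on what these generators need to satisfy (a sketch of the general Plonk requirement, not barretenberg's actual hard-coded constants): the permutation argument labels each wire column by mapping the subgroup H into a distinct multiplicative coset k_i * H, so the only hard constraint on the k_i is that the cosets are pairwise disjoint. A toy check over F_17:

```rust
// Toy sketch over F_17: coset generators k_i must place each column in a
// distinct coset k_i * H of H = <w>, so permutation labels from different
// columns never collide. The k_i here are illustrative, not barretenberg's.
const P: u64 = 17;

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

// The coset k * H, listed as k * w^i for i in 0..n.
fn coset(k: u64, w: u64, n: u64) -> Vec<u64> {
    (0..n).map(|i| k * pow_mod(w, i) % P).collect()
}

fn main() {
    let (n, w) = (4, 4); // H = <4> = {1, 4, 16, 13} in F_17
    let ks = [1u64, 2, 3]; // candidate coset generators
    let cosets: Vec<Vec<u64>> = ks.iter().map(|&k| coset(k, w, n)).collect();
    // Pairwise disjoint: no element appears in two cosets.
    for a in 0..cosets.len() {
        for b in a + 1..cosets.len() {
            for x in &cosets[a] {
                assert!(!cosets[b].contains(x));
            }
        }
    }
}
```

The open question for the TODOs above is only which concrete constants barretenberg uses for coset_generator, tag_coset_generator, and external_coset_generator, which should come from its source rather than being re-derived.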
This will give you a good look into the transcripts, Fiat-Shamir, and Pedersen commitments. I don't think it'll be too hard; most of this, except the Pedersen part, is done.
commit_native_with_multiple_indices
commit_native
compress_native_array
compress_native_buffer_to_field
compress_native
compress_native_with_multiple_indices
compress_native_index
merkle_damgard_tree_compress
merkle_damgard_compress_with_multiple_ivs
merkle_damgard_compress
zero_one
endomorphism_test
hash_single
hash_pair
merkle_damgard_compress
merkle_damgard_compress_multiple_iv
merkle_damgard_tree_compress
Get them passing. Get the univariate_serialization and validate_transcript tests written and passing. After this, the batch_commit test from the commitments bounty should be tackleable.
Lots of RNGs get used in lots of threads. These are sort of ad hoc; there's no way to do deterministic tests.
This is bad.
Figure out the right thing to do about this.
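One workable direction (a sketch under the assumption that each worker can be handed an index; names like per_thread_stream are hypothetical): derive every thread's RNG deterministically from a single test seed plus the worker index, for example with a tiny splitmix64-style generator, so parallel runs reproduce exactly.

```rust
// Sketch: deterministic per-thread RNG streams derived from
// (test_seed, thread_idx) via splitmix64. A real implementation would
// likely seed the rand crate's SeedableRng the same way; the point is
// that seeds are derived, not ad hoc.
fn splitmix64(state: &mut u64) -> u64 {
    *state = state.wrapping_add(0x9E37_79B9_7F4A_7C15);
    let mut z = *state;
    z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
    z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
    z ^ (z >> 31)
}

// Each thread's stream is fully determined by the two seeds.
fn per_thread_stream(test_seed: u64, thread_idx: u64, len: usize) -> Vec<u64> {
    let mut state = test_seed ^ thread_idx.wrapping_mul(0x9E37_79B9_7F4A_7C15);
    (0..len).map(|_| splitmix64(&mut state)).collect()
}

fn main() {
    // Re-running with the same seeds reproduces the exact sequence...
    assert_eq!(per_thread_stream(42, 0, 8), per_thread_stream(42, 0, 8));
    // ...while different threads get different streams.
    assert_ne!(per_thread_stream(42, 0, 8), per_thread_stream(42, 1, 8));
}
```

With rayon, the worker index could come from the parallel iterator's enumerate() position rather than the OS thread, which keeps results independent of the thread pool's scheduling.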