aztecprotocol / aztec-packages
License: Apache License 2.0
Confirmed by @benesjan
To work around this, set up nvm globally and ensure the '. ~/.nvm/nvm.sh' line does not run.
The world state synchroniser should offer a status interface through which it is possible to see sync progress and the current 'state' value.
const deployer = new ContractDeployer(abi, aztecRPCClient);
const txHash = await deployer.deploy(...args).send(options).getTxHash();
It should throw an error if the formatting of the abi isn’t as expected. E.g.:
const token = new Contract(address, abi, aztecRPCClient);
const receipt = await token.methods.mint(100n).send(options).getReceipt();
This is the object that is returned when calling a contract method, e.g. contract.methods.myMethod(arg1, arg2).
interface ContractMethod<Return = any> {
call(options: CallOptions): Promise<Return>;
send(options: SendOptions): Promise<TransactionReceipt>;
estimateGas(options: CallOptions): Promise<number>;
encodeAbi(): Promise<Buffer>;
}
CallOptions and SendOptions have one optional value for now:
interface CallOptions {
from?: AztecAddress;
}
If from is not provided, it will be the first account in the key store.
TransactionReceipt.
Provide a simple TestKeyStore that generates random keys.
interface KeyStore {
addAccount(): Promise<AztecAddress>;
getAccounts(): Promise<AztecAddress[]>;
getSigningPublicKeys(): Promise<AztecAddress[]>;
signTxRequest(txRequest: TxRequest, address: AztecAddress): Promise<Signature>;
}
** This is not a complete interface. We only need APIs for the first few milestones for now.
A default account key and signing key should be generated upon creation:
class TestKeyStore implements KeyStore {
  private accounts: AztecAddress[] = [];

  constructor() {
    // Generate a default account key and signing key upon creation.
    // (The random-account helper shown here is illustrative.)
    this.accounts.push(randomAccount());
  }
}
The interface is named AztecRPCClient instead of AztecRPCServer because the client is what developers will interact with. An AztecRPCServer should implement the interface of a client so that all functionality of a client is supported by a server.
interface AztecRPCClient {
addAccount(): Promise<AztecAddress>;
getAccounts(): Promise<AztecAddress[]>;
getCode(contract: AztecAddress): Promise<Buffer>;
createDeploymentTxRequest(
bytecode: Buffer,
args: Fr[],
portalContract: EthAddress,
contractAddressSalt: Fr,
from: AztecAddress,
): Promise<TxRequest>;
signTxRequest(txRequest: TxRequest, account: AztecAddress): Promise<Signature>;
createTx(txRequest: TxRequest, signature: Signature): Promise<Tx>;
sendTx(tx: Tx): Promise<TxHash>;
}
** This is not a complete interface.
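As a sketch of how a deployment could flow through this interface, end to end: build the request, sign it, create the tx (with proofs), and send it. The stub type aliases below are illustrative placeholders, not the real aztec3-circuits types.

```typescript
// Stub types for illustration only; the real types live in aztec3-circuits.
type AztecAddress = string;
type EthAddress = string;
type Fr = bigint;
type TxRequest = string;
type Signature = string;
type Tx = string;
type TxHash = string;

interface AztecRPCClient {
  getAccounts(): Promise<AztecAddress[]>;
  createDeploymentTxRequest(
    bytecode: Uint8Array,
    args: Fr[],
    portalContract: EthAddress,
    contractAddressSalt: Fr,
    from: AztecAddress,
  ): Promise<TxRequest>;
  signTxRequest(txRequest: TxRequest, account: AztecAddress): Promise<Signature>;
  createTx(txRequest: TxRequest, signature: Signature): Promise<Tx>;
  sendTx(tx: Tx): Promise<TxHash>;
}

// Walk the interface end to end: request -> signature -> tx -> hash.
async function deployThroughRpc(
  rpc: AztecRPCClient,
  bytecode: Uint8Array,
  args: Fr[],
  portal: EthAddress,
  salt: Fr,
): Promise<TxHash> {
  const [from] = await rpc.getAccounts(); // default to the first account
  const txRequest = await rpc.createDeploymentTxRequest(bytecode, args, portal, salt, from);
  const signature = await rpc.signTxRequest(txRequest, from);
  const tx = await rpc.createTx(txRequest, signature);
  return rpc.sendTx(tx);
}
```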
We should provide an AztecRPCServer implementation:
class AztecRPCServer implements AztecRPCClient {
constructor(
private keyStore: KeyStore,
private synchroniser: Synchroniser,
private simulator: Simulator,
private proofGenerator: ProofGenerator,
private aztecNode: AztecNode,
) {}
}
And a function to create an AztecRPCServer with required components:
function createAztecRPCServer({ keyStore, synchroniser, simulator, proofGenerator, node } = {}) {
  keyStore = keyStore || new TestKeyStore();
  synchroniser = synchroniser || new Synchroniser();
  simulator = simulator || new Simulator();
  proofGenerator = proofGenerator || new ProofGenerator();
  node = node || new AztecNode();
  return new AztecRPCServer(keyStore, synchroniser, simulator, proofGenerator, node);
}
addAccount, getAccounts, and signTxRequest are called on the given KeyStore.
createDeploymentTxRequest should use the Simulator to construct the TxRequest.
createTx should use the ProofGenerator to create proof data.
sendTx should send the tx to the AztecNode.
For the first few milestones, a Synchroniser does nothing special. All required data can be fetched from an AztecNode.
The data archiver should expose an interface for querying a range of blocks.
There is a TODO here with a question mark about bounds checks.
There should indeed be a check to ensure lhs and rhs are each 32 bytes, since this assumption is used in the next lines:
wasm.writeMemory(0, lhs);
wasm.writeMemory(32, rhs);
If lhs is more than 32 bytes, then rhs would overwrite memory written for lhs. The pointer for the result is also at offset 64, which assumes that rhs is 32 bytes.
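A minimal sketch of such a guard, assuming a wasm wrapper with the writeMemory signature shown above (the helper name is hypothetical):

```typescript
// Illustrative wasm wrapper shape; only writeMemory is needed here.
interface WasmLike {
  writeMemory(offset: number, data: Uint8Array): void;
}

// Enforce the 32-byte assumption before the two writes.
function writeHashInputs(wasm: WasmLike, lhs: Uint8Array, rhs: Uint8Array): void {
  if (lhs.length !== 32 || rhs.length !== 32) {
    throw new Error(`Expected 32-byte operands, got lhs=${lhs.length} rhs=${rhs.length}`);
  }
  wasm.writeMemory(0, lhs);  // bytes 0..31
  wasm.writeMemory(32, rhs); // bytes 32..63; the result pointer at offset 64 is only safe given the checks above
}
```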
Once started, the P2P module should use its configured rollup source to retrieve rollups and reconcile its tx pool. It should report its current synced rollup number via a status interface, which should also indicate its status via a 'state' value.
There's a bug in the current api: if a tx does not belong to any account in the aztec rpc server, and it's not in the tx pool, its status will be marked as DROPPED. But it's more likely that it has been settled.
We should be able to query settled tx from aztec node to check if a tx has been mined. The api will also be useful if we want to query a specific tx and show it in an explorer.
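A sketch of the proposed lookup order, once settled txs can be queried from the node. The interface and status names below are assumptions for illustration, not the actual API:

```typescript
type TxHash = string;
type TxStatus = 'PENDING' | 'MINED' | 'DROPPED';

// Hypothetical views onto the node and the tx pool.
interface NodeView { isSettled(txHash: TxHash): Promise<boolean>; }
interface PoolView { hasTx(txHash: TxHash): Promise<boolean>; }

// Check settled txs on the node before declaring a tx dropped.
async function getTxStatus(node: NodeView, pool: PoolView, txHash: TxHash): Promise<TxStatus> {
  if (await node.isSettled(txHash)) return 'MINED';
  if (await pool.hasTx(txHash)) return 'PENDING';
  return 'DROPPED'; // unknown to both the node and the pool
}
```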
We don't allow users to query a tx receipt if they are not the sender or the recipient. They don't get much information from the receipt anyway. They won't be able to see the from and to addresses.
The world state synchroniser should offer an interface to retrieve the current synced rollup Id.
We are currently only hashing the aztec and portal addresses (along with the data commitments and nullifiers) as part of the tx hash, but not the function tree root, since that'd require importing pedersen into all modules that depend on tx. We need to decide whether to include that dependency, or change the L2 block structure so it includes the function tree root as well.
See this conversation for more details.
Running the integration tests from scratch requires Anvil.
The error is as follows:
% yarn test:integration ~/aztec3-packages/yarn-project/end-to-end master mainframe
[anvil] /bin/sh: 1: anvil: not found
[anvil] anvil exited with code 127
--> Sending SIGTERM to other processes..
[test] yarn test:integration:run exited with code SIGTERM
To fix, I downloaded foundryup and installed Anvil by running the foundryup script. See https://github.com/foundry-rs/foundry/blob/master/README.md#installation, which will instruct you to run:
curl -L https://foundry.paradigm.xyz | bash
There are parts in the code which write to wasm memory without allocating such as here.
This is a possible footgun as you can only write 1024 bytes of data there. If you write more than this, I assume you would start corrupting either the stack or data on the heap.
Document the usage of 0 and this limitation. It may make more sense to add a zeroPtr function which documents it, and then no longer use the 0 magic constant in the codebase when writing to WASM memory. This way, the edge case is documented in one place.
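One possible shape for such a helper, assuming the 1024-byte scratch-space limit noted above (the names and the constant's home are hypothetical):

```typescript
// The unallocated region at offset 0 is only 1024 bytes; writing past it
// risks corrupting the stack or heap, so centralise the check here.
const WASM_SCRATCH_SPACE_BYTES = 1024;

// Illustrative wasm wrapper shape; only writeMemory is needed here.
interface WasmLike {
  writeMemory(offset: number, data: Uint8Array): void;
}

// Write into the scratch space at offset 0, guarding against overflow.
function writeToScratchSpace(wasm: WasmLike, data: Uint8Array, offset = 0): void {
  if (offset + data.length > WASM_SCRATCH_SPACE_BYTES) {
    throw new Error(`Scratch space overflow: ${offset + data.length} > ${WASM_SCRATCH_SPACE_BYTES}`);
  }
  wasm.writeMemory(offset, data);
}
```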
Once the rollup simulator has returned the rollup proofs, the sequencer state should change to represent something like 'publishing rollup'.
The world state synchroniser should offer an interface to add new leaves to a tree. This interface should return the new root of the tree.
The client takes a server url in constructor and calls the server for each method call:
class HTTPAztecRPCClient implements AztecRPCClient {
  private server: JsonRpcClient;

  constructor(serverUrl: string) {
    this.server = new JsonRpcClient(serverUrl);
  }

  sendTx(tx: Tx) {
    return this.server.post('aztec_sendTx', [tx]);
  }
}
Default serverUrl: localhost:45730
We should be able to generate the client and the server using the json-rpc package, which will be moved to foundation later.
Part of https://github.com/AztecProtocol/aztec3-milestones/issues/18
A developer should be able to generate the payload for a rollup, given the current state.
type AppendOnlyTreeSnapshot = {
root: fr,
next_available_leaf_index: uint32, // type tbd
}
type CalldataPreimage = {
rollup_id: u32,
start_private_data_tree_snapshot: AppendOnlyTreeSnapshot,
start_tree_of_historic_private_data_tree_roots_snapshot: AppendOnlyTreeSnapshot,
start_nullifier_tree_snapshot: AppendOnlyTreeSnapshot,
start_contract_tree_snapshot: AppendOnlyTreeSnapshot,
start_tree_of_historic_contract_tree_roots_snapshot: AppendOnlyTreeSnapshot,
end_private_data_tree_snapshot: AppendOnlyTreeSnapshot,
end_nullifier_tree_snapshot: AppendOnlyTreeSnapshot,
end_contract_tree_snapshot: AppendOnlyTreeSnapshot,
end_tree_of_historic_private_data_tree_roots_snapshot: AppendOnlyTreeSnapshot,
end_tree_of_historic_contract_tree_roots_snapshot: AppendOnlyTreeSnapshot,
new_commitments: Array<fr, ?>,
new_nullifiers: Array<fr, ?>,
new_contracts: Array<fr, ?>,
new_contract_data: Array<{az_address, eth_address}, ?>,
}
Once the rollup transactions have been sent successfully, the sequencer's state should change to represent 'rollup published'.
An engineer can start an instance of the P2P module configured with a TxPool and a rollup source
When creating a component for a3, it should be developed as a typescript class, which can be imported and used in any other components. If we want it to run as a server, we can use this package to initialise a JSON-RPC server for that component.
import { JsonRpcServer } from '@aztec/json-rpc';
import { WalletImplementation } from '@aztec/wallet';
const wallet = new WalletImplementation();
const server = new JsonRpcServer(wallet);
server.start(8080);
const wallet = new JsonRpcClient<Wallet>('wallet.com');
const signature = await wallet.signTxRequest(accountPubKey, txRequest);
The JsonRpcClient will convert supported custom types to json compatible types under the hood.
For example, for the method signTxRequest(accountPubKey: PublicKey, txRequest: TxRequest): Promise<Buffer>, it will send [{ name: 'PublicKey', value: accountPubKey.toString() }, { name: 'TxRequest', value: txRequest.toString() }] to the server.
And on receiving a result from the server, it converts { name: 'Buffer', value: '0x123...abc' } to Buffer.from('123...abc', 'hex').
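A sketch of that wire format and the two conversions. This is an assumed shape for illustration, not the actual @aztec/json-rpc implementation:

```typescript
// Assumed wire representation: a type tag plus a string-encoded value.
type WireValue = { name: string; value: string };

// Outbound: any object with toString() becomes a tagged string.
function toWire(name: string, obj: { toString(): string }): WireValue {
  return { name, value: obj.toString() };
}

// Inbound: a tagged '0x...' hex string becomes a Buffer.
function bufferFromWire(wire: WireValue): Buffer {
  return Buffer.from(wire.value.replace(/^0x/, ''), 'hex');
}
```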
const publicClient = new JsonRpcClient<PublicClient>('public-client.com');
const keyStore = new JsonRpcClient<KeyStore>('key-store.com');
Once stopped, the status interface should update with appropriate states and it should no longer sync to further rollups.
Any intermediate state applied to a Merkle Tree should be discarded when a new rollup is received. The true world state should always take precedence.
For this we will need to instantiate an instance of the private client. In turn, this should create and configure instances of the P2P module, Rollup Archiver, Merkle Tree DB and Sequencer.
As the sequencer has a number of dependencies that independently query rollups, we inevitably have an 'eventually consistent' model. Therefore the sequencer needs to wait until all of its dependencies are up to the latest rollup number before commencing the next rollup.
It should be possible to query the world state synchroniser for the current root and size for any of the trees within the world state db.
When the P2P Module receives a new rollup it should remove any transactions from the pool that are contained in that rollup. This should be reflected in a call to getTxs on the P2P Module.
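A minimal pool sketch of that eviction behaviour; the real TxPool interface may differ, and the names here are illustrative:

```typescript
type Tx = { hash: string };

// In-memory tx pool keyed by tx hash.
class TxPoolSketch {
  private txs = new Map<string, Tx>();

  addTx(tx: Tx): void {
    this.txs.set(tx.hash, tx);
  }

  getTxs(): Tx[] {
    return [...this.txs.values()];
  }

  // Called when a new rollup arrives: drop every tx the rollup contains,
  // so subsequent getTxs() calls no longer return them.
  reconcile(rollupTxHashes: string[]): void {
    for (const hash of rollupTxHashes) this.txs.delete(hash);
  }
}
```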
Subtask for https://github.com/AztecProtocol/aztec3-milestones/issues/17
An engineer should not push uncommented code, especially not for user-facing code, which is most of the code in Aztec. Linter rules should be added to ensure a minimum of comments. We should also use typedoc or similar to generate documentation from the code.
This would improve our use of the ecosystem, as not every package can work with node16. Instead we want to use the new bundler option in typescript 5:
https://devblogs.microsoft.com/typescript/announcing-typescript-5-0/#moduleresolution-bundler
This would fix having to cast certain packages to 'any' to use them in typescript.
The P2P module should be able to be stopped. Once stopped, it should indicate that it is stopped via its status interface and should no longer sync to new rollups.
On receiving a note, AccountState should get the nullifier for the note by calling the simulator with a view function abi. Later, when the same nullifier is included in a mined block, we know the note has been spent and it can be deleted from the db.
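The note lifecycle above can be sketched as a store keyed by nullifier; the class and method names are hypothetical:

```typescript
// Keep each note under its nullifier; delete it once that nullifier
// is observed in a mined block (the note has been spent).
class NoteStoreSketch<Note> {
  private notesByNullifier = new Map<string, Note>();

  addNote(nullifier: string, note: Note): void {
    this.notesByNullifier.set(nullifier, note);
  }

  // Process the nullifiers of a newly mined block.
  handleMinedNullifiers(nullifiers: string[]): void {
    for (const n of nullifiers) this.notesByNullifier.delete(n);
  }

  unspentCount(): number {
    return this.notesByNullifier.size;
  }
}
```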
The sequencer should offer a status interface with a 'state' value. This value should indicate that the sequencer is 'waiting'.
Once a call to stop the P2P Module has been made it should no longer be possible to add transactions to the pool. Requests to add a tx should fail.
Once a tx has been successfully added to the P2P Module, the sequencer state should change to something representing 'creating rollup'. A change to the merkle tree states should also be observed.
The archiver should expose the ability to stop the process of retrieving/archiving blocks. Once this has been called, the status interface should update its 'state' value accordingly.
Common a3 types should be defined and exported from aztec3-circuits, since they reflect what the circuits require, and should be maintained in that repo.
We expect to use the following types in AztecRPCServer:
interface PrivateCircuitPublicInputs {}
interface CallContext {}
interface TxRequest {}
interface FunctionData {}
class Fr {}
class VerificationKey {}
class AztecAddress {}
class Signature {}
Constants should also be defined there:
const INPUT_ARGS_LENGTH
const RETURN_VALUES_LENGTH
** The above lists are not complete.
CURRENT BEING DONE BY SANTIAGO
interface TxContext {}
interface ContractDeploymentData {}
It should be possible to start the data archiver with configuration for an Ethereum host, a rollup contract, a starting block and a persistence interface.
The archiver should expose an interface informing of its current synced rollup number.
We have set up a smaller wasm file that contains only barretenberg cryptographic primitives (see here). Projects that need access to them but not to the full a3 circuits should rely on that wasm to reduce package size.
The P2P Module should offer an interface allowing an external party to query the rollup number it is currently synced to.
The P2P module should enable the engineer to add a tx to the configured tx pool. Once added to the pool, it should be returned when a call to getTxs is made on the P2P Module.
Provide a command line tool aztec-cli, a wrapper around aztec.js that has higher level apis for a better cli experience.
For the first milestone, developers can use it to deploy a contract:
aztec deploy_contract path/to/abi.json [portalContractAddress] [contractAddressSalt] [...constructorArgs]
deploy_contract does the following under the hood:
const abi = JSON.parse(fs.readFileSync('path/to/abi.json', 'utf8'));
const client = new HTTPAztecRPCClient(aztecRpcServerUrl);
const deployer = new ContractDeployer(abi, client);
const txHash = await deployer.deploy(...constructorArgs).send(txOptions).getTxHash();
The terminal will print out the contract address and the txHash.
Developers can get tx receipt from the txHash:
aztec getTxReceipt txHash
Server url and txOptions will be fetched from options or from environment variables:
export AZTEC_RPC_SERVER_URL=localhost:1234
aztec-cli deploy_contract path/to/abi.json 0xabc...132 --aztecRpcServerUrl localhost:45730
It will check options first, then env vars. So from running the above snippet, the url will be localhost:45730.
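That precedence (CLI option, then environment variable, then default) can be sketched as a tiny resolver; the function name is hypothetical:

```typescript
// Resolve the rpc server url: explicit option wins, then the
// AZTEC_RPC_SERVER_URL env var, then the documented default.
function resolveServerUrl(optionUrl?: string): string {
  return optionUrl ?? process.env.AZTEC_RPC_SERVER_URL ?? 'localhost:45730';
}
```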
txOptions has one optional value for now:
interface TxOptions {
from?: AztecAddress;
}
If from is not provided, it will be the first account in the key store.
Check out Commander for building this command line tool.
The world state synchroniser should be able to be 'started' by giving it instances of a rollup source interface and a world state DB interface.
Currently it always uses the sender's public key. The recipient's public key should be included in the execution result for each new note, so that we can use it to create the encrypted data in accountState.createUnverifiedData.
The archiver should expose a status interface that gives information pertaining to its sync state. This interface should inform of its progress in syncing with the rollup contract, and should indicate its current status via a 'state' value.