
aztec-packages's Issues

Should generate real note nullifier.

On receiving a note, AccountState should get the nullifier for the note by calling the simulator with a view function ABI.
Later, when the same nullifier is included in a mined block, we know the note has been spent and can delete it from the db.
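A hedged sketch of the flow described above; the view-function name (`compute_nullifier`) and the Note/Simulator shapes are assumptions, and plain numbers stand in for field elements:

```typescript
interface Note {
  commitment: number;
  nullifier?: number;
}

interface Simulator {
  // Runs a view (read-only) function from the contract ABI.
  viewCall(fnName: string, args: number[]): Promise<number>;
}

class AccountState {
  private notes: Note[] = [];

  constructor(private simulator: Simulator) {}

  // On receiving a note, ask the simulator for its real nullifier.
  async addNote(note: Note): Promise<void> {
    note.nullifier = await this.simulator.viewCall('compute_nullifier', [note.commitment]);
    this.notes.push(note);
  }

  // When a mined block contains a known nullifier, the note was spent: delete it.
  handleMinedNullifiers(minedNullifiers: number[]): void {
    const mined = new Set(minedNullifiers);
    this.notes = this.notes.filter(n => n.nullifier === undefined || !mined.has(n.nullifier));
  }

  getNotes(): Note[] {
    return this.notes;
  }
}
```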

Engineer can stop the archiver

The archiver should expose the ability to stop the process of retrieving/archiving blocks. Once stop has been called, the status interface should update its 'state' value accordingly.
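A minimal sketch of what stop() plus the status interface could look like; the class and method names here are illustrative, not the real archiver API:

```typescript
type ArchiverState = 'syncing' | 'stopped';

class Archiver {
  private state: ArchiverState = 'syncing';
  private timer?: ReturnType<typeof setInterval>;

  // Poll for new blocks on an interval while running.
  start(): void {
    this.timer = setInterval(() => this.retrieveBlocks(), 1000);
  }

  // Stop retrieving blocks and reflect that in the status.
  stop(): void {
    if (this.timer) clearInterval(this.timer);
    this.state = 'stopped';
  }

  getStatus(): { state: ArchiverState } {
    return { state: this.state };
  }

  private retrieveBlocks(): void {
    // Fetch and archive new L2 blocks (elided in this sketch).
  }
}
```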

Engineer should be able to import a3 types from aztec3-circuits.

Common a3 types should be defined and exported from aztec3-circuits since they reflect what the circuits require, and should be maintained in that repo.

We expect to use the following types in AztecRPCServer:

interface PrivateCircuitPublicInputs {}
interface CallContext {}
interface TxRequest {}
interface FunctionData {}

class Fr {}
class VerificationKey {}
class AztecAddress {}
class Signature {}

Constants should also be defined there:

const INPUT_ARGS_LENGTH
const RETURN_VALUES_LENGTH

** The above lists are not complete.

CURRENTLY BEING DONE BY SANTIAGO

interface TxContext {}
interface ContractDeploymentData {}

ANVIL is required when running tests

Problem

Running the integration tests from scratch requires Anvil.

The error is as follows:

% yarn test:integration                                                           ~/aztec3-packages/yarn-project/end-to-end master mainframe
[anvil] /bin/sh: 1: anvil: not found
[anvil] anvil exited with code 127
--> Sending SIGTERM to other processes..
[test] yarn test:integration:run exited with code SIGTERM

Solution

To fix, I downloaded foundryup and installed ANVIL by running the foundryup script.

See: https://github.com/foundry-rs/foundry/blob/master/README.md#installation which will instruct you to run:

curl -L https://foundry.paradigm.xyz | bash 

Provide command line tool for interacting with the network.

Provide a command line tool, aztec-cli, as a wrapper around aztec.js with higher-level APIs for a better CLI experience.

For the first milestone, developers can use it to deploy a contract:

aztec-cli deploy_contract path/to/abi.json [portalContractAddress] [contractAddressSalt] [...constructorArgs]

deploy_contract does the following under the hood:

const abi = JSON.parse(fs.readFileSync('path/to/abi.json', 'utf8'));
const client = new HTTPAztecRPCClient(aztecRpcServerUrl);
const deployer = new ContractDeployer(abi, client);
const txHash = await deployer.deploy(...constructorArgs).send(txOptions).getTxHash();

The terminal will print out the contract address and txHash.

Developers can get the tx receipt from the txHash:

aztec-cli getTxReceipt txHash

Configuration

The server url and txOptions will be fetched from CLI options or from environment variables:

export AZTEC_RPC_SERVER_URL=localhost:1234
aztec-cli deploy_contract path/to/abi.json 0xabc...132  --aztecRpcServerUrl localhost:45730

It will check options first, then env vars. So from running the above snippet, the url will be localhost:45730.
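The precedence rule can be sketched as a small helper; resolveServerUrl is a hypothetical name, and in the real CLI the env value would come from process.env.AZTEC_RPC_SERVER_URL:

```typescript
const DEFAULT_URL = 'localhost:45730';

// CLI option wins over the environment variable, which wins over the default.
function resolveServerUrl(optionUrl?: string, envUrl?: string): string {
  return optionUrl ?? envUrl ?? DEFAULT_URL;
}
```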

txOptions has one optional value for now:

interface TxOptions {
  from?: AztecAddress;
}

If from is not provided, it will be the first account in the key store.

Reference

Check out Commander for building this command line tool.

getTxReceipt should return correct status.

There's a bug in the current API: if a tx does not belong to any account in the aztec rpc server and it's not in the tx pool, its status is marked as DROPPED, but it's more likely that it has been settled.

Solution 1

We should be able to query settled txs from the aztec node to check whether a tx has been mined. The API would also be useful if we want to query a specific tx and show it in an explorer.

Solution 2

We don't allow users to query a tx receipt unless they are the sender or the recipient. They don't get much information from the receipt anyway: they won't be able to see the from and to addresses.
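For Solution 1, the corrected status decision could look roughly like this; the function name, lookup shape, and status values are assumptions:

```typescript
type TxStatus = 'PENDING' | 'MINED' | 'DROPPED';

interface TxLookup {
  inTxPool: boolean;    // the tx pool currently holds this tx
  minedOnNode: boolean; // the aztec node reports the tx as settled
}

function resolveTxStatus(lookup: TxLookup): TxStatus {
  // Query the node first (Solution 1): a settled tx must never be
  // reported as dropped just because this rpc server doesn't track it.
  if (lookup.minedOnNode) return 'MINED';
  if (lookup.inTxPool) return 'PENDING';
  // Only after both checks fail is DROPPED a safe conclusion.
  return 'DROPPED';
}
```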

Include functionTreeRoot of new contracts when calculating tx hash

We currently hash only the aztec and portal addresses (along with the data commitments and nullifiers) as part of the tx hash, but not the function tree root, since that would require importing pedersen into every module that depends on tx. We need to decide whether to accept that dependency, or change the L2 block structure so it includes the function tree root as well.

See this conversation for more details.
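To illustrate the proposed change, here is a sketch of the tx hash preimage with the function tree root included. sha256 stands in for pedersen, and all field names are assumptions:

```typescript
import { createHash } from 'crypto';

interface NewContractData {
  aztecAddress: Buffer;
  portalAddress: Buffer;
  functionTreeRoot: Buffer; // newly added to the preimage
}

// Hash commitments, nullifiers, and per-contract data (including the
// function tree root) into a single tx hash.
function hashTx(commitments: Buffer[], nullifiers: Buffer[], contracts: NewContractData[]): Buffer {
  const h = createHash('sha256');
  commitments.forEach(c => h.update(c));
  nullifiers.forEach(n => h.update(n));
  contracts.forEach(c => {
    h.update(c.aztecAddress);
    h.update(c.portalAddress);
    h.update(c.functionTreeRoot);
  });
  return h.digest();
}
```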

Provide an AztecRPCServer to deploy a contract.

Interface

The interface is named AztecRPCClient rather than AztecRPCServer because the client is what developers will interact with. An AztecRPCServer should implement the client interface so that every capability of a client is also supported by a server.

interface AztecRPCClient {
  addAccount(): Promise<AztecAddress>;
  getAccounts(): Promise<AztecAddress[]>;
  getCode(contract: AztecAddress): Promise<Buffer>;
  createDeploymentTxRequest(
    bytecode: Buffer,
    args: Fr[],
    portalContract: EthAddress,
    contractAddressSalt: Fr,
    from: AztecAddress,
  ): Promise<TxRequest>;
  signTxRequest(txRequest: TxRequest, account: AztecAddress): Promise<Signature>;
  createTx(txRequest: TxRequest, signature: Signature): Promise<Tx>;
  sendTx(tx: Tx): Promise<TxHash>;
}

** This is not a complete interface.

Implementation

We should provide an AztecRPCServer implementation:

class AztecRPCServer implements AztecRPCClient {
  constructor(
    private keyStore: KeyStore,
    private synchroniser: Synchroniser,
    private simulator: Simulator,
    private proofGenerator: ProofGenerator,
    private aztecNode: AztecNode,
  ) {}
}

And a function to create an AztecRPCServer with required components:

function createAztecRPCServer({ keyStore, synchroniser, simulator, proofGenerator, node } = {}) {
  keyStore = keyStore || new TestKeyStore();
  synchroniser = synchroniser || new Synchroniser();
  simulator = simulator || new Simulator();
  proofGenerator = proofGenerator || new ProofGenerator();
  node = node || new AztecNode();
  return new AztecRPCServer(keyStore, synchroniser, simulator, proofGenerator, node);
}
  • addAccount, getAccounts, and signTxRequest are called on the given KeyStore.
  • createDeploymentTxRequest should use the Simulator to construct the TxRequest.
  • createTx should use the ProofGenerator to create proof data.
  • sendTx should send tx to AztecNode.

For the first few milestones, a Synchroniser does nothing special. All required data can be fetched from an AztecNode.

Provide a json-rpc package to run a server for a given implementation.

When creating a component for a3, it should be developed as a TypeScript class that can be imported and used by any other component. If we want it to run as a server, we can use this package to initialise a JSON-RPC server for that component.

Running a wallet server:

import { JsonRpcServer } from '@aztec/json-rpc';
import { WalletImplementation } from '@aztec/wallet';

const wallet = new WalletImplementation();
const server = new JsonRpcServer(wallet);
server.start(8080);

In Dapp:

const wallet = new JsonRpcClient<Wallet>('wallet.com');
const signature = await wallet.signTxRequest(accountPubKey, txRequest);

The JsonRpcClient will convert supported custom types to json compatible types under the hood.
For example, for the method signTxRequest(accountPubKey: PublicKey, txRequest: TxRequest): Promise<Buffer>,
it will send [{ name: 'PublicKey', value: accountPubKey.toString() }, { name: 'TxRequest', value: txRequest.toString() }] to the server.
And on receiving result from the server, it converts { name: 'Buffer', value: '0x123...abc' } to Buffer.from('123...abc', 'hex').
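A sketch of that conversion with only Buffer registered; the real converter would presumably support the full set of custom types via a lookup table:

```typescript
type WireValue = { name: string; value: string };

// Encode a supported custom type into a json-compatible wire value.
function toWire(value: unknown): WireValue {
  if (Buffer.isBuffer(value)) {
    return { name: 'Buffer', value: '0x' + value.toString('hex') };
  }
  throw new Error('Unsupported type');
}

// Decode a wire value back into its custom type.
function fromWire(wire: WireValue): unknown {
  if (wire.name === 'Buffer') {
    return Buffer.from(wire.value.replace(/^0x/, ''), 'hex');
  }
  throw new Error(`Unknown type: ${wire.name}`);
}
```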

In a wallet implementation, we can create different clients for different services:

const publicClient = new JsonRpcClient<PublicClient>('public-client.com');
const keyStore = new JsonRpcClient<KeyStore>('key-store.com');

Document usage of zero when writing memory

Problem

There are parts in the code which write to wasm memory without allocating such as here.

This is a possible footgun as you can only write 1024 bytes of data there. If you write more than this, I assume you would start corrupting either the stack or data on the heap.

Solution

Document the usage of 0 and this limitation. It may make more sense to add a zeroPtr function that documents it, and then stop using the 0 magic constant when writing to WASM memory. That way, this edge case is documented in one place.
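A sketch of the proposed helper, here called writeToScratch; the 1024-byte limit is taken from the problem statement above, and the WasmModule shape is an assumption:

```typescript
// The region starting at address 0 is a small scratch space; writes past
// this limit risk corrupting the stack or heap.
const SCRATCH_SPACE_SIZE = 1024;

interface WasmModule {
  writeMemory(offset: number, data: Uint8Array): void;
}

// Replaces the bare `0` magic constant: documents the scratch region and
// refuses writes that would spill past it.
function writeToScratch(wasm: WasmModule, data: Uint8Array, offset = 0): number {
  if (offset + data.length > SCRATCH_SPACE_SIZE) {
    throw new Error(`scratch write of ${data.length} bytes at ${offset} exceeds ${SCRATCH_SPACE_SIZE}`);
  }
  wasm.writeMemory(offset, data);
  return offset;
}
```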

Provide an HTTPAztecRPCClient that connects to an HTTP server.

The client takes a server url in constructor and calls the server for each method call:

class HTTPAztecRPCClient implements AztecRPCClient {
  constructor(serverUrl: string) {}

  sendTx(tx) {
    return this.server.post('aztec_sendTx', [tx]);
  }
}

Default serverUrl: localhost:45730

We should be able to generate the client and the server using the json-rpc package, which will be moved to foundation later.

Provide contract classes to deploy and interact with contracts.

A contract deployer class to deploy contract:

const deployer = new ContractDeployer(abi, aztecRPCClient);
const txHash = await deployer.deploy(...args).send(options).getTxHash();

It should throw an error if the formatting of the abi isn’t as expected. E.g.:

  • No functions
  • No function names
  • Parameters missing types
  • Unexpected types
  • No ACIR bytecode
  • No constructor function (all contracts must have a constructor: even if it’s only a no-op constructor).
  • No ACIR version
  • No Noir Compiler version / commit hash
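A hedged sketch of what that validation could look like; the ContractAbi shape is an assumption about the compiler output, and the ACIR/compiler-version checks are omitted for brevity:

```typescript
interface FunctionAbi {
  name?: string;
  parameters?: { name: string; type?: string }[];
  bytecode?: string; // ACIR
}

interface ContractAbi {
  functions?: FunctionAbi[];
}

// Throw on the malformed-ABI cases listed above.
function validateAbi(abi: ContractAbi): void {
  if (!abi.functions || abi.functions.length === 0) throw new Error('ABI has no functions');
  for (const fn of abi.functions) {
    if (!fn.name) throw new Error('ABI function is missing a name');
    if (!fn.bytecode) throw new Error(`Function ${fn.name} is missing ACIR bytecode`);
    for (const p of fn.parameters ?? []) {
      if (!p.type) throw new Error(`Parameter ${p.name} of ${fn.name} is missing a type`);
    }
  }
  if (!abi.functions.some(fn => fn.name === 'constructor')) {
    throw new Error('Contract must have a constructor (even a no-op one)');
  }
}
```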

A contract class to interact with a contract:

const token = new Contract(address, abi, aztecRPCClient);
const receipt = await token.methods.mint(100n).send(options).getReceipt();

Contract Method APIs

This is the object that is returned when calling a contract method, e.g. contract.methods.myMethod(arg1, arg2).

interface ContractMethod<Return = any> {
  call(options: CallOptions): Promise<Return>;
  send(options: SendOptions): Promise<TransactionReceipt>;
  estimateGas(options: CallOptions): Promise<number>;
  encodeAbi(): Promise<Buffer>;
}

CallOptions and SendOptions have one optional value for now:

interface CallOptions {
  from?: AztecAddress;
}

If from is not provided, it will be the first account in the key store.
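The fallback can be sketched as follows, with types simplified to stand-ins:

```typescript
type AztecAddress = string;

interface CallOptions {
  from?: AztecAddress;
}

interface KeyStore {
  getAccounts(): Promise<AztecAddress[]>;
}

// Use the explicit `from` if given, else the first key-store account.
async function resolveFrom(options: CallOptions, keyStore: KeyStore): Promise<AztecAddress> {
  if (options.from) return options.from;
  const [first] = await keyStore.getAccounts();
  if (!first) throw new Error('Key store has no accounts');
  return first;
}
```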

TBD

  • Interface of the abi.
  • Interface of the TransactionReceipt.

Provide a simple KeyStore implementation.

Provide a simple TestKeyStore that generates random keys.

KeyStore Interface

interface KeyStore {
  addAccount(): Promise<AztecAddress>;
  getAccounts(): Promise<AztecAddress[]>;
  getSigningPublicKeys(): Promise<AztecAddress[]>;
  signTxRequest(txRequest: TxRequest, address: AztecAddress): Promise<Signature>;
}

** This is not a complete interface. We only need APIs for the first few milestones for now.

A default account key and signing key should be generated upon creation:

class TestKeyStore {
  private accounts: AztecAddress[] = [];

  constructor() {
    this.accounts.push(randomAccount);
  }
}
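A fuller sketch of TestKeyStore with purely illustrative key and address generation; the real derivation scheme is not shown here:

```typescript
import { randomBytes } from 'crypto';

class TestKeyStore {
  private accounts: { address: string; privateKey: Buffer; signingKey: Buffer }[] = [];

  constructor() {
    // A default account key and signing key are generated upon creation.
    this.addAccount();
  }

  addAccount(): string {
    const privateKey = randomBytes(32);
    const signingKey = randomBytes(32);
    // Illustrative address derivation, not the real scheme.
    const address = '0x' + randomBytes(20).toString('hex');
    this.accounts.push({ address, privateKey, signingKey });
    return address;
  }

  getAccounts(): string[] {
    return this.accounts.map(a => a.address);
  }
}
```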

A developer can build a calldata "block"

Part of https://github.com/AztecProtocol/aztec3-milestones/issues/18

A developer should be able to build the payload for a rollup, given that they have the current state.

type AppendOnlyTreeSnapshot = { 
    root: fr,
    next_available_leaf_index: uint32, // type tbd
}

type CalldataPreimage = {
    rollup_id: u32,

    start_private_data_tree_snapshot: AppendOnlyTreeSnapshot,
    start_tree_of_historic_private_data_tree_roots_snapshot: AppendOnlyTreeSnapshot,
    start_nullifier_tree_snapshot: AppendOnlyTreeSnapshot,
    start_contract_tree_snapshot: AppendOnlyTreeSnapshot,
    start_tree_of_historic_contract_tree_roots_snapshot: AppendOnlyTreeSnapshot,
    
    end_private_data_tree_snapshot: AppendOnlyTreeSnapshot,    
    end_nullifier_tree_snapshot: AppendOnlyTreeSnapshot,    
    end_contract_tree_snapshot: AppendOnlyTreeSnapshot,    
    end_tree_of_historic_private_data_tree_roots_snapshot: AppendOnlyTreeSnapshot,    
    end_tree_of_historic_contract_tree_roots_snapshot: AppendOnlyTreeSnapshot,

    new_commitments: Array<fr, ?>,
    new_nullifiers: Array<fr, ?>,
    new_contracts: Array<fr, ?>,
    new_contract_data: Array<{az_address, eth_address}, ?>,
}

Engineer can stop the P2P module

The P2P module should be able to be stopped. Once stopped, it should indicate via its status interface that it is stopped, and should no longer sync to new rollups.

Engineer can see when the archiver has synced with rollup contract

The archiver should expose a status interface that gives information pertaining to its sync state. This interface should report its progress in syncing with the rollup contract, and indicate its current status via the 'state' value.

Check if inputs to PedersenCompress are 32 bytes

Observation

There is a TODO here with a question mark about bounds checks.

Remark

This should indeed check that lhs and rhs are 32 bytes, since that assumption is used in the next lines:

 wasm.writeMemory(0, lhs);
 wasm.writeMemory(32, rhs);

If lhs is more than 32 bytes, then writing rhs would overwrite memory that was written for lhs. The pointer for the result is also at offset 64, which assumes rhs is 32 bytes.
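A sketch of the recommended guard; the WasmModule shape and the exported function name are assumptions:

```typescript
interface WasmModule {
  writeMemory(offset: number, data: Uint8Array): void;
  call(name: string, ...args: number[]): number;
}

function pedersenCompress(wasm: WasmModule, lhs: Uint8Array, rhs: Uint8Array): void {
  // The memory layout below (lhs at 0, rhs at 32, result at 64) only
  // holds if both inputs are exactly 32 bytes.
  if (lhs.length !== 32 || rhs.length !== 32) {
    throw new Error(`pedersenCompress expects 32-byte inputs, got ${lhs.length}/${rhs.length}`);
  }
  wasm.writeMemory(0, lhs);
  wasm.writeMemory(32, rhs);
  // 'pedersen_compress' is a placeholder for the real wasm export name.
  wasm.call('pedersen_compress', 0, 32, 64);
}
```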
