
rollup-data-availability's Introduction

Rollup Data Availability


Utilising NEAR as a data availability layer, with a focus on lowering rollup DA fees.

Components

This section outlines the components of the project and their purposes.

Blob store contract

This contract provides the store for arbitrary DA blobs. In practice, these "blobs" are sequencing data from rollups, but they can be any data.

NEAR blockchain state storage is pretty cheap. At the time of writing, 100 KiB costs a flat fee of 1 NEAR. To reduce storage costs even further, we don't store the blob data in blockchain state at all.

It works by taking advantage of NEAR consensus around receipts. When a chunk producer processes a receipt, there is consensus around the receipt. However, once the chunk has been processed and included in the block, the receipt is no longer required for consensus and can be pruned. Receipts are retained for at least 3 NEAR epochs, where each epoch is 12 hours; in practice, this is around five epochs. Once the receipt has been pruned, it is the responsibility of archival nodes to retain the transaction data, and we can even get the data from indexers.

By checking the blob commitment, we can validate that a blob retrieved from ecosystem actors matches the format in which it was submitted. The current commitment scheme is not the most efficient and will be improved, but it has the benefit that anybody can build it with limited expertise and tooling. It is created by taking a blob, chunking it into 256-byte pieces, and building a Merkle tree where each leaf is the SHA-256 hash of a chunk. The root of the Merkle tree is the blob commitment, which is provided as [transaction_id ++ commitment] to the L1 contract, 64 bytes in total.

What this means:

  • consensus is provided around the submission of a blob by NEAR validators
  • the function input data is stored by full nodes for at least three days
  • archival nodes can store the data for longer
  • we don't occupy consensus with more data than it needs
  • indexers can also be used, and this data is currently indexed by all major NEAR explorers
  • the commitment is available for a long time and is straightforward to create

Light client

A trustless off-chain light client for NEAR with DA-enabled features, such as KZG commitments, Reed-Solomon erasure coding & storage connectors.

The light client provides easy access to transaction and receipt inclusion proofs within a block or chunk. This is useful for checking any dubious blobs which may not have been submitted or validating that a blob has been submitted to NEAR.

A blob submission can be verified by:

  • take the NEAR transaction ID from Ethereum for the blob commitment
  • ask the light client for an inclusion proof for the transaction ID (or the receipt ID, if you want to be specific); this gives you a Merkle inclusion proof for the transaction/receipt
  • once you have the inclusion proof, you can ask the light client to verify it for you, or advanced users can verify it manually
  • armed with this, rollup providers can build advanced integrations with light clients and proving systems around them

In the future, we will provide extensions to light clients such that non-interactive proofs can be supplied for blob commitments and other data availability features.

It's also possible that the light client may be on-chain for the header syncing and inclusion proof verification, but this is a low priority right now.

TODO: write and draw up extensions to the light client and draw an architecture diagram

DA RPC Client

This client is the de facto client for submitting blobs to NEAR. These crates allow a client to interact with the blob store. It can be treated as a "black box", where blobs go in, and [transaction_id ++ commitment] emerges.

The da-rpc crate is the Rust client, which anyone can use if they prefer Rust in their application. The responsibility of this client is to provide a simple interface for interacting with NEAR DA.

The da-rpc-sys crate is the FFI client binding for use by non-Rust applications. It calls through to da-rpc to interact with the blob store, with some additional black-box functionality for pointer wrangling and the like.

The da-rpc-go package contains the Go client bindings for non-Rust applications; it calls through to da-rpc-sys and provides another application-level layer for easy interaction with the bindings.

Integrations

We have some proof-of-concept work for integrating with other rollups. We are working to prove the system's capabilities and provide reference implementations for others to follow. They are being actively developed, so they are in a state of flux.

We know that each rollup has different features and capabilities, even if they are built on the same SDK. The reference implementations are not necessarily "production grade"; they serve as inspiration to help integrators make use of NEAR DA in their systems. Our ultimate goal is to make NEAR DA as pluggable as any other tool you might use. This means our heavy focus is on proving, submission, and making storage as fair as possible.

Architecture diagrams can be viewed in this directory.

OP Stack

https://github.com/near/optimism

We have integrated with the Optimism OP Stack, utilising the batcher for submissions to NEAR and the proposer for submitting NEAR commitment data to Ethereum.

CDK Stack

TODO: move this

https://github.com/firatNEAR/cdk-validium-node/tree/near

We have integrated with the Polygon CDK stack, utilising the Sequence Sender for submissions to NEAR.

Arbitrum Nitro

https://github.com/near/nitro

We have integrated a small plugin into the DAC daserver. This is much like our HTTP sidecar and provides a very modular integration into NEAR DA whilst supporting Arbitrum DACs. In the future, this will likely be the easiest way to support NEAR DA, as it acts as an independent sidecar which can be scaled as needed. This also means that a DAC can opt in and out of NEAR DA, lowering its infrastructure burden. With this approach, the DAC committee members just need a "dumb" signing service, with the store backed by NEAR.

👷🚧 Integrating your own rollup 🚧👷

The aim of NEAR DA is to be as modular as possible.

If implementing your own rollup, it should be fairly straightforward, assuming you can utilise da-rpc or da-rpc-go (with some complexity here). All the implementations so far have been different, but the general rules have been:

  • find where the sequencer normally posts batch data (for Optimism it was the batcher; for CDK it's the Sequence Sender) and plug the client in.
  • find where the sequencer needs commitments posted (for Optimism it was the proposer; for CDK the synchronizer) and hook the blob reads from the commitment there.

The complexity depends on how pluggable the commitment data is in the contracts. If you can simply add a field, great! But these waters are mostly uncharted.

If your rollup does anything additional, feel free to hack, and we can try to reach the goal of NEAR DA being as modular as possible.

Getting started

Makefiles are floating around, but here's a rundown of how to start with NEAR DA.

Prerequisites

Rust, Go, cmake & friends should be installed. Please look at flake.nix#nativeBuildInputs for a list of required installation items. If you use Nix, you're in luck! Just run direnv allow, and you're good to go.

Ensure you have set up near-cli. For the Makefiles to work correctly, you need the near-cli-rs version of NEAR CLI. Make sure you set up some keys for your contract; the documentation above should help. You can write these down or query them from ~/.near-credentials/** later.

If you didn't clone with submodules, sync them: make submodules

Note: there are some semantic differences between near-cli-rs and near-cli-js. Notably, keys generated with near-cli-js used to have an account_id key in the JSON object. This is omitted in near-cli-rs because it's already in the filename, but some applications require the field, so you may need to add it back in.
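For example, a credentials file written by near-cli-rs can be patched by adding the account_id field back in. The account name and key values below are placeholders:

```json
{
  "account_id": "example.testnet",
  "public_key": "ed25519:<public key>",
  "private_key": "ed25519:<private key>"
}
```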

If using your own contract

If you're using your own contract, you have to build the contract yourself and make sure you set the keys.

To build the contract:

make build-contracts

The contract will now be in ./target/wasm32-unknown-unknown/release/near_da_blob_store.wasm.

Now to deploy: once you've decided where you want to deploy to and have permission to do so, set $NEAR_CONTRACT to the address you want to deploy to and sign with. Advanced users can take a look at the command and adjust it as they see fit.

Next up: make deploy-contracts

Don't forget to update your .env file for DA_KEY, DA_CONTRACT and DA_ACCOUNT for use later.

If deploying optimism

First clone the repository

Configure ./ops-bedrock/.env.example. It just needs copying without the .example suffix and adding the keys, contract address and signer from your NEAR wallet; it should then work out of the box.

If deploying optimism on arm64

To standardize the builds for da-rpc-sys and genesis, you can use a docker image.

  • da-rpc-sys-unix: copies the da-rpc-sys-docker generated libraries to the gopkg/da-rpc folder.
  • op-devnet-genesis-docker: creates a docker image used to generate the genesis files.
  • op-devnet-genesis: generates the genesis files in a docker container and puts them in the .devnet folder.

make op-devnet-up

This should build the docker images and deploy a local devnet for you.

Once up, observe the logs

make op-devnet-da-logs

You should see "got data from NEAR" and "submitting to NEAR" in the logs.

Of course, to stop

make op-devnet-down

If you just want to get up and running and have already built the docker images using something like make bedrock images, there is a docker-compose-testnet.yml in ops-bedrock you can play with.

If deploying polygon CDK

First clone the repository

Now we have to pull the docker image containing the contracts.

make cdk-images

Why is this different to the OP stack?

When building the contracts in cdk-validium-contracts, it does a little more than build contracts. It creates a local eth devnet, deploys the various components (CDKValidiumDeployer & friends), then generates the genesis and posts it to L1 at some arbitrary block. The block number the L2 genesis gets posted at is non-deterministic, and it is fed into the genesis config in cdk-validium-node/tests. For this reason we want an out-of-the-box deployment, so using a pre-built docker image is incredibly convenient.

It's fairly reasonable that, when scanning for the original genesis, we could just query a range of blocks between 0..N for the genesis data. However, this feature doesn't exist yet.

Once the image is downloaded (or, for advanced users, built with a modified genesis config for tests), we need to configure an env file again. The example envfile is at ./cdk-stack/cdk-validium-node/.env.example and should be updated with the respective variables as above.

Now we can just do:

make cdk-devnet-up

This will spawn the devnet and an explorer for each network at localhost:4000 (L1) and localhost:4001 (L2).

Run a transaction and check out your contract on NEAR, verifying the commitment against the last 64 bytes of the transaction made to L1.

You'll get some logs that look like:

time="2023-10-03T15:16:21Z" level=info msg="Submitting to NEARmaybeFrameData{0x7ff5b804adf0 64}candidate0xfF00000000000000000000000000000000000000namespace{0 99999}txLen1118"
2023-10-03T15:16:21.583Z	WARN	sequencesender/sequencesender.go:129	to 0x0DCd1Bf9A1b36cE34237eEaFef220932846BCD82, data: 438a53990000000000000000000000000000000000000000000000000000000000000060000000000000000000000000f39fd6e51aad88f6f4ce6ab8827279cfffb922660000000000000000000000000000000000000000000000000000000000000180000000000000000000000000000000000000000000000000000000000000000233a121c7ad205b875b115c1af3bbbd8948e90afb83011435a7ae746212639654000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c2f3400000000000000000000000000000000000000000000000000000000000000005ee177aad2bb1f9862bf8585aafcc34ebe56de8997379cc7aa9dc8b9c68d7359000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c303600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040b5614110c679e3d124ca2b7fca6acdd6eb539c1c02899df54667af1ffc7123247f5aa2475d57f8a5b2b3d3368ee8760cffeb72b11783779a86abb83ac09c8d59	{"pid": 7, "version": ""}
github.com/0xPolygon/cdk-validium-node/sequencesender.(*SequenceSender).tryToSendSequence
	/src/sequencesender/sequencesender.go:129
github.com/0xPolygon/cdk-validium-node/sequencesender.(*SequenceSender).Start
	/src/sequencesender/sequencesender.go:69
2023-10-03T15:16:21.584Z	DEBUG	etherman/etherman.go:1136	Estimating gas for tx. From: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, To: 0x0DCd1Bf9A1b36cE34237eEaFef220932846BCD82, Value: <nil>, Data: 438a53990000000000000000000000000000000000000000000000000000000000000060000000000000000000000000f39fd6e51aad88f6f4ce6ab8827279cfffb922660000000000000000000000000000000000000000000000000000000000000180000000000000000000000000000000000000000000000000000000000000000233a121c7ad205b875b115c1af3bbbd8948e90afb83011435a7ae746212639654000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c2f3400000000000000000000000000000000000000000000000000000000000000005ee177aad2bb1f9862bf8585aafcc34ebe56de8997379cc7aa9dc8b9c68d7359000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c303600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040b5614110c679e3d124ca2b7fca6acdd6eb539c1c02899df54667af1ffc7123247f5aa2475d57f8a5b2b3d3368ee8760cffeb72b11783779a86abb83ac09c8d59	{"pid": 7, "version": ""}
2023-10-03T15:16:21.586Z	DEBUG	ethtxmanager/ethtxmanager.go:89	Applying gasOffset: 80000. Final Gas: 246755, Owner: sequencer	{"pid": 7, "version": ""}
2023-10-03T15:16:21.587Z	DEBUG	etherman/etherman.go:1111	gasPrice chose: 8	{"pid": 7, "version": ""}

For this transaction, the blob commitment was 7f5aa2475d57f8a5b2b3d3368ee8760cffeb72b11783779a86abb83ac09c8d59

And if I check the CDKValidium contract 0x0dcd1bf9a1b36ce34237eeafef220932846bcd82, the root was at the end of the calldata.

0x438a53990000000000000000000000000000000000000000000000000000000000000060000000000000000000000000f39fd6e51aad88f6f4ce6ab8827279cfffb922660000000000000000000000000000000000000000000000000000000000000180000000000000000000000000000000000000000000000000000000000000000233a121c7ad205b875b115c1af3bbbd8948e90afb83011435a7ae746212639654000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c2f3400000000000000000000000000000000000000000000000000000000000000005ee177aad2bb1f9862bf8585aafcc34ebe56de8997379cc7aa9dc8b9c68d7359000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c303600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040b5614110c679e3d124ca2b7fca6acdd6eb539c1c02899df54667af1ffc7123247f5aa2475d57f8a5b2b3d3368ee8760cffeb72b11783779a86abb83ac09c8d59

If deploying arbitrum nitro

Build daserver/datool: make target/bin/daserver && make target/bin/datool

Deploy your DA contract as above

Update the daserver config to introduce the new configuration fields:

"near-aggregator": { "enable": true, "key": "ed25519:insert_here", "account": "helloworld.testnet", "contract": "your_deployed_da_contract.testnet", "storage": { "enable": true, "data-dir": "config/near-storage" } },

target/bin/datool client rpc store --url http://localhost:7876 --message "Hello world" --signing-key config/daserverkeys/ecdsa

Take the hash, check the output:

target/bin/datool client rest getbyhash --url http://localhost:7877 --data-hash 0xea7c19deb86746af7e65c131e5040dbd5dcce8ecb3ca326ca467752e72915185

rollup-data-availability's People

Contributors

dndll, ecp88, encody, firatnear, taco-paco


rollup-data-availability's Issues

Harden deployment of optimism

Description

We need to harden the deployment of Optimism in a development & production setting. Right now, both the testnet & devnet are set up in various bash scripts, which tend to fail every time.

Since everything was set up in scripts mounting random directories, the system was hard to deploy as a container, so everything ran on one server. Ultimately we need to be able to scale these things, so containerisation without relying on scripts is necessary; files should be mounted, and environments should be pre-created and bootstrapped.

We should use envfiles and limit any custom scripts as much as possible to prepare a reasonable deployment that doesn't require intervention.

Also, we should ensure the NEAR private keys we provide can be changed; right now there are discrepancies because of near-api-js/api-rs. Assign one access key to the batcher and one to block derivation (which should be read-only soon anyway).

Also, we should allocate one Access key per actor (batcher, proposer & sequencer)

CDK Submission recovery on ethereum error

Description

When a submission passes but the subsequent Ethereum transaction fails, the NEAR submission is replayed by CDK endlessly.

We should have a mechanism to understand:

  • why it reverts, so far all the gas estimator says is "reverted"
  • recover if we submitted already

Attached are some logs.
_cdk-validium-sequence-sender_logs (3).txt

Tasks

Blob store storage management

Currently, we just deposit onto the blob store; we should pass this on to users, since multiple namespaces could be supported in the store.

Investigate if we should use `near-fetch`

We are reimplementing a bunch of stuff near-fetch already has - Ideally we just contribute work there (unless they're using incompatible runtimes), but we should try to reach a sovereign client that isn't broken by these things.

near-fetch also already has:

  • memoization of nonces
  • retryability

If it doesn't have archival failover, we should probably roll up our stuff into that.

Private forks

Since this project hasn't been released yet, we need to move the forks from my personal account to private repos at Pagoda or here. This means we can build our own CI and infrastructure and roll any public stability enhancements into the main node until we eventually announce this project.

error when make -C ./crates/da-rpc-sys

thread 'main' panicked at 'cbindgen not found in path', crates/da-rpc-sys/build.rs:15:19
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

Split polygon cdk into a standalone repository

blocked by #31

Tasks

[Placeholder] Onchain light client

It's relatively easy to do, but we don't quite need it right now, and it introduces actor complexity for submitting headers, proofs and such.

Push da-rpc-go to the go pkgs repo

blocks #30
blocks #29

Also needs to do this on CI pull request merge.

Tasks

No tasks being tracked yet.

Expose a singular FFI interface for golang -> op-rpc-sys

Right now the FFI is utilised in 3 places in optimism:

  • da_client, used by op-node to instantiate the da client
  • calldata_source, used by op-node to join Frames with FrameData (read from near)
  • op-service/txmgr, used by the batcher to submit FrameData to near

This should use one op-near module that:

  • creates the client
  • exposes any dtos
  • any utilities needed for transformations
  • error reading from the err_msg ptr

op-node & batcher should utilise this for their dtos and client instantiation.

Blob store clearance

Description

Currently, we manually clear the store using clear_all. We should expose functionality in the blob store to clear after a time period and allow this to be configurable per namespace.

If a rollup has a required DTD of 7 days, we can clear it after around 8 days to be safe.

Magi as a sequencer

Description

Determine what needs to be done to use magi as a sequencer.

Additional info

The original op-node is incredibly unreliable, with panics and null pointers in most places, and is written in Go. We have a partial implementation of DA on Magi over at ./op-stack/optimism-rs. The maintainers of Magi are very receptive to working together and to us supporting Magi's development, especially around the DA interface issues. It has been confirmed that we shouldn't use Magi in production yet because it is new and unaudited, but for us it would probably be easier to work with Magi since it is written in Rust.

The main issue is that Magi is missing a bunch of endpoints that the sequencer/batcher actors need; whilst it is able to run as a non-sequencer, it needs a few endpoints to work well. So far, the endpoints I found are:

  • optimism_rollupConfig
  • optimism_sync

Additionally, this work might mitigate any headaches we have with Go -> Rust as we build the client for the DA layer, since the current client is written in Go (./op-stack/openrpc) and is mostly untested.

Madara

Tasks

No tasks being tracked yet.

Split the op-stack into a standalone repository

Make sure to retain the same structure so we can get upstream updates for free.

Blocked by #31

Tasks

Remove submodules and clean out repo

We're public now, so no need for submodules

remove:

  • light-client (introduce same CI as here there)
  • op-stack (fix any weird makefile docker stuff)
  • cdk (move into near)

publish:

  • da-rpc-go (on merge)
  • da-ffi docker image to this package
  • cdk-docker image to own package

Better repository structure

Currently, the naming of everything and where it lives is unorthodox.

For example:

  • ./op-stack/da-rpc implies this is for optimism, but this is also used by polygon
  • ./crates/op-rpc-* is the same as above

Tasks

Extract DA features out of the light client

We want to keep the light client a black box and apply extensions to its capabilities with NEAR DA. In its current state, we write the extensions directly onto it.

Expose a plugin system (light client as a library) to live as a standalone entity, and we can work with it for DAS. This means we can eventually release the light client and get it audited without it affecting DAS.

Convert op-stack/openrpc to rust and expose it to op-node

Since we can't use Magi as a sequencer yet, and the openrpc implementation is mostly untested, work is needed to build a reasonable integration with NEAR.

The go-near-api built by folks at Aurora doesn't expose some functionality needed to make view RPC calls; we tried to work around this via reflection, but there were issues. I think the best foot forward is to build a client in Rust that we can control and then expose it to op-node & friends.

Note: the client work is implemented in op-stack/optimism-rs, we need to expose FFI to this and then implement tests and such.

Decide on blob root approach

Right now, the blob commitment is built in Celestia's blob module, which uses a namespaced Merkle tree and an MMR to build a Merkle root, which is posted to Ethereum along with the block height.

We should decide whether we:

  • build the root ourselves on the client side, and post it when we submit the blobs
  • build the root in the contract and return it

Relates to #8 - blocked by it

Light client tests and docs

There are tests for the main protocol and some unit tests for features, but we should try to get this up to a nice standard of testing and rustdoc

Also lints

Security Policy violation SECURITY.md

This issue was automatically created by Allstar.

Security Policy Violation
Security policy not enabled.
A SECURITY.md file can give users information about what constitutes a vulnerability and how to report one securely so that information about a bug is not publicly visible. Examples of secure reporting methods include using an issue tracker with private issue support, or encrypted email with a published key.

To fix this, add a SECURITY.md file that explains how to handle vulnerabilities found in your repository. Go to https://github.com/near/rollup-data-availability/security/policy to enable.

For more information, see https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository.


This issue will auto resolve when the policy is in compliance.

Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer.

[WIP] Data Availability Sampling

As of writing this issue, we provide DAS 0.1, which exposes Merkle inclusion proofs over an HTTP API in the light client. The workflow is as follows:

Assume the client is synced.

  • L2 -> Contract: submits blob at Tx(ABC)
  • Contract returns [height, blob_commitment_root]
  • L2 -> L1: [height, blob_commitment_root]
  • anyone can find the transaction ID for height
  • Anyone -> Light client/Validator/Archive: request an inclusion proof for the transaction ID or receipt
  • anyone can manually calculate inclusion of the transaction/receipt, if adept enough
  • Anyone -> Light client: verify proof
  • anyone can read the contract store at blob_commitment_root and calculate the root themselves

This works off of an assumption that NEAR validators:

  • have not been compromised
  • will not withhold proofs to the light client (currently get_light_client_proof is EXPERIMENTAL)
  • have not removed the data
  • will hold the data for a period of time

As well as the light client:

  • is synced correctly
  • has not been subject to a long range attack
  • has the canonical head
  • will not withhold proofs to the user

To limit trust assumptions, we have to

Optimise gas calls for contract layer

We don't utilise any of the data we provide to the contract, but we still deserialise it.

We should try to remove as much gas from calls as possible. Some ideas:

  • remove bindgen
  • no_std? (see keypom contracts, not the main one)
  • no serde deserialisation
  • no logging
  • no panics

This will require updating the Rust client to pass the Borsh-serialised bytes when submitting, and modifications on retrieval to deserialise the opaque bytes.

Add note in the README how to use Aurora's light client

We should add a note in the README explaining that if you don't trust our light client, that's fine: we can relay to Aurora, and even work this into the application if you're willing to pay the gas for verification on-chain.

Needs: DA-RPC merkle root

Update Namespace to use [u8 ++ u32]

Namespaces are currently unbounded; update them in the contract, node & client to be a u32. This should cover plenty of integrations.

Edit: added version byte to prefix
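The proposed [u8 ++ u32] layout can be sketched as below. This is an illustration only; in particular, big-endian byte order is an assumption, since the issue does not specify one.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeNamespace packs a namespace as one version byte followed by a
// u32 identifier, five bytes total. Big-endian order is an assumption.
func encodeNamespace(version uint8, id uint32) [5]byte {
	var ns [5]byte
	ns[0] = version
	binary.BigEndian.PutUint32(ns[1:], id)
	return ns
}

func main() {
	ns := encodeNamespace(0, 99999) // e.g. the {0 99999} namespace seen in the logs earlier
	fmt.Printf("namespace bytes: %x\n", ns)
}
```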
