synapsecns / sanguine
Synapse Monorepo
License: MIT License
All contracts should have a startBlock that is public and set at deploy time.
sqlite file sent in dm (exceeded github size limit).
Query to replace:
select tx_hash, block_number from receipts where tx_hash not in (select tx_hash from eth_txes);
The issue does not seem to occur the other way around:
select tx_hash from eth_txes where tx_hash not in (select tx_hash from receipts);
but in the event of an error we just keep indexing and skip that log entirely. The loop should return an error instead: when last indexed is used in the next block, 1 will be subtracted and this will get inserted. One secondary fix is to insert everything in a single transaction, which may also be a bit faster and should help maintain monotonic validity of blocks throughout #114.
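As a rough illustration, here's what the single-transaction fix could look like with plain database/sql; the Receipt type and table names are placeholders, not the actual scribe schema:

package indexer

import (
	"context"
	"database/sql"
	"fmt"
)

// Receipt is a hypothetical row type; the real schema differs.
type Receipt struct {
	TxHash string
}

// insertBlockAtomically persists a block's receipts and the new indexed
// height in one transaction, so a partial block is never committed.
func insertBlockAtomically(ctx context.Context, db *sql.DB, blockNumber uint64, receipts []Receipt) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return fmt.Errorf("could not begin tx: %w", err)
	}
	defer tx.Rollback() // no-op after a successful Commit

	for _, r := range receipts {
		if _, err := tx.ExecContext(ctx,
			"INSERT INTO receipts (tx_hash, block_number) VALUES (?, ?)",
			r.TxHash, blockNumber,
		); err != nil {
			return fmt.Errorf("could not insert receipt: %w", err)
		}
	}
	// Advance the persisted height only when the whole block has landed.
	if _, err := tx.ExecContext(ctx,
		"UPDATE last_indexed SET block_number = ?", blockNumber,
	); err != nil {
		return fmt.Errorf("could not update last indexed: %w", err)
	}
	return tx.Commit()
}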
Right now, when responding to incidents, monitoring, etc., we have a lot of issues finding a public rpc to check against. For these systems, it's crucial to have a live, accessible rpc to check against. To that end, we're going to build an rpc proxy with the following functionality:
ReplicaManager should only contain logistical code, and as little state as possible.
One workflow we have a lot in our codebase looks like this:
It'd be kind of nice for debuggability if we didn't have to generate these by hand and could use abigen to handle it for us. I think I'd still use the json parser to tie the events to the topics, but we could build in empty hash checking and constant generation.
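For illustration, a hedged sketch of what a generated helper could look like with go-ethereum's abi package; the idea is that topic constants come from the parsed ABI with an empty-hash check built in (the function and its name are assumptions, not existing code):

package indexer

import (
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

// buildTopicMap derives event-name -> topic hash from an abigen-style ABI
// string instead of hand-written constants. The abiJSON input would come
// from the generated bindings (e.g. a generated BridgeABI constant).
func buildTopicMap(abiJSON string) (map[string]common.Hash, error) {
	parsed, err := abi.JSON(strings.NewReader(abiJSON))
	if err != nil {
		return nil, fmt.Errorf("could not parse abi: %w", err)
	}
	topics := make(map[string]common.Hash, len(parsed.Events))
	for name, event := range parsed.Events {
		// Built-in empty hash check: a zero topic means something went wrong.
		if event.ID == (common.Hash{}) {
			return nil, fmt.Errorf("empty topic for event %s", name)
		}
		topics[name] = event.ID
	}
	return topics, nil
}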
Testing cases for:
- Optimistic Message Timing is: Message + Optimistic Second
- leafReplica
Publish actions (composite and contrib) from the monorepo into new repos automatically so they can be used by other repos.
Sending a message incurs costs on both the local and remote chain. The costs are:
- update on the Home contract. This needs to be done every X minutes (assuming there were new messages dispatched). Requires spending local chain gas.
- update on the ReplicaManager contracts. This needs to be done whenever a new update is signed on Home, for every remote chain that has incoming messages in that update. Requires spending remote chain gas.
- proveAndProcess on the remote ReplicaManager to execute the message. Requires spending remote chain gas. The prove part consumes a fixed amount of gas regardless of message. The process part consumes a variable amount of gas, and might additionally require a gas airdrop.
Thus, it makes sense to split the total fee this way:
- the proveAndProcess part without executing the message. Hmmm maybe keep a removed list?
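For context, a hedged sketch of how the per-message fee could be decomposed along the cost items above; every name below is an illustrative assumption, not the actual contract design:

package fees

// FeeQuote is a hypothetical breakdown of the total message fee,
// mirroring the cost items above.
type FeeQuote struct {
	UpdateHomeFee    uint64 // amortized local-chain gas for update on Home
	UpdateReplicaFee uint64 // amortized remote-chain gas for update on ReplicaManager
	ProveFee         uint64 // fixed remote-chain gas for the prove part
	ProcessFee       uint64 // variable remote-chain gas for the process part
	GasAirdrop       uint64 // optional airdrop delivered with the message
}

// Total sums the components; prove is fixed, process varies per message.
func (q FeeQuote) Total() uint64 {
	return q.UpdateHomeFee + q.UpdateReplicaFee + q.ProveFee + q.ProcessFee + q.GasAirdrop
}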
Originally posted by @joecroninallen in #325 (comment)
There are a few things that have come up in production that would greatly improve the usability of omnirpc.
Right now, between sanguine/scribe and sanguine/agents, there is a lot of replicated DB code. Some attempts to resolve this have been made in #125, where there is a new dbcommon dir, but it is incomplete, only filling in some helpers and generic functions.
Currently, clickhouse primary keys are done using mysql-style primaryKey directives:
These won't work in clickhouse, since PrimaryKey is specified at table creation time (see here).
You need to add primary keys based on this. OrderBy should use the same mechanism.
Let me know when this is done and we'll merge the pr, then move on to implementing a ReplacingMergeTree.
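A minimal sketch of one way to do this with the gorm clickhouse driver, passing the engine, ORDER BY, and PRIMARY KEY as table options at creation time; the Log model here is a placeholder, not the actual scribe schema:

package scribe

import (
	"gorm.io/driver/clickhouse"
	"gorm.io/gorm"
)

// Log is a placeholder model; the real scribe models differ.
type Log struct {
	TxHash      string
	BlockNumber uint64
}

// migrate creates the table with the primary key and sort key passed as
// table options, since clickhouse defines both at table creation time
// rather than via struct-tag directives.
func migrate(dsn string) error {
	db, err := gorm.Open(clickhouse.Open(dsn), &gorm.Config{})
	if err != nil {
		return err
	}
	return db.Set("gorm:table_options",
		"ENGINE = MergeTree() ORDER BY (block_number, tx_hash) PRIMARY KEY (block_number, tx_hash)").
		AutoMigrate(&Log{})
}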
#692 paves the way for automatically generating typecast files based on the contracttype file
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These updates are currently rate-limited. Click on a checkbox below to force their creation now.
- @graphql-codegen/cli, @graphql-codegen/client-preset, @graphql-codegen/introspection
- @emotion/react, @emotion/styled
- @types/react, @types/react-dom, react, react-dom
- github.com/aws/aws-sdk-go-v2, github.com/aws/aws-sdk-go-v2/config, github.com/aws/aws-sdk-go-v2/service/kms
- @docusaurus/core, @docusaurus/logger, @docusaurus/module-type-aliases, @docusaurus/plugin-content-docs, @docusaurus/preset-classic, @docusaurus/theme-common, @docusaurus/tsconfig, @docusaurus/types, @docusaurus/utils, @docusaurus/utils-common, @docusaurus/utils-validation
- k8s.io/apiextensions-apiserver, k8s.io/apimachinery, k8s.io/client-go, k8s.io/kubectl
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/exporters/otlp/otlptrace, go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc, go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp, go.opentelemetry.io/otel/exporters/prometheus, go.opentelemetry.io/otel/metric, go.opentelemetry.io/otel/sdk, go.opentelemetry.io/otel/sdk/metric, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/metric, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/metric, go.opentelemetry.io/otel/sdk, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/metric
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/metric, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/metric, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/metric, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/trace
- go.opentelemetry.io/otel, go.opentelemetry.io/otel/trace
- react, react-dom
- @testing-library/jest-dom, @testing-library/react
- mocha, @types/mocha
- node, @types/node
Historically, we've had a lot of issues with missing events from blockchains we operate on. This has resulted in missing transactions and poor/unreliable analytics. Additionally, every service we operate has to tail head in some capacity (for example: #102 is an external monitoring service).
We can fix this through a generic event sourcing microservice. All off-chain agents + services can run an instance of this service (embedded or separately) and query it rather than querying the chain directly. Notably, this service can and will be eth specific, so the domain abstractions used in core need not apply here.
The service is completely separate from core (new folder in sanguine) and has a dedicated config (see: https://github.com/synapsecns/sanguine/tree/master/core/config).
The config might look something like the following:
chains:
- id: 1 # chain id
url: "http://127.0.0.1:8545" # rpc url
confirmation_threshold: 15 # how many blocks to wait before indexing events (so we can account for re-orgs)
- id: 137
url: "http://127.0.0.1:8546"
contracts:
- address: "0x0c4229E35D61d51559Bc450c17337E623179f50b"
chain_id: 1
start_block: 2
- address: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"
chain_id: 137
start_block: 4
The only job of this service is to allow storing and querying of the results of eth_getLogs for these contracts. Implementation-wise, here's what that looks like:
The service runs eth_getLogs starting at a given block, runs it continuously until head, and then keeps listening to the chain tip. The service will very closely resemble: https://github.com/synapsecns/synapse-node/blob/master/pkg/evm/watcher/contract_watcher.go#L139. Crucially, failure of any kind will not result in continuing to index future blocks outside of the failed range. The failed range will be retried until successful, even across restarts. This means that the latest height has to be persisted.
Events are then idempotently inserted into a database in two parts (and separate tables):
- types.Log: all the log data in individual tables so we can query here
- types.Receipt: the receipt (which you can fetch using eth_getTransactionReceipt) for the txhash in the log above. Logs in this type (see link) can be fetched using a foreign key constraint. Any logs for contracts we're not watching should be inserted into the logs table as well w/ a foreign key constraint to txhash.
That's it. This allows our other services to event source in indexers without worrying about the reliability of the chain, and allows individual indexers to break without needing to implement backfilling.
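To make the retry semantics concrete, here's a minimal sketch of the watcher loop, assuming a hypothetical HeightStore and ignoring the confirmation threshold for brevity:

package scribe

import (
	"context"
	"math/big"
	"time"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// HeightStore persists logs and the latest indexed height; hypothetical.
type HeightStore interface {
	LastIndexed(ctx context.Context, contract common.Address) (uint64, error)
	// StoreLogs must insert idempotently and advance the height atomically.
	StoreLogs(ctx context.Context, logs []types.Log, newHeight uint64) error
}

// watchContract tails eth_getLogs for one contract in fixed-size chunks.
// A failed range is retried on the next iteration; we never skip past it.
func watchContract(ctx context.Context, client *ethclient.Client, store HeightStore, contract common.Address, chunk uint64) error {
	for {
		if ctx.Err() != nil {
			return ctx.Err()
		}
		start, err := store.LastIndexed(ctx, contract)
		if err != nil {
			return err
		}
		head, err := client.BlockNumber(ctx)
		if err != nil || start >= head {
			time.Sleep(time.Second) // rpc error or caught up to tip: wait and retry
			continue
		}
		end := start + chunk
		if end > head {
			end = head
		}
		logs, err := client.FilterLogs(ctx, ethereum.FilterQuery{
			FromBlock: new(big.Int).SetUint64(start + 1),
			ToBlock:   new(big.Int).SetUint64(end),
			Addresses: []common.Address{contract},
		})
		if err != nil {
			continue // failed range: retried next iteration, never skipped
		}
		if err := store.StoreLogs(ctx, logs, end); err != nil {
			continue
		}
	}
}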
Currently, Foundry tests are unstructured, full of repeated code, and don't follow a single test function naming convention.
This should be fixed sooner rather than later, ideally at a point where there are very few solidity PRs into master.
After many attempts (#139), I've spent too much time trying to get GitHub Package Registry auth for npm to work. This should be done at some point in the future.
Run make chart-test on each new image build and test the helm chart against that new image.
XAppConfig / ReplicaManager should work together to enroll / un-enroll replicas from active / failed / archived replicas.
The SystemRouter contract features a method that allows making a few system calls to different system contracts with different data at the same time, whether on the local or a remote chain:
sanguine/packages/contracts-core/contracts/system/SystemRouter.sol
Lines 97 to 108 in 3d61027
In reality, a lot of system calls are going to be either:
Thus, to avoid setting up an extra array every time, and to make the code cleaner, these two wrappers are required.
This is probably best done after the testing suite from #57 is implemented.
Message recipients are supposed to check that the snapshot root was submitted at least "recipient optimistic period" seconds ago (the Destination-enforced optimistic period specified in the message may differ).
For this reason it makes sense to pass block.timestamp - rootSubmittedAt instead of rootSubmittedAt to all the message recipients.
There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.
Location: .github/renovate.json
Error type: The renovate configuration file contains some invalid settings
Message: Invalid configuration option: forkProcessing
ReplicaManager needs general coverage for proving and processing messages.
With the near completion of Scribe (#114), we're ready to start indexing events for our analytics api. The current state of the analytics api is quite convoluted. analytics.synapseprotocol.com is currently broken on several chains and missing lots of data. You can see that code here along w/ the explorer code here.
A second iteration of analytics, comprised of synapse-indexer and analytics-api, requires too much complexity and is too stateful to deploy (which was part of the motivation for #114, along with issues like #153 popping up all over the place rather than in one place where they can be fixed all at once).
The finished product will be a graphql api that looks like this over go, but the first step is to replicate the indexer.
Let's walk through a few real bridging transactions and how they should be indexed. Since this is your first contribution, I'll run through some steps to get started further below.
Here we take an example from the live bridge and walk through the indexing process. Your indexer will take a yaml config file that should look something like the following. I only define two chains since those are the two used for the example. These config values will make sense as I go through the example:
chains:
- id: 1 # chain id
url: "http://127.0.0.1:8545" # rpc url
contracts:
# this is a list since in some cases we have multiple versions of the same contract. You'll need to define these as an enum somewhere
- type: bridge
# this will be sourced by the person writing the config from receipt.blockNumber in the deployment json, e.g. this is from https://github.com/synapsecns/synapse-contracts/blob/master/deployments/mainnet/SynapseBridge.json
start_block: 13033669
# some contracts (really only bridgeconfig/poolconfig: an older iteration of bridge config) are only on ethereum
- type: "bridgeconfig"
address: "0x5217c83ca75559B1f8a8803824E5b7ac233A12a1"
# see: https://github.com/synapsecns/synapse-contracts/blob/master/deployments/mainnet/BridgeConfigV3.json#L1100
start_block: 14259367
# an older version of bridge config
- type: "bridgeconfig"
address: "0xAE908bb4905bcA9BdE0656CC869d0F23e77875E7"
start_block: 13949327
# when we start using v3.
end_block: 14259367
- id: 42161
url: "http://127.0.0.1:8546"
contracts:
- type: bridge
# see: https://github.com/synapsecns/synapse-contracts/blob/master/deployments/arbitrum/SynapseBridge.json#L2
address: "0x6F4e8eBa4D337f874Ab57478AcC2Cb5BACdc19c9"
# see: https://github.com/synapsecns/synapse-contracts/blob/master/deployments/arbitrum/SynapseBridge.json#L1462
start_block: 657404
- type: pool
# https://github.com/synapsecns/synapse-contracts/blob/master/deployments/arbitrum/nUSDPoolV3.json#L2
address: "0x9Dd329F5411466d9e0C488fF72519CA9fEf0cb40"
# see: https://arbiscan.io/tx/0x500afe6cf8e927ccad7a8a2e01f7d3bfc2fa9ef3af6a55f841d71bd5b62c84d3, older deploys don't have the receipt so we pull it from the top right corner of the contract address in the explorer, arbiscan in this case
start_block: 5152261
# url of the scribe service, should probably also be embedable
scribe: http://scribe:1231
Let's look at a live example. Here is a transaction which occurred on arbitrum. As we can see from the data, the user is bridging to ethereum.
Note: I've chosen the most complicated bridge type here, other types such as mint do not require bridgeconfig, etc
This transaction is going to trigger a few events that will get populated in the contracts we watch on scribe. The first is the bridge event. The particular event triggered on the bridge is TokenRedeemAndRemove.
We can see it contains the following items:
We now know that on ethereum, 0x59719d517208b306eA9c7a9FD90D6215163323Ee will receive a minimum of 5330566953 nusd (which will then be swapped for tokenIndexTo: 0 which is usdc) before 1662394851 (Monday, September 5, 2022 4:20:51 PM) on ethereum (chain id 1). If the swap can't be completed, the user will receive nusd on the other end which they can then trade for any token in the pool.
We can also look at the raw data (for most transactions, the txes that triggered this can't be used for indexing because other contracts can call ours, but it is helpful for understanding the flow) and see the method called:
Since it's a swapAndRedeemAndRemove, we can see exactly what methods are called for the contract to execute in L2BridgeZap:
In addition to being passed in the input, these are also passed as a log:
that can be parsed by the abi we generated and inserted.
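For illustration, parsing such a log with abigen-generated bindings might look like the following; the bridge package and generated names are assumptions based on what abigen typically emits:

package explorer

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"

	// hypothetical abigen output package; the path is an assumption
	bridge "github.com/synapsecns/sanguine/services/explorer/contracts/bridge"
)

// parseRedeemAndRemove decodes a TokenRedeemAndRemove log using the
// abigen-generated filterer. The names NewSynapseBridgeFilterer and
// ParseTokenRedeemAndRemove mirror abigen's usual output; treat them
// as assumptions until the bindings are generated.
func parseRedeemAndRemove(address common.Address, log types.Log) (*bridge.SynapseBridgeTokenRedeemAndRemove, error) {
	filterer, err := bridge.NewSynapseBridgeFilterer(address, nil)
	if err != nil {
		return nil, err
	}
	// Errors if the log's first topic doesn't match this event's topic hash.
	return filterer.ParseTokenRedeemAndRemove(log)
}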
We also have another event to index here: a swap.
We can see the raw swap data here:
We can see exactly what happened here. We're going to want to index this so we can calculate pool volume.
This transaction triggered a bridge that was then received at the other end. Let's take a look at the transaction here. We can see that withdrawAndRemove was called.
One of the challenges of parsing transactions on the other end is that the pool is never emitted directly:
We can see from the contract that in cases where the swap is not successful, we simply transfer the token (nusd in this case) to the user. Since there's nothing more to index here, we can finish up after just indexing the receiving TokenWithdrawAndRemove without any pool data.
In cases where expectedOutput >= swapMinAmount (most cases), we'll also receive an event from a pool. But how do the validators know which pool to pass here? And why is the token different from the address on the origin chain?
This is where bridgeconfig comes in. Two calls are made to BridgeConfigV3; in your case, these should be archive calls at the block_number of the transaction. First we call getTokenID(0x2913E812Cf0dcCA30FB28E6Cac3d2DCFF4497688, 42161). This is the token address in the call above and the chain id from above. This should be called on 0x5217c83ca75559B1f8a8803824E5b7ac233A12a1 rather than the other bridge config, since the current block number is greater than its start block. If this tx were between blocks 13949327 and 14259367, we'd use 0xAE908bb4905bcA9BdE0656CC869d0F23e77875E7 instead.
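A hedged sketch of what that archive call could look like in go, assuming abigen-generated bindings for BridgeConfigV3 (the caller type and method name are assumptions):

package explorer

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"

	// hypothetical abigen output package; the path is an assumption
	bridgeconfig "github.com/synapsecns/sanguine/services/explorer/contracts/bridgeconfig"
)

// tokenIDAtBlock looks up the token id on BridgeConfigV3 as an archive call
// pinned to the bridge transaction's block.
func tokenIDAtBlock(ctx context.Context, config *bridgeconfig.BridgeConfigV3Caller, token common.Address, chainID, blockNumber uint64) (string, error) {
	opts := &bind.CallOpts{
		Context: ctx,
		// Pinning the block number turns this into an archive call, so we
		// read the config as it existed when the bridge event happened.
		BlockNumber: new(big.Int).SetUint64(blockNumber),
	}
	return config.GetTokenID(opts, token, new(big.Int).SetUint64(chainID))
}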
We can try this out on etherscan here. This won't be an archive call, but it's good enough for us to see what happened, since bridge config hasn't changed in the meantime. We can see the tokenID is nusd:
Now, let's figure out the token address we want to use on chainID 1 using the token id we just got:
This data corresponds to this struct, in order:
We can see here that the token address 0x1b84765de8b7566e4ceaf4d0fd3c5af52d3dde4f matches nusd on ethereum. Since this transaction is a swap, we want to query the pool config as well to see what pool we've swapped (or attempted to swap) on. Let's call getPoolConfig with the token address we received above:
We can see the first argument is nusd and the second is a SwapFlashLoan contract. This is where the swap from nusd to usdc happened in our contract.
If we go back to the event logs for the tx we're inspecting here, we can see an event emitted by this contract:
Our topic map will tell us this is RemoveLiquidityOne. We'll need to store this for swap analytics. We can also see the amount of tokens the user actually received this way and use that for volume calculations.
We can also see from the logs a TokenWithdrawAndRemove event:
One final thing to note: you can see the last indexed topic here is bytes32 kappa. Kappa is simply the keccak256(origin_tx_hash).
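For illustration, computing kappa in go might look like this; whether the preimage is the raw 32 bytes or the hex string of the hash is worth verifying against the validator implementation:

package explorer

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// kappa derives the destination-side identifier from the origin tx hash.
// Assumes the preimage is the raw 32-byte hash; double-check against the
// validators before relying on this.
func kappa(originTxHash common.Hash) common.Hash {
	return crypto.Keccak256Hash(originTxHash.Bytes())
}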
So in this transaction, we should've indexed the following:
- TokenRedeemAndRemove: on arbitrum
- TokenSwap: on arbitrum
- TokenWithdrawAndRemove (https://github.com/synapsecns/synapse-contracts/blob/9e390f7c826ab09c48c3c8fe3d040226ee8b3aa0/contracts/bridge/SynapseBridge.sol#L108): on ethereum
- RemoveLiquidityOne: on ethereum
From this, we'll be able to compute a few things:
First, you're going to create a new service in services/explorer. Next, you're going to need to generate some contracts. This readme will walk you through the process. (Note: prior to the merge of #166, you could've imported synapse-node and used its contracts. The topics file and bridge folder generally are worth referencing.) I'd recommend adding the contracts repo as a submodule in order to abigen against them. I'd also recommend giving the contracts a versioned name, as it's quite possible we'll have to generate multiple versions in order to parse events against them. For instance, we've had several iterations of the BridgeConfig so far.
There are a few contracts you'll have to generate abi's for in order to successfully track events from the bridge:
In general, all events from these contracts should be indexed in a standardized way (e.g. store all data in the db as structured data). Many of the bridge events are indexed here, so you should straight up be able to copy and paste the code. Ordinarily, copying and pasting code is a big no-no, but in this case, since we're deprecating synapse-node, it's fine. Crucially, you'll need the topicMap and the standardized parsing.
Create a config parser for the config defined above; you should be able to use this file and the corresponding test. You'll use this to decide which contracts to index and their types.
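A minimal sketch of what those config types could look like, assuming gopkg.in/yaml.v2; all field names here are guesses mirroring the yaml above:

package explorer

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v2"
)

// ContractType enumerates the contract kinds used in the config above.
type ContractType string

const (
	BridgeContract       ContractType = "bridge"
	BridgeConfigContract ContractType = "bridgeconfig"
	PoolContract         ContractType = "pool"
)

// These structs mirror the yaml sketched earlier.
type ContractConfig struct {
	Type       ContractType `yaml:"type"`
	Address    string       `yaml:"address"`
	StartBlock uint64       `yaml:"start_block"`
	EndBlock   uint64       `yaml:"end_block,omitempty"`
}

type ChainConfig struct {
	ID        uint32           `yaml:"id"`
	URL       string           `yaml:"url"`
	Contracts []ContractConfig `yaml:"contracts"`
}

type Config struct {
	Chains []ChainConfig `yaml:"chains"`
	Scribe string        `yaml:"scribe"`
}

// DecodeConfig reads and parses the explorer config from disk.
func DecodeConfig(path string) (Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return Config{}, fmt.Errorf("could not read config: %w", err)
	}
	var cfg Config
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return Config{}, fmt.Errorf("could not parse config: %w", err)
	}
	return cfg, nil
}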
Create a graphql client against scribe; @CryptoMaxPlanck should be able to walk you through this, but your goal is to be able to query continuously and index against the JSON. Your best bet here is going to be to use the raw JSON scalar and call UnmarshalJSON on the ethereum types, e.g. for logs this method. These can then be used to parse out events, like so
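For example, converting the raw JSON scalar back into a types.Log is a thin wrapper around the generated UnmarshalJSON:

package explorer

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"
)

// logFromRawJSON converts the raw JSON scalar returned by the scribe
// graphql endpoint back into an ethereum types.Log.
func logFromRawJSON(raw []byte) (types.Log, error) {
	var log types.Log
	if err := log.UnmarshalJSON(raw); err != nil {
		return types.Log{}, fmt.Errorf("could not unmarshal log: %w", err)
	}
	return log, nil
}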
I'd use DBService here for reference. You're going to want to store all these events in a format in which they can easily be aggregated in real time. You'll need a tiny bit of additional data, namely the prices. I'd probably handle this with a sql join.
You should be able to straight up copy this schema. This doesn't include analytics methods, but should be a good start to the server.
The server should be run independently of the indexer.
Should also check if solidity was changed
As outlined in 97a6c84, System Call is likely to cover all the use cases for a System Message. Thus, the Flag in the SystemMessage can be deleted, and the whole library should be renamed/transformed into SystemCall.
Also, the library needs a proper explanation as to how the system calls work.
A System Call works like this:
// Some arbitrary data; the last three arguments are always the same.
// The function is protected by the onlySystemRouter modifier, so that only the local chain's system router can call it.
function someFunction(<...>, uint32 origin, SystemEntity caller, uint256 rootSubmittedAt) external onlySystemRouter;

// Sender side: encode the call without the last three arguments and dispatch it.
bytes memory payload = abi.encodeWithSelector(someFunction.selector, <...>);
systemRouter.systemCall(destination, optimisticSeconds, recipient, payload);

// Receiving side: the System Router appends the verified arguments before calling the recipient.
payload = abi.encodePacked(payload, abi.encode(origin, caller));
payload = abi.encodePacked(payload, abi.encode(rootSubmittedAt));
recipient.call(payload);
// this calls recipient.someFunction(<...>, origin, caller, rootSubmittedAt);
System contracts are specified using enum SystemEntity instead of an actual contract address; the address <> SystemEntity matching is done within the System Router:
enum SystemEntity {
    Origin,
    Destination,
    ...
}
The origin, caller, rootSubmittedAt parameters are verified by the system routers. Using them, the recipient can restrict the prerequisites for calling the function to any combination of the following:
On-chain calls will always use block.timestamp, so this restriction will disable them.
One of the interesting things an rpc proxy allows us to do is to run additional sanity checks on bridge transactions. While these checks will differ for v2, we'd like to implement something in omnirpc for the v1 bridge that can identify and stop non-key compromise based attacks before they happen. It works by simulating the state of the bridge on the destination chain and making sure no more tokens are issued than were burned on the origin chain. The diagram above briefly describes what I go into more detail with here:
On eth_sendRawTransaction we should trigger this check workflow. One nice-to-have here: if we could figure out a generalizable way to implement these checks, it might be helpful going forward. For eth_getReceipt we may want to do (v, r, s) verification for the chain id.
omnirpc/confirmations/1/rpc/[chainid]: this can be a bit challenging in kubernetes (#225, potentially infrastructure helm charts).
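As a rough sketch, the (v, r, s) chain-id verification on a raw transaction could look like this with go-ethereum types:

package omnirpc

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// checkRawTxChainID decodes an eth_sendRawTransaction payload and verifies
// the signature actually commits to the expected chain id, recovering the
// sender through the chain-id-aware signer (i.e. a (v, r, s) check).
func checkRawTxChainID(rawTx []byte, expectedChainID *big.Int) error {
	var tx types.Transaction
	if err := tx.UnmarshalBinary(rawTx); err != nil {
		return fmt.Errorf("could not decode raw tx: %w", err)
	}
	if tx.ChainId().Cmp(expectedChainID) != 0 {
		return fmt.Errorf("chain id mismatch: got %s, want %s", tx.ChainId(), expectedChainID)
	}
	// Sender recovery fails if (v, r, s) is not a valid signature for this chain.
	signer := types.LatestSignerForChainID(expectedChainID)
	if _, err := types.Sender(signer, &tx); err != nil {
		return fmt.Errorf("invalid signature for chain %s: %w", expectedChainID, err)
	}
	return nil
}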
Right now, abi is automatically generated using go:generate
from the abigen module. This requires manually running go generate ./...
periodically.
The problem with this is it breaks some of the implied functionality of a monorepo: namely, that go tests don't fail when solidity is updated with breaking changes. There are three potential solutions here:
1. Always run go generate ./... in ci (and probably only in the core/contracts directory). This introduces a few problems. First, the docker pull process & abi generation steps are somewhat time consuming. This also runs other go generate steps with the potential to flake (some may be networked). Additionally, this has the disadvantage of not making users update the repo with new auto-generated abi.
2. Watch packages/contracts and only run generate if needed. Additionally, this has the disadvantage of not making users update the repo with new auto-generated abi.
3. Run go generate ./..., use a diff check, and fail if the results are different. This has the disadvantage of making changes to the packages/contracts repo not usable without go.
Maybe run go generate ./... as part of the typechain generation process? There's likely a better, more generic solution here, but this is what I've got so far.
Currently, the updates of Home merkle roots on remote Replica contracts are done in a synchronous way, i.e. all signed updates need to be applied to every Replica. This does not scale well.
We need to implement the updates in an asynchronous way, i.e. where the updater signs a bunch of updates of the Home merkle root, within the signing rules (i.e. without getting slashed). Any of the updates can be applied on any of the Replicas, allowing updates that don't include any messages for a given Replica to be skipped.
Currently, the makefile does not download the latest version or pin a version, which can result in versioning differences between local and ci.
Apply latest updates from the original repo:
If you look at #130 you can see that despite changes to files in the .github folder, no new labels are added.
The issue likely resides in .github/labeler.yml
Add a way for Home or ReplicaManager contracts to send messages to Home or ReplicaManager contracts on other chains. This could be used for a variety of reasons:
- Home setups: adding a new Updater, WatchTower, etc.
As security is crucial here, these messages could only be sent with a long optimistic period (to prevent fraud updates from screwing the protocol itself).
curl 'https://scribe.interoperability.institute/confirmations/5/rpc/1666600000' \
-H 'authority: scribe.interoperability.institute' \
-H 'accept: */*' \
-H 'accept-language: en-US,en;q=0.9,eu;q=0.8,af;q=0.7,ar;q=0.6,hy;q=0.5,bn;q=0.4,la;q=0.3,zh-CN;q=0.2,zh-TW;q=0.1,zh;q=0.1,he;q=0.1' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-H 'origin: chrome-extension://nkbihfbeogaeaoehlefnkodbefgpgknn' \
-H 'pragma: no-cache' \
-H 'sec-ch-ua: "Google Chrome";v="105", "Not)A;Brand";v="8", "Chromium";v="105"' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'sec-ch-ua-platform: "macOS"' \
-H 'sec-fetch-dest: empty' \
-H 'sec-fetch-mode: cors' \
-H 'sec-fetch-site: none' \
-H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' \
--data-raw '{"id":"1665190841281","jsonrpc":"2.0","method":"eth_chainId","params":[]}' \
--compressed
Current set of tests is good, but lacking a few things:
In PRs such as #576, PRs that have some go changes and then revert them do not remove the go generate tag.
Deprecate typecast in ethergo
As part of efforts to separate indexing logic from agents, we've built scribe (#114) and omnirpc (#135). The final step to making the off-chain agents invulnerable to chain-watching induced errors is configuring the indexer at a domain level. Currently, for the notary (#77) we start an indexer to sync messages. This performs getLogs on the contracts and ignores every event from origin except for dispatch. Leaving starting this loop up to the notary will create the need for separate dbs at an agent level, despite these agents performing the same requests, and makes testing harder.
Additionally, storing both agent-level and indexer-level data in the MessageDB interface creates the potential for logic errors in future work. Instead, we can do the following.
func NewIndexer(config DomainConfig, db IndexerDB) *Indexer { /* ... */ }

type DomainConfig struct {
	// Standard domain config (e.g. rpc, chain id, etc)
	DestinationAddress    common.Address
	OriginAddress         common.Address
	ConfirmationThreshold uint
	StartHeight           uint32
	AttestationCollector  common.Address
}
The indexer can then sync all events for whichever of these contracts is not blank (common.Address{} != domainConfig.X), and any agent can pull from that. Crucially, the IndexerDB exposes two types: IndexWriter, which is only used by the indexer, and IndexReader, which all agents use.
Each Notary, Broadcaster, Guard, etc. has its own separate db store with a prefix based off the id specified in the config.
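A minimal sketch of that read/write split, with method names as assumptions:

package indexer

import (
	"context"

	"github.com/ethereum/go-ethereum/core/types"
)

// IndexWriter is only used by the indexer; agents never get write access.
type IndexWriter interface {
	StoreLog(ctx context.Context, chainID uint32, log types.Log) error
	StoreLastIndexed(ctx context.Context, chainID uint32, height uint32) error
}

// IndexReader is the read-only view shared by Notary, Broadcaster, Guard, etc.
type IndexReader interface {
	RetrieveLogs(ctx context.Context, chainID uint32, fromHeight, toHeight uint32) ([]types.Log, error)
	LastIndexed(ctx context.Context, chainID uint32) (uint32, error)
}

// IndexerDB exposes exactly the two views: the indexer writes, agents read.
type IndexerDB interface {
	IndexWriter
	IndexReader
}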
#316 implements the MVP for Bonding Managers, contracts that are responsible for syncing the information about the off-chain actors, who have committed their staking bond on Synapse Chain.
The actions tied to bonding, and especially unbonding, require the confirmation that the new piece of information has been successfully passed to all relevant domains. For example, if an off-chain agent wants to unstake their bond, we need to do both:
Only after both are completed can the staking bond be unlocked.
The incentives should be structured in a way where the slashed actor will have been reported already (assuming rational actors) by the time the information about slashing is finally forwarded.
- BondingPrimary on Synapse Chain forwards new information to all the BondingSecondary contracts (pings).
- BondingSecondary responds with a pong upon handling the information.
- BondingPrimary should ensure that the number of received pongs is equal to the number of sent pings.
- BondingSecondary knows that the new piece of information is synced everywhere.
- (N status changes to prevent out-of-gas DoS attacks)
TODOs after #755 is merged:
- sendSystemMessage() instead of sendBaseMessage()
- receiveSystemMessage() instead of handle/receiveBaseMessage()