
ocean-subgraph's Introduction

ocean-subgraph

🦀 Ocean Protocol Subgraph

๐Ÿ„ Get Started

This subgraph is deployed under the /subgraphs/name/oceanprotocol/ocean-subgraph/ namespace on every network the Ocean Protocol contracts are deployed to.

⛵ Example Queries

All Data NFTs

{
  nfts(orderBy: createdTimestamp, orderDirection: desc, first: 1000) {
    id
    symbol
    name
    creator
    createdTimestamp
  }
}

Note: 1000 is the maximum number of items the subgraph can return.

Total Orders for Each User

{
  users(first: 1000) {
    id
    totalOrders
  }
}

Note: 1000 is the maximum number of items the subgraph can return.

Total Orders for a Specific User

{
  user(id: $user) {
    id
    totalOrders
  }
}

Note: all ETH addresses, like $user in the above example, need to be passed as lowercase strings.
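
For example, a minimal sketch with a hard-coded lowercase address in place of $user (the address is purely illustrative, taken from the indexing issue further down):

{
  user(id: "0x02088f1e1d088a8d1d5d888c34b2fd3073d950c1") {
    id
    totalOrders
  }
}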

All Orders

{
  orders(orderBy: createdTimestamp, orderDirection: desc, first: 1000){
    amount
    datatoken {
      id
    }
    consumer {
      id
    }
    payer {
      id
    }
  }
}

Note: 1000 is the maximum number of items the subgraph can return.
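
If you need more than 1000 results, you can page through them with the skip argument (a sketch; first and skip are The Graph's standard pagination arguments, and deep offsets can get slow):

{
  orders(orderBy: createdTimestamp, orderDirection: desc, first: 1000, skip: 1000) {
    amount
    datatoken {
      id
    }
  }
}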

๐ŸŠ Development on Barge

  1. Clone barge and run it in another terminal:
git clone https://github.com/oceanprotocol/barge.git
cd barge
./start_ocean.sh --with-thegraph

If you have cloned Barge previously, make sure you are using the latest version by running git pull.

  2. Switch back to your main terminal, then clone this repo and install its dependencies:
git clone https://github.com/oceanprotocol/ocean-subgraph/
cd ocean-subgraph
npm i
  3. Let the components know where to pick up the smart contract addresses:
export ADDRESS_FILE="${HOME}/.ocean/ocean-contracts/artifacts/address.json"
  4. Generate the subgraphs:
node ./scripts/generatenetworkssubgraphs.js barge
npm run codegen
  5. To deploy a subgraph, run:

npm run create:local-barge
npm run deploy:local-barge

or, depending on which local graph-node endpoint you are targeting:

npm run create:local
npm run deploy:local
  • Alternatively, to get the subgraph quickly running on Barge, you can run npm run quickstart:barge, which combines steps 4-5 above.

You now have a local graph-node running on http://127.0.0.1:9000/subgraphs/name/oceanprotocol/ocean-subgraph/graphql
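
As a quick smoke test, you can paste the following into the GraphQL playground at that URL to confirm the subgraph is indexing (a sketch; _meta is the graph-node's built-in metadata field):

{
  _meta {
    block {
      number
    }
  }
  nfts(first: 5) {
    id
  }
}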

๐ŸŠ Deploying graphs for live networks

  1. Clone the repo and install dependencies:
git clone https://github.com/oceanprotocol/ocean-subgraph/
cd ocean-subgraph
npm i
  2. Generate & deploy on Rinkeby:
npm run quickstart:rinkeby

๐Ÿ” Testing

  • Please note: the npm run test command is currently not working due to a known issue.

To run the integration tests locally, first start up barge by following the instructions above, then run the following terminal commands from the ocean-subgraph folder:

export ADDRESS_FILE="${HOME}/.ocean/ocean-contracts/artifacts/address.json"
npm run test-integration

✨ Code Style

For linting and auto-formatting, you can use the following commands from the root of the project:

# lint all js with eslint
npm run lint

# auto format all js & css with prettier, taking all configs into account
npm run format

🛳 Releases

Releases are managed semi-automatically. They are always manually triggered from a developer's machine with release scripts. From a clean main branch, you can run the release task, bumping the version according to semantic versioning:

npm run release

The task does the following:

  • bumps the project version in package.json, package-lock.json
  • auto-generates and updates the CHANGELOG.md file from commit messages
  • creates a Git tag
  • commits and pushes everything
  • creates a GitHub release with commit messages as description
  • pushing the Git tag triggers Travis to do an npm release

For the GitHub release step, a GitHub personal access token exported as GITHUB_TOKEN is required.

โฌ†๏ธ Deployment

Do the following to deploy the ocean-subgraph to a graph-node running locally, pointing at mainnet:

npm run codegen

# deploy
npm run create:local
npm run deploy:local

To deploy a subgraph connected to Rinkeby or Ropsten test networks, use instead:

# Rinkeby
npm run create:local-rinkeby
npm run deploy:local-rinkeby

# Ropsten
npm run create:local-ropsten
npm run deploy:local-ropsten

You can edit the event handler code and then run npm run deploy:local, with some caveats:

  • Running deploy will fail if the code has no changes
  • Sometimes deploy will fail no matter what, in this case:
    • Stop the docker-compose run (docker-compose down or Ctrl+C). This should stop the graph-node, ipfs and postgres containers
    • Delete the ipfs and postgres folders in /docker/data (rm -rf ./docker/data/*)
    • Run docker-compose up to restart graph-node, ipfs and postgres
    • Run npm run create:local to create the ocean-subgraph
    • Run npm run deploy:local to deploy the ocean-subgraph

To deploy to one of the remote nodes run by Ocean, you can set up port-forwarding, and the above :local commands will work as-is.

๐Ÿ› License

Copyright (c) 2023 Ocean Protocol Foundation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

ocean-subgraph's People

Contributors

akshay-ap, alexcos20, dependabot[bot], idiom-bytes, jamiehewitt15, kremalicious, lacoop6tu, loznianuanamaria, mariacarmina, md00ux, mihaisc, ssallam, trentmc, trizin


ocean-subgraph's Issues

subgraph only indexes non-checksummed addresses

EIP-55 checksum address: 0x02088F1E1D088a8D1D5D888c34b2fD3073d950C1

// lower case address
"0x02088F1E1D088a8D1D5D888c34b2fD3073d950C1".toLowerCase()
"0x02088f1e1d088a8d1d5d888c34b2fd3073d950c1"

checksum address: https://subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql?query=%7B%0A%20%20pools(where%3A%20%7B%20datatokenAddress%3A%20%220x02088F1E1D088a8D1D5D888c34b2fD3073d950C1%22%7D)%20%7B%0A%20%20%20%20id%2C%0A%20%20%7D%0A%7D%0A
lowercase address: https://subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql?query=%7B%0A%20%20pools(where%3A%20%7B%20datatokenAddress%3A%20%220x02088f1e1d088a8d1d5d888c34b2fd3073d950c1%22%7D)%20%7B%0A%20%20%20%20id%2C%0A%20%20%7D%0A%7D%0A

https://web3-tools.netlify.app/ tool to validate checksummed addresses

My expectation as a user is that the subgraph indexes checksummed addresses, or allows both variations (checksummed and lowercase).
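
Decoded from the query strings in the two URLs above, the lowercase variant of the filter is the one that currently returns the pool:

{
  pools(where: { datatokenAddress: "0x02088f1e1d088a8d1d5d888c34b2fd3073d950c1" }) {
    id
  }
}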

totalCount on poolTransactions

Is your feature request related to a problem? Please describe.
The transactions are paginated in the front end. To properly implement pagination, totalCount is needed.

Describe the solution you'd like
totalCount should be the total number of results from a query ignoring first and skip

For example:
this query should have totalCount = 12 (if I counted correctly)
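
A hypothetical shape for the requested feature (totalCount does not exist in the schema yet; it is shown here only as a comment to illustrate the request):

{
  poolTransactions(first: 10, skip: 20) {
    id
    # desired addition: totalCount, the number of matching results ignoring first/skip
  }
}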

Subgraph repo should have integration tests

Recommended steps:

  • spin up barge
  • spin up docker graph-node connected to local ganache
  • spin up docker pgsql
  • spin up docker ipfs
  • deploy ocean contracts
  • deploy subgraph
  • use ocean.js to generate some transactions
  • write a script that queries the graph-node and compares with ocean.js results

New npm command to quickly startup the subgraph on barge

Currently there are quite a few commands required to get the subgraph running on barge:

npm install
export ADDRESS_FILE="${HOME}/.ocean/ocean-contracts/artifacts/address.json"
npm run codegen
npm run bargesetup
npm run create:local-barge
npm run deploy:local-barge

I suggest introducing a new command so this can be reduced to:

npm install
npm run quickstart:barge

This will help quickly get everything up and running for testing.

FixedRateExchange support in the graph

Allow the graph to index all FixedRateExchange events (create, update rate).
The frontend will use that to search for FREs (fixed-rate exchanges) that can swap a certain datatoken.

Misleading output when the subgraph starts running on Barge

When the subgraph starts running on barge it gives the following output:

Build completed: QmdzMqYJCwLkwtQqaHYNewnMwR8VSG9quHrDYpVtjXu2c9

Deployed to http://127.0.0.1:8000/subgraphs/name/oceanprotocol/ocean-subgraph/graphql

Subgraph endpoints:
Queries (HTTP):     http://127.0.0.1:8000/subgraphs/name/oceanprotocol/ocean-subgraph
Subscriptions (WS): http://127.0.0.1:8001/subgraphs/name/oceanprotocol/ocean-subgraph

However, this is misleading because the subgraph is actually running on:

http://127.0.0.1:9000/subgraphs/name/oceanprotocol/ocean-subgraph/graphql

totalLockedValue [sic] is a fantasy number

This query returns the following as totalLockedValue:

OCEAN 608499837

which would convert to roughly:

€287,748,011

We do not have 287 million euros locked in our pools, so this value is completely wrong.


(And it should be totalValueLocked, not totalLockedValue. Already commented in the PR.)

Add swap volume, consume volume to subgraph

  • Swap volume at pool level, denominated in OCEAN (required for oceanprotocol/market#334)
  • Swap volume for all datatokens across all pools, denominated in OCEAN

  • Consume volume for a datatoken, denominated in OCEAN
  • Consume volume for all datatokens, denominated in OCEAN

  • #consumes for a datatoken
  • #consumes across all datatokens

Calculate liquidity for each pool

Liquidity is expressed in OCEAN.
Total Value Locked = total value locked in all pools (shown in the GUI)

Related to: oceanprotocol/market#311 (comment)

(see how Balancer names things in its graph schema)

Implementation details
poolfactory has totalLockedValue (in OCEAN)
pool has lockedValue (in OCEAN)
on each pool update:

  oldValue = pool.lockedValue
  pool.lockedValue = oceanReserve + dtReserve * spotPrice
  totalLockedValue = totalLockedValue + pool.lockedValue - oldValue
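
A query sketch against these proposed fields (entity and field names such as poolFactories follow The Graph's usual lowercase-plural convention and are assumptions; nothing here is in the deployed schema yet):

{
  poolFactories(first: 1) {
    totalLockedValue
  }
  pools(orderBy: lockedValue, orderDirection: desc, first: 10) {
    id
    lockedValue
  }
}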

Upgrade Rust & IPFS

Mar 30 01:41:34.664 INFO Trying IPFS node at: http://ipfs-cluster.dev.svc.cluster.local:5001/
thread 'tokio-runtime-worker' panicked at 'Failed to connect to IPFS: api returned unknwon error '405 - Method Not Allowed
'', node/src/main.rs:653:25

It seems that you are using a very old Rust library:
ipfs-api = { version = "=0.7.1", features = ["hyper-tls"] }

https://github.com/oceanprotocol/graph-node/blob/448ad2858c69e25eec622deaaa0d1c9c2305e13f/node/Cargo.toml#L22

The IPFS API now only supports the POST method, while GET is no longer supported.

This is most likely due to your IPFS node image, ipfs/go-ipfs:v0.4.23, while ours are running v0.7.0.

Any chance to upgrade the IPFS node version?

Implement pool snapshot

This will be useful for graphs/statistics.

Proposed schema:

type PoolSnapshot @entity {
  id: ID!
  pool: Pool!
  totalShares: BigDecimal!
  swapVolume: BigDecimal!                                             # swap value 24h
  swapFees: BigDecimal!                                               # swap fee value 24h  
  timestamp: Int!                                                     # date without time
  spotPrice: BigDecimal!                                              # TODO: last spot price or first one?
  tokens: [PoolSnapshotTokenValue!] @derivedFrom(field: "poolSnapshot")
}
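
Once such an entity exists, a daily-volume query might look like this (a sketch against the proposed schema above; nothing here is deployed yet):

{
  poolSnapshots(orderBy: timestamp, orderDirection: desc, first: 7) {
    pool {
      id
    }
    totalShares
    swapVolume
    swapFees
    timestamp
  }
}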

Tests: Datatokens

  • Create datatokens & check all fields (name, symbol, publisher, etc)
  • Check balances
  • Check orders

Refactor and prepare for V4

TODO:

  • Implement global statistics
  • Implement roles
  • Order
  • Pool snapshot
  • Metadata Update -> Changed to NftUpdate
  • Fees: Pool and Datatoken (still thinking about the schema, will leave them at last)
  • check if the token templates are defined correctly in the yaml
  • Check schema and add comments to all fields (where it makes sense)

add compute jobs

On the market, oceanprotocol/market#338 states that we should get the compute history from The Graph. But we can't rely on existing tokenOrders for that, see oceanprotocol/market#462 (comment)

So if we want to replace the expensive provider calls when doing ocean.compute.status we would need a way to get all compute jobs for a given account from the subgraph.

But if I got this right, starting a compute job in itself does not create an on-chain transaction, but only asks the user for a signature. So this might make oceanprotocol/market#338 infeasible altogether.

subgraph can't find pool for NEFTUR-75

Readme Improvements

Readme needs to be made clearer in a few places, particularly around testing:

  • npm run test command doesn't work
  • The need to run export ADDRESS_FILE="${HOME}/.ocean/ocean-contracts/artifacts/address.json" in the same terminal as the test commands
  • Cloning Barge
  • Making sure the most up-to-date version of Barge is used

Fail to index tx

May 01 23:16:27.803 INFO 2 triggers found in this block for this subgraph, block_hash: 0xef0a5eab3af3662c818545b2e7fbd5f63bda48ada3e565dd75cc84e6424fffa1, block_number: 8503363, subgraph_id: QmTCvYzqcuR7FTiVHBDbMaaGLYtF3XuWgDhz3QjNnqzV2w, component: SubgraphInstanceManager
May 01 23:16:27.808 INFO @@@@@@ ########## updating poolToken balance (source, oldBalance, newBalance, poolId)  handleSwap.tokenIn 326.804286923710345506 329.770508979985592506 0x1281b400193477bf65c183bf1e83ac4d705cf078, data_source: Pool, runtime_host: 1/1, block_hash: 0xef0a5eab3af3662c818545b2e7fbd5f63bda48ada3e565dd75cc84e6424fffa1, block_number: 8503363, subgraph_id: QmTCvYzqcuR7FTiVHBDbMaaGLYtF3XuWgDhz3QjNnqzV2w, component: SubgraphInstanceManager
May 01 23:16:27.837 INFO @@@@@@ ########## updating poolToken balance (source, oldBalance, newBalance, poolId)  handleSwap.tokenOut 1.904761904761905 -3.095238095216287795 0x1281b400193477bf65c183bf1e83ac4d705cf078, data_source: Pool, runtime_host: 1/1, block_hash: 0xef0a5eab3af3662c818545b2e7fbd5f63bda48ada3e565dd75cc84e6424fffa1, block_number: 8503363, subgraph_id: QmTCvYzqcuR7FTiVHBDbMaaGLYtF3XuWgDhz3QjNnqzV2w, component: SubgraphInstanceManager
May 01 23:16:27.837 WARN EEEEEEEEEEEEEEEEE poolToken.balance < Zero: pool=0x1281b400193477bf65c183bf1e83ac4d705cf078, poolToken=0x6c5ec27a1e6f14ea45736fdf115d9b9279af617a, oldBalance=1.904761904761905, newBalance=-3.095238095216287795, data_source: Pool, runtime_host: 1/1, block_hash: 0xef0a5eab3af3662c818545b2e7fbd5f63bda48ada3e565dd75cc84e6424fffa1, block_number: 8503363, subgraph_id: QmTCvYzqcuR7FTiVHBDbMaaGLYtF3XuWgDhz3QjNnqzV2w, component: SubgraphInstanceManager
May 01 23:16:27.837 INFO @@@@@@ !!!!!!!!!!!!!!!!!!    SWAP SWAP SWAP : (tokenIn, tokenOut, amountIn, amountIn, amountOut, amountOut) 0x8967bcf84170c91b0d24d4302c2376283b0b3a07 0x6c5ec27a1e6f14ea45736fdf115d9b9279af617a 2.966222056275247 2966222056275247000 4.999999999978192795 4999999999978192795 0xd289c3bc892e3b558f8f8505d210961931b40e02901b2512d9530fb4b2586623 0x1281b400193477bf65c183bf1e83ac4d705cf078, data_source: Pool, runtime_host: 1/1, block_hash: 0xef0a5eab3af3662c818545b2e7fbd5f63bda48ada3e565dd75cc84e6424fffa1, block_number: 8503363, subgraph_id: QmTCvYzqcuR7FTiVHBDbMaaGLYtF3XuWgDhz3QjNnqzV2w, component: SubgraphInstanceManager
thread 'mapping-QmTCvYzqcuR7FTiVHBDbMaaGLYtF3XuWgDhz3QjNnqzV2w-1bb57b2d-d9dc-47da-b03d-5b9b85ec9927' panicked at 'negative value encountered for U256: -3095238095216287795', graph/src/data/store/scalar.rs:309:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
May 01 23:16:28.252 ERRO Subgraph instance failed to run: Failed to process trigger in block #8503363 (ef0a5eab3af3662c818545b2e7fbd5f63bda48ada3e565dd75cc84e6424fffa1), transaction d289c3bc892e3b558f8f8505d210961931b40e02901b2512d9530fb4b2586623: Mapping terminated before handling trigger: oneshot canceled, code: SubgraphSyncingFailure, id: QmTCvYzqcuR7FTiVHBDbMaaGLYtF3XuWgDhz3QjNnqzV2w, subgraph_id: QmTCvYzqcuR7FTiVHBDbMaaGLYtF3XuWgDhz3QjNnqzV2w, component: SubgraphInstanceManager

Testing: Pools

  • Create pools
  • Swap from pools
  • Add liquidity (both single side/both tokens)
  • Remove liquidity (both single side/both tokens)
  • TVL & other indicators

lockedValue based on pool share

In a query like this, poolShares.poolId.lockedValue returns the total locked value of the pool

Expected

When using poolId within a filtered poolShares query, lockedValue should reflect the user's locked value based on their pool share
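
A sketch of the kind of query in question (field names are taken from the issue text; today the nested lockedValue returns the pool's total rather than a share-weighted value):

{
  poolShares(first: 100) {
    balance
    poolId {
      id
      lockedValue
    }
  }
}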

Some of the logic for attributing users could be wrong if the user is using a smart contract like Zapper or Furucombo

Just reviewing the code for the subgraph (if you don't mind me doing so), I noticed that https://github.com/oceanprotocol/ocean-subgraph/blob/main/src/mappings/pool.ts#L427 and https://github.com/oceanprotocol/ocean-subgraph/blob/main/src/mappings/pool.ts#L439 both use

event.params.(from|to).toHex()

This could be an issue if the user is using a smart contract to do the mint/burn/swap. As an example, look at this tx: https://etherscan.io/tx/0x09717991d7144b0c7ae61877ce3e25f2c44af752fa3d7fa8166ff3073cb95286

The correct attribution should be the original user that minted/burned the LP token, not the contract.

If you'd like, I can make a PR to address this!

Edit information related to datatoken

We have the following situation on the market (#282), related to the subgraph:

  • for a data token, we can fetch the address, tx and createTime
  • we do not have the editTimestamps related to each data token

Would it be possible to have a list of editTimestamps related to each data token?
I think that would be useful in resolving this task.

Generate subgraphs automatically, based on ocean-contracts

Now we have subgraph.xxxx files, which are maintained by hand.
Also, when adding new networks, we have to be careful when copy/pasting contract addresses, startBlocks, etc.

The new ocean-contracts repo has all this information, so we should be able to have a script that generates all the subgraph files based on address.json.

Testing: Users

  • User history
  • Assets history
  • User transactions in pools
  • User dt balances

Datatoken price

For each datatoken, can we have an average/aggregate price based on known/supported pools/FREs? We need to evaluate whether this will have a performance impact.

pool shares can't be ordered by actual lockedValue

Ordering by the number of pool shares is possible, but that's not very useful because one pool share can be worth $1 or $1,000,000. What we need is to order pool shares by poolId.lockedValue.

Example:
In this query, ordering by pool share count (balance) is possible, but there is no way to order by the actual value.
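
For reference, a sketch of what is possible today: ordering by balance works, while ordering by the nested poolId.lockedValue (the actual request) is not supported:

{
  poolShares(orderBy: balance, orderDirection: desc, first: 20) {
    balance
    poolId {
      id
      lockedValue
    }
  }
}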

Wrong symbol, missing name and strange consumePrice
