
explorer's Introduction

Polkascan Open-Source

Polkascan Open-Source Application

Quick deployment (Use hosted Polkascan API endpoints)

Step 1: Clone repository:

git clone https://github.com/polkascan/polkascan-os.git

Step 2: Change directory:

cd polkascan-os

Step 3: Check available releases:

git tag

Step 4: Check out the latest release:

git checkout v0.x.x

Step 5: Make sure to also clone submodules within the cloned directory:

git submodule update --init --recursive

Step 6: Build and run the Docker containers:

docker-compose -p kusama -f docker-compose.kusama-quick.yml up --build

Use public Substrate RPC endpoints

Step 1: Clone repository:

git clone https://github.com/polkascan/polkascan-os.git

Step 2: Change directory:

cd polkascan-os

Step 3: Check available releases:

git tag

Step 4: Check out the latest release:

git checkout v0.x.x

Step 5: Make sure to also clone submodules within the cloned directory:

git submodule update --init --recursive

Step 6: During the first run, let MySQL initialize first (wait about a minute):

docker-compose -p kusama -f docker-compose.kusama-public.yml up -d mysql

Step 7: Then build and run the other Docker containers:

docker-compose -p kusama -f docker-compose.kusama-public.yml up --build

Full deployment

The following steps will run a full Polkascan-stack that harvests blocks from a new local network.

Step 1: Clone repository:

git clone https://github.com/polkascan/polkascan-os.git

Step 2: Change directory:

cd polkascan-os

Step 3: Check available releases:

git tag

Step 4: Check out the latest release:

git checkout v0.x.x

Step 5: Make sure to also clone submodules within the cloned directory:

git submodule update --init --recursive

Step 6: During the first run, let MySQL initialize first (wait about a minute):

docker-compose -p kusama -f docker-compose.kusama-full.yml up -d mysql

Step 7: Then build and run the other Docker containers:

docker-compose -p kusama -f docker-compose.kusama-full.yml up --build

Links to applications

Other networks

Add custom types for Substrate Node Template
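Custom types are supplied to the harvester as a JSON type registry file loaded with scalecodec's load_type_registry_file(). A minimal sketch of writing such a file follows; the type names and mappings here are illustrative placeholders, not the actual types of any particular runtime:

```python
import json
import tempfile

# Illustrative custom type registry for a Substrate Node Template.
# The type names and mappings below are placeholders; replace them with
# the types your runtime actually defines. The resulting file can be
# passed to scalecodec.type_registry.load_type_registry_file().
custom_types = {
    "types": {
        "Address": "AccountId",
        "LookupSource": "AccountId",
        "ExampleStruct": {  # hypothetical struct type for illustration
            "type": "struct",
            "type_mapping": [
                ["account", "AccountId"],
                ["amount", "Balance"],
            ],
        },
    }
}

# Write the registry to a JSON file (a temp file here for demonstration).
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(custom_types, f, indent=2)
    path = f.name

print(path)
```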

Cleanup Docker

Use the following commands with caution to clean up your Docker environment.

Prune unused containers, networks, and dangling images

docker system prune

Prune all unused images (force)

docker system prune -a

Prune volumes

docker volume prune

API specification

The Polkascan API implements the JSON:API specification (https://jsonapi.org/). An overview of available endpoints can be found here: https://github.com/polkascan/polkascan-pre-explorer-api/blob/master/app/main.py#L60
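Since the API follows JSON:API, responses wrap their primary data in a top-level "data" member whose resource objects carry "type", "id", and "attributes". A minimal sketch of unpacking such a document; the payload below is a hand-written sample in the JSON:API shape, not an actual Polkascan response:

```python
import json

# Hand-written sample in the JSON:API document shape (https://jsonapi.org/);
# field names in "attributes" are illustrative, not a real Polkascan payload.
sample = json.loads("""
{
  "data": [
    {"type": "block", "id": "123",
     "attributes": {"hash": "0xabc", "count_extrinsics": 2}}
  ],
  "meta": {"best_block": 456}
}
""")

def resources(doc):
    """Return the primary data as a list of (type, id, attributes) tuples."""
    data = doc.get("data") or []
    if isinstance(data, dict):  # single-resource documents use a bare object
        data = [data]
    return [(r["type"], r["id"], r.get("attributes", {})) for r in data]

for rtype, rid, attrs in resources(sample):
    print(rtype, rid, attrs["hash"])
```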

Troubleshooting

When certain blocks are not being processed, or no blocks at all, the most likely cause is a missing or invalid type definition in the type registry.

You can dive into Python to pinpoint which types are failing to decode:

import json
from scalecodec.type_registry import load_type_registry_file
from substrateinterface import SubstrateInterface

substrate = SubstrateInterface(
    url='ws://127.0.0.1:9944',
    type_registry_preset='substrate-node-template',
    type_registry=load_type_registry_file('harvester/app/type_registry/custom_types.json'),
)

block_hash = substrate.get_block_hash(block_id=3899710)

extrinsics = substrate.get_block_extrinsics(block_hash=block_hash)

print('Extrinsics:', json.dumps([e.value for e in extrinsics], indent=4))

events = substrate.get_events(block_hash)

print("Events:", json.dumps([e.value for e in events], indent=4))

explorer's People

Contributors

arjanz, gracecampo, wouterter


explorer's Issues

ValueError: No handler for method 'state_getRuntimeVersion'

explorer-harvester-1 | 2023-04-10 13:45:50 🟢 Job "scale_decode" started
explorer-harvester-1 | Traceback (most recent call last):
explorer-harvester-1 | File "app/cli.py", line 141, in
explorer-harvester-1 | main()
explorer-harvester-1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1130, in call
explorer-harvester-1 | return self.main(*args, **kwargs)
explorer-harvester-1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1055, in main
explorer-harvester-1 | rv = self.invoke(ctx)
explorer-harvester-1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
explorer-harvester-1 | return _process_result(sub_ctx.command.invoke(sub_ctx))
explorer-harvester-1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
explorer-harvester-1 | return ctx.invoke(self.callback, **ctx.params)
explorer-harvester-1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 760, in invoke
explorer-harvester-1 | return __callback(*args, **kwargs)
explorer-harvester-1 | File "app/cli.py", line 61, in run
explorer-harvester-1 | harvester.run(job)
explorer-harvester-1 | File "/usr/src/app/harvester.py", line 339, in run
explorer-harvester-1 | self.process_job('scale_decode')
explorer-harvester-1 | File "/usr/src/app/harvester.py", line 295, in process_job
explorer-harvester-1 | self.jobs[name].start()
explorer-harvester-1 | File "/usr/src/app/jobs.py", line 837, in start
explorer-harvester-1 | self.decode_extrinsic(node_extrinsic)
explorer-harvester-1 | File "/usr/src/app/jobs.py", line 916, in decode_extrinsic
explorer-harvester-1 | self.db_substrate.init_runtime(block_hash=f'0x{node_block_extrinsic.block_hash.hex()}')
explorer-harvester-1 | File "/usr/src/app/base.py", line 251, in init_runtime
explorer-harvester-1 | super().init_runtime(*args, **kwargs)
explorer-harvester-1 | File "/usr/local/lib/python3.8/site-packages/substrateinterface/base.py", line 667, in init_runtime
explorer-harvester-1 | runtime_info = self.get_block_runtime_version(block_hash=runtime_block_hash)
explorer-harvester-1 | File "/usr/local/lib/python3.8/site-packages/substrateinterface/base.py", line 571, in get_block_runtime_version
explorer-harvester-1 | response = self.rpc_request("state_getRuntimeVersion", [block_hash])
explorer-harvester-1 | File "/usr/src/app/base.py", line 248, in rpc_request
explorer-harvester-1 | raise ValueError("No handler for method '{}'".format(method))
explorer-harvester-1 | ValueError: No handler for method 'state_getRuntimeVersion'
explorer-harvester-1 exited with code 0

How to configure the docker-compose.yml for AWS RDS

After configuring docker-compose.yml for AWS RDS with the correct values, tables are not created in the database and block data is missing; writes to the DB are not working.

Can you share some documents on how to setup or configure docker-compose.yml for AWS RDS (MySQL)?

[Screenshot attached]

Historical balance of an account

Show balance history of an account, plus USD equivalent for current and historical price. Based on the indexed event data from the Polkascan API, we'll find all events that might have changed the balance. For every event, we'll retrieve the account balance from the state of the corresponding block. The state will be fetched from the Substrate node. This might not be a perfectly accurate picture of the balance history, but it's a pragmatic solution with a pretty granular result. To determine which event types are candidates for balance changes, we'll check the metadata to find attributes that specify a token amount. This will also depend on the runtime version.

Set up with a remote server DB

Hi, I am going to set up the explorer locally, but I want to use a database on a remote server. Here is the server database info:

Server Url: https://81.181.255.414
DB Name: test_explorer
DB PORT=3306

How should I set up DB_CONNECTION to connect to the server DB?
The current DB_CONNECTION environment variable looks like this:
DB_CONNECTION=mysql+pymysql://root:root@mysql:3306/polkascan?charset=utf8mb4
How should I change this?

Please help me!
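For reference, the DB_CONNECTION value shown above is a standard SQLAlchemy connection URL of the form dialect+driver://user:password@host:port/database?options. A minimal sketch of assembling one (host and credentials below are placeholders, not values from this issue):

```python
# DB_CONNECTION follows the SQLAlchemy URL format:
#   dialect+driver://user:password@host:port/database?options
# All values below are placeholders; substitute your own host and credentials.
from urllib.parse import quote_plus

user, password = "root", "s3cret/pw"  # placeholder credentials
host, port, db = "db.example.com", 3306, "test_explorer"

# quote_plus() percent-encodes special characters in credentials,
# so passwords containing '/', '@', ':' etc. don't break the URL.
dsn = (
    f"mysql+pymysql://{quote_plus(user)}:{quote_plus(password)}"
    f"@{host}:{port}/{db}?charset=utf8mb4"
)
print(dsn)
```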

Runtime dates

Show runtime dates on the runtime page: when did each runtime upgrade occur?

Query performance improvements

Performance improvements in Polkascan search queries. Currently, all queries search the entire set of records to match specific criteria. Waiting time can be dramatically reduced by searching in subsequent sets of a fixed number of blocks. Requests in the front-end will be turned into limited range requests that can deliver results in fragments. The front-end will be responsible for searching continuously until the paginated result set is fulfilled. The query batches will be processed sequentially, and the search results will show a button to search the next set of blocks, until genesis is reached. Pagination on the back-end will no longer use counts on the database; consequently, we won't know whether there is a next page.
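The range-walking scheme above can be sketched as a generator that splits the chain into fixed-size block ranges from the head back to genesis (the batch size here is an arbitrary example):

```python
# Sketch of splitting a search into fixed-size block ranges, walked
# from the chain head back to genesis. Batch size is arbitrary.
def block_ranges(head, batch_size=10_000, genesis=0):
    """Yield (start, end) block ranges from head down to genesis, inclusive."""
    end = head
    while end >= genesis:
        start = max(genesis, end - batch_size + 1)
        yield (start, end)
        end = start - 1

print(list(block_ranges(25_000, batch_size=10_000)))
# → [(15001, 25000), (5001, 15000), (0, 5000)]
```

The front-end would query each range in turn until the paginated result set is filled, offering a "search next set of blocks" button otherwise.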

Connecting to custom substrate node

Hello.

I have a separate, customized Substrate node on a server, and when I try to connect to its endpoints using the Polkascan explorer, it does not connect to the exposed endpoints.
See the screenshots below:

  • I commented the line - SUBSTRATE_RPC_URL=ws://substrate-node:9944 since I have customized node running on the server.
  • Changed to - SUBSTRATE_RPC_URL=ws://10.11.101.23:9944

[Screenshot attached]

  • After the explorer runs, the browser does not connect to the exposed endpoints of the node.
  • Instead, it shows the display in the screenshot below.

[Screenshot attached]

  • Any advice on this?

Not connecting to ws://127.0.0.1:8000/graphql-ws

I am testing the standard docker compose file for polkadot with three modifications:

  • START_BLOCK=18000000
  • Substrate RPC is "wss://rpc.ibp.network/polkadot" (both in docker-compose.yml and explorer-ui-config.json)
  • Disabled the substrate-node in docker-compose.yml

Docker compose starts fine and I can go to the website http://localhost:8080/local. Blocks are appearing, but only new ones; the old ones are not available, even though the harvester processes them fine.

It seems that the connection to ws://127.0.0.1:8000/graphql-ws is not established. It keeps on connecting when I check its status in the website.

Is there something I can do to debug?

Files are attached:
docker-compose.yml.txt
explorer-ui-config.json.txt

What version of python are you using?

The Python version is very important; different versions may produce different errors and affect the results of running the project. It would be great if you could provide the version number you used when you created the project!

Error: 2003, "Can't connect to MySQL server on 'localhost' ([WinError 10061] No connection could be made because the target machine actively refused it)"

Hi,
When I run the python app/harvester.py --force-start command after migrating the harvester DB, I am getting this error.

File "C:\Users\Home\AppData\Local\Programs\Python\Python310\lib\site-packages\sqlalchemy\engine\default.py", line 598, in connect
return self.dbapi.connect(*cargs, **cparams)
File "C:\Users\Home\AppData\Local\Programs\Python\Python310\lib\site-packages\pymysql\connections.py", line 353, in init
self.connect()
File "C:\Users\Home\AppData\Local\Programs\Python\Python310\lib\site-packages\pymysql\connections.py", line 664, in connect
raise exc
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([WinError 10061] No connection could be made because the target machine actively refused it)")
(Background on this error at: https://sqlalche.me/e/14/e3q8)

How can I fix this problem?

Performance logging

New back-end performance logging functionality to keep track of query behavior, so we can improve the load tests.

All events for an account

Show all events for a specific account. We already show all events in the current version of Polkascan, but this time we want to harvest the relationship between these events and the accounts they operate on. For this, the back-end needs a new ETL process and harvester logic to collect this data, and an extension of the GraphQL service to expose it. The event attribute data will also be provided in the GraphQL query result. For more detailed information on an event, the state at a given block doesn't need to be harvested on the back-end, but can be fetched on the client using the SubstrateRPCAdapter. On the front-end (and thus in the GraphQL service) there will be filters for the runtime version, pallet, type of event, date range, and block range. Creating the ETL process involves some complexity in determining how the JSON data is best collected and interpreted. We are only interested in specific events with certain attributes, and event specifications can differ between runtime versions, so we need to break down the metadata for each version.
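The kind of filtering the GraphQL service would apply to indexed (event, account) rows can be sketched as follows. The rows below are hand-written samples with illustrative field names, not the actual harvester schema:

```python
# Sample indexed rows linking events to the accounts they operate on.
# Field names ("pallet", "accounts", "runtime", ...) are illustrative.
events = [
    {"block": 100, "pallet": "Balances", "event": "Transfer",
     "accounts": ["alice", "bob"], "runtime": 9430},
    {"block": 150, "pallet": "Staking", "event": "Rewarded",
     "accounts": ["alice"], "runtime": 9430},
    {"block": 200, "pallet": "Balances", "event": "Transfer",
     "accounts": ["carol"], "runtime": 9431},
]

def events_for_account(rows, account, pallet=None, block_range=None):
    """Return rows touching `account`, optionally filtered by pallet and block range."""
    out = []
    for row in rows:
        if account not in row["accounts"]:
            continue
        if pallet is not None and row["pallet"] != pallet:
            continue
        if block_range is not None and not (block_range[0] <= row["block"] <= block_range[1]):
            continue
        out.append(row)
    return out

print([r["event"] for r in events_for_account(events, "alice", pallet="Balances")])
# → ['Transfer']
```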

Some issues

Hi, how are you?
So far, I have solved many issues with your help (@arjanz, @wouterter, @matthijsb). First of all, thank you so much.

Now I am running the entire explorer on my own AWS Ubuntu server, but I am still getting some warnings, including from the SQL database server.

Please check the warning logs below and let me know how I can solve this problem.

[Screenshots attached]

Please help me as soon as possible.
Best.

harvester_1 | socket.gaierror: [Errno -2] Name or service not known

In order to use the explorer with a local Substrate node, I have changed the docker-compose file to point at its address:
SUBSTRATE_RPC_URL=ws://127.0.0.1:9944
and I've removed the substrate-node container.

But after running the explorer, I am getting the following error continuously:

harvester_1  | INFO  [alembic.runtime.migration] Context impl MySQLImpl.
harvester_1  | INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
harvester_1  | Traceback (most recent call last):
harvester_1  |   File "/usr/local/lib/python3.8/site-packages/websocket/_http.py", line 155, in _get_addrinfo_list
harvester_1  |     addrinfo_list = socket.getaddrinfo(
harvester_1  |   File "/usr/local/lib/python3.8/socket.py", line 918, in getaddrinfo
harvester_1  |     for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
harvester_1  | socket.gaierror: [Errno -2] Name or service not known
harvester_1  | 
harvester_1  | During handling of the above exception, another exception occurred:
harvester_1  | 
harvester_1  | Traceback (most recent call last):
harvester_1  |   File "app/harvester.py", line 385, in <module>
harvester_1  |     harvester = Harvester(
harvester_1  |   File "app/harvester.py", line 71, in __init__
harvester_1  |     self.substrate = SubstrateInterface(
harvester_1  |   File "/usr/local/lib/python3.8/site-packages/substrateinterface/base.py", line 550, in __init__
harvester_1  |     self.connect_websocket()
harvester_1  |   File "/usr/local/lib/python3.8/site-packages/substrateinterface/base.py", line 588, in connect_websocket
harvester_1  |     self.websocket = create_connection(
harvester_1  |   File "/usr/local/lib/python3.8/site-packages/websocket/_core.py", line 605, in create_connection
harvester_1  |     websock.connect(url, **options)
harvester_1  |   File "/usr/local/lib/python3.8/site-packages/websocket/_core.py", line 246, in connect
harvester_1  |     self.sock, addrs = connect(url, self.sock_opt, proxy_info(**options),
harvester_1  |   File "/usr/local/lib/python3.8/site-packages/websocket/_http.py", line 122, in connect
harvester_1  |     addrinfo_list, need_tunnel, auth = _get_addrinfo_list(
harvester_1  |   File "/usr/local/lib/python3.8/site-packages/websocket/_http.py", line 167, in _get_addrinfo_list
harvester_1  |     raise WebSocketAddressException(e)
harvester_1  | websocket._exceptions.WebSocketAddressException: [Errno -2] Name or service not known
azexplorer_harvester_1 exited with code 1

Balance transfers

Show all balance transfers. This will be made possible with the above addition of indexed historical event data. Filters will be added for amount range, account, block range and date range.

How to connect a custom chain

Hey,
I was wondering how I can set up a connection to a custom chain, using the following wss endpoint:
wss://n1.polka.systems

Extra filters

New filters on current list pages: date range, block range, runtime version (influences pallet and call filters), account.

docker-compose up --build

[+] Building 2445.8s (45/71) docker:desktop-linux
=> [harvester internal] load .dockerignore 0.1s
=> => transferring context: 1.26kB 0.0s
=> [harvester internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 479B 0.0s
=> [api internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 698B 0.0s
=> [api internal] load .dockerignore 0.0s
=> => transferring context: 1.37kB 0.0s
=> [harvester internal] load metadata for docker.io/library/python:3.8-buster 3.3s
=> [polling internal] load .dockerignore 0.0s
=> => transferring context: 1.37kB 0.0s
=> [polling internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 763B 0.0s
=> [polling 1/7] FROM docker.io/library/python:3.8-buster@sha256:04c3f641c2254c229fd2f704c5199ff4bea57d26c1c29008ae3a4afddd 0.0s
=> [harvester internal] load build context 0.0s
=> => transferring context: 1.77kB 0.0s
=> [polling internal] load build context 0.0s
=> => transferring context: 142B 0.0s
=> [api internal] load build context 0.0s
=> => transferring context: 3.00kB 0.0s
=> CACHED [harvester 2/7] RUN mkdir -p /usr/src 0.0s
=> CACHED [harvester 3/7] WORKDIR /usr/src 0.0s
=> CACHED [harvester 4/7] RUN pip install --upgrade pip 0.0s
=> CACHED [harvester 5/7] COPY ./requirements.txt /usr/src/requirements.txt 0.0s
=> CACHED [harvester 6/7] RUN pip3 install -r requirements.txt 0.0s
=> CACHED [harvester 7/7] COPY . /usr/src 0.0s
=> CACHED [api 2/8] RUN apt-get update && apt-get upgrade -y && apt-get install -y git 0.0s
=> CACHED [api 3/8] RUN mkdir -p /usr/src 0.0s
=> CACHED [api 4/8] WORKDIR /usr/src 0.0s
=> CACHED [api 5/8] RUN pip install --upgrade pip 0.0s
=> CACHED [polling 6/10] COPY requirements_polling.txt /usr/src/requirements.txt 0.0s
=> CACHED [polling 7/10] RUN pip3 install -r requirements.txt 0.0s
=> CACHED [polling 8/10] COPY ./start-polling.sh /usr/src 0.0s
=> CACHED [polling 9/10] COPY ./polling.py /usr/src 0.0s
=> CACHED [polling 10/10] COPY ./gitcommit.py /usr/src 0.0s
=> CACHED [api 6/8] COPY ./requirements_api.txt /usr/src/requirements.txt 0.0s
=> CACHED [api 7/8] RUN pip3 install -r requirements.txt 0.0s
=> CACHED [api 8/8] COPY . /usr/src 0.0s
=> [harvester] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:1cde5a28c5c2b85de426b9f8a0c47d03066cf8e2d528fc480351d9ab7d8736c1 0.0s
=> => naming to docker.io/polkascan/harvester 0.0s
=> [polling] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:3eaea7912217a588fb9b891a931ae8a9903d14bcaaf785dab136b59682a26996 0.0s
=> => naming to docker.io/polkascan/explorer-polling 0.0s
=> [api] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:ed8ab9968fd61fbcb5f7ace091b6bc764192905209dc811ab66c36d9bbca91b3 0.0s
=> => naming to docker.io/polkascan/explorer-api 0.0s
=> [gui internal] load .dockerignore 0.0s
=> => transferring context: 72B 0.0s
=> [gui internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 2.79kB 0.0s
=> [gui internal] load metadata for docker.io/library/nginx:stable-alpine 2.7s
=> [gui internal] load metadata for docker.io/library/node:lts 1.5s
=> [gui stage-1 1/7] FROM docker.io/library/nginx:stable-alpine@sha256:62cabd934cbeae6195e986831e4f745ee1646c1738dbd609b136 0.0s
=> [gui internal] load build context 0.0s
=> => transferring context: 33.41kB 0.0s
=> [gui builder 1/27] FROM docker.io/library/node:lts@sha256:5f21943fe97b24ae1740da6d7b9c56ac43fe3495acb47c1b232b0a352b02a 0.0s
=> CACHED [gui stage-1 2/7] RUN rm -rf /etc/nginx/conf.d/* 0.0s
=> CACHED [gui stage-1 3/7] COPY nginx/explorer-ui.conf /etc/nginx/conf.d/ 0.0s
=> CACHED [gui stage-1 4/7] RUN rm -rf /usr/share/nginx/html/* 0.0s
=> CACHED [gui builder 2/27] WORKDIR /app/polkadapt 0.0s
=> CACHED [gui builder 3/27] COPY polkadapt/package.json . 0.0s
=> ERROR [gui builder 4/27] RUN npm i 2439.3s

[gui builder 4/27] RUN npm i:
2439.2 npm notice
2439.2 npm notice New minor version of npm available! 10.1.0 -> 10.2.4
2439.2 npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.2.4
2439.2 npm notice Run npm install -g npm@10.2.4 to update!
2439.2 npm notice
2439.2 npm ERR! code ECONNRESET
2439.2 npm ERR! errno ECONNRESET
2439.2 npm ERR! network Invalid response body while trying to fetch https://registry.npmjs.org/@typescript-eslint%2ftype-utils: aborted
2439.2 npm ERR! network This is a problem related to network connectivity.
2439.2 npm ERR! network In most cases you are behind a proxy or have bad network settings.
2439.2 npm ERR! network
2439.2 npm ERR! network If you are behind a proxy, please make sure that the
2439.2 npm ERR! network 'proxy' config is set properly. See: 'npm help config'
2439.2
2439.2 npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-11-21T04_42_59_306Z-debug-0.log


failed to solve: process "/bin/sh -c npm i" did not complete successfully: exit code: 1

All blocks showing `Awaiting finalization`

Hi, we connected the explorer to our parachain running at https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftestnet.bitgreen.org#/explorer. The explorer is completely synced and also picks up new blocks as they are produced, but the block details display "Awaiting finalization" on all blocks, as below:

[Screenshot attached]

Just wondering if this has to do with the chain being a parachain, or if it's something to do with our config.
We have not modified anything except the URL in the explorer-config YAML.
