
bsc-erigon's Introduction

Erigon

Erigon is an implementation of Ethereum (execution layer with embeddable consensus layer), on the efficiency frontier. Archive Node by default.

An accessible and complete version of the documentation is available at erigon.gitbook.io.


Disclaimer: this software is currently a tech preview. We will do our best to keep it stable and make no breaking changes but we don't guarantee anything. Things can and will break.

Important defaults: Erigon is an Archive Node by default (to remove history see the --prune flags in erigon --help). This setting cannot be changed after the first start.

In-depth links are marked by the microscope sign (🔬)

System Requirements

  • For an Archive node of Ethereum Mainnet we recommend >=3.5TB storage space: 2.3TiB state (as of March 2024), 643GiB snapshots (can symlink or mount folder <datadir>/snapshots to another disk), 200GB temp files (can symlink or mount folder <datadir>/temp to another disk). Ethereum Mainnet Full node ( see --prune* flags): 1.1TiB (March 2024).

  • Goerli Full node (see --prune* flags): 189GB on Beta, 114GB on Alpha (April 2022).

  • Gnosis Chain Archive: 1.7TiB (March 2024). Gnosis Chain Full node (--prune=hrtc flag): 530GiB (March 2024).

  • Polygon Mainnet Archive: 8.5TiB (December 2023). --prune.*.older 15768000: 5.1TB (September 2023). Polygon Mumbai Archive: 1TB (April 2022).

SSD or NVMe. We do not recommend HDD: on HDD Erigon will always stay N blocks behind the chain tip, but will not fall further behind. Bear in mind that SSD performance deteriorates when close to capacity.

RAM: >=16GB, 64-bit architecture.

Golang version >= 1.21; GCC 10+ or Clang; On Linux: kernel > v4

🔬 more details on disk storage here and here.

Usage

Getting Started

Building erigon requires both a Go (version 1.21 or later) and a C compiler (GCC 10+ or Clang). For building the latest release (this will be suitable for most users just wanting to run a node):

git clone --branch release/<x.xx> --single-branch https://github.com/ledgerwatch/erigon.git
cd erigon
make erigon
./build/bin/erigon

You can check the list of releases for release notes.

For building the bleeding edge development branch:

git clone --recurse-submodules https://github.com/ledgerwatch/erigon.git
cd erigon
git checkout devel
make erigon
./build/bin/erigon

--snapshots is enabled by default for mainnet, goerli, gnosis and chiado. Other networks currently default to --snapshots=false. Increase download speed with the flag --torrent.download.rate=20mb. 🔬 See Downloader docs

Use --datadir to choose where to store data.

Use --chain=gnosis for Gnosis Chain, --chain=bor-mainnet for Polygon Mainnet, --chain=mumbai for Polygon Mumbai and --chain=amoy for Polygon Amoy. For Gnosis Chain you need a Consensus Layer client alongside Erigon (https://docs.gnosischain.com/node/manual/beacon).

Running make help will list and describe the convenience commands available in the Makefile.

Datadir structure

  • chaindata: recent blocks, state, recent state history. low-latency disk recommended.
  • snapshots: old blocks, old state history. can symlink/mount it to cheaper disk. mostly immutable. must have ~100gb free space (for merge recent files to bigger one).
  • temp: can grow to ~100gb, but usually empty. can symlink/mount it to cheaper disk.
  • txpool: pending transactions. safe to remove.
  • nodes: p2p peers. safe to remove.
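Since snapshots (and temp) can live on cheaper storage, the symlink approach mentioned above can be sketched as a small helper. This is only a sketch: relocate_snapshots and all paths are illustrative, and Erigon should be stopped before moving the folder.

```shell
# Sketch: relocate <datadir>/snapshots to a cheaper disk, leaving a symlink
# behind so Erigon still finds it at the usual path. Paths are examples.
relocate_snapshots() {
  datadir="$1"; target="$2"
  mkdir -p "$target"
  if [ -d "$datadir/snapshots" ] && [ ! -L "$datadir/snapshots" ]; then
    mv "$datadir/snapshots" "$target/snapshots"    # move the real data
    ln -s "$target/snapshots" "$datadir/snapshots" # symlink back into datadir
  fi
}

# Example (stop Erigon first; paths are hypothetical):
#   relocate_snapshots "$HOME/.local/share/erigon" /mnt/bigdisk/erigon
```

The same pattern works for the temp folder.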

Logging

Flags:

  • verbosity
  • log.console.verbosity (overriding alias for verbosity)
  • log.json
  • log.console.json (alias for log.json)
  • log.dir.path
  • log.dir.prefix
  • log.dir.verbosity
  • log.dir.json

In order to log only to the stdout/stderr the --verbosity (or log.console.verbosity) flag can be used to supply an int value specifying the highest output log level:

  LvlCrit = 0
  LvlError = 1
  LvlWarn = 2
  LvlInfo = 3
  LvlDebug = 4
  LvlTrace = 5

To set an output directory for logs to be collected on disk, set --log.dir.path. If you want to change the filename produced by Erigon, also set the --log.dir.prefix flag to an alternate name. The flag --log.dir.verbosity controls the verbosity of this logging, taking the same int values as above or a string value, e.g. 'debug' or 'info'. Default verbosity for disk logging is 'debug' (4).

Log format can be set to JSON with the boolean flags --log.json or --log.console.json, or for the disk output --log.dir.json.
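Taken together, the logging flags can also be supplied through the configuration file mechanism described later in this document. A sketch in YAML form (all values here are illustrative, not defaults):

```yaml
log.console.verbosity : "info"     # console shows levels up to info (3)
log.dir.path : "/var/log/erigon"   # collect logs on disk in this directory
log.dir.prefix : "mynode"          # changes the produced log filename
log.dir.verbosity : "debug"        # disk logging verbosity (the default)
log.json : false                   # keep console output human-readable
```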

Modularity

By default Erigon is an all-in-one binary, but it is possible to start TxPool as a separate process. The same is true of the JSON-RPC layer (RPCDaemon), the p2p layer (Sentry), the history download layer (Downloader), and consensus. Don't start services as separate processes unless you have a clear reason for it: resource limiting, scaling, replacing a component with your own implementation, or security. For how to start Erigon's services as separate processes, see docker-compose.yml.

Embedded Consensus Layer

On Ethereum Mainnet, Görli, and Sepolia, the Engine API can be disabled in favour of the Erigon native Embedded Consensus Layer. If you want to use the internal Consensus Layer, run Erigon with flag --internalcl. Warning: Staking (block production) is not possible with the embedded CL.

Testnets

If you would like to give Erigon a try but do not have a spare 2TB on your drive, a good option is to start syncing one of the public testnets, Görli. It syncs much quicker and does not take up as much disk space:

git clone --recurse-submodules -j8 https://github.com/ledgerwatch/erigon.git
cd erigon
make erigon
./build/bin/erigon --datadir=<your_datadir> --chain=goerli

Please note the --datadir option, which allows you to store Erigon files in a non-default location. The name of the --datadir directory does not have to match the name of the chain in --chain.

Block Production (PoW Miner or PoS Validator)

Disclaimer: Not supported/tested for Gnosis Chain and Polygon Network (In Progress)

Only remote miners are supported.

  • To enable, add --mine --miner.etherbase=... or --mine --miner.sigkey=... flags.
  • Other supported options: --miner.extradata, --miner.notify, --miner.gaslimit, --miner.gasprice , --miner.gastarget
  • JSON-RPC supports methods: eth_coinbase , eth_hashrate, eth_mining, eth_getWork, eth_submitWork, eth_submitHashrate
  • JSON-RPC supports websocket methods: newPendingTransaction

🔬 Detailed explanation is here.

Windows

Windows users may run erigon in 3 possible ways:

  • Build executable binaries natively for Windows using the provided wmake.ps1 PowerShell script. Usage syntax is the same as the make command, so you have to run .\wmake.ps1 [-target] <targetname>. Example: .\wmake.ps1 erigon builds the erigon executable. All binaries are placed in the .\build\bin\ subfolder. There are some requirements for a successful native build on Windows:

    • Git for Windows must be installed. If you're cloning this repository, it's very likely you already have it.
    • The Go programming language must be installed. The minimum required version is 1.21.
    • GNU CC Compiler at least version 13 (it is highly suggested that you install the Chocolatey package manager - see the following point).
    • If you need to build MDBX tools (i.e. .\wmake.ps1 db-tools), then the Chocolatey package manager for Windows must be installed. Using Chocolatey, install the following components: cmake, make, mingw via choco install cmake make mingw. Make sure the Windows System "Path" variable contains: C:\ProgramData\chocolatey\lib\mingw\tools\install\mingw64\bin

    Important note about Anti-Viruses: During MinGW's compiler detection phase, some temporary executables are generated to test compiler capabilities. Some anti-virus programs have been reported to detect those files as possibly infected by the Win64/Kryptic.CIS trojan horse (or a variant of it). Although these are false positives, we have no control over the 100+ vendors of security products for Windows and their respective detection algorithms, and we understand this might make your experience with Windows builds uncomfortable. To work around the issue you can either add an exclusion in your antivirus specifically for the build\bin\mdbx\CMakeFiles sub-folder of the cloned repo, or run Erigon using one of the following two options.

  • Use Docker : see docker-compose.yml

  • Use WSL (Windows Subsystem for Linux), strictly version 2. Under this option you can build Erigon just as you would on a regular Linux distribution. You can also point your data to any of the mounted Windows partitions (e.g. /mnt/c/[...], /mnt/d/[...]), but be advised that performance is impacted: those mount points use DrvFS, which is a network file system, and additionally MDBX locks the DB for exclusive access, which implies only one process at a time can access the data. This has consequences for rpcdaemon, which must be configured as a remote DB even if it is executed on the very same computer. If instead your data is hosted on the native Linux filesystem, no limitations apply. Please also note that the default WSL2 environment has its own IP address, which does not match the one of the network interface of the Windows host: take this into account when configuring NAT for port 30303 on your router.

Using TOML or YAML Config Files

You can set Erigon flags through a YAML or TOML configuration file with the flag --config. Flags set in the configuration file can be overridden by writing the flags directly on the Erigon command line.

Example

./build/bin/erigon --config ./config.yaml --chain=goerli

Assuming we have chain : "mainnet" in our configuration file, adding --chain=goerli overrides the flag inside the YAML configuration file and sets the chain to goerli.

TOML

Example of setting up TOML config file

datadir = 'your datadir'
port = 1111
chain = "mainnet"
http = true
"private.api.addr"="localhost:9090"

"http.api" = ["eth","debug","net"]

YAML

Example of setting up a YAML config file

datadir : 'your datadir'
port : 1111
chain : "mainnet"
http : true
private.api.addr : "localhost:9090"

http.api : ["eth","debug","net"]

Beacon Chain (Consensus Layer)

Erigon can be used as an Execution Layer (EL) for Consensus Layer clients (CL). Default configuration is OK.

If your CL client is on a different device, add --authrpc.addr 0.0.0.0 (Engine API listens on localhost by default) as well as --authrpc.vhosts <CL host> where <CL host> is your source host or any.

In order to establish a secure connection between the Consensus Layer and the Execution Layer, a JWT secret key is automatically generated.

The JWT secret key will be present in the datadir by default under the name of jwt.hex and its path can be specified with the flag --authrpc.jwtsecret.

This piece of information needs to be specified in the Consensus Layer as well in order to establish a connection successfully. More information can be found here.

Once Erigon is running, you need to point your CL client to <erigon address>:8551, where <erigon address> is either localhost or the IP address of the device running Erigon, and also point to the JWT secret path created by Erigon.
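If you prefer to supply the JWT secret yourself rather than rely on the auto-generated one, any 32-byte hex value works. A sketch assuming openssl is available (the file path is an example, not a default):

```shell
# Generate a 32-byte (64 hex characters) JWT secret.
openssl rand -hex 32 > /tmp/jwt.hex

# Then point both sides at the same file (illustrative):
#   erigon ... --authrpc.jwtsecret=/tmp/jwt.hex
#   <CL client> ... --jwt-secret=/tmp/jwt.hex   # exact flag name varies per CL client
```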

Caplin

Caplin is a full-fledged validating Consensus Client like Prysm, Lighthouse, Teku, Nimbus and Lodestar. Its goals are:

  • provide better stability
  • validate the chain
  • stay in sync
  • keep the execution of blocks on the chain tip
  • serve the Beacon API using a fast and compact data model alongside low CPU and memory usage

The main reason we developed a new Consensus Layer is to experiment with the possible benefits that could come with it. For example, the Engine API does not work well with Erigon: it sends data one block at a time, which does not suit how Erigon works. Erigon is designed to handle many blocks simultaneously and needs to sort and process data efficiently. Therefore, it is better for Erigon to handle the blocks independently instead of relying on the Engine API.

Caplin's Usage

Caplin can be enabled through the --internalcl flag. From that point on, an external Consensus Layer is no longer needed.

Caplin also has an archival mode for historical states and blocks; it can be enabled through the --caplin.archive flag. In order to enable Caplin's Beacon API, the flag --beacon.api=<namespaces> must be added, e.g. --beacon.api=beacon,builder,config,debug,node,validator,lighthouse will enable all endpoints. Note: Caplin is not staking-ready, so aggregation endpoints are still to be implemented. Additionally, enabling the Beacon API will lead to ~6 GB higher RAM usage.

Multiple Instances / One Machine

Define 6 flags to avoid conflicts: --datadir --port --http.port --authrpc.port --torrent.port --private.api.addr. Example of multiple chains on the same machine:

# mainnet
./build/bin/erigon --datadir="<your_mainnet_data_path>" --chain=mainnet --port=30303 --http.port=8545 --authrpc.port=8551 --torrent.port=42069 --private.api.addr=127.0.0.1:9090 --http --ws --http.api=eth,debug,net,trace,web3,erigon


# sepolia
./build/bin/erigon --datadir="<your_sepolia_data_path>" --chain=sepolia --port=30304 --http.port=8546 --authrpc.port=8552 --torrent.port=42068 --private.api.addr=127.0.0.1:9091 --http --ws --http.api=eth,debug,net,trace,web3,erigon

Quote your path if it has spaces.

Dev Chain

🔬 Detailed explanation is in DEV_CHAIN.

Key features

🔬 See a more detailed overview of functionality and current limitations. It is being updated on a recurring basis.

More Efficient State Storage

Flat KV storage. Erigon uses a key-value database, storing accounts and storage in a simple way.

🔬 See our detailed DB walkthrough here.

Preprocessing. For some operations, Erigon uses temporary files to preprocess data before inserting it into the main DB. That reduces write amplification and DB inserts are orders of magnitude quicker.

🔬 See our detailed ETL explanation here.

Plain state.

Single accounts/state trie. Erigon uses a single Merkle trie for both accounts and the storage.

Faster Initial Sync

Erigon uses a rearchitected full sync algorithm from Go-Ethereum that is split into "stages".

🔬 See more detailed explanation in the Staged Sync Readme

It uses the same network primitives and is compatible with regular go-ethereum nodes that are using full sync; you do not need any special sync capabilities for Erigon to sync.

The full sync was reimagined with a focus on batching data together and minimizing DB overwrites. That makes it possible to sync Ethereum mainnet in under 2 days if you have a fast enough network connection and an SSD drive.

Examples of stages are:

  • Downloading headers;

  • Downloading block bodies;

  • Recovering senders' addresses;

  • Executing blocks;

  • Validating root hashes and building intermediate hashes for the state Merkle trie;

  • [...]

JSON-RPC daemon

Most of Erigon's components (txpool, rpcdaemon, snapshots downloader, sentry, ...) can work inside Erigon or as an independent process.

To enable the built-in RPC server: --http and --ws (WebSockets share the same port as HTTP).

RPCDaemon can also run as a separate process: it can use a local DB (alongside a running Erigon, or on a snapshot of a database) or a remote DB (running on another server). 🔬 See RPC-Daemon docs

For remote DB

This works regardless of whether RPC daemon is on the same computer with Erigon, or on a different one. They use TPC socket connection to pass data between them. To use this mode, run Erigon in one terminal window

make erigon
./build/bin/erigon --private.api.addr=localhost:9090 --http=false
make rpcdaemon
./build/bin/rpcdaemon --private.api.addr=localhost:9090 --http.api=eth,erigon,web3,net,debug,trace,txpool

gRPC ports

9090 erigon, 9091 sentry, 9092 consensus engine, 9093 torrent downloader, 9094 transactions pool
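Before starting extra components, it can help to check whether these default gRPC ports are already taken. A quick sketch (Linux; uses ss from iproute2 when present):

```shell
# Report which of Erigon's default gRPC ports are already in use locally.
for port in 9090 9091 9092 9093 9094; do
  if command -v ss >/dev/null 2>&1 && ss -ltn | grep -q ":$port "; then
    echo "port $port: in use"
  else
    echo "port $port: free"
  fi
done
```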

Supported JSON-RPC calls (eth, debug, net, web3):

For details on the implementation status of each command, see this table.

Run all components with docker-compose

Docker allows for building and running Erigon via containers. This alleviates the need for installing build dependencies onto the host OS.

Optional: Setup dedicated user

User UID/GID need to be synchronized between the host OS and container so files are written with the correct permissions.

You may wish to set up a dedicated user/group on the host OS, in which case the following make targets are available.

# create "erigon" user
make user_linux
# or
make user_macos

Environment Variables

There is a .env.example file in the root of the repo.

  • DOCKER_UID - The UID of the docker user
  • DOCKER_GID - The GID of the docker user
  • XDG_DATA_HOME - The data directory which will be mounted to the docker containers

If not specified, the UID/GID will use the current user.

A good choice for XDG_DATA_HOME is to use the ~erigon/.ethereum directory created by helper targets make user_linux or make user_macos.

Check: Permissions

In all cases, XDG_DATA_HOME (specified or default) must be writeable by the user UID/GID in docker, which will be determined by the DOCKER_UID and DOCKER_GID at build time.

If a build or service startup is failing due to permissions, check that all the directories, UID, and GID controlled by these environment variables are correct.
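A quick pre-flight version of this check might look like the following sketch (variable names follow the .env convention above; the default value is illustrative):

```shell
# Verify the data directory exists and is writable by the current user
# before building the containers.
XDG_DATA_HOME="${XDG_DATA_HOME:-$HOME/.local/share}"

if [ -d "$XDG_DATA_HOME" ] && [ -w "$XDG_DATA_HOME" ]; then
  echo "ok: $XDG_DATA_HOME is writable by uid=$(id -u) gid=$(id -g)"
else
  echo "error: $XDG_DATA_HOME missing or not writable" >&2
fi
```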

Run

The next command starts: Erigon on port 30303, rpcdaemon on port 8545, prometheus on port 9090, and grafana on port 3000.

#
# Will mount ~/.local/share/erigon to /home/erigon/.local/share/erigon inside container
#
make docker-compose

#
# or
#
# if you want to use a custom data directory
# or, if you want to use different uid/gid for a dedicated user
#
# To solve this, pass in the uid/gid parameters into the container.
#
# DOCKER_UID: the user id
# DOCKER_GID: the group id
# XDG_DATA_HOME: the data directory (default: ~/.local/share)
#
# Note: /preferred/data/folder must be read/writeable on host OS by user with UID/GID given
#       if you followed above instructions
#
# Note: uid/gid syntax below will automatically use uid/gid of running user so this syntax
#       is intended to be run via the dedicated user setup earlier
#
DOCKER_UID=$(id -u) DOCKER_GID=$(id -g) XDG_DATA_HOME=/preferred/data/folder DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 make docker-compose

#
# if you want to run the docker, but you are not logged in as the $ERIGON_USER
# then you'll need to adjust the syntax above to grab the correct uid/gid
#
# To run the command via another user, use
#
ERIGON_USER=erigon
sudo -u ${ERIGON_USER} DOCKER_UID=$(id -u ${ERIGON_USER}) DOCKER_GID=$(id -g ${ERIGON_USER}) XDG_DATA_HOME=~${ERIGON_USER}/.ethereum DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 make docker-compose

The Makefile creates the initial directories for erigon, prometheus and grafana. The PID namespace is shared between erigon and rpcdaemon, which is required to open Erigon's DB from another process (RPCDaemon local mode). See: https://github.com/ledgerwatch/erigon/pull/2392/files

If your docker installation requires the docker daemon to run as root (which it does by default), you will need to prefix the command above with sudo. However, it is sometimes recommended to run docker (and therefore its containers) as a non-root user for security reasons. For more information about how to do this, refer to this article.

Windows support for docker-compose is not ready yet. Please help us with .ps1 port.

Grafana dashboard

docker compose up prometheus grafana, detailed docs.

old data

Pruning of old data is disabled by default. To enable it, see the --prune flags in ./build/bin/erigon --help.

Documentation

The ./docs directory includes a lot of useful but outdated documentation. For code located in the ./cmd directory, their respective documentation can be found in ./cmd/*/README.md. A more recent collation of developments and happenings in Erigon can be found in the Erigon Blog.

FAQ

How much RAM do I need

  • Baseline (ext4 SSD): with 16GB RAM sync takes 6 days, with 32GB 5 days, with 64GB 4 days
  • +1 day with "zfs compression=off", +2 days with "zfs compression=on" (2x compression ratio), +3 days on btrfs
  • -1 day on NVMe

Detailed explanation: ./docs/programmers_guide/db_faq.md

Default Ports and Firewalls

erigon ports

Component Port Protocol Purpose Should Expose
engine 9090 TCP gRPC Server Private
engine 42069 TCP & UDP Snap sync (Bittorrent) Public
engine 8551 TCP Engine API (JWT auth) Private
sentry 30303 TCP & UDP eth/68 peering Public
sentry 30304 TCP & UDP eth/67 peering Public
sentry 9091 TCP incoming gRPC Connections Private
rpcdaemon 8545 TCP HTTP & WebSockets & GraphQL Private

Typically, 30303 and 30304 are exposed to the internet to allow incoming peering connections. 9090 is exposed only internally for rpcdaemon or other connections, (e.g. rpcdaemon -> erigon). Port 8551 (JWT authenticated) is exposed only internally for Engine API JSON-RPC queries from the Consensus Layer node.

caplin ports

Component Port Protocol Purpose Should Expose
sentinel 4000 UDP Peering Public
sentinel 4001 TCP Peering Public

If you are using --internalcl (aka Caplin) as your consensus client, then also look at the chart above.

beaconAPI ports

Component Port Protocol Purpose Should Expose
REST 5555 TCP REST Public

If you are using --internalcl (aka Caplin) as your consensus client and --beacon.api, then also look at the chart above.

shared ports

Component Port Protocol Purpose Should Expose
all 6060 TCP pprof Private
all 6060 TCP metrics Private

Optional flags can be enabled that enable pprof or metrics (or both); however, they both run on port 6060 by default, so you'll have to change one if you want to run both at the same time. Use --help with the binary for more info.

other ports

Reserved for future use: gRPC ports: 9092 consensus engine, 9093 snapshot downloader, 9094 TxPool

Hetzner expects strict firewall rules

0.0.0.0/8             "This" Network                                RFC 1122, Section 3.2.1.3
10.0.0.0/8            Private-Use Networks                          RFC 1918
100.64.0.0/10         Carrier-Grade NAT (CGN)                       RFC 6598, Section 7
127.0.0.0/8           Loopback                                      RFC 1122, Section 3.2.1.3
169.254.0.0/16        Link Local                                    RFC 3927
172.16.0.0/12         Private-Use Networks                          RFC 1918
192.0.0.0/24          IETF Protocol Assignments                     RFC 5736
192.0.2.0/24          TEST-NET-1                                    RFC 5737
192.88.99.0/24        6to4 Relay Anycast                            RFC 3068
192.168.0.0/16        Private-Use Networks                          RFC 1918
198.18.0.0/15         Network Interconnect Device Benchmark Testing RFC 2544
198.51.100.0/24       TEST-NET-2                                    RFC 5737
203.0.113.0/24        TEST-NET-3                                    RFC 5737
224.0.0.0/4           Multicast                                     RFC 3171
240.0.0.0/4           Reserved for Future Use                       RFC 1112, Section 4
255.255.255.255/32    Limited Broadcast                             RFC 919, Section 7; RFC 922, Section 7

Same in IpTables syntax

How to run erigon as a separate user? (e.g. as a systemd daemon)

Running erigon from build/bin as a separate user might produce an error:

error while loading shared libraries: libsilkworm_capi.so: cannot open shared object file: No such file or directory

The library needs to be installed for another user using make DIST=<path> install. You could use $HOME/erigon or /opt/erigon as the installation path, for example:

make DIST=/opt/erigon install

and then run /opt/erigon/erigon.

How to get diagnostic for bug report?

  • Get stack trace: kill -SIGUSR1 <pid>; get trace and stop: kill -6 <pid>
  • Get CPU profiling: add the --pprof flag, then run go tool pprof -png http://127.0.0.1:6060/debug/pprof/profile\?seconds\=20 > cpu.png
  • Get RAM profiling: add the --pprof flag, then run go tool pprof -inuse_space -png http://127.0.0.1:6060/debug/pprof/heap > mem.png

How to run local devnet?

🔬 Detailed explanation is here.

Docker permissions error

Docker uses user erigon with UID/GID 1000 (for security reasons). You can see this user being created in the Dockerfile. You can fix the error by giving a host user ownership of the folder, where that user's UID/GID matches the docker user's UID/GID (1000). More details in this post.

How to run public RPC api

  • --txpool.nolocals=true
  • don't add admin in --http.api list
  • to increase throughput, you may need to tune: --db.read.concurrency, --rpc.batch.concurrency, --rpc.batch.limit

Run on Raspberry Pi

https://github.com/mathMakesArt/Erigon-on-RPi-4

How to change db pagesize

post

Getting in touch

Erigon Discord Server

The main discussions are happening on our Discord server. To get an invite, send an email to bloxster [at] proton.me with your name, occupation, a brief explanation of why you want to join the Discord, and how you heard about Erigon.

Reporting security issues/concerns

Send an email to security [at] torquem.ch.

Known issues

htop shows incorrect memory usage

Erigon's internal DB (MDBX) uses a memory map: the OS manages all read, write and cache operations instead of the application (Linux, Windows).

In htop, the RES column shows "App memory + OS page cache held for the App", which is not informative: even if htop says the app is using 90% of memory, you can still run 3 more instances of the app on the same machine, because most of that 90% is OS page cache. The OS automatically frees this cache whenever it needs memory. A smaller page cache may not impact Erigon's performance at all.

The following tools show Erigon's memory usage correctly:

  • vmmap -summary PID | grep -i "Physical footprint". Without grep you can see details

    • section MALLOC ZONE column Resident Size shows App memory usage, section REGION TYPE column Resident Size shows OS pages cache size.
  • Prometheus dashboard shows memory of the Go app without OS page cache (make prometheus, open localhost:3000 in a browser, credentials admin/admin)

  • cat /proc/<PID>/smaps

    Erigon uses ~4GB of RAM during genesis sync and ~1GB during normal work. The OS page cache can use an unlimited amount of memory.

    Warning: Multiple instances of Erigon on the same machine will touch the disk concurrently, which impacts performance; one of Erigon's main optimisations is "reduce disk random access". The "Blocks Execution" stage still does many random reads, which is why it is the slowest stage. We do not recommend running multiple genesis syncs on the same disk. Once genesis sync has passed, it is fine to run multiple Erigon instances on the same disk.
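As a concrete sketch of the /proc/<PID>/smaps approach mentioned above, resident memory can be summed like this (Linux only; the shell's own PID is used as a stand-in example):

```shell
# Sum the Rss fields (in kB) from smaps to get the true resident memory of a
# process. $$ is just an example PID; substitute Erigon's PID in practice.
pid=$$
awk '/^Rss:/ { sum += $2 } END { printf "%d kB resident\n", sum }' "/proc/$pid/smaps"
```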

Blocks Execution is slow on cloud-network-drives

Please read ledgerwatch#1516 (comment). In short: network disks are bad for blocks execution, because block execution reads data from the DB in a non-parallel, non-batched way.

Filesystem's background features are expensive

For example: btrfs's autodefrag option can increase write IO by 100x.

Gnome Tracker can kill Erigon

Gnome Tracker may detect Erigon as a miner and kill it.

the --mount option requires BuildKit error

If you get the BuildKit error when trying to start Erigon the old way, you can use the following:

XDG_DATA_HOME=/preferred/data/folder DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 make docker-compose


bsc-erigon's Issues

Stuck at 1/15 Snapshots: Waiting for torrents metadata

System information

Erigon version: ./erigon --version
erigon version 2.40.0-dev-3da15fcb

OS & Version: Windows/Linux/OSX
Linux

Commit hash: 3da15fc

Erigon Command (with flags/config):

/usr/local/bin/erigon --snapshots=true \
	--chain=bsc \
	--datadir=/mnt/data/.bsc \
	--http \
	--http.addr=0.0.0.0 \
	--http.port=8545 \
	--http.api=eth,debug,net,trace,web3,erigon \
	--log.dir.path=/home/ubuntu/bsc-mainnet/logs \
	--log.dir.verbosity=info \
	--torrent.upload.rate=16mb \
	--torrent.download.rate=250mb \
	--db.pagesize=16k

Consensus Layer:

Consensus Layer Command (with flags/config):

Chain/Network: 56

Expected behaviour

Can download snapshots

Actual behaviour

Always stuck at Waiting for torrents metadata: 1/138

t=2023-04-12T18:00:16+0700 lvl=info msg="[p2p] GoodPeers" eth66=31 eth68=3
t=2023-04-12T18:00:16+0700 lvl=info msg="[txpool] stat" pending=1 baseFee=0 queued=14 alloc=66.7MB sys=182.7MB
t=2023-04-12T18:00:16+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:00:36+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:00:56+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:01:16+0700 lvl=info msg="[txpool] stat" pending=1 baseFee=0 queued=14 alloc=73.4MB sys=182.7MB
t=2023-04-12T18:01:16+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:01:36+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:01:56+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:02:16+0700 lvl=info msg="[p2p] GoodPeers" eth66=31 eth68=3
t=2023-04-12T18:02:16+0700 lvl=info msg="[txpool] stat" pending=1 baseFee=0 queued=14 alloc=113.8MB sys=182.7MB
t=2023-04-12T18:02:16+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:02:36+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:02:56+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:03:16+0700 lvl=info msg="[txpool] stat" pending=1 baseFee=0 queued=14 alloc=77.7MB sys=182.7MB
t=2023-04-12T18:03:16+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"
t=2023-04-12T18:03:36+0700 lvl=info msg="[1/15 Snapshots] Waiting for torrents metadata: 1/138"

Steps to reproduce the behaviour

Backtrace

[backtrace]

Please change version numbers for fork

If possible, it would be great if the queried version numbers could be changed to reflect that this is a fork; this would make version tracking against your repo much easier.

It could still reference the base Erigon version, but could also reflect the version number of this repo, something like:

bsc-erigon 1.0.2 upstream version 2.40.0-dev-3da15fcb maybe

FAQ: Solution For Common Issues

Rationale

For some common issues encountered, such as the following:

  • No block header/body write.
  • Body download speed is slow
  • Database is too large for current system

Solution

Please try specifying these flags on the command line when starting the node:

  • --p2p.protocol=68 (BSC removed eth/66 and eth/67 after v1.4.x)
  • --db.pagesize=16k

Also, in some cases (especially reboot scenarios) it helps to unwind some blocks:

  1. Stop erigon node
  2. run make integration
  3. run ./build/bin/integration stage_exec --datadir ./data/ --chain bsc --unwind 10
  4. Start erigon node

FAQs

Q1: mdbx_env_open: MDBX_TOO_LARGE? refer: #38

Q2: Any suggested startup command line?

Here is the command line we use to start on BSC Mainnet; adapt it to your needs.

Note: we have ample SSD storage (14TB) to support an archive node, so we set --db.pagesize=16k for fast DB performance. You may configure it according to your hardware.

./erigon --bodies.cache=214748364800 --batchSize=4096M --txpool.disable --metrics.addr=0.0.0.0 --log.console.verbosity=eror --log.dir.verbosity=dbug --http --ws --http.api=web3,net,eth,debug,trace,txpool,admin --http.addr=0.0.0.0 --db.pagesize=16k --datadir ${workspace}/data --private.api.addr=localhost:9090 --chain=bsc --metrics 

Q3: OOM Crash?

Try omitting the body cache and batch size flags: --bodies.cache=214748364800 --batchSize=4096M.
These two flags place a heavy load on RAM. Similar issue: #39
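For a sense of scale, the flag values are easier to read once converted from bytes. A quick arithmetic check (plain shell, not tied to Erigon itself):

```shell
# --bodies.cache=214748364800 is exactly 200 GiB expressed in bytes:
bodies_cache_bytes=$((200 * 1024 * 1024 * 1024))
echo "$bodies_cache_bytes"    # prints 214748364800

# --batchSize=4096M is 4 GiB:
echo "$((4096 / 1024)) GiB"   # prints "4 GiB"
```

On a machine with the recommended 16 GB of RAM, the bodies cache alone is over ten times physical memory, which is consistent with the OOM crashes reported here.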

Q4: For the lagging sync issue, see: #51

Q5: Log with "DumpBlocks: DumpHeaders: header missed in db:"

It won't affect sync. Stop erigon and run ./integration stage_headers --reset --chain=bsc --datadir=yourdata to fix it.

[5/15 Bodies] No block bodies to write in this log period block number=22999999

System information

ubuntu@ip-172-31-0-234:/data/erigon$ git remote -v 
origin  https://github.com/node-real/bsc-erigon.git (fetch)
origin  https://github.com/node-real/bsc-erigon.git (push)
ubuntu@ip-172-31-0-234:/data/erigon$ git branch
* (HEAD detached at v1.0.2)
  devel
ubuntu@ip-172-31-0-234:/data/erigon$ ./build/bin/erigon --version
erigon version 2.40.0-dev-3da15fcb

OS & Version: Linux (Ubuntu 22.04)

Commit hash:

Erigon Command (with flags/config):

./erigon/build/bin/erigon \
    --datadir=/data/snapshot/erigon \
    --nat any \
    --http \
    --http.addr=0.0.0.0 \
    --http.port=8546 \
    --http.vhosts=* \
    --http.api=eth,trace,web3 \
    --networkid=56 \
    --chain=bsc \
    --log.dir.path /data/log \
    --metrics \
    --metrics.addr=0.0.0.0 \
    --metrics.port=6060

Consensus Layer: NA

Consensus Layer Command (with flags/config): NA

Chain/Network: bsc / 56

Expected behaviour

Syncing

Actual behaviour

stuck at bodies stage

Steps to reproduce the behaviour

Launch the client; sync gets stuck at a fixed block number.

Backtrace

INFO[04-13|09:39:15.654] HTTP endpoint opened                     url=[::]:8546 ws=false ws.compression=true grpc=false
INFO[04-13|09:39:15.657] [1/15 Snapshots] Fetching torrent files metadata 
INFO[04-13|09:39:15.665] Started P2P networking                   version=68 self=enode://d6671bee3ebdf08e248546301fb680ecbe33f7451aadef054bc09c772e108f239dccc98c3325054cb3b2e0f91c5de75d7266808894d48a00faa3a720d1ec667f@127.0.0.1:30304 name=erigon/v2.40.0-dev-3da15fcb/linux-amd64/go1.18.1
INFO[04-13|09:39:15.671] [snapshots] Blocks Stat                  blocks=23000k indices=23000k alloc=2.8GB sys=3.0GB
INFO[04-13|09:39:15.678] [2/15 Headers] Waiting for headers...    from=27305933
INFO[04-13|09:39:15.698] Started P2P networking                   version=67 self=enode://d6671bee3ebdf08e248546301fb680ecbe33f7451aadef054bc09c772e108f239dccc98c3325054cb3b2e0f91c5de75d7266808894d48a00faa3a720d1ec667f@127.0.0.1:30303 name=erigon/v2.40.0-dev-3da15fcb/linux-amd64/go1.18.1
INFO[04-13|09:39:28.713] [2/15 Headers] Processed                 highest inserted=27308655 age=42s
INFO[04-13|09:39:31.879] [5/15 Bodies] Processing bodies...       from=22999999 to=27308655
INFO[04-13|09:39:51.881] [5/15 Bodies] No block bodies to write in this log period block number=22999999
INFO[04-13|09:40:11.880] [5/15 Bodies] No block bodies to write in this log period block number=22999999
INFO[04-13|09:40:15.636] [txpool] stat                            pending=1 baseFee=0 queued=14 alloc=3.3GB sys=3.5GB
INFO[04-13|09:40:31.881] [5/15 Bodies] No block bodies to write in this log period block number=22999999

Doesn't the Docker image follow the release?

Our servers all run CentOS 7, and the release binary is missing required library versions on my machine:

./erigon: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by ./erigon)
./erigon: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by ./erigo

I don't want to make any destructive changes to my system.

Thanks~

No block headers to write in this log period block number=XXXX

System information

Erigon version: 1.0.3 and devel

OS & Version: Linux

Erigon Command (with flags/config): /root/bsc-erigon/build/bin/erigon --rpc.batch.limit=10000 --datadir=/data/erigon/ --private.api.addr=0.0.0.0:9090 --chain bsc --http.addr=0.0.0.0 --http.vhosts=* --http.corsdomain=* --http.api=eth,erigon,web3,net,debug,trace,txpool --db.pagesize=16kb --ws --snap.stop

Chain/Network: BSC

Apr 21 09:27:55 rpc-bsc-erigon-prod-01 erigon[3255069]: [INFO] [04-21|09:27:55.776] [snapshots] Blocks Stat                  blocks=23000k indices=23000k alloc=3.0GB sys=3.2GB
Apr 21 09:27:55 rpc-bsc-erigon-prod-01 erigon[3255069]: [INFO] [04-21|09:27:55.776] [2/15 Headers] Waiting for headers...    from=27538092
Apr 21 09:28:15 rpc-bsc-erigon-prod-01 erigon[3255069]: [INFO] [04-21|09:28:15.777] [2/15 Headers] Wrote block headers       number=27538127 blk/second=1.750 alloc=2.5GB sys=4.6GB
Apr 21 09:28:35 rpc-bsc-erigon-prod-01 erigon[3255069]: [INFO] [04-21|09:28:35.776] [2/15 Headers] No block headers to write in this log period block number=27538127
Apr 21 09:28:55 rpc-bsc-erigon-prod-01 erigon[3255069]: [INFO] [04-21|09:28:55.723] [txpool] stat                            pending=0 baseFee=0 queued=0 alloc=3.2GB sys=4.9GB
Apr 21 09:28:55 rpc-bsc-erigon-prod-01 erigon[3255069]: [INFO] [04-21|09:28:55.777] [2/15 Headers] No block headers to write in this log period block number=27538127
Apr 21 09:29:15 rpc-bsc-erigon-prod-01 erigon[3255069]: [INFO] [04-21|09:29:15.779] [2/15 Headers] No block headers to write in this log period block number=27538127
Apr 21 09:29:35 rpc-bsc-erigon-prod-01 erigon[3255069]: [INFO] [04-21|09:29:35.776] [2/15 Headers] No block headers to write in this log period block number=27538127

I have 2 nodes. Both stopped syncing. Usually syncing resumes after a restart, but now it does not. Any ideas?

Memory Leak? OOM Crash on v1.0.3

System information

Erigon version: ./erigon v1.0.3

OS & Version: Linux
Commit hash: 065538d5786ac0c83970c8213538924bc17e3c37

Erigon Command (with flags/config):

./erigon --p2p.protocol 66 --nodiscover --log.dir.path ./  --db.size.limit=400GB --bodies.cache=214748364800 --batchSize=4096M --db.pagesize=16k --datadir ./ --private.api.addr=localhost:9099 --log.console.verbosity 3

Consensus Layer:
NA
Consensus Layer Command (with flags/config):
NA
Chain/Network:
NA

Expected behaviour

BSC Erigon node can run without crash.

Actual behaviour

It crashed after being stuck in the Headers stage for a while.
Could there be a memory leak?

Steps to reproduce the behaviour

Backtrace

...
[DBUG] [04-26|04:31:15.231] [txpool] Commit                          written_kb=0 in=1.940005903s[INFO] [04-26|04:31:22.295] [p2p] GoodPeers                          eth66=1[INFO] [04-26|04:31:22.825] [txpool] stat                            pending=0 baseFee=0 queued=0 alloc=5.6GB sys=6.1GB[INFO] [04-26|04:31:25.453] [2/15 Headers] No block headers to write in this log period block number=199
fatal error: runtime: out of memory

runtime stack:
runtime.throw({0x28f4efc?, 0x2030?})
        runtime/panic.go:1047 +0x5d fp=0x7fdd0d699ca8 sp=0x7fdd0d699c78 pc=0x47771d
runtime.sysMapOS(0xc173c00000, 0x400000?)
        runtime/mem_linux.go:187 +0x11b fp=0x7fdd0d699cf0 sp=0x7fdd0d699ca8 pc=0x45617b
runtime.sysMap(0xc3ffffffff?, 0x7fdd3d048000?, 0x46b460?)
        runtime/mem.go:142 +0x35 fp=0x7fdd0d699d20 sp=0x7fdd0d699cf0 pc=0x455b55
runtime.(*mheap).grow(0x4d6a580, 0xf?)        
        runtime/mheap.go:1468 +0x23d fp=0x7fdd0d699d90 sp=0x7fdd0d699d20 pc=0x46851d
runtime.(*mheap).allocSpan(0x4d6a580, 0xf, 0x0, 0x1)
        runtime/mheap.go:1199 +0x1be fp=0x7fdd0d699e28 sp=0x7fdd0d699d90 pc=0x467c5e
runtime.(*mheap).alloc.func1()
        runtime/mheap.go:918 +0x65 fp=0x7fdd0d699e70 sp=0x7fdd0d699e28 pc=0x4676e5
runtime.systemstack()        
        runtime/asm_amd64.s:492 +0x49 fp=0x7fdd0d699e78 sp=0x7fdd0d699e70 pc=0x4a9be9

Can't sync BSC testnet from scratch

System information

Erigon version: 1.0.8

OS & Version: Ubuntu 22.04

Erigon Command (with flags/config):

/home/blockchain/bsc-testnet/erigon \
 --chain=chapel \
 --snapshots=true \
 --datadir=/home/blockchain/bsc-testnet/chaindata \
 --db.pagesize=4kb \
   --port=30304 \
  --authrpc.port=8551 \
  --maxpeers=1000 \
    --private.api.addr=127.0.0.1:9090 \
  --bootnodes="enode://69a90b35164ef862185d9f4d2c5eff79b92acd1360574c0edf36044055dc766d87285a820233ae5700e11c9ba06ce1cf23c1c68a4556121109776ce2a3990bba@52.199.214.252:30311","enode://330d768f6de90e7825f0ea6fe59611ce9d50712e73547306846a9304663f9912bf1611037f7f90f21606242ded7fb476c7285cb7cd792836b8c0c5ef0365855c@18.181.52.189:30311","enode://df1e8eb59e42cad3c4551b2a53e31a7e55a2fdde1287babd1e94b0836550b489ba16c40932e4dacb16cba346bd442c432265a299c4aca63ee7bb0f832b9f45eb@52.51.80.128:30311","enode://0bd566a7fd136ecd19414a601bfdc530d5de161e3014033951dd603e72b1a8959eb5b70b06c87a5a75cbf45e4055c387d2a842bd6b1bd8b5041b3a61bab615cf@34.242.33.165:30311","enode://604ed87d813c2b884ff1dc3095afeab18331f3cc361e8fb604159e844905dfa6e4c627306231d48f46a2edceffcd069264a89d99cdbf861a04e8d3d8d7282e8a@3.209.122.123:30311","enode://4d358eca87c44230a49ceaca123c89c7e77232aeae745c3a4917e607b909c4a14034b3a742960a378c3f250e0e67391276e80c7beee7770071e13f33a5b0606a@52.72.123.113:30311" \
  --torrent.port=42069 \
 --metrics --metrics.port=8676 --metrics.addr=0.0.0.0

Chain/Network: Chapel Testnet

Expected behaviour

Node syncs from 0 to the latest block smoothly.

Actual behaviour

After downloading headers & bodies and recovering senders, execution fails at block 90:

Steps to reproduce the behaviour

Just start testnet node with empty chaindata dir.

Backtrace

[INFO] [05-19|17:17:43.137] [5/15 Bodies] Processed                  highest=29939319
[INFO] [05-19|17:17:43.137] [5/15 Bodies] DONE                       in=1h31m44.252406649s
[INFO] [05-19|17:17:43.139] [6/15 Senders] Started                   from=89 to=29939319
...
[INFO] [05-19|17:50:35.407] [6/15 Senders] ETL [2/2] Loading         into=TxSender block=25821735
[INFO] [05-19|17:50:39.886] [txpool] stat                            pending=0 baseFee=0 queued=31 alloc=5.5GB sys=14.4GB
[INFO] [05-19|17:50:41.011] [6/15 Senders] DONE                      in=32m57.873295154s
[INFO] [05-19|17:50:41.011] [7/15 Execution] Blocks execution        from=89 to=29939319
[WARN] [05-19|17:50:41.027] [7/15 Execution] Execution failed        block=90 hash=0xc137446336ab18878949947a44d234afeb37ebff7bf5b70c64eb221652b3a86e err="expected system tx (hash 0xc27cd46b32b596c89b7a625ec6aea651992bb330ee34d355c558aaaf5fee731d, nonce 2, to 0x0000000000000000000000000000000000001002, value 0x8a7c0b96a5c0, gas 9223372036854775807, gasPrice 0x0, data ), actual tx (hash 0xff3c7d2fd274b74dce5b23650bc42d5eded31b2729d366013b13ae45323f6fbc, nonce 2, to 0x0000000000000000000000000000000000001002, value 0x205375a50b480, gas 9223372036854775807, gasPrice 0x0, data )"
[INFO] [05-19|17:50:41.027] UnwindTo                                 block=89 bad_block_hash=0xc137446336ab18878949947a44d234afeb37ebff7bf5b70c64eb221652b3a86e
[INFO] [05-19|17:50:41.027] [7/15 Execution] Completed on            block=89
[INFO] [05-19|17:51:01.028] [5/15 Bodies] Unwinding transactions...  current block=2041233
...
[INFO] [05-19|18:16:20.455] [2/15 Headers] Unwind done               in=4m25.871385824s
[INFO] [05-19|18:16:20.457] Timings (slower than 50ms)               Headers=4m17.974s CumulativeIndex=62ms BlockHashes=910ms Bodies=1h31m44.252s Senders=32m57.873s Unwind Bodies=21m13.555s Unwind Headers=4m25.871s
[INFO] [05-19|18:16:35.065] RPC Daemon notified of new headers       from=0 to=29939319 hash=0x0000000000000000000000000000000000000000000000000000000000000000 header sending=14.606902245s log sending=241ns
[INFO] [05-19|18:16:35.066] [2/15 Headers] Waiting for headers...    from=89
[INFO] [05-19|18:16:39.885] [txpool] stat                            pending=0 baseFee=0 queued=31 alloc=11.5GB sys=18.5GB
[INFO] [05-19|18:16:55.066] [2/15 Headers] No block headers to write in this log period block number=89
[INFO] [05-19|18:17:09.223] New txs subscriber joined 
[INFO] [05-19|18:17:09.223] new subscription to newHeaders established 
[INFO] [05-19|18:17:15.066] [2/15 Headers] No block headers to write in this log period block number=89
[INFO] [05-19|18:17:35.066] [2/15 Headers] No block headers to write in this log period block number=89

[2/15 Headers] No block headers to write in this log period block number=27281023

hi

Thanks for your great work. I switched my BSC node to this branch, but it still can't sync blocks normally; it is stuck at a block before the hard fork. Please help.

[INFO] [04-12|11:27:45.847] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=17 skelMin=27281022 skelMax=27318078 resp=103 respMin=27280978 respMax=27281022 dups=90
[INFO] [04-12|11:28:05.294] [p2p] GoodPeers                          eth67=1 eth66=25
[INFO] [04-12|11:28:05.811] [txpool] stat                            pending=11 baseFee=0 queued=22 alloc=2.7GB sys=4.0GB
[INFO] [04-12|11:28:05.848] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:28:05.848] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=17 skelMin=27281022 skelMax=27318078 resp=102 respMin=27281022 respMax=27281022 dups=99
[INFO] [04-12|11:28:25.848] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:28:25.848] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=17 skelMin=27281022 skelMax=27318078 resp=102 respMin=27281022 respMax=27281022 dups=88
[INFO] [04-12|11:28:45.848] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:28:45.848] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=17 skelMin=27281022 skelMax=27318078 resp=102 respMin=27281022 respMax=27281022 dups=95
[INFO] [04-12|11:29:05.812] [txpool] stat                            pending=11 baseFee=0 queued=24 alloc=3.0GB sys=4.0GB
[INFO] [04-12|11:29:05.847] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:29:05.847] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=17 skelMin=27281022 skelMax=27318078 resp=102 respMin=27281022 respMax=27281022 dups=90
[INFO] [04-12|11:29:25.848] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:29:25.848] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=17 skelMin=27281022 skelMax=27318078 resp=102 respMin=27281022 respMax=27281022 dups=92
[INFO] [04-12|11:29:45.848] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:29:45.848] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=17 skelMin=27281022 skelMax=27318078 resp=102 respMin=27281022 respMax=27281022 dups=88
[INFO] [04-12|11:30:05.294] [p2p] GoodPeers                          eth66=29 eth67=1
[INFO] [04-12|11:30:05.811] [txpool] stat                            pending=12 baseFee=0 queued=25 alloc=2.5GB sys=4.0GB
[INFO] [04-12|11:30:05.848] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:30:05.848] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=17 skelMin=27281022 skelMax=27318078 resp=106 respMin=27280978 respMax=27281022 dups=94

Wanted: Improve the log

Rationale

To make the logs more efficient and accurate:

  • txpool: no need to print this
  • GoodPeers: is the number shown correct?

Implementation

NA

No block headers to write in this log period block number=957

When syncing the BSC chain, it got stuck at this position:


The command I used is ./build/bin/erigon --chain bsc --sentry.drop-useless-peers --datadir /media/Disk/chain/bsc/chaindata --torrent.download.rate=20mb --http.api=eth,erigon,web3,net,debug,trace,txpool --rpc.batch.concurrency 10

The version I used is v1.0.8.

I also tried removing the flag --sentry.drop-useless-peers, but the issue persisted.

The test network does not work properly on v1.0.3 (ARM64)

Hi,

Thanks for your great work!

My testnet node works well on v1.0.2, but after upgrading to v1.0.3 it shows the following error. Is this related to the machine being ARM64?

err="fail to open mdbx: mdbx_env_open: MDBX_CORRUPTED: Maybe free space is over on disk. Otherwise it's hardware failure. Before creating issue please use tools like https://www.memtest86.com to test RAM and tools like https://www.smartmontools.org to test Disk. To handle hardware risks: use ECC RAM, use RAID of disks, run multiple application instances (or do backups). If hardware checks passed - check FS settings - 'fsync' and 'flock' must be enabled.  Otherwise - please create issue in Application repo. On default DURABLE mode, power outage can't cause this error. On other modes - power outage may break last transaction and mdbx_chk can recover db in this case, see '-t' and '-0|1|2' options., label: consensus, trace: [kv_mdbx.go:266 kv_mdbx.go:397 db.go:20 config.go:75 backend.go:455 node.go:112 main.go:59 command.go:274 app.go:332 app.go:309 main.go:36 proc.go:250 asm_arm64.s:1172]" stack="[main.go:31 panic.go:884 kv_mdbx.go:399 db.go:20 config.go:75 backend.go:455 node.go:112 main.go:59 command.go:274 app.go:332 app.go:309 main.go:36 proc.go:250 asm_arm64.s:1172]"

Improve: verifyVoteAttestation & Snapshot recursive call

Rationale

After Plato, verifyVoteAttestation calls snapshot, which results in very deep recursive calls.
We need to check whether this is reasonable, and fix it if needed.

Post the call stack here:

 0  0x0000000001538910 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:705
 1  0x0000000001540be0 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).GetJustifiedNumberAndHash
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:1607
 2  0x0000000001536ae0 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:447
 3  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
 4  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
 5  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
 6  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
 7  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
 8  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
 9  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
10  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
11  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
12  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
13  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
14  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
15  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
16  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
17  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
18  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
19  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
20  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
21  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
22  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
23  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
24  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
25  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
26  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462
27  0x00000000015391b4 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).snapshot
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:791
28  0x0000000001536c44 in github.com/ledgerwatch/erigon/consensus/parlia.(*Parlia).verifyVoteAttestation
    at github.com/ledgerwatch/erigon/consensus/parlia/parlia.go:462

Implementation

NA

erigon auto-restarts during [13/15 LogIndex] processing

[INFO] [04-19|09:34:38.885] [13/15 LogIndex] processing from=27479547 to=27480566
[INFO] [04-19|09:35:02.137] Got interrupt, shutting down...
[INFO] [04-19|09:35:02.137] Exiting Engine...
[INFO] [04-19|09:35:02.137] RPC server shutting down
[INFO] [04-19|09:35:02.138] RPC server shutting down
[WARN] [04-19|09:35:02.138] Failed to serve http endpoint err="http: Server closed"
[INFO] [04-19|09:35:02.138] Exiting...
[INFO] [04-19|09:35:02.138] RPC server shutting down
[WARN] [04-19|09:35:02.138] Failed to serve http endpoint err="http: Server closed"
systemd[1]: Stopping erigon...
erigon[3825005]: [INFO] [04-19|09:35:02.139] Engine HTTP endpoint close url=127.0.0.1:8551
erigon[3825005]: [INFO] [04-19|09:35:02.139] HTTP endpoint closed url=[::]:8545
systemd[1]: erigon.service: Succeeded.
systemd[1]: Stopped erigon.
systemd[1]: Started erigon.
erigon[3922505]: [WARN] [04-19|09:35:04.796] no log dir set, console logging only

Issue with Transaction propagation

System information

hetzner AX101 - AMD 5950X 8TB NVME SSD

Erigon version:

erigon version 2.40.0-dev-3da15fcb

OS & Version: Windows/Linux/OSX

Ubuntu 20.04 LTS

Commit hash:

3da15fc ( release 1.0.2 )

Erigon Command:

/opt/erigon/bin/erigon --datadir=/opt/erigon/data-bsc --log.console.verbosity=4 --p2p.protocol=66 --db.pagesize=16k --chain bsc --batchSize=512M --nat extip:XX.XX.XX.XX --torrent.download.rate=50mb --torrent.upload.rate=1mb --maxpeers=400 --sentry.drop-useless-peers=true --txpool.locals=XXX --prune=hrtc --private.api.addr=127.0.0.1:9090 --http=false --trustedpeers=enode://f56dcbe59ddcf52e2abe5d5f5fded28bf823e7e2fb887cebbfe3c540ed0dfbbd778872e6b0c9c6243fcb79fdf3e1805ae98a7c389091e9cc55bfe6dedfce04b8@3.115.208.145:30311,enode://b4feb14a8247917f25a4603a0a3a58827e6e3954fa1fc0499f3e084476dcb2dc32e444e7c51cecbc1066d2c94062fc16aa80da1a008c94e576b67b84a3a111c5@13.112.103.141:30311,enode://7fed0d5ebfec2d68106cf91d4bbf2c794a22f12a11c18ef643818e8b8a5022f63abccfa50cb34fd30343530f67a70523525d94247b4f8d143dca7524d2ba8630@52.194.28.137:30311,enode://64e87612bf91e145e019a2cf877891973151ba0acfe822346d5f6876feb4b031f80b6ff2334d9fccc7522d4c27f4a0003cfc29e20db25f6eb89fc72f5d058d89@99.80.96.58:30311,enode://c67e08daecbef6e78832a1fb7eef09725ee6671aeb6dd63cd880b9a2075b945df64b4a6181bf8ed31d43bd7b77587c5380d61095e6d7989e3880656b2fb9448d@54.76.80.25:30311,enode://45ad31700cfd9bce487b912d4b10d8f657a6b4a12f46a71707a351f350a28ea9183fe38f8e4cbd4371972bd6f096072fa65bcf59c0ffb719a8ef83f403b4d656@52.18.62.124:30311,enode://935d02d00d9c5ecdc3bee7a56201eb68c9a9e2fc684ff1e606d56bebcb45722b3812df2c408bd74495140b92214e4bd28a00853641e581cbc3ebbe6ee6b2f794@50.17.94.194:30311,enode://8e68f76aef70929084fbcdc527357aa97cf0091ed80639ba8e5c35933e50034c22a0c6d30775ec9bfdeef21fc029bb895ac2221b97e1595d35110a5a27589089@54.157.26.59:30311,enode://3da255f8abdeaafe3e8acd8e861314782aec365216948f203b5da5fa5457e92ac7dd7519e2e95487d99d7158a1b47e276c6a23efefbe8da423dfe090578d3bec@3.218.173.35:30311,enode://47de9d7808f339b55c5d958ba3a644c2423731de269fa926d8c78eb0b864e4c78734314dd1fc6439a99f1d4c0dab48d57f8a0bfa4b82ffcbf6547f880c41d079@52.202.229.96:30311,enode://7287960657a7cd5a9e0e0cc6b4bb74110155979604d103929c5fcbfe6afc70
5c441d4b4cd2bdd0009f2ebb8185dab9fd78ef839af965a92c3ca5d45bd0303224@34.226.221.113:30311,enode://627a1cb2c4712cce439026da0c2f599b97628c90c8ccc55526574a944b7455827544130b3003e79399cd79bd73a06a1d6bbd018fcf9ffc5297d3b731aa1b40ab@3.91.73.29:30311,enode://16c7e98f78017dafeaa4129647d1ec66b32ee9be5ec753708820b7363091ceb310f575e7abd9603005e0e34d7b3316c1a4b6c8c42d7f074ed2eb4d073f800a03@3.85.216.212:30311,enode://accbc0a5af0af03e1ec3b5e80544bdceea48011a6928cd82d2c1a9c38b65fd48ec970ba17bd8c0b0ec21a28faec9efe1d1ce55134784b9207146e2f62d8932ba@54.162.32.1:30311,enode://c64c864572dae7ea25225a412c026ced0de66ae429b40c545be8f524a1aeb70b3441710dbfed19e3ba9ef08ce13b00a58daa7a7510924da8e6f4f412d8b45fd5@3.92.160.2:30311,enode://5a838185d4b91eb42cbe3a60bb9f706484d8ec5041fa97b557d10e8ca10a459db0271e06e8b85cad57f1d2c7b05aa4319c0300b2936eefcb2302e10b253cf7d6@23.20.67.34:30311,enode://3438d60bcb628ba33b0adf5e653751436fdc393a869fab136dec5ec6b2ed06d8ea30e4fec061f4f4a67bb01644897dbc3d14db44afc052eb69f102340aff70f9@18.215.252.114:30311,enode://c307b4cddec0aea2188eafddedb0a076b9289402c63217b4c81eb7f34761c7cfaf6b075e93d7357169e226ff1bb4aa3bd71869b4c76cf261e2991005ddb4d4aa@3.81.81.182:30311,enode://d69853daf3057cc191514afdf56df4769238fde4f261fab80c6e089480abb9916d61180e783d1cc9e5ae56d30ce6261d9954702dc73c41cd47e4b3961830b2dc@184.73.34.17:30311,enode://ba88d1a8a5e849bec0eb7df9eabf059f8edeae9a9eb1dcf51b7768276d78b10d4ceecf0cde2ef191ced02f66346d96a36ca9da7d73542757d9677af8da3bad3f@54.198.97.197:30311,enode://f7dc512940ca4a8f6858632abbdfc59cea6c4ed7a8da41ddfc4e4dac74e2664e74355fd7c688b285a22295e0053a800f759c9123ec741285a5bd602f89720cea@54.198.51.232:30311,enode://bdbcb42ff17a52af7ac120f23ee86f525ffbd854ce76e54bad858cf741fcd524d0f810be399d437bea939682a919c5af5df31811c43ccc270d04485caf4ddaeb@52.206.226.126:30311,enode://5fa49c3fc694fcba46199c4ac932a84a89435d545b04a3a68d47747fee41d417d8033c953f9c54ca943cb3d7eb82f936ab1f6ec93bb14ce466de4bcd50d410a5@44.201.87.43:30311,enode://ace8e3b7e96290392a9c6238e005539dd
1d1ca7c18aeedd979087789812e0656407e2c4baabcd83b50d583b583aa959ff42e5c95571d76b84995aad722a9a85c@44.198.55.182:30311,enode://458c0e85ef43581557535e9fba2c8edef575737fd36476cb6b711461d74a9080fc38514e705311a788c0f034b2613839e0bd8ef82eafeb62d52cb5e845dd3e8f@3.250.75.234:30311,enode://fe0bb07eae29e8cfaa5bb15b0db8c386a45b7da2c94e1dabd7ca58b6327eee0c27bdcea4f08db19ea07b9a1391e5496a28c675c6eee578154edae4fa44640c5d@54.228.2.74:30311,enode://ca078d6849de674fe7fa0a7ca55057978566499d2c7401739d8ee6a8933a3ac3e3c29cfc6f8474e86dd576035ba0d92038115917f928d43c86e01eb761cac912@63.33.196.130:30311,enode://a62ab1c9bbe97d8258a8944761933ad33891193e439feb84066e0fbe526a34aa7d3c5488f31f045c01890c111eff768cfff937e2edff18b824e47030a73add94@3.250.220.197:30311,enode://1adabe43b638ec1fcd6559d4d4b765aae2826eae8a271418ff61c418e360da7e991c4b3099f1725fa9b157da3c8adf66117f918177367d59e679b99cb647003b@52.211.52.101:30311,enode://322a42a08959aefd3423d17d8aeb802e0dbfb8bb0096aa712b6bf3036c91a80b0abc45c7a3d1320eda9a9c0337dd028967e4b84357080c258c8d0a3aaa02a821@34.245.12.138:30311,enode://8ab18a0ad2872165710fdf907aa6c61ba163835d87475f6aa058c8e877cf2261ed93087e426d35fe2c10eee63d4e8dd6fb35cbb4b22a7346511a1024f87055a9@3.250.46.12:30311,enode://a88322fa7db1958c4ce1c04e4980b7fdd23d2ea09ede072ffb487931dc62109cfad9defc2087568f625b4b5ac931c8f6f0baef37c988772efae2e12df3a30a70@52.19.216.114:30311,enode://3aaaa0e0c7961ef3a9bf05f879f84308ca59651327cf94b64252f67448e582dcd6a6dbe996264367c8aa27fc302736db0283a3516c7406d48f268c5e317b9d49@34.250.1.192:30311,enode://57824d2d9b5f39681bee265d56ec98a17fa4af343debdeba18596837f776f7c6370d8a33354e2b1750c41b221778e05c4189b93aca0d4cb1d45d32dc3b2d63f1@34.240.198.163:30311,enode://67ec1f3df346e0aef401175119172e86a20e7ee1442cba4a2074519405cdae3708be3fdcb5e139094408b5d6f6c8e85f89ebb77d04833f7aa251c91344dbd4c9@3.249.178.199:30311,enode://1afc9727301dcd8d2c5aef067031639ae3d3c7a23f8ba6c588a6a1b2c3cbcd738b4ccc53c07d08690ef591b99fd12f00a005f38d820354a91f418ab0939b9072@34.253.216.225:30311,enod
e://3c13113538f3ca7d898d99f9656e0939451558758fd9c9475cff29f020187a56e8140bd24bd57164b07c3d325fc53e1ef622f793851d2648ed93d9d5a7ce975c@34.254.238.155:30311,enode://5d54b9a5af87c3963cc619fe4ddd2ed7687e98363bfd1854f243b71a2225d33b9c9290e047d738e0c7795b4bc78073f0eb4d9f80f572764e970e23d02b3c2b1f@34.247.177.253:30311,enode://1bb269476f62e99d17da561b1a6b0d0269b10afee029e1e9fdee9ac6a0e342ae562dfa8578d783109b80c0f100a19e03b057f37b2aff22d8a0aceb62020018fe@54.78.102.178:30311 --bootnodes=enode://1cc4534b14cfe351ab740a1418ab944a234ca2f702915eadb7e558a02010cb7c5a8c295a3b56bcefa7701c07752acd5539cb13df2aab8ae2d98934d712611443@52.71.43.172:30311,enode://28b1d16562dac280dacaaf45d54516b85bc6c994252a9825c5cc4e080d3e53446d05f63ba495ea7d44d6c316b54cd92b245c5c328c37da24605c4a93a0d099c4@34.246.65.14:30311,enode://5a7b996048d1b0a07683a949662c87c09b55247ce774aeee10bb886892e586e3c604564393292e38ef43c023ee9981e1f8b335766ec4f0f256e57f8640b079d5@35.73.137.11:30311

Consensus Layer:

NA

Consensus Layer Command (with flags/config):

NA

Chain/Network:

56

Expected behaviour

Transaction propagation works within ~2 blocks.

Actual behaviour

Transactions take minutes to hours to propagate after ~24 hours of node uptime.

Seems like the geth client has the same issue -> bnb-chain/bsc#1413

Could it be that parts of the network get isolated from the validators from time to time?
See bnb-chain/bsc#1419 -> about 318 of 390 connections match the erigon version on startup.

Erigon currently broadcasts to a subset of connected nodes (sqrt of the peer count); changing this to all available peers seems to help a bit, but the error still occurs, just later.
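To illustrate the scale of sqrt broadcasting (a rough sketch, not Erigon's actual code): with the ~390 connections mentioned in this report, a transaction is initially forwarded to only about 19 peers.

```shell
# Integer square root of the peer count, as a rough estimate of the
# broadcast fan-out under sqrt-of-peers propagation (illustrative only).
peers=390
awk -v p="$peers" 'BEGIN { printf "%d\n", sqrt(p) }'   # prints 19
```

That small fan-out relies on those peers re-broadcasting; if part of the network is isolated from the validators, a transaction can take a long time to reach them.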

Steps to reproduce the behaviour

Start a node using the config posted above, try to broadcast after 24 hours or more.

Consensus snapshot state corruption since v1.0.7 on node restart at certain heights

I've been running a bsc-mainnet node on v1.0.8, with a watchdog to restart it if it hits "no block bodies...", and it's been mostly stable; but every once in a while — seemingly at random (until I figured out the pattern) — it would come back up after a watchdog restart in a corrupted state, where execution would fail with mismatching validator list on epoch block.

Fixing this when it happens requires stopping the node and unwinding state_stages by one block.
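For reference, the unwind can be done with Erigon's integration tool roughly like this. The flag names are from memory, so treat them as assumptions and check `integration --help` before running; the datadir path is a placeholder.

```shell
# Stop erigon first; the integration tool needs exclusive access to the db.
# --datadir and --chain must match the node's own settings.
./build/bin/integration state_stages --datadir=/path/to/datadir --chain=bsc --unwind=1
```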

After seeing this a few times, I eventually noticed that it happens when, after a restart, the execution stage starts at a block height ending in {1,3,5,7,9}99 (i.e. one block before a consensus snapshot).

I assume the current logic expects that some in-memory state in parlia will already be available from block ...98, to be used during execution of block ...99, to set up the snapshot for block ...00. If execution instead starts at block ...99, that state is not available, so the snapshot for block ...00 ends up calculated incorrectly.

EROR: GetJustifiedNumberAndHash snapshot error="unknown ancestor"

System information

Erigon version: v1.0.7

Expected behaviour

Should not print so many error logs.

Actual behaviour

Saw lots of error messages after Plato:

[EROR] [05-17|08:44:46.953] GetJustifiedNumberAndHash snapshot       error="unknown ancestor" blockNumber=29871616 blockHash=0xe149bffe9da5aa36e41181a709fee5df2fa00c0875ebc55618e19fb6c1d887d2
[EROR] [05-17|08:44:46.953] GetJustifiedNumberAndHash snapshot       error="unknown ancestor" blockNumber=29871616 blockHash=0xe149bffe9da5aa36e41181a709fee5df2fa00c0875ebc55618e19fb6c1d887d2
[EROR] [05-17|08:44:46.958] GetJustifiedNumberAndHash snapshot       error="unknown ancestor" blockNumber=29871617 blockHash=0x51c188a8df9a6537a82bb417e58a80e3cd3fc4c8a841a0de15da6edc83191d29

We need to check whether these logs are reasonable.

Steps to reproduce the behaviour

It can easily be reproduced once the node reaches the Plato hard fork height.

Backtrace

NA

Downloading block bodies second iteration doesn't download all bodies

System information

Erigon version: ./erigon --version

ghcr.io/node-real/bsc-erigon:1.0.8

OS & Version: Windows/Linux/OSX

Ubuntu 22.04, but running docker

Commit hash:

ghcr.io/node-real/bsc-erigon:1.0.8

Erigon Command (with flags/config):

      --chain bsc --snapshots=true --db.pagesize='16kb'
      --datadir /home/erigon/.local/share/erigon
      --http --http.addr 0.0.0.0 --http.port ${ERIGON_RPC_PORT:-8548} --http.compression --http.corsdomain '*' --http.vhosts '*' --http.api 'eth,net,web3,trace,debug,erigon,ots'
      --ws --ws.compression
      --rpc.accessList /erigon_config/rpc_rules.json
      --torrent.download.rate 512mb --torrent.upload.rate 4mb
      --port ${ERIGON_P2P_PORT:-30306} --authrpc.addr 0.0.0.0 --authrpc.port 8551 --private.api.addr 127.0.0.1:9090 --torrent.port 42069
      --p2p.allowed-ports ${ERIGON_P2P_PORT:-30306},${ERIGON_P2P_PORT_B:-31306},${ERIGON_P2P_PORT_C:-32306},${ERIGON_P2P_PORT_D:-33306},${ERIGON_P2P_PORT_E:-34306}
      --p2p.protocol=66
      --maxpeers 400
      --nat extip:${ERIGON_EXTIP}
      --batchSize 4096M
      --metrics --metrics.addr=0.0.0.0 --metrics.port=6060
      --pprof --pprof.addr=0.0.0.0 --pprof.port=6061
      --rpc.returndata.limit 5000000
      --sentry.drop-useless-peers
      --bootnodes enode://1cc4534b14cfe351ab740a1418ab944a234ca2f702915eadb7e558a02010cb7c5a8c295a3b56bcefa7701c07752acd5539cb13df2aab8ae2d98934d712611443@52.71.43.172:30311,enode://28b1d16562dac280dacaaf45d54516b85bc6c994252a9825c5cc4e080d3e53446d05f63ba495ea7d44d6c316b54cd92b245c5c328c37da24605c4a93a0d099c4@34.246.65.14:30311,enode://5a7b996048d1b0a07683a949662c87c09b55247ce774aeee10bb886892e586e3c604564393292e38ef43c023ee9981e1f8b335766ec4f0f256e57f8640b079d5@35.73.137.11:30311
      --staticpeers enode://47de9d7808f339b55c5d958ba3a644c2423731de269fa926d8c78eb0b864e4c78734314dd1fc6439a99f1d4c0dab48d57f8a0bfa4b82ffcbf6547f880c41d079@52.202.229.96:30311,enode://7287960657a7cd5a9e0e0cc6b4bb74110155979604d103929c5fcbfe6afc705c441d4b4cd2bdd0009f2ebb8185dab9fd78ef839af965a92c3ca5d45bd0303224@34.226.221.113:30311,enode://627a1cb2c4712cce439026da0c2f599b97628c90c8ccc55526574a944b7455827544130b3003e79399cd79bd73a06a1d6bbd018fcf9ffc5297d3b731aa1b40ab@3.91.73.29:30311,enode://16c7e98f78017dafeaa4129647d1ec66b32ee9be5ec753708820b7363091ceb310f575e7abd9603005e0e34d7b3316c1a4b6c8c42d7f074ed2eb4d073f800a03@3.85.216.212:30311,enode://accbc0a5af0af03e1ec3b5e80544bdceea48011a6928cd82d2c1a9c38b65fd48ec970ba17bd8c0b0ec21a28faec9efe1d1ce55134784b9207146e2f62d8932ba@54.162.32.1:30311,enode://c64c864572dae7ea25225a412c026ced0de66ae429b40c545be8f524a1aeb70b3441710dbfed19e3ba9ef08ce13b00a58daa7a7510924da8e6f4f412d8b45fd5@3.92.160.2:30311,enode://5a838185d4b91eb42cbe3a60bb9f706484d8ec5041fa97b557d10e8ca10a459db0271e06e8b85cad57f1d2c7b05aa4319c0300b2936eefcb2302e10b253cf7d6@23.20.67.34:30311,enode://3438d60bcb628ba33b0adf5e653751436fdc393a869fab136dec5ec6b2ed06d8ea30e4fec061f4f4a67bb01644897dbc3d14db44afc052eb69f102340aff70f9@18.215.252.114:30311,enode://c307b4cddec0aea2188eafddedb0a076b9289402c63217b4c81eb7f34761c7cfaf6b075e93d7357169e226ff1bb4aa3bd71869b4c76cf261e2991005ddb4d4aa@3.81.81.182:30311,enode://d69853daf3057cc191514afdf56df4769238fde4f261fab80c6e089480abb9916d61180e783d1cc9e5ae56d30ce6261d9954702dc73c41cd47e4b3961830b2dc@184.73.34.17:30311,enode://ba88d1a8a5e849bec0eb7df9eabf059f8edeae9a9eb1dcf51b7768276d78b10d4ceecf0cde2ef191ced02f66346d96a36ca9da7d73542757d9677af8da3bad3f@54.198.97.197:30311,enode://f7dc512940ca4a8f6858632abbdfc59cea6c4ed7a8da41ddfc4e4dac74e2664e74355fd7c688b285a22295e0053a800f759c9123ec741285a5bd602f89720cea@54.198.51.232:30311,enode://bdbcb42ff17a52af7ac120f23ee86f525ffbd854ce76e54bad858cf741fcd524d0f810be399d437bea939682a919c5af5df3
1811c43ccc270d04485caf4ddaeb@52.206.226.126:30311,enode://5fa49c3fc694fcba46199c4ac932a84a89435d545b04a3a68d47747fee41d417d8033c953f9c54ca943cb3d7eb82f936ab1f6ec93bb14ce466de4bcd50d410a5@44.201.87.43:30311,enode://ace8e3b7e96290392a9c6238e005539dd1d1ca7c18aeedd979087789812e0656407e2c4baabcd83b50d583b583aa959ff42e5c95571d76b84995aad722a9a85c@44.198.55.182:30311,enode://458c0e85ef43581557535e9fba2c8edef575737fd36476cb6b711461d74a9080fc38514e705311a788c0f034b2613839e0bd8ef82eafeb62d52cb5e845dd3e8f@3.250.75.234:30311,enode://fe0bb07eae29e8cfaa5bb15b0db8c386a45b7da2c94e1dabd7ca58b6327eee0c27bdcea4f08db19ea07b9a1391e5496a28c675c6eee578154edae4fa44640c5d@54.228.2.74:30311,enode://ca078d6849de674fe7fa0a7ca55057978566499d2c7401739d8ee6a8933a3ac3e3c29cfc6f8474e86dd576035ba0d92038115917f928d43c86e01eb761cac912@63.33.196.130:30311,enode://a62ab1c9bbe97d8258a8944761933ad33891193e439feb84066e0fbe526a34aa7d3c5488f31f045c01890c111eff768cfff937e2edff18b824e47030a73add94@3.250.220.197:30311,enode://1adabe43b638ec1fcd6559d4d4b765aae2826eae8a271418ff61c418e360da7e991c4b3099f1725fa9b157da3c8adf66117f918177367d59e679b99cb647003b@52.211.52.101:30311,enode://322a42a08959aefd3423d17d8aeb802e0dbfb8bb0096aa712b6bf3036c91a80b0abc45c7a3d1320eda9a9c0337dd028967e4b84357080c258c8d0a3aaa02a821@34.245.12.138:30311,enode://8ab18a0ad2872165710fdf907aa6c61ba163835d87475f6aa058c8e877cf2261ed93087e426d35fe2c10eee63d4e8dd6fb35cbb4b22a7346511a1024f87055a9@3.250.46.12:30311,enode://a88322fa7db1958c4ce1c04e4980b7fdd23d2ea09ede072ffb487931dc62109cfad9defc2087568f625b4b5ac931c8f6f0baef37c988772efae2e12df3a30a70@52.19.216.114:30311,enode://3aaaa0e0c7961ef3a9bf05f879f84308ca59651327cf94b64252f67448e582dcd6a6dbe996264367c8aa27fc302736db0283a3516c7406d48f268c5e317b9d49@34.250.1.192:30311,enode://57824d2d9b5f39681bee265d56ec98a17fa4af343debdeba18596837f776f7c6370d8a33354e2b1750c41b221778e05c4189b93aca0d4cb1d45d32dc3b2d63f1@34.240.198.163:30311,enode://67ec1f3df346e0aef401175119172e86a20e7ee1442cba4a2074519405cdae37
08be3fdcb5e139094408b5d6f6c8e85f89ebb77d04833f7aa251c91344dbd4c9@3.249.178.199:30311,enode://1afc9727301dcd8d2c5aef067031639ae3d3c7a23f8ba6c588a6a1b2c3cbcd738b4ccc53c07d08690ef591b99fd12f00a005f38d820354a91f418ab0939b9072@34.253.216.225:30311,enode://3c13113538f3ca7d898d99f9656e0939451558758fd9c9475cff29f020187a56e8140bd24bd57164b07c3d325fc53e1ef622f793851d2648ed93d9d5a7ce975c@34.254.238.155:30311,enode://5d54b9a5af87c3963cc619fe4ddd2ed7687e98363bfd1854f243b71a2225d33b9c9290e047d738e0c7795b4bc78073f0eb4d9f80f572764e970e23d02b3c2b1f@34.247.177.253:30311,enode://1bb269476f62e99d17da561b1a6b0d0269b10afee029e1e9fdee9ac6a0e342ae562dfa8578d783109b80c0f100a19e03b057f37b2aff22d8a0aceb62020018fe@54.78.102.178:30311

Consensus Layer:

Erigon

Consensus Layer Command (with flags/config):

n/a

Chain/Network: bsc

Expected behaviour

Normal Sync

Actual behaviour

Syncing works properly at first. After the first iteration, new block bodies need to be downloaded, since some time has passed (step 5). But instead of downloading all block bodies, only 100-200 are downloaded and step 5 is marked as DONE. All the other steps work properly.

Because of this, the node falls behind; syncing at this rate is too slow.

A temporary workaround is to restart the node. This is far from ideal, but the first iteration after a restart downloads all block bodies again.

Steps to reproduce the behaviour

Happens every single time. Use the Docker image and flags mentioned above, with the latest official snapshot.

After planck hardfork, sync doesn't work

version: v2.40.0-dev-3da15fcb
logs:

[INFO] [04-12|11:23:33.893] [2/15 Headers] Waiting for headers...    from=27281023
[INFO] [04-12|11:23:53.893] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:24:13.894] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:24:33.643] [txpool] stat                            pending=4722 baseFee=0 queued=22073 alloc=3.3GB sys=3.5GB
[INFO] [04-12|11:24:33.893] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:24:53.894] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:25:13.893] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:25:13.893] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=19 skelMin=27281022 skelMax=27318078 resp=952 respMin=27281022 respMax=27281022 dups=106
[INFO] [04-12|11:25:33.133] [p2p] GoodPeers                          eth66=19 eth67=1
[INFO] [04-12|11:25:33.660] [txpool] stat                            pending=4880 baseFee=0 queued=22110 alloc=3.3GB sys=3.9GB
[INFO] [04-12|11:25:33.893] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:25:33.893] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=19 skelMin=27281022 skelMax=27318078 resp=1014 respMin=27281022 respMax=27281022 dups=100
[INFO] [04-12|11:25:53.893] [2/15 Headers] No block headers to write in this log period block number=27281023
[INFO] [04-12|11:25:53.893] Req/resp stats                           req=0 reqMin=0 reqMax=0 skel=20 skelMin=27281022 skelMax=27318078 resp=974 respMin=27280978 respMax=27281022 dups=111

v1.0.4 sync failure for bsc mainnet at block 28035800

(I know running v1.0.3+ is not currently recommended for bsc mainnet; I was running a v1.0.4 node on bsc mainnet as a test, to find sync bugs like this one.)

[INFO] [05-08|17:13:53.950] [5/15 Bodies] Processed                  highest=28036306
[INFO] [05-08|17:13:53.950] [6/15 Senders] Started                   from=28035799 to=28036306
[INFO] [05-08|17:13:54.263] [7/15 Execution] Blocks execution        from=28035799 to=28036306
[WARN] [05-08|17:13:54.321] [7/15 Execution] Execution failed        block=28035800 hash=0xad1e9a9a2c1372b40a5a0e65209347edc01896a7830cfc87ad5b029266fdea58 err="mismatching validator list on epoch block"
[INFO] [05-08|17:13:54.321] UnwindTo                                 block=28035799 bad_block_hash=0xad1e9a9a2c1372b40a5a0e65209347edc01896a7830cfc87ad5b029266fdea58
[INFO] [05-08|17:13:54.321] [7/15 Execution] Completed on            block=28035799
[INFO] [05-08|17:13:54.388] [2/15 Headers] Waiting for headers...    from=28035799

Visually, this is the same error as v1.0.3 had for chapel. It presumably has a different cause, given that the Luban fork is not active on bsc mainnet.

Epoch block https://bscscan.com/block/28035800 doesn't look any different than the previous epoch block https://bscscan.com/block/28035600, so I'm not sure what would trigger this for 28035800 but not 28035600.

It is likely that the not-inLuban consensus logic in v1.0.3+ is not functionally identical to the consensus logic in ≤v1.0.2.

Downgrading to v1.0.2 allows sync to proceed from 28035799 without any need for unwinding, so it seems the validator state was not corrupted by some block between 28035600 and 28035799; rather, the failure is limited to the processing of 28035800.

debug_traceCall using reconstructed msg from real transaction execution reverted

System information

Erigon version: ./erigon --version

OS & Version: Windows/Linux/OSX

Commit hash:

Erigon Command (with flags/config):

Consensus Layer:

Consensus Layer Command (with flags/config):

Chain/Network: chapel

I reconstructed a debug_traceCall request from the real tx 0x4569fec4b067d4ef214e025bca090a167a249e4a81f8068b03789ced082412d3, but Erigon returns "execution reverted", while our Meganode Archive can execute this call. I want to know the reason for the revert, and which result is correct.

Request

curl --location 'localhost:8545' \
  --header 'Content-Type: application/json' \
  --data '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "debug_traceCall",
    "params": [
      {
        "from": "0xb4dd66d7c2c7e57f628210187192fb89d4b99dd4",
        "gas": "0x7fffffffffffffff",
        "gasPrice": "0x0",
        "callType": "call",
        "input": "0xf340fa01000000000000000000000000b4dd66d7c2c7e57f628210187192fb89d4b99dd4",
        "to": "0x0000000000000000000000000000000000001000",
        "value": "0x36ef88513e3f00"
      },
      "0x1aeb816",
      { "tracer": "callTracer" }
    ]
  }'

Expected behaviour

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "from": "0x1284214b9b9c85549ab3d2b972df0deef66ac2c9",
    "gas": "0x2fa9cc8",
    "gasUsed": "0xc0fd",
    "to": "0x0000000000000000000000000000000000001000",
    "input": "0xf340fa010000000000000000000000001284214b9b9c85549ab3d2b972df0deef66ac2c9",
    "calls": [
      {
        "from": "0x0000000000000000000000000000000000001000",
        "gas": "0x8fc",
        "gasUsed": "0x0",
        "to": "0x000000000000000000000000000000000000dead",
        "input": "0x",
        "value": "0x57e5a6e863980",
        "type": "CALL"
      }
    ],
    "value": "0x36ef88513e3f00",
    "type": "CALL"
  }
}

Actual behaviour

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "from": "0x1284214b9b9c85549ab3d2b972df0deef66ac2c9",
    "gas": "0x23c2f3f8",
    "gasUsed": "0x5236",
    "to": "0x0000000000000000000000000000000000001000",
    "input": "0x",
    "error": "execution reverted",
    "value": "0x36ef88513e3f00",
    "type": "CALL"
  }
}

Steps to reproduce the behaviour

Backtrace

[backtrace]

Syncing is still slow, often 1 hour behind

System information

Erigon version: ./erigon --version
v1.0.2
OS & Version: Windows/Linux/OSX
Ubuntu 20.04.5 LTS (GNU/Linux 5.15.0-1034-aws aarch64)

Erigon Command (with flags/config):
./build/bin/erigon --config="/root/config.toml"

chain="bsc"
datadir = "/erigon/bsc"
port="30303"
http="true"
ws="true"

"http.port"="8545"
"authrpc.port"="8551"
"torrent.port"="30303"
"private.api.addr"="127.0.0.1:9090"
"http.api"="web3,net,eth,debug,trace,txpool"
"http.corsdomain"="*"
"http.addr"="0.0.0.0"
"http.vhosts"="*"
"maxpeers"="100"
"p2p.protocol"="66"
"bodies.cache"="21474836480"
"batchSize"="8192M"
"rpc.gascap"="80000000"
"db.pagesize"="32k"
"bootnodes"=["enode://1cc4534b14cfe351ab740a1418ab944a234ca2f702915eadb7e558a02010cb7c5a8c295a3b56bcefa7701c07752acd5539cb13df2aab8ae2d98934d712611443@52.71.43.172:30311","enode://28b1d16562dac280dacaaf45d54516b85bc6c994252a9825c5cc4e080d3e53446d05f63ba495ea7d44d6c316b54cd92b245c5c328c37da24605c4a93a0d099c4@34.246.65.14:30311","enode://5a7b996048d1b0a07683a949662c87c09b55247ce774aeee10bb886892e586e3c604564393292e38ef43c023ee9981e1f8b335766ec4f0f256e57f8640b079d5@35.73.137.11:30311"]
"staticpeers"=["enode://1cc4534b14cfe351ab740a1418ab944a234ca2f702915eadb7e558a02010cb7c5a8c295a3b56bcefa7701c07752acd5539cb13df2aab8ae2d98934d712611443@52.71.43.172:30311","enode://28b1d16562dac280dacaaf45d54516b85bc6c994252a9825c5cc4e080d3e53446d05f63ba495ea7d44d6c316b54cd92b245c5c328c37da24605c4a93a0d099c4@34.246.65.14:30311","enode://5a7b996048d1b0a07683a949662c87c09b55247ce774aeee10bb886892e586e3c604564393292e38ef43c023ee9981e1f8b335766ec4f0f256e57f8640b079d5@35.73.137.11:30311"]
"staticpeers"=["enode://47de9d7808f339b55c5d958ba3a644c2423731de269fa926d8c78eb0b864e4c78734314dd1fc6439a99f1d4c0dab48d57f8a0bfa4b82ffcbf6547f880c41d079@52.202.229.96:30311", "enode://7287960657a7cd5a9e0e0cc6b4bb74110155979604d103929c5fcbfe6afc705c441d4b4cd2bdd0009f2ebb8185dab9fd78ef839af965a92c3ca5d45bd0303224@34.226.221.113:30311", "enode://627a1cb2c4712cce439026da0c2f599b97628c90c8ccc55526574a944b7455827544130b3003e79399cd79bd73a06a1d6bbd018fcf9ffc5297d3b731aa1b40ab@3.91.73.29:30311", "enode://16c7e98f78017dafeaa4129647d1ec66b32ee9be5ec753708820b7363091ceb310f575e7abd9603005e0e34d7b3316c1a4b6c8c42d7f074ed2eb4d073f800a03@3.85.216.212:30311", "enode://accbc0a5af0af03e1ec3b5e80544bdceea48011a6928cd82d2c1a9c38b65fd48ec970ba17bd8c0b0ec21a28faec9efe1d1ce55134784b9207146e2f62d8932ba@54.162.32.1:30311", "enode://c64c864572dae7ea25225a412c026ced0de66ae429b40c545be8f524a1aeb70b3441710dbfed19e3ba9ef08ce13b00a58daa7a7510924da8e6f4f412d8b45fd5@3.92.160.2:30311", "enode://5a838185d4b91eb42cbe3a60bb9f706484d8ec5041fa97b557d10e8ca10a459db0271e06e8b85cad57f1d2c7b05aa4319c0300b2936eefcb2302e10b253cf7d6@23.20.67.34:30311", "enode://3438d60bcb628ba33b0adf5e653751436fdc393a869fab136dec5ec6b2ed06d8ea30e4fec061f4f4a67bb01644897dbc3d14db44afc052eb69f102340aff70f9@18.215.252.114:30311", "enode://c307b4cddec0aea2188eafddedb0a076b9289402c63217b4c81eb7f34761c7cfaf6b075e93d7357169e226ff1bb4aa3bd71869b4c76cf261e2991005ddb4d4aa@3.81.81.182:30311", "enode://d69853daf3057cc191514afdf56df4769238fde4f261fab80c6e089480abb9916d61180e783d1cc9e5ae56d30ce6261d9954702dc73c41cd47e4b3961830b2dc@184.73.34.17:30311", "enode://ba88d1a8a5e849bec0eb7df9eabf059f8edeae9a9eb1dcf51b7768276d78b10d4ceecf0cde2ef191ced02f66346d96a36ca9da7d73542757d9677af8da3bad3f@54.198.97.197:30311", "enode://f7dc512940ca4a8f6858632abbdfc59cea6c4ed7a8da41ddfc4e4dac74e2664e74355fd7c688b285a22295e0053a800f759c9123ec741285a5bd602f89720cea@54.198.51.232:30311", 
"enode://bdbcb42ff17a52af7ac120f23ee86f525ffbd854ce76e54bad858cf741fcd524d0f810be399d437bea939682a919c5af5df31811c43ccc270d04485caf4ddaeb@52.206.226.126:30311", "enode://5fa49c3fc694fcba46199c4ac932a84a89435d545b04a3a68d47747fee41d417d8033c953f9c54ca943cb3d7eb82f936ab1f6ec93bb14ce466de4bcd50d410a5@44.201.87.43:30311", "enode://ace8e3b7e96290392a9c6238e005539dd1d1ca7c18aeedd979087789812e0656407e2c4baabcd83b50d583b583aa959ff42e5c95571d76b84995aad722a9a85c@44.198.55.182:30311", "enode://458c0e85ef43581557535e9fba2c8edef575737fd36476cb6b711461d74a9080fc38514e705311a788c0f034b2613839e0bd8ef82eafeb62d52cb5e845dd3e8f@3.250.75.234:30311", "enode://fe0bb07eae29e8cfaa5bb15b0db8c386a45b7da2c94e1dabd7ca58b6327eee0c27bdcea4f08db19ea07b9a1391e5496a28c675c6eee578154edae4fa44640c5d@54.228.2.74:30311", "enode://ca078d6849de674fe7fa0a7ca55057978566499d2c7401739d8ee6a8933a3ac3e3c29cfc6f8474e86dd576035ba0d92038115917f928d43c86e01eb761cac912@63.33.196.130:30311", "enode://a62ab1c9bbe97d8258a8944761933ad33891193e439feb84066e0fbe526a34aa7d3c5488f31f045c01890c111eff768cfff937e2edff18b824e47030a73add94@3.250.220.197:30311", "enode://1adabe43b638ec1fcd6559d4d4b765aae2826eae8a271418ff61c418e360da7e991c4b3099f1725fa9b157da3c8adf66117f918177367d59e679b99cb647003b@52.211.52.101:30311", "enode://322a42a08959aefd3423d17d8aeb802e0dbfb8bb0096aa712b6bf3036c91a80b0abc45c7a3d1320eda9a9c0337dd028967e4b84357080c258c8d0a3aaa02a821@34.245.12.138:30311", "enode://8ab18a0ad2872165710fdf907aa6c61ba163835d87475f6aa058c8e877cf2261ed93087e426d35fe2c10eee63d4e8dd6fb35cbb4b22a7346511a1024f87055a9@3.250.46.12:30311", "enode://a88322fa7db1958c4ce1c04e4980b7fdd23d2ea09ede072ffb487931dc62109cfad9defc2087568f625b4b5ac931c8f6f0baef37c988772efae2e12df3a30a70@52.19.216.114:30311", "enode://3aaaa0e0c7961ef3a9bf05f879f84308ca59651327cf94b64252f67448e582dcd6a6dbe996264367c8aa27fc302736db0283a3516c7406d48f268c5e317b9d49@34.250.1.192:30311", 
"enode://57824d2d9b5f39681bee265d56ec98a17fa4af343debdeba18596837f776f7c6370d8a33354e2b1750c41b221778e05c4189b93aca0d4cb1d45d32dc3b2d63f1@34.240.198.163:30311", "enode://67ec1f3df346e0aef401175119172e86a20e7ee1442cba4a2074519405cdae3708be3fdcb5e139094408b5d6f6c8e85f89ebb77d04833f7aa251c91344dbd4c9@3.249.178.199:30311", "enode://1afc9727301dcd8d2c5aef067031639ae3d3c7a23f8ba6c588a6a1b2c3cbcd738b4ccc53c07d08690ef591b99fd12f00a005f38d820354a91f418ab0939b9072@34.253.216.225:30311", "enode://3c13113538f3ca7d898d99f9656e0939451558758fd9c9475cff29f020187a56e8140bd24bd57164b07c3d325fc53e1ef622f793851d2648ed93d9d5a7ce975c@34.254.238.155:30311", "enode://5d54b9a5af87c3963cc619fe4ddd2ed7687e98363bfd1854f243b71a2225d33b9c9290e047d738e0c7795b4bc78073f0eb4d9f80f572764e970e23d02b3c2b1f@34.247.177.253:30311", "enode://1bb269476f62e99d17da561b1a6b0d0269b10afee029e1e9fdee9ac6a0e342ae562dfa8578d783109b80c0f100a19e03b057f37b2aff22d8a0aceb62020018fe@54.78.102.178:30311"]

Hi guys,

Thank you for your great work,

Since I started using node-real, it has never really synchronized in real time. I have tried all the solutions proposed in the issues, but it still lags behind.

Can anyone actually stay synced in real time all the time? I'm wondering if there's something wrong with my configuration, or if it's just the program.

I've tried tinkering with the config and using the integration tool, but to no avail; it doesn't improve the lag at all.

My machine has 16 cores, 64 GB of RAM, and an 8 TB NVMe drive, so performance should not be the bottleneck.

Below is my running log, please help me, thanks.

[Feature] support ETH67

Rationale

Why should this feature exist?
What are the use-cases?

Implementation

Do you have ideas regarding the implementation of this feature?
Are you willing to implement this feature?

mdbx_env_open: MDBX_TOO_LARGE

System information

Erigon version: ./erigon v1.0.3

OS & Version: Linux
Commit hash: 065538d5786ac0c83970c8213538924bc17e3c37

Erigon Command (with flags/config):

./erigon --p2p.protocol 66 --nodiscover --log.dir.path ./ --bodies.cache=214748364800 --batchSize=4096M --db.pagesize=16k --datadir ./ --private.api.addr=localhost:9099 --log.console.verbosity 3

Consensus Layer:
NA
Consensus Layer Command (with flags/config):
NA
Chain/Network:
NA

Expected behaviour

The BSC Erigon node starts successfully.

Actual behaviour

[EROR] [04-26|04:08:07.617] Erigon startup                           err="mdbx_env_open: MDBX_TOO_LARGE: Database is too large for current system, e.g. could NOT be mapped into RAM, label: chaindata, trace: [kv_mdbx.go:266 node.go:323 node.go:326 backend.go:204 node.go:112 main.go:59 command.go:274 app.go:332 app.go:309 main.go:36 proc.go:250 asm_amd64.s:1594]"

It prints the error above, then exits.

Steps to reproduce the behaviour

Backtrace

NA

Syncing at execution layer is too slow

This should only be used in very rare cases e.g. if you are not 100% sure if something is a bug or asking a question that leads to improving the documentation. For general questions please use Erigon's discord.

Hi Team,

I have started the node using snapshots, and it syncs really slowly, only 7-8 blk/s. Here are the hardware details for more information:
Machine SKU: Standard_L16
vCPU: 16
RAM: 128GB

I am running the service with the following command:
exec erigon --chain bsc --snapshots=true --db.pagesize=16k --datadir=/bnb-backup/data --txpool.disable --rpc.batch.concurrency=1500 --rpc.batch.limit=1500 --torrent.upload.rate=512mb --torrent.download.rate=512mb --http.addr=0.0.0.0 --http.port=8545 --rpc.returndata.limit=1024000 --p2p.protocol=66

At this rate it takes practically forever to sync with bsc mainnet.


1.0.4 Spins around block 26999999

Hi,

I have now run this node for well over a week, and I've noticed a repeating behavior of writing block bodies around block 26999999. It will work its way down to remaining=1199715, then spin back up and repeat the work. It has done this 3-4 times now.


the stage of execution too slow

blk/s is only 2-4; it can never catch up with the latest block

[5/15 Bodies] No block bodies to write in this log period block number=27910849

System information

OS: linux (amd64)
OS Image: Amazon Linux 2
Kernel version: 5.4.226-129.415.amzn2.x86_64
Container runtime: docker://20.10.17

Erigon version: 2.40.0-dev-3da15fcb

Erigon Command (with flags/config):

erigon --chain=bsc --datadir=/data/erigon --config=/home/erigon/config.toml --prune=hrtc --db.pagesize=16k --log.console.verbosity=info --nat=extip:$(POD_IP) --maxpeers=1000 --p2p.protocol=66 --torrent.download.slots=20 --torrent.download.rate=512mb --bodies.cache=214748364800 --batchSize=4096M

Consensus Layer:

Consensus Layer Command (with flags/config):

Chain/Network: bsc / 56

Expected behaviour

node sync to head and remain synced consistently

Actual behaviour

The node stopped syncing (here, stuck at the bodies stage) and drifted away from the chain's head without any configuration changes.
Also, please note that we already use the config flags mentioned in #41.
The solutions proposed there using the integration command do not work either.

Steps to reproduce the behaviour

Launch the client; sync gets stuck at a fixed block number.

Backtrace

[INFO] [05-04|12:41:20.217] [5/15 Bodies] No block bodies to write in this log period block number=27910849
[INFO] [05-04|12:41:40.217] [5/15 Bodies] Downloading block bodies   block_num=27910849 delivery/sec=334.0KB wasted/sec=0B remaining=4935 delivered=264 cache=4.8GB alloc=9.2GB sys=16.2GB
[INFO] [05-04|12:42:00.217] [5/15 Bodies] No block bodies to write in this log period block number=27910849
[INFO] [05-04|12:42:00.221] [5/15 Bodies] DONE                       in=1m40.005033072s
[INFO] [05-04|12:42:03.978] Commit cycle                             in=3.756665787s
[INFO] [05-04|12:42:03.978] Timings (slower than 50ms)               Headers=6.616s Bodies=1m40.005s
[INFO] [05-04|12:42:03.978] Tables                                   PlainState=233.4GB AccountChangeSet=1.7GB StorageChangeSet=3.6GB BlockTransaction=5.1GB TransactionLog=9.1GB FreeList=45.2MB ReclaimableSpace=181.0GB
[INFO] [05-04|12:42:03.980] RPC Daemon notified of new headers       from=27914761 to=27915785 hash=0x684047a81b5a66d11e06e1fc4d5f432d1de4b8a194f1e20c20c9241fe5c98fc0 header sending=1.239498ms log sending=324ns
[INFO] [05-04|12:42:03.981] [2/15 Headers] Waiting for headers...    from=27915785
[INFO] [05-04|12:42:06.711] [parlia] snapshots build, gather headers block=27900000
[INFO] [05-04|12:42:06.713] [parlia] snapshots build, recover from headers block=27900000
[INFO] [05-04|12:42:07.850] [parlia] snapshots build, gather headers block=27900000
[INFO] [05-04|12:42:07.852] [parlia] snapshots build, recover from headers block=27900000
[INFO] [05-04|12:42:09.759] [parlia] snapshots build, gather headers block=27900000
[INFO] [05-04|12:42:09.769] [parlia] snapshots build, recover from headers block=27900000
[INFO] [05-04|12:42:09.866] [2/15 Headers] Processed                 highest inserted=27915821 age=7s
[INFO] [05-04|12:42:09.885] [5/15 Bodies] Processing bodies...       from=27910849 to=27915821
[INFO] [05-04|12:42:10.248] [txpool] stat                            pending=10000 baseFee=268 queued=30000 alloc=6.1GB sys=16.2GB
[INFO] [05-04|12:42:29.886] [5/15 Bodies] Downloading block bodies   block_num=27910849 delivery/sec=313.3KB wasted/sec=102.5KB remaining=4971 delivered=103 cache=4.8GB alloc=6.3GB sys=16.2GB
[INFO] [05-04|12:42:49.886] [5/15 Bodies] Downloading block bodies   block_num=27910849 delivery/sec=417.2KB wasted/sec=0B remaining=4971 delivered=249 cache=4.8GB alloc=5.8GB sys=16.2GB
[INFO] [05-04|12:43:07.902] [p2p] GoodPeers                          eth66=708
[INFO] [05-04|12:43:09.885] [5/15 Bodies] No block bodies to write in this log period block number=27910849

Always a few hundred blocks behind

System information

Erigon version: ./erigon --version
v1.0.2 3da15fc
OS & Version: Windows/Linux/OSX
Linux
Commit hash:
./build/bin/erigon --datadir="/data/bsc" --chain=bsc --port=30303 --http.port=8545 --authrpc.port=8551 --torrent.port=42069 --private.api.addr=127.0.0.1:9090 --http --ws --http.api=web3,net,eth,debug,trace,txpool --http.corsdomain=* --http.addr=0.0.0.0 --http.vhosts=* --snapshots=false --bootnodes=enode://1cc4534b14cfe351ab740a1418ab944a234ca2f702915eadb7e558a02010cb7c5a8c295a3b56bcefa7701c07752acd5539cb13df2aab8ae2d98934d712611443@52.71.43.172:30311,enode://28b1d16562dac280dacaaf45d54516b85bc6c994252a9825c5cc4e080d3e53446d05f63ba495ea7d44d6c316b54cd92b245c5c328c37da24605c4a93a0d099c4@34.246.65.14:30311,enode://5a7b996048d1b0a07683a949662c87c09b55247ce774aeee10bb886892e586e3c604564393292e38ef43c023ee9981e1f8b335766ec4f0f256e57f8640b079d5@35.73.137.11:30311 --bodies.cache=214748364800 --batchSize=4096M --db.pagesize=16k --maxpeers=500

Hi,
Thank you for your great work,

In the last week it has been lagging behind by about 500 blocks, whereas it was only about 50 blocks behind the previous week. It feels like it is always chasing blocks and never catches up. I have used Erigon for several months, and I remember it always being relatively stable. What is the reason for this? Is there any hope of getting this to work?

[INFO] [04-23|07:59:24.270] [txpool] stat                            pending=2378 baseFee=8 queued=530 alloc=3.4GB sys=9.1GB
[INFO] [04-23|07:59:28.827] [2/15 Headers] No block headers to write in this log period block number=27593858
[INFO] [04-23|07:59:48.827] [2/15 Headers] No block headers to write in this log period block number=27593858
[INFO] [04-23|08:00:00.131] [2/15 Headers] Processed                 highest inserted=27593889 age=52s
[INFO] [04-23|08:00:00.132] [2/15 Headers] DONE                      in=9m31.304740802s
[INFO] [04-23|08:00:00.423] [5/15 Bodies] Processing bodies...       from=27593480 to=27593889
[INFO] [04-23|08:00:20.424] [5/15 Bodies] Downloading block bodies   block_num=27593582 delivery/sec=909.8KB wasted/sec=0B remaining=306 delivered=288 cache=11.6MB alloc=3.8GB sys=9.1GB
[INFO] [04-23|08:00:24.268] [txpool] stat                            pending=2456 baseFee=8 queued=533 alloc=3.8GB sys=9.1GB
[INFO] [04-23|08:00:30.391] [5/15 Bodies] Processed                  highest=27593889
[INFO] [04-23|08:00:30.391] [6/15 Senders] Started                   from=27593480 to=27593889
[INFO] [04-23|08:00:31.883] [7/15 Execution] Blocks execution        from=27593480 to=27593889
[INFO] [04-23|08:00:51.905] [7/15 Execution] Executed blocks         number=27593531 blk/s=2.5 tx/s=447.2 Mgas/s=70.5 gasState=0.00 batch=1.8MB alloc=3.7GB sys=9.1GB
[INFO] [04-23|08:01:12.331] [7/15 Execution] Executed blocks         number=27593593 blk/s=3.0 tx/s=527.6 Mgas/s=83.8 gasState=0.01 batch=4.3MB alloc=4.3GB sys=9.1GB
[INFO] [04-23|08:01:23.758] [p2p] GoodPeers                          eth66=15 eth68=14
[INFO] [04-23|08:01:24.270] [txpool] stat                            pending=2520 baseFee=9 queued=539 alloc=4.6GB sys=9.1GB
[INFO] [04-23|08:01:31.896] [7/15 Execution] Executed blocks         number=27593658 blk/s=3.3 tx/s=602.8 Mgas/s=90.7 gasState=0.01 batch=7.3MB alloc=4.9GB sys=9.1GB
[INFO] [04-23|08:01:52.406] [7/15 Execution] Executed blocks         number=27593723 blk/s=3.2 tx/s=568.2 Mgas/s=87.2 gasState=0.01 batch=10.3MB alloc=5.4GB sys=9.1GB
[INFO] [04-23|08:02:11.937] [7/15 Execution] Executed blocks         number=27593792 blk/s=3.5 tx/s=628.9 Mgas/s=93.4 gasState=0.02 batch=13.3MB alloc=3.7GB sys=9.1GB
[INFO] [04-23|08:02:27.267] [txpool] stat                            pending=2590 baseFee=9 queued=544 alloc=4.1GB sys=9.1GB
[INFO] [04-23|08:02:32.043] [7/15 Execution] Executed blocks         number=27593868 blk/s=3.8 tx/s=634.1 Mgas/s=88.2 gasState=0.02 batch=16.4MB alloc=4.2GB sys=9.1GB
[INFO] [04-23|08:03:02.403] [7/15 Execution] Completed on            block=27593889
[INFO] [04-23|08:03:02.403] [7/15 Execution] DONE                    in=2m30.519920731s
[INFO] [04-23|08:03:02.403] [8/15 HashState] Promoting plain state   from=27593480 to=27593889
[INFO] [04-23|08:03:02.403] [8/15 HashState] Incremental promotion   from=27593480 to=27593889 codes=true csbucket=AccountChangeSet
[INFO] [04-23|08:03:20.987] [8/15 HashState] Incremental promotion   from=27593480 to=27593889 codes=false csbucket=AccountChangeSet
[INFO] [04-23|08:03:23.758] [p2p] GoodPeers                          eth66=15 eth68=14
[INFO] [04-23|08:03:24.270] [txpool] stat                            pending=2684 baseFee=9 queued=559 alloc=5.0GB sys=9.1GB
[INFO] [04-23|08:03:51.794] [8/15 HashState] ETL [2/2] Loading       into=HashedAccount current_prefix=ea56aac2
[INFO] [04-23|08:03:54.919] [8/15 HashState] Incremental promotion   from=27593480 to=27593889 codes=false csbucket=StorageChangeSet
[INFO] [04-23|08:04:24.268] [txpool] stat                            pending=2767 baseFee=9 queued=605 alloc=3.5GB sys=9.1GB
[INFO] [04-23|08:04:27.289] [8/15 HashState] ETL [2/2] Loading       into=HashedStorage current_prefix=d97dd5b8
[INFO] [04-23|08:04:57.290] [8/15 HashState] ETL [2/2] Loading       into=HashedStorage current_prefix=e9dae3d7
[INFO] [04-23|08:05:18.390] [8/15 HashState] DONE                    in=2m15.987694342s
[INFO] [04-23|08:05:18.391] [9/15 IntermediateHashes] Generating intermediate hashes from=27593480 to=27593889
[INFO] [04-23|08:05:23.757] [p2p] GoodPeers                          eth66=15 eth68=14
[INFO] [04-23|08:05:24.270] [txpool] stat                            pending=2818 baseFee=9 queued=670 alloc=4.3GB sys=9.1GB
[INFO] [04-23|08:05:52.250] [9/15 IntermediateHashes] Calculating Merkle root current key=4fef3f1c
[INFO] [04-23|08:06:22.254] [9/15 IntermediateHashes] Calculating Merkle root current key=9e68529e
[INFO] [04-23|08:06:24.268] [txpool] stat                            pending=3586 baseFee=9 queued=765 alloc=3.9GB sys=9.1GB
[INFO] [04-23|08:07:00.632] [9/15 IntermediateHashes] Calculating Merkle root current key=d97dd5b8
[INFO] [04-23|08:07:23.758] [p2p] GoodPeers                          eth66=15 eth68=14
[INFO] [04-23|08:07:33.281] [txpool] stat                            pending=3665 baseFee=9 queued=812 alloc=5.8GB sys=9.1GB
[INFO] [04-23|08:07:41.072] [9/15 IntermediateHashes] Calculating Merkle root current key=e9dae3d7
[INFO] [04-23|08:07:52.505] [9/15 IntermediateHashes] Calculating Merkle root current key=fe1c2c3b
[INFO] [04-23|08:08:14.671] [9/15 IntermediateHashes] DONE           in=2m56.280062354s
[INFO] [04-23|08:08:15.242] [10/15 CallTraces] Pruned call trace intermediate table from=27503480 to=27503888
[INFO] [04-23|08:08:24.268] [txpool] stat                            pending=3757 baseFee=9 queued=854 alloc=4.1GB sys=9.1GB
[INFO] [04-23|08:08:45.332] [10/15 CallTraces] ETL [2/2] Loading     into=CallFromIndex current_prefix=bd902132
[INFO] [04-23|08:09:23.758] [p2p] GoodPeers                          eth68=14 eth66=15
[INFO] [04-23|08:09:24.272] [txpool] stat                            pending=3838 baseFee=9 queued=906 alloc=4.9GB sys=9.1GB
[INFO] [04-23|08:09:25.567] [10/15 CallTraces] DONE                  in=1m10.896425372s
[INFO] [04-23|08:09:55.879] [11/15 AccountHistoryIndex] ETL [2/2] Loading into=AccountHistory current_prefix=bf2a2fa0
[INFO] [04-23|08:10:24.268] [txpool] stat                            pending=3945 baseFee=9 queued=941 alloc=3.9GB sys=9.1GB
[INFO] [04-23|08:10:39.026] [12/15 StorageHistoryIndex] ETL [2/2] Loading into=StorageHistory current_prefix=55d39832
[INFO] [04-23|08:11:09.026] [12/15 StorageHistoryIndex] ETL [2/2] Loading into=StorageHistory current_prefix=e3b1d32e
[INFO] [04-23|08:11:15.907] [12/15 StorageHistoryIndex] DONE         in=1m8.279033927s
[INFO] [04-23|08:11:15.907] [13/15 LogIndex] processing              from=27593481 to=27593889

Node is lagging behind during sync

System information

Erigon version: 2.40.0-dev

OS & Version: Linux

Commit hash:

Erigon Command (with flags/config):
erigon --chain=bsc --datadir=/srv/svc --metrics --metrics.addr=0.0.0.0 --metrics.port=6060 --private.api.addr=0.0.0.0:9090 --pprof --pprof.addr=0.0.0.0 --pprof.port=6061 --http.api=eth,erigon,web3,net,debug,trace,txpool,parity,bor --prune htrc --ws

Consensus Layer:

Consensus Layer Command (with flags/config):

Chain/Network: BSC Mainnet (chain ID 56)

Expected behaviour

It should sync to the current block.

Actual behaviour

It stays a few blocks behind the chain tip while syncing.

Steps to reproduce the behaviour

Just download and sync.
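The lag can be quantified by comparing the node's head with a reference endpoint. A minimal sketch, assuming the node's HTTP RPC is reachable at 127.0.0.1:8545 and using a public BSC RPC for reference (both URLs are assumptions to be replaced with your own):

```python
import json
import urllib.request


def rpc_call(url: str, method: str, params=None) -> dict:
    """POST a single JSON-RPC request and return the decoded response."""
    payload = json.dumps({
        "jsonrpc": "2.0", "id": 1,
        "method": method, "params": params or [],
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


def block_lag(local_hex: str, reference_hex: str) -> int:
    """How many blocks the local node trails the reference (negative = ahead).

    Both arguments are hex block numbers as returned by eth_blockNumber.
    """
    return int(reference_hex, 16) - int(local_hex, 16)


# Usage against live endpoints (requires a running node; URLs are assumptions):
#   local = rpc_call("http://127.0.0.1:8545", "eth_blockNumber")["result"]
#   ref = rpc_call("https://bsc-dataseed.binance.org", "eth_blockNumber")["result"]
#   print("lag:", block_lag(local, ref))
```

Running this in a loop makes it easy to see whether the node periodically catches up (as described below) or falls steadily behind.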

Backtrace

[backtrace]

Hello there,
My NodeReal bsc-erigon build has been lagging behind during sync for about 2 weeks now.
Every few blocks it prints output like this:

[INFO] [04-11|03:35:14.497] [5/15 Bodies] No block bodies to write in this log period block number=27241889
[INFO] [04-11|03:35:34.498] [5/15 Bodies] No block bodies to write in this log period block number=27241889

It then waits a few seconds, catches up to the current block, and the same thing happens again: the node keeps falling behind.

BSC node has very high memory usage

System information

Erigon version: 2.40.0-dev-3da15fcb

OS & Version: Linux
Commit hash:
Erigon Command (with flags/config): node/archive/bsc/bin/erigon --datadir=/node/archive/bsc/erigon/ --chain=bsc --db.read.concurrency 3000 --rpc.batch.concurrency 10000 --rpc.gascap=600000000 --rpc.evmtimeout=30s --rpc.batch.limit=100 --rpc.returndata.limit=104857600 --http=true --http.api eth,net,web3,debug,trace,txpool --http.addr 0.0.0.0 --http.corsdomain '*' --http.vhosts '*' --ws --authrpc.addr 127.0.0.1 --port=21040 --p2p.allowed-ports 21046,21048 --http.port 21041 --private.api.addr="" --authrpc.port=21043 --torrent.port=21044 --p2p.protocol=66 --maxpeers=150
Consensus Layer:
Consensus Layer Command (with flags/config):
Chain/Network: bsc

Hi, since the BSC hard fork, memory usage has been very high: within half an hour to an hour of startup, it climbs above 60 GB.
I have already described the specifics in other posts.

ledgerwatch#7320
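Memory growth like this can be inspected through Go's pprof endpoint, assuming Erigon is started with `--pprof --pprof.addr=127.0.0.1 --pprof.port=6061` (these flags are not in the command above and would need to be added; address and port are assumptions). The text form of the heap profile (`?debug=1`) ends with a runtime.MemStats dump of `# Name = value` lines; a small sketch that fetches and summarizes it:

```python
import re
import urllib.request


def parse_memstats(pprof_text: str) -> dict:
    """Extract the '# Name = value' runtime.MemStats lines from a
    text-format Go heap profile (/debug/pprof/heap?debug=1)."""
    stats = {}
    for m in re.finditer(r"^# (\w+) = (\d+)$", pprof_text, re.MULTILINE):
        stats[m.group(1)] = int(m.group(2))
    return stats


def fetch_heap_stats(base: str = "http://127.0.0.1:6061") -> dict:
    """Download and parse the heap profile from a running node.

    `base` is an assumed pprof address matching --pprof.addr/--pprof.port.
    """
    url = f"{base}/debug/pprof/heap?debug=1"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_memstats(resp.read().decode())


# Usage against a live node (requires --pprof enabled):
#   s = fetch_heap_stats()
#   gib = 1 << 30
#   print(f"HeapAlloc={s['HeapAlloc'] / gib:.1f}GiB Sys={s['Sys'] / gib:.1f}GiB")
```

Comparing `HeapAlloc` against `Sys` over time helps distinguish Go heap growth from memory the runtime holds but has released, which is useful context when reporting this kind of issue.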

Sometimes synchronization makes no progress because stage 5 (Bodies) keeps repeating.

System information

Erigon version: release 1.0.2

OS & Version: ubuntu

Commit hash: 3da15fc

Erigon Command (with flags/config):

nohup ./bin/erigon \
         --datadir=./data \
         --chain=bsc \
         --http.addr=0.0.0.0 \
         --http.port=8545 \
         --http.corsdomain=* \
         --ws \
         --http.vhosts=* \
         --http.api=eth,net,trace,web3,erigon \
         --torrent.download.rate 1024mb \
         --prune hrtc \
         --batchSize "1024M" \
         --etl.bufferSize "1024M" \
         --snap.stop \
         --maxpeers 1000 \
         --p2p.protocol=66 &

Logs

[INFO] [04-27|10:36:16.558] [2/15 Headers] Wrote block headers       number=27711963 blk/second=2.650 alloc=4.3GB sys=9.6GB
[INFO] [04-27|10:36:18.280] [2/15 Headers] Processed                 highest inserted=27711998 age=2s
[INFO] [04-27|10:36:18.280] [2/15 Headers] DONE                      in=11m21.722558317s
[INFO] [04-27|10:36:18.437] [5/15 Bodies] Processing bodies...       from=27711681 to=27711998
[INFO] [04-27|10:36:35.081] [txpool] stat                            pending=2191 baseFee=55 queued=4828 alloc=4.4GB sys=9.6GB
[INFO] [04-27|10:36:38.438] [5/15 Bodies] Downloading block bodies   block_num=27711681 delivery/sec=307.6KB wasted/sec=0B remaining=316 delivered=100 cache=35.0MB alloc=4.5GB sys=9.6GB
[INFO] [04-27|10:36:58.437] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:37:18.437] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:37:33.597] [p2p] GoodPeers                          eth66=996
[INFO] [04-27|10:37:35.082] [txpool] stat                            pending=2193 baseFee=55 queued=4829 alloc=4.2GB sys=9.6GB
[INFO] [04-27|10:37:38.437] [5/15 Bodies] Downloading block bodies   block_num=27711681 delivery/sec=173.5KB wasted/sec=0B remaining=316 delivered=162 cache=38.4MB alloc=4.2GB sys=9.6GB
[INFO] [04-27|10:37:58.438] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:37:58.438] [5/15 Bodies] DONE                       in=1m40.000854917s
[INFO] [04-27|10:37:58.446] Commit cycle                             in=7.830907ms
[INFO] [04-27|10:37:58.446] Timings (slower than 50ms)               Headers=11m21.722s BlockHashes=154ms Bodies=1m40s
[INFO] [04-27|10:37:58.446] Tables                                   PlainState=229.7GB AccountChangeSet=2.2GB StorageChangeSet=5.9GB BlockTransaction=144.9GB TransactionLog=9.3GB FreeList=11.3MB ReclaimableSpace=45.1GB
[INFO] [04-27|10:37:58.446] RPC Daemon notified of new headers       from=27711681 to=27711998 hash=0x12f5542c45f33d42d30ce266bf7c8c93e604be71690c8f1a165817aa4e1e18f6 header sending=294.605µs log sending=260ns
[INFO] [04-27|10:37:58.446] [2/15 Headers] Waiting for headers...    from=27711998
[INFO] [04-27|10:37:59.331] [2/15 Headers] Processed                 highest inserted=27712014 age=55s
[INFO] [04-27|10:37:59.338] [5/15 Bodies] Processing bodies...       from=27711681 to=27712014
[INFO] [04-27|10:38:19.339] [5/15 Bodies] Downloading block bodies   block_num=27711681 delivery/sec=520.0KB wasted/sec=0B remaining=332 delivered=175 cache=50.0MB alloc=4.5GB sys=9.6GB
[INFO] [04-27|10:38:35.081] [txpool] stat                            pending=2194 baseFee=55 queued=4830 alloc=4.6GB sys=9.6GB
[INFO] [04-27|10:38:39.339] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:38:59.339] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:39:19.339] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:39:33.596] [p2p] GoodPeers                          eth66=991
[INFO] [04-27|10:39:35.082] [txpool] stat                            pending=2333 baseFee=55 queued=5024 alloc=4.4GB sys=9.6GB
[INFO] [04-27|10:39:39.339] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:39:39.339] [5/15 Bodies] DONE                       in=1m40.000868168s
[INFO] [04-27|10:39:39.340] Commit cycle                             in=851.89µs
[INFO] [04-27|10:39:39.340] Timings (slower than 50ms)               Headers=884ms Bodies=1m40s
[INFO] [04-27|10:39:39.340] Tables                                   PlainState=229.7GB AccountChangeSet=2.2GB StorageChangeSet=5.9GB BlockTransaction=144.9GB TransactionLog=9.3GB FreeList=11.3MB ReclaimableSpace=45.1GB
[INFO] [04-27|10:39:39.341] RPC Daemon notified of new headers       from=27711681 to=27712014 hash=0x8a979390fc5d5398735038c52252af33b77a12c0e275d61ce440618eb39ef32a header sending=330.602µs log sending=257ns
[INFO] [04-27|10:39:39.341] [2/15 Headers] Waiting for headers...    from=27712014
[INFO] [04-27|10:39:39.528] [2/15 Headers] Processed                 highest inserted=27712046 age=59s
[INFO] [04-27|10:39:39.543] [5/15 Bodies] Processing bodies...       from=27711681 to=27712046
[INFO] [04-27|10:39:59.544] [5/15 Bodies] Downloading block bodies   block_num=27711681 delivery/sec=326.1KB wasted/sec=0B remaining=364 delivered=112 cache=58.0MB alloc=4.9GB sys=9.6GB
[INFO] [04-27|10:40:19.545] [5/15 Bodies] Downloading block bodies   block_num=27711681 delivery/sec=277.1KB wasted/sec=0B remaining=364 delivered=204 cache=63.4MB alloc=5.2GB sys=9.6GB
[INFO] [04-27|10:40:35.082] [txpool] stat                            pending=2746 baseFee=55 queued=5560 alloc=5.5GB sys=9.6GB
[INFO] [04-27|10:40:39.544] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:40:59.544] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:41:19.544] [5/15 Bodies] No block bodies to write in this log period block number=27711681
[INFO] [04-27|10:41:19.544] [5/15 Bodies] DONE                       in=1m40.001151674s
[INFO] [04-27|10:41:19.546] Commit cycle                             in=1.375297ms

Plato migration failure

Erigon version: v1.0.5

OS & Version: Linux/amd64 (Ubuntu 22.04)

The node reaches the upgrade block and the upgrade fails; after a restart, the consensus engine in the upgraded regime seems unable to process the existing snapshot state data and fails again:

[INFO] [05-16|22:28:40.143] [2/15 Headers] Waiting for headers...    from=29861022
[INFO] [05-16|22:28:43.076] [2/15 Headers] Processed                 highest inserted=29861023 age=0
[INFO] [05-16|22:28:43.094] [7/15 Execution] Completed on            block=29861023
[INFO] [05-16|22:28:43.102] Commit cycle                             in=2.170581ms
[INFO] [05-16|22:28:43.102] Timings (slower than 50ms)               Headers=2.933s
[INFO] [05-16|22:28:43.102] Tables                                   PlainState=32.8GB AccountChangeSet=16.4GB StorageChangeSet=69.0GB BlockTransaction=132.1GB TransactionLog=87.7GB Fr>
[INFO] [05-16|22:28:43.102] RPC Daemon notified of new headers       from=29861022 to=29861023 hash=0x566c57e3b41a07e504bc3305516eb50797814a766e6b9ddb0c1ecffc2b50e12a header sending=10>
[INFO] [05-16|22:28:43.103] [2/15 Headers] Waiting for headers...    from=29861023
[EROR] [05-16|22:28:46.112] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861024 blockHash=0x5d78b2201c37e72a5d3d33a4182aaa6cb0bf4132ad5aa01afe179759a>
[INFO] [05-16|22:28:46.113] [2/15 Headers] Processed                 highest inserted=29861024 age=0
[INFO] [05-16|22:28:46.114] [7/15 Execution] Completed on            block=29861024
[EROR] [05-16|22:28:46.114] [9/15 IntermediateHashes] Wrong trie root of block 29861024: 74c3fec75ed11d77d2d3f5c9285f1262cf262217204899218bdcc8632d50c2a2, expected (from header): 7d0c4>
[WARN] [05-16|22:28:46.114] Unwinding due to incorrect root hash     to=29861023
[INFO] [05-16|22:28:46.114] UnwindTo                                 block=29861023 bad_block_hash=0x5d78b2201c37e72a5d3d33a4182aaa6cb0bf4132ad5aa01afe179759a8a4d3ff
[INFO] [05-16|22:28:46.114] [8/15 HashState] Unwinding started       from=29861024 to=29861023 storage=false codes=true
[INFO] [05-16|22:28:46.114] [8/15 HashState] Unwinding started       from=29861024 to=29861023 storage=false codes=false
[INFO] [05-16|22:28:46.114] [8/15 HashState] Unwinding started       from=29861024 to=29861023 storage=true codes=false
[INFO] [05-16|22:28:46.115] [7/15 Execution] Unwind Execution        from=29861024 to=29861023
[INFO] [05-16|22:28:46.115] [2/15 Headers] Waiting for headers...    from=29861023
[WARN] [05-16|22:28:46.129] [downloader] Rejected header marked as bad hash=0x5d78b2201c37e72a5d3d33a4182aaa6cb0bf4132ad5aa01afe179759a8a4d3ff height=29861024
[WARN] [05-16|22:28:46.171] [downloader] Rejected header marked as bad hash=0x5d78b2201c37e72a5d3d33a4182aaa6cb0bf4132ad5aa01afe179759a8a4d3ff height=29861024
[WARN] [05-16|22:28:46.174] [downloader] Rejected header marked as bad hash=0x5d78b2201c37e72a5d3d33a4182aaa6cb0bf4132ad5aa01afe179759a8a4d3ff height=29861024

...

Stopping Erigon blockchain node...
[INFO] [05-16|22:31:47.323] Got interrupt, shutting down...
[INFO] [05-16|22:31:47.323] Exiting Engine...
[INFO] [05-16|22:31:47.323] Exiting...
[INFO] [05-16|22:31:47.323] RPC server shutting down
[WARN] [05-16|22:31:47.323] Failed to serve http endpoint            err="http: Server closed"
[INFO] [05-16|22:31:47.323] HTTP endpoint closed                     url=127.0.0.1:9545
[INFO] [05-16|22:31:47.323] RPC server shutting down
[INFO] [05-16|22:31:47.323] RPC server shutting down
[WARN] [05-16|22:31:47.323] Failed to serve http endpoint            err="http: Server closed"
[INFO] [05-16|22:31:47.323] Engine HTTP endpoint close               url=127.0.0.1:9551
[email protected]: Deactivated successfully.
Stopped Erigon blockchain node.
[email protected]: Consumed 30min 23.843s CPU time.
Started Erigon blockchain node.
[INFO] [05-16|22:31:57.980] logging to file system                   log dir=/scratch/node/chapel/logs file prefix=erigon log level=crit json=false
[INFO] [05-16|22:31:57.985] Build info                               git_branch= git_tag= git_commit=
[INFO] [05-16|22:31:57.985] Starting Erigon on Chapel testnet...
[INFO] [05-16|22:31:57.986] Maximum peer count                       ETH=200 total=200
[INFO] [05-16|22:31:57.986] starting HTTP APIs                       APIs=admin,debug,eth,net,web3,erigon
[INFO] [05-16|22:31:57.986] torrent verbosity                        level=WRN
[INFO] [05-16|22:31:57.986] [torrent] Public IP                      ip=15.235.54.154
[INFO] [05-16|22:31:57.987] Set global gas cap                       cap=50000000
[INFO] [05-16|22:31:57.991] [Downloader] Runnning with               ipv6-enabled=true ipv4-enabled=true download.rate=64mb upload.rate=4mb
[INFO] [05-16|22:31:57.991] Opening Database                         label=chaindata path=/scratch/node/chapel/chaindata
[INFO] [05-16|22:31:57.991] [db] params: growStep=2GB, mapsSize=7TB, shrinkThreshold=-1, pageSize=16KB, label=chaindata, WriteMap=false, Durable=false, NoReadahead=true,
[INFO] [05-16|22:31:57.992] Initialised chain configuration          config="{ChainID: 97 Ramanujan: 1010000, Niels: 1014369, MirrorSync: 5582500, Bruno: 13837000, Euler: 19203503, Gib>
[WARN] [05-16|22:31:57.992] Incorrect snapshot enablement            got=true change_to=false
[INFO] [05-16|22:31:57.992] Effective                                prune_flags="--prune.c.older=90000" snapshot_flags= history.v3=false
[INFO] [05-16|22:31:57.994] Initialising Ethereum protocol           network=97
[INFO] [05-16|22:31:58.408] Starting private RPC server              on=127.0.0.1:10097
[INFO] [05-16|22:31:58.408] new subscription to logs established
[INFO] [05-16|22:31:58.408] rpc filters: subscribing to Erigon events
[INFO] [05-16|22:31:58.409] new subscription to newHeaders established
[INFO] [05-16|22:31:58.409] New txs subscriber joined
[INFO] [05-16|22:31:58.409] Reading JWT secret                       path=/scratch/node/chapel/jwt.hex
[INFO] [05-16|22:31:58.410] HTTP endpoint opened for Engine API      url=127.0.0.1:9551 ws=true ws.compression=true
[INFO] [05-16|22:31:58.410] HTTP endpoint opened                     url=127.0.0.1:9545 ws=true ws.compression=true grpc=false
[INFO] [05-16|22:31:58.414] Started P2P networking                   version=67 self=enode://412e32aca65b49a8f35c29214fbb5b0b4a7a44554d0e10fc7c71ffd32caa00ca6e46cb2f0d0f23a6b3f1f8abf7c>
[INFO] [05-16|22:31:58.414] Started P2P networking                   version=66 self=enode://412e32aca65b49a8f35c29214fbb5b0b4a7a44554d0e10fc7c71ffd32caa00ca6e46cb2f0d0f23a6b3f1f8abf7c>
[INFO] [05-16|22:31:58.417] Started P2P networking                   version=68 self=enode://412e32aca65b49a8f35c29214fbb5b0b4a7a44554d0e10fc7c71ffd32caa00ca6e46cb2f0d0f23a6b3f1f8abf7c>
[INFO] [05-16|22:31:58.419] [txpool] Started
[INFO] [05-16|22:31:58.419] [2/15 Headers] Waiting for headers...    from=29861023
[INFO] [05-16|22:32:01.333] New txs subscriber joined
[INFO] [05-16|22:32:01.333] new subscription to newHeaders established
[INFO] [05-16|22:32:58.409] [txpool] stat                            pending=13 baseFee=0 queued=12 alloc=91.9MB sys=208.4MB
[INFO] [05-16|22:33:57.994] [p2p] GoodPeers                          eth67=2 eth66=7
[INFO] [05-16|22:33:58.409] [txpool] stat                            pending=45 baseFee=0 queued=34 alloc=83.7MB sys=212.9MB
[INFO] [05-16|22:34:12.726] [2/15 Headers] Inserting headers         progress=29861023 queue=1
[EROR] [05-16|22:34:12.728] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861025 blockHash=0xcff232a0021469422520093a26ad86a95e8c58fc533c9d97dbceb5810>
[EROR] [05-16|22:34:12.731] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861026 blockHash=0x82506ae10091cdaf01811487b8c2a3ae5be81f7c99120009baa1e0b53>
[EROR] [05-16|22:34:12.731] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861026 blockHash=0x82506ae10091cdaf01811487b8c2a3ae5be81f7c99120009baa1e0b53>
[EROR] [05-16|22:34:12.733] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861027 blockHash=0x7f5f5a7760aff9ec6dae628f0893c17454ef64fb89b5b058822ad2a86>
[EROR] [05-16|22:34:12.733] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861027 blockHash=0x7f5f5a7760aff9ec6dae628f0893c17454ef64fb89b5b058822ad2a86>
[EROR] [05-16|22:34:12.735] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861028 blockHash=0xfba701379efc76254003876ae4aad7e37c707aa06e23cae64753d5514>
[EROR] [05-16|22:34:12.735] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861028 blockHash=0xfba701379efc76254003876ae4aad7e37c707aa06e23cae64753d5514>
[EROR] [05-16|22:34:12.737] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861029 blockHash=0xc04294d88837a0a21a6e829ae28bc512707fcc8cd3210dfcb9f5a4d7e>

...

[EROR] [05-16|22:34:12.863] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861092 blockHash=0x8c6b3e699d16ee084a53ce1a8e0923e6802d49a9709632db275fa7f18>
[EROR] [05-16|22:34:12.863] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861092 blockHash=0x8c6b3e699d16ee084a53ce1a8e0923e6802d49a9709632db275fa7f18>
[EROR] [05-16|22:34:12.866] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861093 blockHash=0x148ecf0c8955137cf4d45ca49b8779123201052cfd808b08d1b3df158>
[EROR] [05-16|22:34:12.866] Unexpected error when getting snapshot   error="unknown ancestor" blockNumber=29861093 blockHash=0x148ecf0c8955137cf4d45ca49b8779123201052cfd808b08d1b3df158>
[INFO] [05-16|22:34:12.867] [2/15 Headers] Processed                 highest inserted=29861093 age=1m59s
[INFO] [05-16|22:34:12.867] [2/15 Headers] DONE                      in=2m14.448336512s
[INFO] [05-16|22:34:12.889] [5/15 Bodies] Processing bodies...       from=29861023 to=29861093
[INFO] [05-16|22:34:12.937] [5/15 Bodies] Processed                  highest=29861093
[INFO] [05-16|22:34:12.937] [6/15 Senders] Started                   from=29861023 to=29861093
[INFO] [05-16|22:34:12.940] [7/15 Execution] Blocks execution        from=29861023 to=29861093
[WARN] [05-16|22:34:12.941] [7/15 Execution] Execution failed        block=29861025 hash=0xcff232a0021469422520093a26ad86a95e8c58fc533c9d97dbceb581026f2628 err="mismatched receipt head>
[INFO] [05-16|22:34:12.941] UnwindTo                                 block=29861024 bad_block_hash=0xcff232a0021469422520093a26ad86a95e8c58fc533c9d97dbceb581026f2628
[INFO] [05-16|22:34:12.941] [7/15 Execution] Completed on            block=29861024
[INFO] [05-16|22:34:12.942] Timings (slower than 50ms)               Headers=2m14.448s
[INFO] [05-16|22:34:12.942] RPC Daemon notified of new headers       from=29861023 to=29861093 hash=0x0000000000000000000000000000000000000000000000000000000000000000 header sending=42>
[INFO] [05-16|22:34:12.942] [2/15 Headers] Waiting for headers...    from=29861024
[WARN] [05-16|22:34:12.959] [downloader] Rejected header marked as bad hash=0xcff232a0021469422520093a26ad86a95e8c58fc533c9d97dbceb581026f2628 height=29861025
[WARN] [05-16|22:34:12.971] [downloader] Rejected header marked as bad hash=0xcff232a0021469422520093a26ad86a95e8c58fc533c9d97dbceb581026f2628 height=29861025
[WARN] [05-16|22:34:12.973] [downloader] Rejected header marked as bad hash=0xcff232a0021469422520093a26ad86a95e8c58fc533c9d97dbceb581026f2628 height=29861025
[WARN] [05-16|22:34:12.975] [downloader] Rejected header marked as bad hash=0xcff232a0021469422520093a26ad86a95e8c58fc533c9d97dbceb581026f2628 height=29861025
[WARN] [05-16|22:34:12.984] [downloader] Rejected header marked as bad hash=0xcff232a0021469422520093a26ad86a95e8c58fc533c9d97dbceb581026f2628 height=29861025

Code of address 0x0000000000000000000000000000000000001000 is not correct after Luban hardfork

System information

Erigon version: ./erigon --version

OS & Version: Windows/Linux/OSX

Commit hash:

Erigon Command (with flags/config):

Consensus Layer:

Consensus Layer Command (with flags/config):

Chain/Network: Chapel

Expected behaviour

The code of address 0x0000000000000000000000000000000000001000 changed at the Luban hardfork (block 0x1bf01ca), but erigon does not return the updated code for historical blocks after the fork. If getCode is called with "latest", however, the result is correct.

request:
{ "jsonrpc": "2.0", "id": 1, "method": "eth_getCode", "params": [ "0x0000000000000000000000000000000000001000", "0x1bf01cc" ] }

expected:

{ "jsonrpc": "2.0", "id": 1, "result": "0x60806040526004361061046c57600.........." }

Actual behaviour

{ "jsonrpc": "2.0", "id": 1, "result": "0x60806040526004361061040557..........." }

Steps to reproduce the behaviour
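The comparison above can be scripted. A minimal sketch, assuming a local RPC endpoint at 127.0.0.1:8545 (the URL is an assumption); it issues eth_getCode for a post-fork block and for "latest" so the returned code blobs can be diffed:

```python
import json
import urllib.request

# BSC system contract from the report above.
SYSTEM_CONTRACT = "0x0000000000000000000000000000000000001000"


def getcode_payload(address: str, block: str) -> dict:
    """Build the eth_getCode JSON-RPC request body."""
    return {"jsonrpc": "2.0", "id": 1,
            "method": "eth_getCode", "params": [address, block]}


def get_code(url: str, address: str, block: str) -> str:
    """POST the request and return the hex code blob from 'result'."""
    data = json.dumps(getcode_payload(address, block)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["result"]


# Usage against a live node (endpoint URL is an assumption):
#   url = "http://127.0.0.1:8545"
#   post_fork = get_code(url, SYSTEM_CONTRACT, "0x1bf01cc")  # just after Luban
#   latest = get_code(url, SYSTEM_CONTRACT, "latest")
#   print("post-fork == latest:", post_fork == latest)
```

On a correctly synced node the two results should match, since 0x1bf01cc is after the fork block.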

Backtrace

[backtrace]

v1.0.4 fresh sync fails

Hi,

I decided to spin up a fresh bsc-erigon node today without using snapshots, using the v1.0.4 binary.

At around 668.1 GB of chaindata this error occurred:

WARN[05-05|21:18:04.418] Index file has timestamp before segment file, will be recreated segfile=/root/dev/bsc-erigon/bsc-erigon-1.0.4/build/bin/node/snapshots/v1-021500-022000-headers.seg segtime=2023-05-05T21:18:04+0200 idxfile=v1-021500-022000-headers.idx idxtime=2023-05-05T20:49:58+0200
WARN[05-05|21:18:04.420] Index file has timestamp before segment file, will be recreated segfile=/root/dev/bsc-erigon/bsc-erigon-1.0.4/build/bin/node/snapshots/v1-022000-022500-headers.seg segtime=2023-05-05T21:10:16+0200 idxfile=v1-022000-022500-headers.idx idxtime=2023-05-05T19:30:22+0200
WARN[05-05|21:18:04.420] Index file has timestamp before segment file, will be recreated segfile=/root/dev/bsc-erigon/bsc-erigon-1.0.4/build/bin/node/snapshots/v1-022500-023000-headers.seg segtime=2023-05-05T21:11:16+0200 idxfile=v1-022500-023000-headers.idx idxtime=2023-05-05T12:15:07+0200
WARN[05-05|21:18:04.496] Index file has timestamp before segment file, will be recreated segfile=/root/dev/bsc-erigon/bsc-erigon-1.0.4/build/bin/node/snapshots/v1-022000-022500-bodies.seg segtime=2023-05-05T21:10:17+0200 idxfile=v1-022000-022500-bodies.idx idxtime=2023-05-05T20:49:57+0200
WARN[05-05|21:18:04.497] Index file has timestamp before segment file, will be recreated segfile=/root/dev/bsc-erigon/bsc-erigon-1.0.4/build/bin/node/snapshots/v1-022500-023000-bodies.seg segtime=2023-05-05T21:09:11+0200 idxfile=v1-022500-023000-bodies.idx idxtime=2023-05-05T20:49:57+0200
EROR[05-05|21:18:06.436] Staged Sync err="[1/15 Snapshots] BuildMissedIndices: HeadersIdx: at=v1-021500-022000-headers.seg, file: v1-021500-022000-headers.seg, runtime error: slice bounds out of range [:-1], [decompress.go:532 panic.go:884 panic.go:139 decompress.go:553 block_snapshots.go:1899 block_snapshots.go:1823 block_snapshots.go:869 block_snapshots.go:912 errgroup.go:75 asm_amd64.s:1594], [block_snapshots.go:1806 panic.go:884 decompress.go:532 panic.go:884 panic.go:139 decompress.go:553 block_snapshots.go:1899 block_snapshots.go:1823 block_snapshots.go:869 block_snapshots.go:912 errgroup.go:75 asm_amd64.s:1594]"
INFO[05-05|21:18:06.940] [1/15 Snapshots] Fetching torrent files metadata
INFO[05-05|21:18:26.945] [1/15 Snapshots] download progress="98.71% 659.5GB/668.1GB" download-time-left=0hrs:15m total-download-time=20s download=9.2MB/s upload=0B/s
INFO[05-05|21:18:26.946] [1/15 Snapshots] download peers=8 connections=10 files=170 alloc=4.4GB sys=6.6GB
INFO[05-05|21:18:46.944] [1/15 Snapshots] download progress="98.77% 659.9GB/668.1GB" download-time-left=0hrs:6m total-download-time=40s download=21.4MB/s upload=0B/s
INFO[05-05|21:18:46.944] [1/15 Snapshots] download peers=7 connections=8 files=170 alloc=2.7GB sys=6.6GB
INFO[05-05|21:18:54.998] [txpool] stat pending=1 baseFee=0 queued=184 alloc=3.0GB sys=6.6GB
INFO[05-05|21:19:06.945] [1/15 Snapshots] download progress="98.84% 660.4GB/668.1GB" download-time-left=0hrs:5m total-download-time=1m0s download=24.0MB/s upload=0B/s
INFO[05-05|21:19:06.945] [1/15 Snapshots] download peers=9 connections=10 files=170 alloc=3.3GB sys=6.6GB

It spins in circles like this, deleting and re-downloading the files, only to fail again.

Unwinding execution by 10 blocks did not help
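The warnings above fire whenever a .idx file's modification time precedes its matching .seg file, which makes Erigon rebuild those indices on every start. A small sketch (the snapshots directory path is an assumption) to list which snapshot indices would be recreated:

```python
from pathlib import Path


def stale_indices(snapshot_dir: str) -> list[str]:
    """Return names of .idx files older than their matching .seg file,
    i.e. the indices Erigon will recreate on the next start."""
    snap = Path(snapshot_dir)
    stale = []
    for seg in snap.glob("*.seg"):
        idx = seg.with_suffix(".idx")  # v1-...-headers.seg -> v1-...-headers.idx
        if idx.exists() and idx.stat().st_mtime < seg.stat().st_mtime:
            stale.append(idx.name)
    return sorted(stale)


if __name__ == "__main__":
    # Assumed datadir layout, matching the segfile paths in the log above.
    for name in stale_indices("node/snapshots"):
        print("will be recreated:", name)
```

If the same files show up as stale after every restart, the segments themselves are being re-downloaded, which matches the loop described in this report.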

Node specs:
2 TB SSD,
64 GB RAM,
14 cores,
Netcup root dedicated server

BSC Testnet stuck v1.0.4

System information

Erigon version: 2.43.0-dev-7bfcc2bb

OS & Version: Ubuntu 22.04
Commit hash:
7bfcc2b
Erigon Command (with flags/config):
/usr/local/bin/erigon --chain=chapel --datadir=/node/data --metrics --metrics.addr=0.0.0.0 --metrics.port=6060 --private.api.addr=0.0.0.0:9090 --pprof --pprof.addr=0.0.0.0 --pprof.port=6061 --torrent.download.rate=1000mb --bodies.cache=214748364800 --batchSize=4096M --p2p.protocol=66 --db.size.limit=1800GB --http=false

Chain/Network:
Chapel

May 12 09:18:58 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:18:58.872] [2/15 Headers] Waiting for headers...    from=29614555
May 12 09:18:59 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:18:59.366] [parlia] snapshots build, gather headers block=29600000
May 12 09:18:59 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:18:59.545] New txs subscriber joined
May 12 09:18:59 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:18:59.546] new subscription to newHeaders established
May 12 09:18:59 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:18:59.773] [txpool] Started
May 12 09:19:01 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:19:01.833] local tx propagated                      tx_hash=f2f37c3bcf91e0a766cd16a900522fbad0c7c47de287ff2f6226f65299f2d4d2 announced to peers=3 broadcast to peers=1 baseFee=0
May 12 09:19:20 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:19:20.345] local tx propagated                      tx_hash=d4f4f159b12f7639a07d364cf1c176df1fb03380d5eb1f086ed6b828d3c41ae4 announced to peers=3 broadcast to peers=1 baseFee=0
May 12 09:19:58 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:19:58.858] [txpool] stat                            pending=10000 baseFee=0 queued=29915 alloc=241.6MB sys=277.0MB
May 12 09:20:45 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:20:45.468] local tx propagated                      tx_hash=c4aa0629eb35a60458200f4ec3808d720d929171206b3da574f587349b203a54 announced to peers=5 broadcast to peers=2 baseFee=0
May 12 09:20:51 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:20:51.566] local tx propagated                      tx_hash=ff2dcc715ca429e13b639c49135a3b338854586f2641fbd04564565ef4291cdf announced to peers=5 broadcast to peers=2 baseFee=0
May 12 09:20:58 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:20:58.240] [p2p] GoodPeers                          eth66=5
May 12 09:20:58 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:20:58.885] [txpool] stat                            pending=10000 baseFee=0 queued=30000 alloc=293.0MB sys=331.5MB
May 12 09:20:59 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:20:59.609] local tx propagated                      tx_hash=98f4632116188036891d9e9ef231e2733f8571f4f1a6a973f003e74e21d4594c announced to peers=4 broadcast to peers=2 baseFee=0
May 12 09:21:58 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:21:58.858] [txpool] stat                            pending=10000 baseFee=0 queued=30000 alloc=236.6MB sys=332.4MB
May 12 09:22:05 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:22:05.772] local tx propagated                      tx_hash=319f07cf8bd43f2654e07ff76472e44e9769723c14a74856e02c5626e924c6ef announced to peers=4 broadcast to peers=2 baseFee=0
May 12 09:22:47 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:22:47.127] local tx propagated                      tx_hash=7106493a14acca31e146f7e8852333b06912cb8aa1390a8e23432c5dacb4738c announced to peers=4 broadcast to peers=2 baseFee=0
May 12 09:22:47 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:22:47.397] local tx propagated                      tx_hash=7106493a14acca31e146f7e8852333b06912cb8aa1390a8e23432c5dacb4738c announced to peers=4 broadcast to peers=2 baseFee=0
May 12 09:22:47 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:22:47.397] local tx propagated                      tx_hash=a30722dfac82d249fa8481aea50f11f7898f88e36446cd1a89914e8bfdea5b58 announced to peers=4 broadcast to peers=2 baseFee=0
May 12 09:22:58 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:22:58.240] [p2p] GoodPeers                          eth66=5
May 12 09:22:58 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:22:58.887] [txpool] stat                            pending=10000 baseFee=0 queued=30000 alloc=299.2MB sys=340.6MB
May 12 09:23:58 bsc-archive-testnet-0-eu erigon[870216]: [INFO] [05-12|09:23:58.859] [txpool] stat                            pending=10000 baseFee=0 queued=30000 alloc=227.3MB sys=344.6MB

Updated to the latest version v1.0.4; it worked for 3 days, after which the node got stuck with no errors in the logs.
We've already tried restarting the node multiple times and even unwinding it, but no luck.

Senders recovery MdbxKV error

System information

Erigon version: 2.40.0-dev-2af7ec56 (built from the 1.0.2 tag)

OS & Version: Linux (Docker)

Commit hash: 3da15fc

Erigon Command (with flags/config):

command:
  - "--datadir"
  - "/srv/bsc/data/"
  - "--ethash.dagdir"
  - "/srv/bsc/data/"
  - "--http"
  - "--http.port"
  - "8545"
  - "--http.addr"
  - "0.0.0.0"
  - "--http.vhosts"
  - "*"
  - "--http.corsdomain"
  - "bsc"
  - "--http.api"
  - "eth,net,web3,debug,admin,personal"
  - "--authrpc.addr"
  - "0.0.0.0"
  - "--authrpc.port"
  - "8551"
  - "--authrpc.vhosts"
  - "*"
  - "--authrpc.jwtsecret"
  - "/srv/bsc/data/jwt.hex"
  - "--metrics"
  - "--metrics.addr"
  - "0.0.0.0"
  - "--metrics.port"
  - "6061"
  - "--ws"
  - "--maxpeers"
  - "200"
  - "--chain"
  - "bsc"
  - "--p2p.protocol"
  - "66"
  - "--bodies.cache"
  - "214748364800"
  - "--batchSize"
  - "4096M"
  - "--db.pagesize"
  - "16k"

Consensus Layer: None (internal)

Consensus Layer Command (with flags/config): None

Chain/Network: bsc

Expected behaviour

Pass synchronization

Actual behaviour

Fails somewhere in MdbxKV at the Senders stage.

Steps to reproduce the behaviour

  1. Start syncing
  2. Tried various suggestions from the internet to get past the Headers syncing stage (unwind + some additional arguments)
  3. Wait
  4. Get stuck at the Senders syncing stage

Backtrace

[INFO] [04-30|07:13:26.479] [6/15 Senders] Flushed buffer file       name=/srv/bsc/data/temp/erigon-sortable-buf-1490780490
[INFO] [04-30|07:13:50.951] [6/15 Senders] Recovery                  block_number=25226906 ch=0/10000
[INFO] [04-30|07:14:12.188] [txpool] stat                            pending=1 baseFee=0 queued=91 alloc=4.4GB sys=6.0GB
[INFO] [04-30|07:14:20.951] [6/15 Senders] Recovery                  block_number=25278648 ch=0/10000
[INFO] [04-30|07:14:31.752] [6/15 Senders] Flushed buffer file       name=/srv/bsc/data/temp/erigon-sortable-buf-206732764
[INFO] [04-30|07:14:50.951] [6/15 Senders] Recovery                  block_number=25333132 ch=0/10000
[INFO] [04-30|07:15:11.399] [p2p] GoodPeers                          eth66=59
[INFO] [04-30|07:15:12.188] [txpool] stat                            pending=1 baseFee=0 queued=95 alloc=5.2GB sys=6.0GB
[INFO] [04-30|07:15:20.951] [6/15 Senders] Recovery                  block_number=25389859 ch=0/10000
[INFO] [04-30|07:15:36.363] [6/15 Senders] Flushed buffer file       name=/srv/bsc/data/temp/erigon-sortable-buf-2527265756
[INFO] [04-30|07:15:50.951] [6/15 Senders] Recovery                  block_number=25446385 ch=0/10000
[INFO] [04-30|07:16:12.188] [txpool] stat                            pending=1 baseFee=0 queued=99 alloc=3.5GB sys=6.0GB
[EROR] [04-30|07:16:14.945] failed ReadTransactionByHash             hash=0x6fec7acf1d28abbe42b1a51307f3036637c0fc5f7b13b2749f68ef19b9a93965 block=25487600 err="failed MdbxKV cursor.Next(): mdbx_cursor_get: MDBX_PAGE_NOTFOUND: Requested page not found"
[WARN] [04-30|07:16:14.945] [6/15 Senders] ReadCanonicalBodyWithTransactions can't find block num=25487600 hash=0x6fec7acf1d28abbe42b1a51307f3036637c0fc5f7b13b2749f68ef19b9a93965
[EROR] [04-30|07:16:15.307] Staged Sync                              err="[6/15 Senders] failed MdbxKV cursor.Next(): mdbx_cursor_get: MDBX_BAD_TXN: Transaction is not valid for requested operation, e.g. had errored and be must aborted, has a child, or is invalid"
[INFO] [04-30|07:16:16.062] [6/15 Senders] Started                   from=22999999 to=27793614
[INFO] [04-30|07:16:46.063] [6/15 Senders] Recovery                  block_number=23073075 ch=10000/10000
[INFO] [04-30|07:16:59.611] [6/15 Senders] Flushed buffer file       name=/srv/bsc/data/temp/erigon-sortable-buf-2354223747
[INFO] [04-30|07:17:11.399] [p2p] GoodPeers                          eth66=60
[INFO] [04-30|07:17:12.189] [txpool] stat                            pending=1 baseFee=0 queued=105 alloc=4.7GB sys=8.4GB
[INFO] [04-30|07:17:16.062] [6/15 Senders] Recovery                  block_number=23145546 ch=9863/10000
[INFO] [04-30|07:17:41.839] [6/15 Senders] Flushed buffer file       name=/srv/bsc/data/temp/erigon-sortable-buf-1626203942
[INFO] [04-30|07:17:46.062] [6/15 Senders] Recovery                  block_number=23223562 ch=9983/10000

Additionally

I have already checked my disks, RAID array, memory, and everything else I could on the hardware side; the platform seems healthy. Dmesg and syslog don't report any issues. I also tried using a snapshot, but it is actually broken and I could not decompress it.
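Since the hardware checks above are clean at the OS level, it is worth checking the database file itself: MDBX_PAGE_NOTFOUND usually points at on-disk corruption rather than a logic bug. libmdbx ships an integrity checker that Erigon can build; the `make db-tools` target and binary path below are assumptions from common Erigon setups, so verify them against your checkout:

```shell
DATADIR=/srv/bsc/data                 # adjust to your datadir
CHK=./build/bin/mdbx_chk              # built via 'make db-tools' (target name assumed)
if [ -x "$CHK" ]; then
  "$CHK" -v "$DATADIR/chaindata"      # read-only integrity walk of the MDBX file
  status=checked
else
  status="mdbx_chk not built; run 'make db-tools' first"
fi
echo "$status"
```

If the checker reports broken pages, the database cannot be repaired in place and a resync (or restoring from a known-good snapshot) is the practical fix.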

Support the upcoming plato upgrade on BSC testnet

Rationale

The next upgrade on the BSC testnet will be Plato, which is coming soon.
You may refer to: https://forum.bnbchain.org/t/bnb-chain-upgrades-testnet/934#platoupcoming-3
BSC Erigon needs to port some hard-fork changes to support it.

Implementation

According to the BSC Plato testnet release (bnb-chain/bsc#1596), several PRs need to be ported to Erigon.
Here is the list:

The Broken Sync Issue

v1.0.6 failed to sync through the BSC testnet Plato upgrade; refer to: https://github.com/node-real/bsc-erigon/issues
It is caused by mishandling of the receipt of the FastFinality reward distribution.
https://testnet.bscscan.com/txs?block=29861000

// Plato activates at block 29861024
https://testnet.bscscan.com/txs?block=29861200
After Plato there is a new system transaction, distributeFastFinalityReward, which was not handled correctly.
[screenshot]

How To Fix

a. Get the latest release: v1.0.7 (not yet released)
b. Reset your node to make sure it clears up the dirty state:

//== check your latest stages:
./build/bin/integration print_stages --chain=chapel --datadir=<datadir>

//== reset hash & trie
./build/bin/integration stage_hash_state --datadir=<datadir> --reset --chain=chapel
./build/bin/integration stage_trie --datadir=<datadir> --reset --chain=chapel

//== unwind to a block before Plato (29861024)
./build/bin/integration state_stages --unwind=20 --datadir=<datadir> --chain=chapel 

//== start with the latest release
<just use your original command line>
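The reset/unwind steps above can be collected into one small script. This is a dry run only: the datadir is a placeholder and the integration binary path is taken from the commands above; remove the `echo` in `run()` to actually execute:

```shell
# Dry-run wrapper around the reset/unwind steps above.
DATADIR=/srv/chapel/data            # placeholder; set to your datadir
run() { echo "+ $*"; }              # drop 'echo' here to execute for real

run ./build/bin/integration print_stages --chain=chapel --datadir="$DATADIR"
run ./build/bin/integration stage_hash_state --reset --chain=chapel --datadir="$DATADIR"
run ./build/bin/integration stage_trie --reset --chain=chapel --datadir="$DATADIR"
run ./build/bin/integration state_stages --unwind=20 --chain=chapel --datadir="$DATADIR"
```

Run the print_stages step first and adjust the unwind distance so you land below block 29861024 before restarting with the fixed release.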

Rejected header marked as bad on BSC Testnet

System information

Erigon version: ./erigon --version

erigon version 2.43.0-dev-065538d5
OS & Version: Ubuntu 22.04

Commit hash: 065538d

Erigon Command (with flags/config):
/usr/local/bin/erigon --chain=chapel --datadir=/node/data --metrics --metrics.addr=0.0.0.0 --metrics.port=6060 --private.api.addr=0.0.0.0:9090 --pprof --pprof.addr=0.0.0.0 --pprof.port=6061 --torrent.download.rate=1000mb --http.addr=0.0.0.0 --ws --http.api=eth,erigon,web3,net,debug,trace,txpool --db.size.limit=1000GB --rpc.returndata.limit=900000 --bodies.cache=214748364800 --batchSize=4096M --p2p.protocol=66

Chain/Network:
BSC Testnet

Backtrace

 May 05 09:36:20 bsc-archive-testnet-0-eu erigon[821182]: [WARN] [05-05|09:36:20.161] [downloader] Rejected header marked as bad hash=0xdcbeae9b9587cbecf20e21a44940f3ba1a46e8e5509f1be4d8806eb76c9734e2 height=29517600        
May 05 09:36:20 bsc-archive-testnet-0-eu erigon[821182]: [WARN] [05-05|09:36:20.201] [downloader] Rejected header marked as bad hash=0xdcbeae9b9587cbecf20e21a44940f3ba1a46e8e5509f1be4d8806eb76c9734e2 height=29517600        
May 05 09:36:20 bsc-archive-testnet-0-eu erigon[821182]: [WARN] [05-05|09:36:20.205] [downloader] Rejected header marked as bad hash=0xdcbeae9b9587cbecf20e21a44940f3ba1a46e8e5509f1be4d8806eb76c9734e2 height=29517600        
May 05 09:36:20 bsc-archive-testnet-0-eu erigon[821182]: [WARN] [05-05|09:36:20.275] [downloader] Rejected header marked as bad hash=0xdcbeae9b9587cbecf20e21a44940f3ba1a46e8e5509f1be4d8806eb76c9734e2 height=29517600        
May 05 09:36:20 bsc-archive-testnet-0-eu erigon[821182]: [WARN] [05-05|09:36:20.290] [downloader] Rejected header marked as bad hash=0xdcbeae9b9587cbecf20e21a44940f3ba1a46e8e5509f1be4d8806eb76c9734e2 height=29517600        
May 05 09:36:20 bsc-archive-testnet-0-eu erigon[821182]: [WARN] [05-05|09:36:20.293] [downloader] Rejected header marked as bad hash=0xdcbeae9b9587cbecf20e21a44940f3ba1a46e8e5509f1be4d8806eb76c9734e2 height=29517600        
May 05 09:36:20 bsc-archive-testnet-0-eu erigon[821182]: [WARN] [05-05|09:36:20.352] [downloader] Rejected header marked as bad hash=0xdcbeae9b9587cbecf20e21a44940f3ba1a46e8e5509f1be4d8806eb76c9734e2 height=29517600        
May 05 09:36:20 bsc-archive-testnet-0-eu erigon[821182]: [WARN] [05-05|09:36:20.356] [downloader] Rejected header marked as bad hash=0xdcbeae9b9587cbecf20e21a44940f3ba1a46e8e5509f1be4d8806eb76c9734e2 height=29517600        
May 05 09:36:20 bsc-archive-testnet-0-eu erigon[821182]: [WARN] [05-05|09:36:20.388] [downloader] Rejected header marked as bad hash=0xdcbeae9b9587cbecf20e21a44940f3ba1a46e8e5509f1be4d8806eb76c9734e2 height=29517600  

Tried to unwind the node multiple times and over different ranges. The node gets stuck at the same block.

Roadmap: Long Term Support Of BSC

Rationale

BSC Erigon must follow the BSC upgrades, but it currently only supports full sync mode. Validator mode is not supported: it has relatively lower priority but would cost extra work, and we do not have the resources for it right now.

Besides the BSC hard forks, new features or strategies for supporting Erigon in the long term can also be discussed here.

Implementation

BSC Hardfork Support

Other Topics

  • Only support a limited block range?
    The full archive node is too large; think about more cost-efficient support for it.

Command line parameter suggestion to run a stable archival node

Hi, just curious to see if anyone would be willing to share their command line if they are running a stable archival node. My node is constantly trailing behind, and I find myself frequently rebooting it to stay synced.

My current setup
Docker image ghcr.io/node-real/bsc-erigon:1.0.3

command:
      - --chain=bsc
      - --datadir=/root/.local/share/erigon
      - --port=30303
      - --private.api.addr=0.0.0.0:9090
      - --torrent.port=42069
      - --torrent.download.rate=1000mb
      - --torrent.download.slots=6
      - --p2p.protocol=66
      - --downloader.verify
      - --batchSize=512M
      - --etl.bufferSize=512M
      - --db.pagesize=16k
      - --healthcheck
      - --log.console.verbosity=info
   

Why does curl eth_blockNumber return 0x0?

nohup ./build/bin/erigon --chain=bsc --datadir=/erigon_chain --http=true --http.addr=0.0.0.0 --http.port=9001 --http.api=eth,erigon,web3,net,debug,trace,txpool --ws --torrent.port=42069 --torrent.download.rate=32mb --authrpc.port=8551 --private.api.addr=127.0.0.1:9090 --db.pagesize=16kb --snapshots=true --rpc.batch.limit=10000 --prune=htc --prune.h.older=90000 --prune.t.older=90000 --prune.c.older=90000 &

curl -s -H "Content-Type:application/json" -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' 0.0.0.0:9001

{"jsonrpc":"2.0","id":1,"result":"0x0"}

Synchronized to the latest block:
[INFO] [05-22|02:57:18.958] [2/15 Headers] Wrote block headers number=28421656 blk/second=9.600 alloc=2.6GB sys=3.3GB
[INFO] [05-22|02:57:38.958] [2/15 Headers] No block headers to write in this log period block number=28421656
[INFO] [05-22|02:57:58.496] [p2p] GoodPeers eth66=31 eth68=1
[INFO] [05-22|02:57:58.905] [txpool] stat pending=1 baseFee=0 queued=14 alloc=2.7GB sys=3.3GB
[INFO] [05-22|02:57:58.958] [2/15 Headers] No block headers to write in this log period block number=28421656
[INFO] [05-22|02:58:18.958] [2/15 Headers] No block headers to write in this log period block number=28421656
[INFO] [05-22|02:58:38.958] [2/15 Headers] No block headers to write in this log period block number=28421656
[INFO] [05-22|02:58:58.905] [txpool] stat pending=1 baseFee=0 queued=14 alloc=2.5GB sys=3.3GB
[INFO] [05-22|02:58:58.958] [2/15 Headers] No block headers to write in this log period block number=28421656
[INFO] [05-22|02:59:18.958] [2/15 Headers] No block headers to write in this log period block number=28421656
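A result of 0x0 here means the node has not finished syncing: the logs above show the Headers stage still at block 28421656, and eth_blockNumber reports the highest fully executed block, which is still genesis. Once execution catches up, the hex result can be decoded like this; the response string below is a hand-written sample, not output from a live node:

```shell
# Sample eth_blockNumber response (replace by piping a live curl call):
resp='{"jsonrpc":"2.0","id":1,"result":"0x1b1b63d"}'
# Extract the hex quantity and convert it to decimal.
hex=$(echo "$resp" | sed -E 's/.*"result":"(0x[0-9a-fA-F]+)".*/\1/')
printf 'block number: %d\n' "$hex"   # prints: block number: 28423741
```

To watch progress instead, query eth_syncing, which reports current and highest blocks while the stages run.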

Can't get erigon node synced on BSC Testnet

System information

Erigon version: erigon version 2.40.0-dev-3da15fcb

OS & Version: Linux

Erigon Command (with flags/config): nohup ./build/bin/erigon --chain chapel --db.pagesize=16k --datadir=/server/bsc-erigon/data --http --ws --http.api=eth,debug,net,trace,web3,erigon --log.dir.path=/server/bsc-erigon/logs

Chain/Network: Chapel

Expected behaviour

Gets synced properly

Actual behaviour

Never starts to sync

Steps to reproduce the behaviour

Simply run the above erigon command

logs/erigon-user.log:
[screenshot]

console log:
[screenshot]

Can't get erigon started properly on bsc mainnet

System information

Erigon version: ./erigon --version
erigon version 2.40.0-dev-3da15fcb

OS & Version: Linux

Erigon Command (with flags/config):
nohup ./build/bin/erigon --chain bsc --db.pagesize=16k --datadir=/server/bsc-erigon/data --http --ws --http.api=eth,debug,net,trace,web3,erigon --log.dir.path=/server/bsc-erigon/logs

Chain/Network: BSC

Expected behaviour

Start importing new blocks after downloading snapshot completes

Actual behaviour

Stuck at stat and goes nowhere; keeps logging messages like this:
t=2023-03-31T03:19:08+0000 lvl=dbug msg="[p2p] Handshake failure" peer=c3276083eebc96a44003 err="network id does not match: theirs 1971, ours 56"
Tried adding the --staticpeers flag, but it doesn't work.

Steps to reproduce the behaviour

Just start Erigon using the above command and you will see the issue.
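The handshake failures show Erigon dialing peers that are on a different network (their id 1971 vs BSC's 56); those rejections are noise rather than fatal, but pinning known-good BSC peers helps. Besides --staticpeers, peers can be added at runtime via the admin_addPeer JSON-RPC call, if your build exposes the admin namespace; the enode URL below is a hypothetical placeholder:

```shell
# Hypothetical enode URL; substitute a real, reachable BSC peer of yours.
ENODE='enode://0123abcd@1.2.3.4:30303'
# Build the JSON-RPC payload for admin_addPeer.
payload=$(printf '{"jsonrpc":"2.0","method":"admin_addPeer","params":["%s"],"id":1}' "$ENODE")
echo "$payload"
# Send it to the node once the HTTP RPC is up, e.g.:
#   curl -s -H 'Content-Type: application/json' -X POST --data "$payload" localhost:8545
```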

