
hyperion-history-api's Introduction




Scalable Full History API Solution for Antelope (former EOSIO) based blockchains


Made with ♥ by Rio Blocks / EOS Rio


How to use:

Official plugins:

1. Overview

Hyperion is a full history solution for indexing, storing and retrieving historical data from Antelope (formerly EOSIO) based blockchains. The Antelope protocol is highly scalable, reaching tens of thousands of transactions per second, which demands high-performance indexing and optimized storage and querying solutions. Hyperion was developed to tackle those challenges, providing open-source software to be operated by block producers, infrastructure providers and dApp developers.

Focused on delivering faster search times, lower bandwidth overhead and easier usability for UI/UX developers, Hyperion implements an improved data structure: actions are stored in a flattened format, and transaction ids are added to all inline actions, making it possible to group actions by transaction without storing a full transaction index. Additionally, if an inline action's data is identical to its parent's, it is considered a notification and is removed from the database. No full block or transaction data is stored; all information can be reconstructed from actions and deltas, and only a block header index is kept.
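The notification rule above can be sketched as a small predicate (a hedged illustration with hypothetical field names, not Hyperion's actual code): an inline trace whose contract, action name and serialized data match its parent only re-delivers that action to another receiver, so it can be dropped from the index.

```javascript
// Hedged sketch: decide whether an inline action trace is a notification
// (identical action data to its parent) and can be dropped from storage.
function isNotification(parent, inline) {
  return parent.act.account === inline.act.account &&
         parent.act.name === inline.act.name &&
         parent.act.data === inline.act.data;
}

// A token transfer notifying the recipient: same contract, name and data,
// only the receiver differs, so it is treated as a notification.
const parentTrace = { receiver: 'eosio.token', act: { account: 'eosio.token', name: 'transfer', data: 'aa11' } };
const inlineTrace = { receiver: 'alice',       act: { account: 'eosio.token', name: 'transfer', data: 'aa11' } };
console.log(isNotification(parentTrace, inlineTrace)); // true
```

A real inline action with its own data (different `act.data`) would fail the check and be indexed normally.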

2. Architecture

The following components are required in order to have a fully functional Hyperion API deployment.

  • For small use cases, it is absolutely fine to run all components on a single machine.
  • For larger chains and production environments, we recommend setting them up into different servers under a high-speed local network.

2.1 Elasticsearch Cluster

The ES cluster is responsible for storing all indexed data. The Hyperion API and Indexer must have direct access to it. We recommend that nodes in the cluster have at least 32 GB of RAM and 8 CPU cores. SSD/NVMe drives are recommended for maximum indexing throughput, although HDDs can be used for cold storage nodes. For production environments, a multi-node cluster is highly recommended.

2.2 Hyperion Indexer

The Indexer is a Node.js based app that processes data from the state history plugin and prepares it for indexing. The PM2 process manager is used to launch and operate the indexer. The configuration is very flexible, so system recommendations depend on the use case and data load. The indexer requires access to at least one ES node, RabbitMQ and the state history node.

2.3 Hyperion API

A parallelizable API server that provides the V2 and V1 (legacy history plugin) endpoints. It is launched by PM2 and can also operate in cluster mode. It requires direct access to at least one ES node for queries, and to all other services for a full health check.

2.4 RabbitMQ

Used as the message queue and data transport between the indexer stages and for real-time data streaming.

2.5 Redis

Used for transient data storage across processes and for the preemptive transaction caching used by the v2/history/get_transaction and v2/history/check_transaction endpoints.

2.6 Leap State History

A Leap / nodeos plugin used to collect action traces and state deltas. It provides data to the indexer via websocket.

2.7 Hyperion Stream Client (optional)

A web and Node.js client for real-time streaming from enabled Hyperion providers. Documentation

2.8 Hyperion Plugins (optional)

Hyperion includes a flexible plugin architecture to allow further customization. Plugins are managed by the hpm (Hyperion Plugin Manager) command line tool.

hyperion-history-api's People

Contributors

ankh2054, dependabot[bot], domiscd, felipeasf, fschoell, igorls, joaoocb, lealbrunocalhau, n8d, robertkowalski, villesundell, xebb82


hyperion-history-api's Issues

the amount shown in the `get_tokens` endpoint is always wrong

Take for example the following account:
https://wax.eosrio.io/v2/state/get_account?account=lurk24dotcom

https://wax.eosrio.io/v2/state/get_tokens?account=lurk24dotcom

We see the accurate account.core_liquid_balance, and a corresponding value in the token array within get_account, plus the same within the get_tokens endpoint.

However, the amount displayed in the token array and by the get_tokens endpoint is always wrong.

We can see the correct data by looking up the value using get_table_rows with this data:
{"code":"eosio.token","scope":"lurk24dotcom","table":"accounts"}

or like this: wax.rpc.get_currency_balance('eosio.token', 'lurk24dotcom', 'WAX')

Looking into the source code, it seems this is a cached value that is never actually updated...

It seems like the correct value is obtained on line 42:

token_data = await fastify.eosjs.rpc.get_currency_balance(data.code, request.query.account, data.symbol);

However, that line is not reached if the token is found in the tokenCache on line 37:

if (fastify.tokenCache.has(key)) {

but the amount is still returned on line 60.

So it is unclear to me what the purpose of the amount field in this endpoint is, if it is always wrong.
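The behavior described above reduces to the following sketch (hypothetical names, not Hyperion's actual handler): if the cached entry stores the amount together with the token metadata, a cache hit returns a stale amount. Caching only the symbol/precision metadata and always fetching the balance avoids that.

```javascript
// Hedged sketch of the reported caching flaw, not Hyperion's actual code.
// Cache only token metadata (which rarely changes) and always refresh the
// balance, so the amount stays accurate while the metadata lookup is saved.
const tokenCache = new Map();

async function getToken(key, fetchMeta, fetchBalance) {
  let meta = tokenCache.get(key);
  if (!meta) {
    meta = await fetchMeta();   // e.g. symbol and precision from the contract
    tokenCache.set(key, meta);  // safe to cache: metadata is effectively static
  }
  const amount = await fetchBalance(); // never cached: balances change constantly
  return { ...meta, amount };
}
```

With this split, a second call for the same key hits the metadata cache but still returns the freshly fetched amount.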

[BUG] Invalid 0-precision token handling in /get_tokens

For 0-precision tokens, the precision field is missing from the /get_tokens response data.

Example:

curl -X 'GET' \
  'https://hyperion.paycash.online/v2/state/get_tokens?account=i.list' \
  -H 'accept: application/json'

The field would logically be expected to be present:

{
  "symbol": "LQAG",
  "precision": 0,
  "amount": 0,
  "contract": "swap.pcash"
}

API auto stop

2|eos-indexer | 2022-01-10T19:56:59:
2|eos-indexer | 2022-01-10T19:56:59: -------- BLOCK RANGE COMPLETED -------------
2|eos-indexer | 2022-01-10T19:56:59: | Range: 225291927 >> 225301416
2|eos-indexer | 2022-01-10T19:56:59: | Total time: 55 seconds
2|eos-indexer | 2022-01-10T19:56:59: | Blocks: 9489
2|eos-indexer | 2022-01-10T19:56:59: | Actions: 174539
2|eos-indexer | 2022-01-10T19:56:59: | Deltas: 340605
2|eos-indexer | 2022-01-10T19:56:59: | ABIs: 1
2|eos-indexer | 2022-01-10T19:56:59: --------------------------------------------

The eos-indexer stops automatically after syncing completes, even though blocks are still being updated. It auto-stops after 300 seconds.
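If I understand correctly, this idle shutdown is governed by Hyperion's `auto_stop` setting under `settings` in the chain config, which gives the number of idle seconds before the indexer stops; 0 disables it. A minimal fragment, assuming that semantics:

```json
{
  "settings": {
    "auto_stop": 0
  }
}
```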

support POST request for get_key_accounts

Hi folks,

when using scatter 11, it requires the old history api running with an eos node: GetScatter/ScatterDesktop#410 (comment) to make a request to get_key_accounts

Currently Hyperion's get_key_accounts uses GET, while the original history API works with POST.

On the load balancer level, rewriting a POST request to a GET becomes messy and requires custom scripting, assuming the LB supports custom scripts at all.

With a custom proxy, e.g. written in Node.js, more moving parts are added to the infrastructure, which require maintenance in the long term.

Would you merge a PR that supports the POST route in addition to the current GET request?

I could prepare a patch tomorrow. Let me know what you think.
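As a sketch of what such a patch could normalize (hypothetical helper, not the actual Hyperion handler): read the key parameter from either a GET query string or a POST JSON body, so both routes can share one handler.

```javascript
// Hedged sketch: extract public_key from either request shape so a single
// get_key_accounts handler can serve both the GET and the POST route.
function extractPublicKey(req) {
  if (req.method === 'GET') return req.query.public_key;
  if (req.method === 'POST') return JSON.parse(req.body).public_key;
  throw new Error('unsupported method: ' + req.method);
}

// GET with a query parameter and POST with a JSON body yield the same value.
console.log(extractPublicKey({ method: 'GET', query: { public_key: 'EOS6MRy...' } }));
```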

Unable to use /v1/history/get_transaction

My problem: I query transactions from blocks that have already been indexed, but the transactions are not found.

curl -s -X POST -H "Content-Type: application/json" -d '{"id":"d1b6a00544ebf66165d92714bb3ec60ce8759cfae07ab393a518816cdb648547"}' http://127.0.0.1:7000/v1/history/get_transaction | jq


The components reported by the /v2/health interface are all normal.

The versions of the components I'm running are as follows:

  • OS: Ubuntu 20.04 (Docker)
  • EOS: Leap 3.1.0
  • Elasticsearch: 7.16.3
  • RabbitMQ: 3.9
  • Redis: 6.2.7
  • Hyperion: 3.3.4-rc8

My EOS Leap configuration file:

# the location of the blocks directory (absolute path or relative to application data dir) (eosio::chain_plugin)
blocks-dir = "/mnt/eosmain/node"

# the location of the protocol_features directory (absolute path or relative to application config dir) (eosio::chain_plugin)
# protocol-features-dir = "protocol_features"

# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints. (eosio::chain_plugin)
# checkpoint =

# Override default WASM runtime ( "eos-vm-jit", "eos-vm")
# "eos-vm-jit" : A WebAssembly runtime that compiles WebAssembly code to native x86 code prior to execution.
# "eos-vm" : A WebAssembly interpreter.
#  (eosio::chain_plugin)
wasm-runtime = eos-vm-jit

# The name of an account whose code will be profiled (eosio::chain_plugin)
# profile-account =

# Override default maximum ABI serialization time allowed in ms (eosio::chain_plugin)
abi-serializer-max-time-ms = 60000

# Maximum size (in MiB) of the chain state database (eosio::chain_plugin)
chain-state-db-size-mb = 40960

# Safely shut down node when free space remaining in the chain state database drops below this size (in MiB). (eosio::chain_plugin)
chain-state-db-guard-size-mb = 128

# Percentage of actual signature recovery cpu to bill. Whole number percentages, e.g. 50 for 50% (eosio::chain_plugin)
# signature-cpu-billable-pct = 50

# Number of worker threads in controller thread pool (eosio::chain_plugin)
# chain-threads = 2

# print contract's output to console (eosio::chain_plugin)
contracts-console = true

# print deeper information about chain operations (eosio::chain_plugin)
# deep-mind = false

# Account added to actor whitelist (may specify multiple times) (eosio::chain_plugin)
# actor-whitelist =

# Account added to actor blacklist (may specify multiple times) (eosio::chain_plugin)
# actor-blacklist =

# Contract account added to contract whitelist (may specify multiple times) (eosio::chain_plugin)
# contract-whitelist =

# Contract account added to contract blacklist (may specify multiple times) (eosio::chain_plugin)
# contract-blacklist =

# Action (in the form code::action) added to action blacklist (may specify multiple times) (eosio::chain_plugin)
# action-blacklist =

# Public key added to blacklist of keys that should not be included in authorities (may specify multiple times) (eosio::chain_plugin)
# key-blacklist =

# Deferred transactions sent by accounts in this list do not have any of the subjective whitelist/blacklist checks applied to them (may specify multiple times) (eosio::chain_plugin)
# sender-bypass-whiteblacklist =

# Database read mode ("speculative", "head", "read-only", "irreversible").
# In "speculative" mode: database contains state changes by transactions in the blockchain up to the head block as well as some transactions not yet included in the blockchain.
# In "head" mode: database contains state changes by only transactions in the blockchain up to the head block; transactions received by the node are relayed if valid.
# In "read-only" mode: (DEPRECATED: see p2p-accept-transactions & api-accept-transactions) database contains state changes by only transactions in the blockchain up to the head block; transactions received via the P2P network are not relayed and transactions cannot be pushed via the chain API.
# In "irreversible" mode: database contains state changes by only transactions in the blockchain up to the last irreversible block; transactions received via the P2P network are not relayed and transactions cannot be pushed via the chain API.
#  (eosio::chain_plugin)
# read-mode = speculative

# Allow API transactions to be evaluated and relayed if valid. (eosio::chain_plugin)
# api-accept-transactions = true

# Chain validation mode ("full" or "light").
# In "full" mode all incoming blocks will be fully validated.
# In "light" mode all incoming blocks headers will be fully validated; transactions in those validated blocks will be trusted
#  (eosio::chain_plugin)
# validation-mode = full

# Disable the check which subjectively fails a transaction if a contract bills more RAM to another account within the context of a notification handler (i.e. when the receiver is not the code of the action). (eosio::chain_plugin)
# disable-ram-billing-notify-checks = false

# Subjectively limit the maximum length of variable components in a variable length signature to this size in bytes (eosio::chain_plugin)
# maximum-variable-signature-length = 16384

# Indicate a producer whose blocks headers signed by it will be fully validated, but transactions in those validated blocks will be trusted. (eosio::chain_plugin)
# trusted-producer =

# Database map mode ("mapped", "heap", or "locked").
# In "mapped" mode database is memory mapped as a file.
# In "heap" mode database is preloaded in to swappable memory and will use huge pages if available.
# In "locked" mode database is preloaded, locked in to memory, and will use huge pages if available.
#  (eosio::chain_plugin)
# database-map-mode = mapped

# Maximum size (in MiB) of the EOS VM OC code cache (eosio::chain_plugin)
# eos-vm-oc-cache-size-mb = 1024

# Number of threads to use for EOS VM OC tier-up (eosio::chain_plugin)
# eos-vm-oc-compile-threads = 1

# Enable EOS VM OC tier-up runtime (eosio::chain_plugin)
eos-vm-oc-enable = true

# enable queries to find accounts by various metadata. (eosio::chain_plugin)
# enable-account-queries = false

# maximum allowed size (in bytes) of an inline action for a nonprivileged account (eosio::chain_plugin)
# max-nonprivileged-inline-action-size = 4096

# Maximum size (in GiB) allowed to be allocated for the Transaction Retry feature. Setting above 0 enables this feature. (eosio::chain_plugin)
# transaction-retry-max-storage-size-gb =

# How often, in seconds, to resend an incoming transaction to network if not seen in a block. (eosio::chain_plugin)
# transaction-retry-interval-sec = 20

# Maximum allowed transaction expiration for retry transactions, will retry transactions up to this value. (eosio::chain_plugin)
# transaction-retry-max-expiration-sec = 120

# Maximum size (in GiB) allowed to be allocated for the Transaction Finality Status feature. Setting above 0 enables this feature. (eosio::chain_plugin)
# transaction-finality-status-max-storage-size-gb =

# Duration (in seconds) a successful transaction's Finality Status will remain available from being first identified. (eosio::chain_plugin)
# transaction-finality-status-success-duration-sec = 180

# Duration (in seconds) a failed transaction's Finality Status will remain available from being first identified. (eosio::chain_plugin)
# transaction-finality-status-failure-duration-sec = 180

# if set, periodically prune the block log to store only configured number of most recent blocks (eosio::chain_plugin)
block-log-retain-blocks = 10240

# PEM encoded trusted root certificate (or path to file containing one) used to validate any TLS connections made.  (may specify multiple times)
#  (eosio::http_client_plugin)
# https-client-root-cert =

# true: validate that the peer certificates are valid and trusted, false: ignore cert errors (eosio::http_client_plugin)
# https-client-validate-peers = true

# The filename (relative to data-dir) to create a unix socket for HTTP RPC; set blank to disable. (eosio::http_plugin)
# unix-socket-path =

# The local IP and port to listen for incoming http connections; set blank to disable. (eosio::http_plugin)
http-server-address = 0.0.0.0:8888

# The local IP and port to listen for incoming https connections; leave blank to disable. (eosio::http_plugin)
# https-server-address =

# Filename with the certificate chain to present on https connections. PEM format. Required for https. (eosio::http_plugin)
# https-certificate-chain-file =

# Filename with https private key in PEM format. Required for https (eosio::http_plugin)
# https-private-key-file =

# Configure https ECDH curve to use: secp384r1 or prime256v1 (eosio::http_plugin)
# https-ecdh-curve = secp384r1

# Specify the Access-Control-Allow-Origin to be returned on each request. (eosio::http_plugin)
access-control-allow-origin = *

# Specify the Access-Control-Allow-Headers to be returned on each request. (eosio::http_plugin)
# access-control-allow-headers =

# Specify the Access-Control-Max-Age to be returned on each request. (eosio::http_plugin)
# access-control-max-age =

# Specify if Access-Control-Allow-Credentials: true should be returned on each request. (eosio::http_plugin)
# access-control-allow-credentials = false

# The maximum body size in bytes allowed for incoming RPC requests (eosio::http_plugin)
max-body-size = 8388608

# Maximum size in megabytes http_plugin should use for processing http requests. 503 error response when exceeded. (eosio::http_plugin)
http-max-bytes-in-flight-mb = 500

# Maximum time for processing a request. (eosio::http_plugin)
http-max-response-time-ms = 100000

# Append the error log to HTTP responses (eosio::http_plugin)
verbose-http-errors = true

# If set to false, then any incoming "Host" header is considered valid (eosio::http_plugin)
http-validate-host = false

# Additionally acceptable values for the "Host" header of incoming HTTP requests, can be specified multiple times.  Includes http/s_server_address by default. (eosio::http_plugin)
# http-alias =

# Number of worker threads in http thread pool (eosio::http_plugin)
http-threads = 4096

# The maximum number of pending login requests (eosio::login_plugin)
# max-login-requests = 1000000

# The maximum timeout for pending login requests (in seconds) (eosio::login_plugin)
# max-login-timeout = 60

# The actual host:port used to listen for incoming p2p connections. (eosio::net_plugin)
p2p-listen-endpoint = 0.0.0.0:9876

# An externally accessible host:port for identifying this node. Defaults to p2p-listen-endpoint. (eosio::net_plugin)
# p2p-server-address =

# The public endpoint of a peer node to connect to. Use multiple p2p-peer-address options as needed to compose a network.
#   Syntax: host:port[:<trx>|<blk>]
#   The optional 'trx' and 'blk' indicates to node that only transactions 'trx' or blocks 'blk' should be sent.  Examples:
#     p2p.eos.io:9876
#     p2p.trx.eos.io:9876:trx
#     p2p.blk.eos.io:9876:blk
#  (eosio::net_plugin)
# https://mainnet.eosio.online/endpoints, updated at: August 22, 2022
p2p-peer-address = eos.seed.eosnation.io:9876
p2p-peer-address = eos.edenia.cloud:9876
p2p-peer-address = p2p.eossweden.org:9876
p2p-peer-address = p2p.eosflare.io:9876
p2p-peer-address = peer.main.alohaeos.com:9876
p2p-peer-address = seed.greymass.com:9876
p2p-peer-address = p2p-eos.whaleex.com:9876
p2p-peer-address = peer.eosio.sg:9876
p2p-peer-address = p2p.genereos.io:9876
p2p-peer-address = p2p.eos.detroitledger.tech:1337

# Maximum number of client nodes from any single IP address (eosio::net_plugin)
# p2p-max-nodes-per-host = 1

# Allow transactions received over p2p network to be evaluated and relayed if valid. (eosio::net_plugin)
# p2p-accept-transactions = true

# The name supplied to identify this node amongst the peers. (eosio::net_plugin)
agent-name = "NodeHub-EOS"

# Can be 'any' or 'producers' or 'specified' or 'none'. If 'specified', peer-key must be specified at least once. If only 'producers', peer-key is not required. 'producers' and 'specified' may be combined. (eosio::net_plugin)
allowed-connection = any

# Optional public key of peer allowed to connect.  May be used multiple times. (eosio::net_plugin)
# peer-key =

# Tuple of [PublicKey, WIF private key] (may specify multiple times) (eosio::net_plugin)
# peer-private-key =

# Maximum number of clients from which connections are accepted, use 0 for no limit (eosio::net_plugin)
max-clients = 100

# number of seconds to wait before cleaning up dead connections (eosio::net_plugin)
connection-cleanup-period = 60

# max connection cleanup time per cleanup call in milliseconds (eosio::net_plugin)
# max-cleanup-time-msec = 10

# Maximum time to track transaction for duplicate optimization (eosio::net_plugin)
# p2p-dedup-cache-expire-time-sec = 10

# Number of worker threads in net_plugin thread pool (eosio::net_plugin)
# net-threads = 2

# number of blocks to retrieve in a chunk from any individual peer during synchronization (eosio::net_plugin)
sync-fetch-span = 500

# Enable experimental socket read watermark optimization (eosio::net_plugin)
# use-socket-read-watermark = false

# The string used to format peers when logging messages about them.  Variables are escaped with ${<variable name>}.
# Available Variables:
#    _name  	self-reported name
#
#    _cid   	assigned connection id
#
#    _id    	self-reported ID (64 hex characters)
#
#    _sid   	first 8 characters of _peer.id
#
#    _ip    	remote IP address of peer
#
#    _port  	remote port number of peer
#
#    _lip   	local IP address connected to peer
#
#    _lport 	local port number connected to peer
#
#  (eosio::net_plugin)
# peer-log-format = ["${_name}" - ${_cid} ${_ip}:${_port}]

# peer heartbeat keepalive message interval in milliseconds (eosio::net_plugin)
# p2p-keepalive-interval-ms = 10000

# Enable block production, even if the chain is stale. (eosio::producer_plugin)
enable-stale-production = false

# Start this node in a state where production is paused (eosio::producer_plugin)
# pause-on-startup = false

# Limits the maximum time (in milliseconds) that is allowed a pushed transaction's code to execute before being considered invalid (eosio::producer_plugin)
max-transaction-time = 60000

# Limits the maximum age (in seconds) of the DPOS Irreversible Block for a chain this node will produce blocks on (use negative value to indicate unlimited) (eosio::producer_plugin)
max-irreversible-block-age = -1

# ID of producer controlled by this node (e.g. inita; may specify multiple times) (eosio::producer_plugin)
# producer-name =

# (DEPRECATED - Use signature-provider instead) Tuple of [public key, WIF private key] (may specify multiple times) (eosio::producer_plugin)
# private-key =

# Key=Value pairs in the form <public-key>=<provider-spec>
# Where:
#    <public-key>    	is a string form of a valid EOSIO public key
#
#    <provider-spec> 	is a string in the form <provider-type>:<data>
#
#    <provider-type> 	is KEY, or KEOSD
#
#    KEY:<data>      	is a string form of a valid EOSIO private key which maps to the provided public key
#
#    KEOSD:<data>    	is the URL where keosd is available and the appropriate wallet(s) are unlocked (eosio::producer_plugin)
# signature-provider = EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3

# Limits the maximum time (in milliseconds) that is allowed for sending blocks to a keosd provider for signing (eosio::producer_plugin)
# keosd-provider-timeout = 5

# account that can not access to extended CPU/NET virtual resources (eosio::producer_plugin)
greylist-account = blocktwitter
greylist-account = chaintwitter
greylist-account = eidosonecoin

# Limit (between 1 and 1000) on the multiple that CPU/NET virtual resources can extend during low usage (only enforced subjectively; use 1000 to not enforce any limit) (eosio::producer_plugin)
# greylist-limit = 1000

# Offset of non last block producing time in microseconds. Valid range 0 .. -block_time_interval. (eosio::producer_plugin)
# produce-time-offset-us = 0

# Offset of last block producing time in microseconds. Valid range 0 .. -block_time_interval. (eosio::producer_plugin)
# last-block-time-offset-us = -200000

# Percentage of cpu block production time used to produce block. Whole number percentages, e.g. 80 for 80% (eosio::producer_plugin)
# cpu-effort-percent = 80

# Percentage of cpu block production time used to produce last block. Whole number percentages, e.g. 80 for 80% (eosio::producer_plugin)
# last-block-cpu-effort-percent = 80

# Threshold of CPU block production to consider block full; when within threshold of max-block-cpu-usage block can be produced immediately (eosio::producer_plugin)
# max-block-cpu-usage-threshold-us = 5000

# Threshold of NET block production to consider block full; when within threshold of max-block-net-usage block can be produced immediately (eosio::producer_plugin)
# max-block-net-usage-threshold-bytes = 1024

# Maximum wall-clock time, in milliseconds, spent retiring scheduled transactions in any block before returning to normal transaction processing. (eosio::producer_plugin)
# max-scheduled-transaction-time-per-block-ms = 100

# Time in microseconds allowed for a transaction that starts with insufficient CPU quota to complete and cover its CPU usage. (eosio::producer_plugin)
# subjective-cpu-leeway-us = 31000

# Sets the maximum amount of failures that are allowed for a given account per block. (eosio::producer_plugin)
# subjective-account-max-failures = 3

# Sets the time to return full subjective cpu for accounts (eosio::producer_plugin)
# subjective-account-decay-time-minutes = 1440

# ratio between incoming transactions and deferred transactions when both are queued for execution (eosio::producer_plugin)
# incoming-defer-ratio = 1

# Maximum size (in MiB) of the incoming transaction queue. Exceeding this value will subjectively drop transaction with resource exhaustion. (eosio::producer_plugin)
# incoming-transaction-queue-size-mb = 1024

# Disable the re-apply of API transactions. (eosio::producer_plugin)
# disable-api-persisted-trx = false

# Disable subjective CPU billing for API/P2P transactions (eosio::producer_plugin)
# disable-subjective-billing = true

# Account which is excluded from subjective CPU billing (eosio::producer_plugin)
# disable-subjective-account-billing =

# Disable subjective CPU billing for P2P transactions (eosio::producer_plugin)
# disable-subjective-p2p-billing = true

# Disable subjective CPU billing for API transactions (eosio::producer_plugin)
# disable-subjective-api-billing = true

# Number of worker threads in producer thread pool (eosio::producer_plugin)
# producer-threads = 2

# the location of the snapshots directory (absolute path or relative to application data dir) (eosio::producer_plugin)
# snapshots-dir = "snapshots"

# Time in seconds between two consecutive checks of resource usage. Should be between 1 and 300 (eosio::resource_monitor_plugin)
# resource-monitor-interval-seconds = 2

# Threshold in terms of percentage of used space vs total space. If used space is above (threshold - 5%), a warning is generated.  Unless resource-monitor-not-shutdown-on-threshold-exceeded is enabled, a graceful shutdown is initiated if used space is above the threshold. The value should be between 6 and 99 (eosio::resource_monitor_plugin)
# resource-monitor-space-threshold = 90

# Used to indicate nodeos will not shutdown when threshold is exceeded. (eosio::resource_monitor_plugin)
# resource-monitor-not-shutdown-on-threshold-exceeded =

# Number of resource monitor intervals between two consecutive warnings when the threshold is hit. Should be between 1 and 450 (eosio::resource_monitor_plugin)
# resource-monitor-warning-interval = 30

# the location of the state-history directory (absolute path or relative to application data dir) (eosio::state_history_plugin)
state-history-dir = "/mnt/eosmain/node/state-history"

# enable trace history (eosio::state_history_plugin)
trace-history = true

# enable chain state history (eosio::state_history_plugin)
chain-state-history = true

# the endpoint upon which to listen for incoming connections. Caution: only expose this port to your internal network. (eosio::state_history_plugin)
state-history-endpoint = 127.0.0.1:8080

# enable debug mode for trace history (eosio::state_history_plugin)
# trace-history-debug-mode = false

# if set, periodically prune the state history files to store only configured number of most recent blocks (eosio::state_history_plugin)
# state-history-log-retain-blocks =

# the location of the trace directory (absolute path or relative to application data dir) (eosio::trace_api_plugin)
# trace-dir = "traces"

# the number of blocks each "slice" of trace data will contain on the filesystem (eosio::trace_api_plugin)
# trace-slice-stride = 10000

# Number of blocks to ensure are kept past LIB for retrieval before "slice" files can be automatically removed.
# A value of -1 indicates that automatic removal of "slice" files will be turned off. (eosio::trace_api_plugin)
# trace-minimum-irreversible-history-blocks = -1

# Number of blocks to ensure are uncompressed past LIB. Compressed "slice" files are still accessible but may carry a performance loss on retrieval
# A value of -1 indicates that automatic compression of "slice" files will be turned off. (eosio::trace_api_plugin)
# trace-minimum-uncompressed-irreversible-history-blocks = -1

# ABIs used when decoding trace RPC responses.
# There must be at least one ABI specified OR the flag trace-no-abis must be used.
# ABIs are specified as "Key=Value" pairs in the form <account-name>=<abi-def>
# Where <abi-def> can be:
#    an absolute path to a file containing a valid JSON-encoded ABI
#    a relative path from `data-dir` to a file containing a valid JSON-encoded ABI
#  (eosio::trace_api_plugin)
# trace-rpc-abi =

# Use to indicate that the RPC responses will not use ABIs.
# Failure to specify this option when there are no trace-rpc-abi configurations will result in an Error.
# This option is mutually exclusive with trace-rpc-abi (eosio::trace_api_plugin)
# trace-no-abis =

# Lag in number of blocks from the head block when selecting the reference block for transactions (-1 means Last Irreversible Block) (eosio::txn_test_gen_plugin)
# txn-reference-block-lag = 0

# Number of worker threads in txn_test_gen thread pool (eosio::txn_test_gen_plugin)
# txn-test-gen-threads = 2

# Prefix to use for accounts generated and used by this plugin (eosio::txn_test_gen_plugin)
# txn-test-gen-account-prefix = txn.test.

# Plugin(s) to enable, may be specified multiple times
plugin = eosio::net_plugin
plugin = eosio::http_plugin
plugin = eosio::chain_plugin
plugin = eosio::chain_api_plugin
plugin = eosio::producer_plugin
plugin = eosio::producer_api_plugin
plugin = eosio::state_history_plugin

My Hyperion config file:
connections.json

{
    "amqp": {
      "host": "127.0.0.1:5672",
      "api": "127.0.0.1:15672",
      "user": "admin",
      "pass": "NodeHub",
      "vhost": "hyperion"
    },
    "elasticsearch": {
      "host": "127.0.0.1:9200",
      "ingest_nodes": [
        "127.0.0.1:9200"
      ],
      "user": "elastic",
      "pass": "NodeHub"
    },
    "redis": {
      "host": "127.0.0.1",
      "port": "6379"
    },
    "chains": {
      "eos": {
        "name": "EOS Mainnet",
        "chain_id": "aca376f206b8fc25a6ed44dbdc66547c36c6c33e3a119ffbeaef943642f0e906",
        "http": "http://127.0.0.1:8888",
        "ship": "ws://127.0.0.1:8080",
        "WS_ROUTER_HOST": "0.0.0.0",
        "WS_ROUTER_PORT": 7001
      }
    }
}

chain/eos.config.json

{
  "api": {
    "chain_name": "eos",
    "server_addr": "0.0.0.0",
    "server_port": 7000,
    "server_name": "0.0.0.0:7000",
    "provider_name": "BitStack",
    "provider_url": "https://bitstack.com",
    "chain_api": "",
    "push_api": "",
    "chain_logo_url": "",
    "enable_caching": true,
    "cache_life": 1,
    "limits": {
      "get_actions": 1000,
      "get_voters": 100,
      "get_links": 1000,
      "get_deltas": 1000,
      "get_trx_actions": 200
    },
    "access_log": false,
    "enable_explorer": false
  },
  "settings": {
    "preview": false,
    "chain": "eos",
    "eosio_alias": "eosio",
    "parser": "1.8",
    "auto_stop": 0,
    "index_version": "v1",
    "debug": false,
    "bp_logs": false,
    "bp_monitoring": false,
    "ipc_debug_rate": 60000,
    "allow_custom_abi": false,
    "rate_monitoring": true,
    "max_ws_payload_kb": 256,
    "ds_profiling": false,
    "auto_mode_switch": false,
    "hot_warm_policy": false,
    "custom_policy": "",
    "bypass_index_map": false,
    "index_partition_size": 10000000
  },
  "blacklists": {
    "actions": [],
    "deltas": []
  },
  "whitelists": {
    "actions": [],
    "deltas": [],
    "max_depth": 10,
    "root_only": false
  },
  "scaling": {
    "batch_size": 10000,
    "queue_limit": 50000,
    "readers": 2,
    "ds_queues": 2,
    "ds_threads": 1,
    "ds_pool_size": 1,
    "indexing_queues": 1,
    "ad_idx_queues": 1,
    "max_autoscale": 4,
    "batch_size": 5000,
    "resume_trigger": 5000,
    "auto_scale_trigger": 20000,
    "block_queue_limit": 10000,
    "max_queue_limit": 100000,
    "routing_mode": "heatmap",
    "polling_interval": 10000
  },
  "indexer": {
    "enabled": true,
    "start_on": 1,
    "stop_on": 0,
    "rewrite": false,
    "purge_queues": true,
    "live_reader": true,
    "live_only_mode": false,
    "abi_scan_mode": false,
    "fetch_block": true,
    "fetch_traces": true,
    "disable_reading": false,
    "disable_indexing": false,
    "process_deltas": true,
    "max_inline": 20
  },
  "features": {
    "streaming": {
      "enable": true,
      "traces": true,
      "deltas": false
    },
    "tables": {
      "proposals": true,
      "accounts": true,
      "voters": true,
      "userres": false,
      "delband": false
    },
    "index_deltas": true,
    "index_transfer_memo": true,
    "index_all_deltas": true,
    "deferred_trx": false,
    "failed_trx": false,
    "resource_limits": false,
    "resource_usage": false
  },
  "prefetch": {
    "read": 50,
    "block": 100,
    "index": 500
  }
}

indexer: "JavaScript out of memory" fatal errors

Hello, I regularly (every 1-2 days) get the following error:

2021-01-24T02:55:27: FATAL ERROR: invalid table size Allocation failed - JavaScript heap out of memory
2021-01-24T02:55:27:  1: 0xa0e670 node::Abort() [hyp-daobet-master]
2021-01-24T02:55:27:  2: 0xa0ea9c node::OnFatalError(char const*, char const*) [hyp-daobet-master]
2021-01-24T02:55:27:  3: 0xb83afe v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [hyp-daobet-master]
2021-01-24T02:55:27:  4: 0xb83e79 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [hyp-daobet-master]
2021-01-24T02:55:27:  5: 0xd32305  [hyp-daobet-master]
2021-01-24T02:55:27:  6: 0xf23cba v8::internal::HashTable<v8::internal::StringTable, v8::internal::StringTableShape>::New(v8::internal::Isolate*, int, v8::internal::AllocationType, v8::internal::MinimumCapacity) [hyp-daobet-master]
2021-01-24T02:55:27:  7: 0xf2437b v8::internal::HashTable<v8::internal::StringTable, v8::internal::StringTableShape>::EnsureCapacity(v8::internal::Isolate*, v8::internal::Handle<v8::internal::StringTable>, int, v8::internal::AllocationType) [hyp-daobet-master]
2021-01-24T02:55:27:  8: 0xf2d0d1 v8::internal::Handle<v8::internal::String> v8::internal::StringTable::LookupKey<v8::internal::InternalizedStringKey>(v8::internal::Isolate*, v8::internal::InternalizedStringKey*) [hyp-daobet-master]
2021-01-24T02:55:27:  9: 0xf2d1c6 v8::internal::StringTable::LookupString(v8::internal::Isolate*, v8::internal::Handle<v8::internal::String>) [hyp-daobet-master]
2021-01-24T02:55:27: 10: 0x107306b v8::internal::Runtime_GetProperty(int, unsigned long*, v8::internal::Isolate*) [hyp-daobet-master]
2021-01-24T02:55:27: 11: 0x13da519  [hyp-daobet-master]

OS: Ubuntu 18.04 amd64.
Server: 64 GiB RAM, 8 cores.
Installation type: no Docker.

Chain config:

{
  "api": {
    ...
    "enable_caching": true,
    "cache_life": 1,
    "limits": {
      "get_actions": 1000,
      "get_voters": 100,
      "get_links": 1000,
      "get_deltas": 1000,
      "get_trx_actions": 200
    },
    "access_log": false,
    "enable_explorer": false,
    "chain_api_error_log": false
  },
  "settings": {
    "preview": false,
    ...
    "parser": "1.8",
    "auto_stop": 300,
    "index_version": "v1",
    "debug": false,
    "bp_logs": false,
    "bp_monitoring": true,
    "ipc_debug_rate": 60000,
    "allow_custom_abi": false,
    "rate_monitoring": true,
    "max_ws_payload_kb": 256,
    "ds_profiling": false,
    "auto_mode_switch": false
  },
  "blacklists": {
    "actions": [],
    "deltas": []
  },
  "whitelists": {
    "actions": [],
    "deltas": [],
    "max_depth": 10,
    "root_only": false
  },
  "scaling": {
    "readers":            2,
    "ds_queues":          2,
    "ds_threads":         2,
    "ds_pool_size":       2,
    "indexing_queues":    2,
    "ad_idx_queues":      1,
    "max_autoscale":      4,
    "batch_size":         5000,
    "resume_trigger":     5000,
    "auto_scale_trigger": 20000,
    "block_queue_limit":  10000,
    "max_queue_limit":    100000,
    "routing_mode":       "heatmap",
    "polling_interval":   10000
  },
  "indexer": {
    "start_on": 0,
    "stop_on": 0,
    "rewrite": false,
    "purge_queues": false,
    "live_reader": true,
    "live_only_mode": false,
    "abi_scan_mode": false,
    "fetch_block": true,
    "fetch_traces": true,
    "disable_reading": false,
    "disable_indexing": false,
    "process_deltas": true
  },
  "features": {
    "streaming": {
      "enable": false,
      "traces": false,
      "deltas": false
    },
    "tables": {
      "proposals": true,
      "accounts": true,
      "voters": true,
      "userres": false,
      "delband": false
    },
    "index_deltas": false,
    "index_transfer_memo": false,
    "index_all_deltas": true,
    "deferred_trx": false,
    "failed_trx": false,
    "resource_limits": false,
    "resource_usage": false
  },
  "prefetch": {
    "read": 50,
    "block": 100,
    "index": 500
  }
}

History Fetching

Dear Hyperion developers,

Is Hyperion an API to fetch history data from your server,
or do users need to sync mainnet blocks locally?

If the latter, how long might it take to sync all the data, and how large is it expected to be?

Thank you!
Boren

/state/alive endpoint returns an error even though the cluster is alive

(on the current master branch)

$ curl "localhost:7000/v2/state/alive"
{"status":"ERROR","msg":"elasticsearch cluster is not available"}

However, the cluster is available and other endpoints are returning results as expected:

$ curl "localhost:7000/v2/history/get_actions?limit=1"
{"query_time":17,"lib":53322863,"total":{"value":10000,"relation":"gte"},"actions":[{"action_ordinal":1,"creator_action_ordinal":0,"act":{"account":"ffgametongbi","name":"stopbet","authorization":[{"actor":"ffgametongbi","permission":"active"}],"data":{"from":"ffgametongbi","room_id":"13","game_id":"1"}},"context_free":false,"elapsed":"0","account_ram_deltas":[{"account":"ffgametongbi","delta":"346"}],"except":null,"error_code":null,"@timestamp":"2019-06-21T10:18:36.000","block_num":53323176,"producer":"helloeoschbp","trx_id":"b5aa9a36a8e1eeb7991b702b3413aa0f020aa4234385c2ddba5863648f54662f","global_sequence":262207142,"receipts":[{"receiver":"ffgametongbi","global_sequence":"262207142","recv_sequence":"4240344","auth_sequence":[{"account":"ffgametongbi","sequence":"4242486"}]}],"notified":["ffgametongbi"],"code_sequence":30,"abi_sequence":9}]}

On the 1.7 release this endpoint works (same cluster).

/definitions/ecosystem_settings.js references launcher.js, which no longer exists

When trying to launch an indexer via
pm2 start --only chain-indexer --update-env
the launch fails with the error message that launcher.js doesn't exist. Changing it to launcher.ts results in a different error when trying to start the indexer:
SyntaxError: Cannot use import statement outside a module.

Also, launcher.ts does not seem to be a TypeScript-converted version of launcher.js: the original launcher.js was deleted in commit 647b6f7, while launcher.ts was merely renamed from indexer.ts.

I have a problem using eos-api

I started 'eos-indexer' and 'eos-api'.

I can get a response from "localhost:7000/v2/state/get_account?account=[myaccountname]",
but I can't get any response from the other endpoints.
They only return: {"query_time_ms":95.029,"cached":false,"lib":0,"total":{"value":0,"relation":"eq"},"actions":[]}

I checked the connection from Hyperion to nodeos; I get the same chain-id.

Please help me; I can hardly find a solution.

Cannot find module './socketManager'

I got the master code and found an error on deploy. How can it be fixed? Details follow.

  • /opt/Hyperion-History-API/api/api-loader.js
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:794:15)
    at Module.Hook._require.Module.require (/usr/lib/node_modules/pm2/node_modules/require-in-the-middle/index.js:51:29)
    at require (internal/modules/cjs/helpers.js:74:18)
    at Object. (/opt/Hyperion-History-API/api/api-loader.js:20:29)
    at Module._compile (internal/modules/cjs/loader.js:956:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:973:10)
    at Module.load (internal/modules/cjs/loader.js:812:32)
    at Function.Module._load (internal/modules/cjs/loader.js:724:14)
    at /usr/lib/node_modules/pm2/lib/ProcessContainer.js:297:23
    at wrapper (/usr/lib/node_modules/pm2/node_modules/async/internal/once.js:12:16)
    at next (/usr/lib/node_modules/pm2/node_modules/async/waterfall.js:96:20)
    at /usr/lib/node_modules/pm2/node_modules/async/internal/onlyOnce.js:12:16
    at WriteStream. (/usr/lib/node_modules/pm2/lib/Utility.js:186:13)
    at WriteStream.emit (events.js:210:5)
    at internal/fs/streams.js:299:10
    at FSReqCallback.oncomplete (fs.js:146:23)
    Error: Cannot find module './socketManager'

Temp workaround for SIG_WA deserialize error - breaks indexing on parser 2.1

Hopefully this helps someone who is facing the same issue, and/or the EOSRIO team is able to patch something.

NOTE: This seems to be fixed in the latest https://github.com/EOSIO/eosjs/blob/v22.1.0/src/eosjs-numeric.ts - but that doesn't match the code in addons/eosjs-native

Nodeos version: 2.1
Parser version: 2.1

I ran into an error synchronizing blocks signed with SIG_WA_; here are two examples:
79955656
79996963

This would cause my indexer to kick into constant complaint mode about a missing block:

0|proton-indexer  | 2022-04-03T18:55:51: | SHIP Status Report
0|proton-indexer  | 2022-04-03T18:55:51: | Init block: 2
0|proton-indexer  | 2022-04-03T18:55:51: | Head block: 122792569
0|proton-indexer  | 2022-04-03T18:55:51: Error: unrecognized signature format
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.signatureToString (/opt/eosio/src/Hyperion-History-API/addons/eosjs-native/eosjs-numeric.js:463:15)
0|proton-indexer  | 2022-04-03T18:55:51:     at SerialBuffer.getSignature (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:521:24)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserialize (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:952:63)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeArray [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:701:34)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeVariant [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:686:36)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeVariant [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:686:36)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeArray [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:701:34)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeVariant [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:686:36)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeOptional [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:718:32)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
0|proton-indexer  | 2022-04-03T18:55:51:     at Object.deserializeVariant [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:686:36)
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955657
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955658 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955658
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955659 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955659
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955660 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955660
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955661 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955661
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955662 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955662
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955663 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955663
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955664 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955664
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955665 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955665
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955666 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955666
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955667 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955667
0|proton-indexer  | 2022-04-03T18:55:51: Missing block 79955668 received!
0|proton-indexer  | 2022-04-03T18:55:51: [208714 - 01_reader] Missing block: 79955656 current block: 79955668

The end of this logfile would just continue to have the same "Missing block" error forever and would not write many blocks to the queue...

I was able to patch the file addons/eosjs-native/eosjs-numeric.js with an update to translate the WA key type.

git diff addons/eosjs-native/eosjs-numeric.js
diff --git a/addons/eosjs-native/eosjs-numeric.js b/addons/eosjs-native/eosjs-numeric.js
index 8ed5a78..249ea9d 100644
--- a/addons/eosjs-native/eosjs-numeric.js
+++ b/addons/eosjs-native/eosjs-numeric.js
@@ -308,6 +308,7 @@ var KeyType;
 (function (KeyType) {
     KeyType[KeyType["k1"] = 0] = "k1";
     KeyType[KeyType["r1"] = 1] = "r1";
+    KeyType[KeyType["wa"] = 2] = "wa";
 })(KeyType = exports.KeyType || (exports.KeyType = {}));
 /** Public key data size, excluding type field */
 exports.publicKeyDataSize = 33;
@@ -459,6 +460,8 @@ function signatureToString(signature) {
         return keyToString(signature, 'K1', 'SIG_K1_');
     } else if (signature.type === KeyType.r1) {
         return keyToString(signature, 'R1', 'SIG_R1_');
+    } else if (signature.type === KeyType.wa) {
+       return keyToString(signature, 'WA', 'SIG_WA_');
     } else {
         throw new Error('unrecognized signature format');
     }
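
The behavior of the patch above can be sketched in isolation. This is a simplified stand-in for the eosjs-numeric mapping, not the actual module, showing the KeyType-to-prefix dispatch that the diff extends with the WebAuthn (`wa`) variant:

```javascript
// Simplified stand-in for the KeyType -> signature-prefix mapping in
// eosjs-numeric, extended with the WebAuthn (wa) variant as in the patch.
const KeyType = { k1: 0, r1: 1, wa: 2 };

function signaturePrefix(type) {
    switch (type) {
        case KeyType.k1: return 'SIG_K1_';
        case KeyType.r1: return 'SIG_R1_';
        case KeyType.wa: return 'SIG_WA_';
        default: throw new Error('unrecognized signature format');
    }
}
```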

Now I am able to completely synchronize the range, even though I am seeing this new error in the pm2 output:

0|proton-indexer  | 2022-04-03T19:44:18: [222902 - 07_deserializer] deserializeNative >> transaction_trace[] >> Bad variant index
0|proton-indexer  | 2022-04-03T19:44:18: [222902 - 07_deserializer] transaction_trace[] deserialization failed with eosjs!
0|proton-indexer  | 2022-04-03T19:44:18: [222902 - 07_deserializer] [WARNING] transaction_trace[] deserialization failed on block 79955656

Nothing shows in the deserialization error logs for these errors.

Indexer configuration when start node from portable snapshot

Hello,

I'm running a node from a portable snapshot, so the state_history_plugin doesn't have state history for older blocks. I ran the indexer for the first time with abi_scan_mode = true, but it doesn't receive any blocks to index.

Please guide me, thank you!

Hyperion Not Indexing Data

I've got my Hyperion connected to an RPC/SHIP node endpoint, but it is not indexing any data, and when I query the API it does not return any transactions, such as account creation. I'm not sure what I'm doing wrong, whether there's a gap in the documentation, or whether there's a setting I'm misunderstanding. Any help would be appreciated : )

node -v
v16.10.0

ecosystem.config.js:

const {addApiServer, addIndexer} = require("./definitions/ecosystem_settings");

module.exports = {
    apps: [
        addIndexer('CHAIN_NAME'), // Index chain name
        addApiServer('CHAIN_NAME', 1) // API chain name, API threads number
    ]
};

connections.json:

{
    "amqp": {
      "host": "127.0.0.1:5672",
      "api": "127.0.0.1:15672",
      "user": "hyperion",
      "pass": "hyperion",
      "vhost": "hyperion"
    },
    "elasticsearch": {
      "host": "127.0.0.1:9200",
      "ingest_nodes": [
        "127.0.0.1:9200"
      ],
      "user": "hyperion",
      "pass": "hyperion"
    },
    "redis": {
      "host": "127.0.0.1",
      "port": "6379"
    },
    "chains": {
      "CHAIN_NAME": {
        "name": "Testnet",
        "chain_id": "CHAIN_ID",
        "http": "http://IP_HERE:8888",
        "ship": "ws://IP_HERE:8887",
        "WS_ROUTER_PORT": 7001
      }
    }
  }

/chains/chain_name.config.json:

{
    "api": {
      "chain_name": "Testnet",
      "server_addr": "0.0.0.0",
      "server_port": 7000,
      "server_name": "0.0.0.0:7000",
      "provider_name": "Example Provider",
      "provider_url": "https://example.com",
      "chain_api": "",
      "push_api": "",
      "chain_logo_url": "",
      "enable_caching": true,
      "cache_life": 1,
      "limits": {
        "get_actions": 1000,
        "get_voters": 100,
        "get_links": 1000,
        "get_deltas": 1000,
        "get_trx_actions": 200
      },
      "access_log": false,
      "enable_explorer": true,
      "chain_api_error_log": false,
      "custom_core_token": "",
      "enable_export_action": false,
      "disable_tx_cache": false,
      "tx_cache_expiration_sec": 3600
    },
    "settings": {
      "preview": false,
      "chain": "CHAIN_NAME",
      "eosio_alias": "eosio",
      "parser": "2.1",
      "ignore_snapshot": "true",
      "auto_stop": 0,
      "index_version": "v1",
      "debug": false,
      "bp_logs": false,
      "bp_monitoring": false,
      "ipc_debug_rate": 60000,
      "allow_custom_abi": true,
      "rate_monitoring": true,
      "max_ws_payload_kb": 256,
      "ds_profiling": false,
      "auto_mode_switch": false,
      "hot_warm_policy": false,
      "custom_policy": "",
      "bypass_index_map": false,
  "index_partition_size": 10000000

    },
    "blacklists": {
      "actions": [],
      "deltas": []
    },
    "whitelists": {
      "actions": [],
      "deltas": [],
      "max_depth": 10,
      "root_only": false
    },
    "scaling": {
      "readers": 1,
      "ds_queues": 1,
      "ds_threads": 1,
      "ds_pool_size": 1,
      "indexing_queues": 1,
      "ad_idx_queues": 1,
      "max_autoscale": 4,
      "batch_size": 5000,
      "resume_trigger": 5000,
      "auto_scale_trigger": 20000,
      "block_queue_limit": 10000,
      "max_queue_limit": 100000,
      "routing_mode": "heatmap",
      "polling_interval": 10000
    },
    "indexer": {
      "enabled": true,
      "start_on": 1,
      "stop_on": 0,
      "rewrite": false,
      "purge_queues": true,
      "live_reader": true,
      "live_only_mode": false,
      "abi_scan_mode": false, # ran intially with true
      "fetch_block": true,
      "fetch_traces": true,
      "disable_reading": false,
      "disable_indexing": false,
      "process_deltas": true,
      "disable_delta_rm": false
    },
    "features": {
      "streaming": {
        "enable": true,
        "traces": true,
        "deltas": true
      },
      "tables": {
        "proposals": true,
        "accounts": true,
        "voters": true
      },
      "index_deltas": true,
      "index_transfer_memo": true,
      "index_all_deltas": true,
      "deferred_trx": false,
      "failed_trx": false,
      "resource_limits": false,
      "resource_usage": false
    },
    "prefetch": {
      "read": 50,
      "block": 100,
      "index": 500
    }
  }

Example call /v2/history/get_actions?account=eosio&skip=0&limit=100&sort=desc:

{
  "query_time_ms": 1.296,
  "cached": false,
  "lib": 0,
  "total": {
    "value": 0,
    "relation": "eq"
  },
  "actions": []
}

Health query:

{
  "version": "3.3.4-rc7",
  "version_hash": "18ef675b8804a0bf2257f6553e95b6d1a6282e61",
  "host": "0.0.0.0:7000",
  "health": [
    {
      "service": "RabbitMq",
      "status": "OK",
      "time": 1633148861841
    },
    {
      "service": "NodeosRPC",
      "status": "OK",
      "service_data": {
        "head_block_num": 65788,
        "head_block_time": "2021-10-02T04:19:13.000",
        "time_offset": 508842,
        "last_irreversible_block": 65787,
        "chain_id": "CHAIN_ID"
      },
      "time": 1633148861842
    },
    {
      "service": "Elasticsearch",
      "status": "OK",
      "service_data": {
        "last_indexed_block": 65749,
        "total_indexed_blocks": 65749,
        "active_shards": "100.0%"
      },
      "time": 1633148861845
    }
  ],
  "features": {
    "streaming": {
      "enable": true,
      "traces": true,
      "deltas": true
    },
    "tables": {
      "proposals": true,
      "accounts": true,
      "voters": true
    },
    "index_deltas": true,
    "index_transfer_memo": true,
    "index_all_deltas": true,
    "deferred_trx": false,
    "failed_trx": false,
    "resource_limits": false,
    "resource_usage": false
  },
  "query_time_ms": 8.618
}

Log output:

0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] --------- Hyperion Indexer 3.3.4-rc7 ---------
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Using parser version 2.1
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Chain: CHAIN_NAME
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] 
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: ---------------
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32:  INDEXING MODE 
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: ---------------
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Purging all CHAIN_NAME queues!
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Elasticsearch: 7.15.0 | Lucene: 8.9.0
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Ingest client ready at http://127.0.0.1:9200/
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Painless Update Script loaded!
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Action Mapping added for @voteproducer
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Updating index templates for CHAIN_NAME...
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] 14 index templates updated
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Last indexed block (deltas): 1
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master]  |>> First Block: 1
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master]  >>| Last  Block: 62106
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Setting parallel reader [1] from block 1 to 5001
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Setting live reader at head = 62106
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Loading indices...
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Delta streaming enabled!
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] Action trace streaming enabled!
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] ๐Ÿ“ฃ๏ธ  Deserialization errors are being logged in:
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [/home/ubuntu/hyperion-history-api/logs/CHAIN_NAME/deserialization_errors.log]
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:32: [31489 - 00_master] 15 workers launched
1|CHAIN_NAME-api      | 2021-10-02T03:48:32: [31505 - 00_master] Chain API URL: "http://IP:8888" | Push API URL: "undefined"
1|CHAIN_NAME-api      | 2021-10-02T03:48:32: Importing stream module
1|CHAIN_NAME-api      | 2021-10-02T03:48:32: [31505 - 00_master] Websocket manager loaded!
1|CHAIN_NAME-api      | 2021-10-02T03:48:32: [31505 - 00_master] starting relay - http://127.0.0.1:7001
1|CHAIN_NAME-api      | 2021-10-02T03:48:33: [31505 - 00_master] Last commit hash on this branch is: 18ef675b8804a0bf2257f6553e95b6d1a6282e61
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: [31531 - 01_reader] Websocket connected!
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: [31489 - 00_master] received ship abi for distribution
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: [31531 - 01_reader] 
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: | SHIP Status Report
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: | Init block: 0
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: | Head block: 0
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: [31614 - 15_delta_updater] Launched delta updater, consuming from CHAIN_NAME:delta_rm
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: [31532 - 02_continuous_reader] Websocket connected!
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: [31532 - 02_continuous_reader] 
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: | SHIP Status Report
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: | Init block: 0
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:33: | Head block: 0
1|CHAIN_NAME-api      | 2021-10-02T03:48:33: [31505 - 00_master] CHAIN_NAME hyperion api ready and listening on port 7000
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:34: [ROUTER] New relay connected with ID = KSTwdFmk-VM1vK44AAAB
1|CHAIN_NAME-api      | 2021-10-02T03:48:34: [31505 - 00_master] Relay Connected!

0|CHAIN_NAME-indexer  | 2021-10-02T03:48:37: [31489 - 00_master] W:15 | R:6002.2 | C:2887.2 | A:0 | D:0 | I:2674.8 | 14425/30000/62105 | syncs in a few seconds (23.2% 48.3%)
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:42: [31489 - 00_master] W:15 | R:6423 | C:3962.8 | A:0 | D:0 | I:3921 | 34229/62105/62105 | syncs in a few seconds (55.1% 100.0%)
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:47: [31489 - 00_master] W:15 | R:2.2 | C:4832.4 | A:0 | D:0 | I:4831.6 | 58380/62105/62105 | syncs in a few seconds (94.0% 100.0%)
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:52: [31489 - 00_master] W:15 | R:1.8 | C:746.8 | A:0 | D:0 | I:1001.8
0|CHAIN_NAME-indexer  | 2021-10-02T03:48:57: [31489 - 00_master] W:15 | R:2 | C:2 | A:0 | D:0 | I:2
0|CHAIN_NAME-indexer  | 2021-10-02T03:49:02: [31489 - 00_master] W:15 | R:2 | C:2 | A:0 | D:0 | I:2
0|CHAIN_NAME-indexer  | 2021-10-02T03:49:07: [31489 - 00_master] W:15 | R:2 | C:2 | A:0 | D:0 | I:2
0|CHAIN_NAME-indexer  | 2021-10-02T03:49:12: [31489 - 00_master] W:15 | R:2 | C:2 | A:0 | D:0 | I:2
0|CHAIN_NAME-indexer  | 2021-10-02T03:49:17: [31489 - 00_master] W:15 | R:2.2 | C:2.2 | A:0 | D:0 | I:2
0|CHAIN_NAME-indexer  | 2021-10-02T03:49:22: [31489 - 00_master] W:15 | R:1.8 | C:1.8 | A:0 | D:0 | I:2

Hyperion fails when action receipt is null on 1.8 nodeos

Just ran the Hyperion 1.8 pre-release on the kylin testnet. I'm getting actions with null receipts, which make Hyperion fail:

0|Indexer  | TypeError: Cannot read property '1' of null
0|Indexer  |     at processAction (/home/pm2/hyperion/workers/deserializer.worker.js:333:46)
0|Indexer  |     at processTicksAndRejections (internal/process/task_queues.js:86:5)
0|Indexer  | { action_ordinal: 1,
0|Indexer  |   creator_action_ordinal: 0,
0|Indexer  |   receipt: null,
0|Indexer  |   receiver: 'eosio.token',
0|Indexer  |   act:
0|Indexer  |    { account: 'eosio.token',
0|Indexer  |      name: 'transfer',
0|Indexer  |      authorization: [ [Object] ],
0|Indexer  |      data: { memo: '75,,' } },
0|Indexer  |   context_free: false,
0|Indexer  |   elapsed: '0',
0|Indexer  |   except: 'Y',
0|Indexer  |   error_code: '10000000000000000000',
0|Indexer  |   '@transfer':
0|Indexer  |    { from: 'blackjackeee',
0|Indexer  |      to: 'godappdice12',
0|Indexer  |      amount: 0.5,
0|Indexer  |      symbol: 'EOS' },
0|Indexer  |   '@timestamp': '2019-03-11T21:36:43.000',
0|Indexer  |   block_num: 38118232,
0|Indexer  |   producer: 'superoneiobp',
0|Indexer  |   trx_id:
0|Indexer  |    '4133195894f3116fcc7ace75edb3b17cdd7926045851e2930836d0c1e2b6a723' }

Not sure whether this can only appear on testnets like kylin. However, it seems there's no way to skip those actions; Hyperion just spams the logs and won't keep processing the next blocks.

`hex_data` returned by hyperion has the wrong padding hence returning the wrong action receipt

Hello,
we've checked out v3.3.7-2 and it seems the hex_data field in the action doesn't match the one in the explorer; here's an example:

this action can be found here under the path execution_trace.action_traces.inline_traces[2]:

"act": {
    "account": "xbsc.ptokens",
    "name": "pegin",
    "authorization": [
       {
            "actor": "xbsc.ptokens",
            "permission": "active"
        }
    ],
    "data": {
        "destinationAddr": "0xb713C9ce8655D0D98BBcD6AB9b77B4769d28d722",
        "quantity": "350.0000 EFX",
        "sender": "kucoinrise11",
        "tokenContract": "effecttokens",
        "userData": ""
    },
    "hex_data": "1082c2ee4e47918680a7823467a4d652e06735000000000004454658000000002a30786237313343396365383635354430443938424263443641423962373742343736396432386437323200000000"
}

while from our Hyperion API we get this one for the same action:

{
  "account": "xbsc.ptokens",
  "name": "pegin",
  "authorization": [
    {
      "actor": "xbsc.ptokens",
      "permission": "active"
    }
  ],
  "data": {
    "sender": "kucoinrise11",
    "tokenContract": "effecttokens",
    "quantity": "350.0000 EFX",
    "destinationAddr": "0xb713C9ce8655D0D98BBcD6AB9b77B4769d28d722",
    "userData": ""
  },
  "hex_data": "1082C2EE4E47918680A7823467A4D652E06735000000000004454658000000002A30786237313343396365383635354430443938424263443641423962373742343736396432386437323200",
  "global_sequence": 357286554079
}

You can clearly see that the padding is wrong.

This patch has temporarily solved the issue for us, but it's not robust, for obvious reasons:

diff --git a/api/routes/v1-history/get_actions/get_actions.ts b/api/routes/v1-history/get_actions/get_actions.ts
index 39e32a9..81a92d9 100644
--- a/api/routes/v1-history/get_actions/get_actions.ts
+++ b/api/routes/v1-history/get_actions/get_actions.ts
@@ -336,6 +336,14 @@ async function getActions(fastify: FastifyInstance, request: FastifyRequest) {
                         txEnc,
                         txDec
                     );
+                   // We noticed that the pegin action needs the hex_data
+                   // field padded to 158/2 bytes, hence this fix;
+                   // redeem actions are not affected by this problem
+                   let tmp = action.act.hex_data;
+                   if (action.act.name === "pegin") {
+                       action.act.hex_data = tmp.padEnd(158, '0');
+                   }
+
                 } catch (e: any) {
                     console.log(e);
                 }
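For reference, the two `hex_data` values quoted above differ only in letter case and trailing zero bytes. A quick sketch (values copied verbatim from the report) shows the Hyperion value is a prefix of the explorer value, three zero bytes short:

```typescript
// hex_data as shown by the explorer (lowercase, full padding)
const explorerHex =
    "1082c2ee4e47918680a7823467a4d652e06735000000000004454658000000002a30786237313343396365383635354430443938424263443641423962373742343736396432386437323200000000";
// hex_data as returned by Hyperion (uppercase, shorter)
const hyperionHex =
    "1082C2EE4E47918680A7823467A4D652E06735000000000004454658000000002A30786237313343396365383635354430443938424263443641423962373742343736396432386437323200";

const a = explorerHex.toLowerCase();
const b = hyperionHex.toLowerCase();

// The only difference is the missing trailing zero padding.
console.log(a.startsWith(b));           // true
console.log((a.length - b.length) / 2); // 3 (missing zero bytes)
```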

How to fix this problem while the indexer is starting: scaling.max_queue_limit is not defined!

vimchain@ubuntu:~/Starteos/Nodeos$ pm2 logs eos-indexer
[TAILING] Tailing last 15 lines for [eos-indexer] process (change the value with --lines option)
/home/vimchain/.pm2/logs/eos-indexer-error.log last 15 lines:
/home/vimchain/.pm2/logs/eos-indexer-out.log last 15 lines:
0|eos-inde | 2020-06-22T01:47:57: [00_master] scaling.max_queue_limit is not defined!

I used "pm2 start" to boot this software, because "npm run start:indexer" did nothing for me.
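For context, this error means the scaling settings are missing from the indexer configuration. A minimal sketch of the relevant fragment as it appears in newer Hyperion versions' per-chain config (field names and values here are illustrative and may differ between versions; check the example config shipped with your release):

```json
{
  "scaling": {
    "readers": 1,
    "ds_queues": 1,
    "ds_threads": 4,
    "max_queue_limit": 100000,
    "block_queue_limit": 10000
  }
}
```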

Indexer error - RangeError: Invalid typed array

Nodeos: 2.1
Parser: 2.1

This could be another indexer issue caused by addons/eosjs-native not being updated. Hopefully this is helpful. When this occurs, my indexer stops indexing the entire range. I was unable to tell which exact tx is causing it. Happy to provide access to my SHIP node if you want to debug.

Proton block: 57940010

2022-04-03T20:11:47: RangeError: Invalid typed array length: 49326109116587355612366462438421e2944484
2022-04-03T20:11:47:     at new Uint8Array (<anonymous>)
2022-04-03T20:11:47:     at keyToString (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-numeric.ts:302:2)
2022-04-03T20:11:47:     at Object.signatureToString (/opt/eosio/src/Hyperion-History-API/addons/eosjs-native/eosjs-numeric.js:460:16)
2022-04-03T20:11:47:     at SerialBuffer.getSignature (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:521:24)
2022-04-03T20:11:47:     at Object.deserialize (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:952:63)
2022-04-03T20:11:47:     at Object.deserializeArray [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:701:34)
2022-04-03T20:11:47:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
2022-04-03T20:11:47:     at Object.deserializeVariant [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:686:36)
2022-04-03T20:11:47:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
2022-04-03T20:11:47:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
2022-04-03T20:11:47:     at Object.deserializeVariant [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:686:36)
2022-04-03T20:11:47:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
2022-04-03T20:11:47:     at Object.deserializeArray [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:701:34)
2022-04-03T20:11:47:     at Object.deserializeStruct [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:661:45)
2022-04-03T20:11:47:     at Object.deserializeVariant [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:686:36)
2022-04-03T20:11:47:     at Object.deserializeOptional [as deserialize] (/opt/eosio/src/Hyperion-History-API/addons/src/eosjs-serialize.ts:718:32)

Question: Handling nodeosd forks

Could you please give me a quick overview of how you handle nodeosd forks? I would like to look at how you handle this in your code.

I see workers/state-reader.worker.js is calling requestBlockRange(msg.data.first_block, msg.data.last_block) from nodeosd via get_blocks_request_v0 with irreversible_only set to false. How do you recover if you see a block that disappears due to a fork?
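For readers landing here: a common pattern for state-history consumers is to remember the block id seen at each height and treat a mismatch against an incoming block's previous-id as a fork. The sketch below is hypothetical and is NOT Hyperion's actual implementation; it only illustrates the idea:

```typescript
// Hypothetical fork-detection sketch for a state-history block stream.
interface BlockInfo {
    num: number;  // block height
    id: string;   // this block's id
    prev: string; // id of the previous block
}

class ForkTracker {
    private ids = new Map<number, string>();

    // Returns the height to re-read from when a fork is detected,
    // or null if the block extends the known chain.
    accept(block: BlockInfo): number | null {
        const storedPrev = this.ids.get(block.num - 1);
        if (storedPrev !== undefined && storedPrev !== block.prev) {
            // Fork: forget everything at or above the conflicting height.
            for (const h of [...this.ids.keys()]) {
                if (h >= block.num - 1) this.ids.delete(h);
            }
            return block.num - 1;
        }
        this.ids.set(block.num, block.id);
        return null;
    }
}

const tracker = new ForkTracker();
tracker.accept({ num: 10, id: "aa", prev: "09" }); // null (new chain tip)
tracker.accept({ num: 11, id: "bb", prev: "aa" }); // null (extends chain)
console.log(tracker.accept({ num: 12, id: "cc", prev: "xx" })); // 11 (fork)
```

On a detected fork, the consumer would re-request the affected range from the returned height (e.g. via another get_blocks_request_v0) and overwrite the previously indexed documents.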

Is Leap 3.1 supported now?

I started the node with Leap 3.1, and the snapshot uses version v6. But when I start ./run EOS-Indexer, I cannot import block data.

logs: No blocks processed! Indexer will stop in 25 seconds!

Cannot read property 'producer' of undefined

The error is as follows. How can it be fixed?

TypeError: Cannot read property 'producer' of undefined
at Object.processBlock (/opt/Hyperion-History-API/workers/deserializer.worker.js:95:29)
at HyperionModuleLoader.messageParser (/opt/Hyperion-History-API/modules/parsers/1.8-parser.js:118:39)
at runMicrotasks ()
at processTicksAndRejections (internal/process/task_queues.js:93:5)

Indexing not working and showing "FATAL: Unlisted Worker: delta_updater"

for hyperion v3.1.4 => eos2.1.0 series

hyperion => eos is working though

2021-10-30T17:25:24: [00_master] Loading indices...
2021-10-30T17:25:24: [00_master] ActionIndex: eos-action-v1-000001 | First: null | Last: null
2021-10-30T17:25:24: [00_master] DeltaIndex: eos-delta-v1-000001 | First: null | Last: null
2021-10-30T17:25:24: [00_master] Indexer rewrite enabled (106700000 - 0)
2021-10-30T17:25:24: [00_master] ๐Ÿ“ฃ๏ธ  Deserialization errors are being logged in:
2021-10-30T17:25:24:  /hyperion-history-api/logs/eos/deserialization_errors.log
2021-10-30T17:25:25: FATAL: Unlisted Worker: delta_updater
2021-10-30T17:25:26: [01_continuous_reader] Websocket connected!
2021-10-30T17:25:26: [00_master] received ship abi for distribution
2021-10-30T17:25:26: [03_deserializer] deserializeNative >> signed_block >> Invalid argument
2021-10-30T17:25:26: [02_deserializer] deserializeNative >> signed_block >> Invalid argument

(printing some blocks)
(but not indexing)

2021-10-30T17:25:26: [00_master] The worker #2 has disconnected
2021-10-30T17:25:26: [00_master] The worker #3 has disconnected
2021-10-30T17:25:29: [00_master] W:11 | R:2.4 | C:0 | A:0 | D:0 | I:0
2021-10-30T17:25:29: [00_master] No blocks processed! Indexer will stop in -5 seconds!
2021-10-30T17:25:34: [00_master] W:11 | R:1.8 | C:0 | A:0 | D:0 | I:0
2021-10-30T17:25:34: [00_master] No blocks processed! Indexer will stop in -10 seconds!
2021-10-30T17:25:39: [00_master] W:11 | R:2 | C:0 | A:0 | D:0 | I:0
2021-10-30T17:25:39: [00_master] No blocks processed! Indexer will stop in -15 seconds!
2021-10-30T17:25:44: [00_master] W:11 | R:2 | C:0 | A:0 | D:0 | I:0
2021-10-30T17:25:44: [00_master] No blocks processed! Indexer will stop in -20 seconds!
2021-10-30T17:25:49: [00_master] W:11 | R:2 | C:0 | A:0 | D:0 | I:0
2021-10-30T17:25:49: [00_master] No blocks processed! Indexer will stop in -25 seconds!
2021-10-30T17:25:54: [00_master] W:11 | R:2 | C:0 | A:0 | D:0 | I:0
2021-10-30T17:25:54: [00_master] No blocks processed! Indexer will stop in -30 seconds!
2021-10-30T17:25:59: [00_master] W:11 | R:2 | C:0 | A:0 | D:0 | I:0

Also, if I switch to 3.3.4-rc8, things seem to get stuck with no log output from pm2.

How to solve the "No blocks processed" problem?

0|eos-inde | 2020-06-23T18:39:46: [00_master] No blocks processed! Indexer will stop in 35 seconds!
0|eos-inde | 2020-06-23T18:39:51: [00_master] W:11 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/127640799 | syncs a few seconds ago (0.0% 0.0%)
0|eos-inde | 2020-06-23T18:39:51: [00_master] No blocks processed! Indexer will stop in 30 seconds!
0|eos-inde | 2020-06-23T18:39:56: [00_master] W:11 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/127640799 | syncs a few seconds ago (0.0% 0.0%)
0|eos-inde | 2020-06-23T18:39:56: [00_master] No blocks processed! Indexer will stop in 25 seconds!
0|eos-inde | 2020-06-23T18:40:01: [00_master] W:11 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/127640799 | syncs a few seconds ago (0.0% 0.0%)
0|eos-inde | 2020-06-23T18:40:01: [00_master] No blocks processed! Indexer will stop in 20 seconds!
0|eos-inde | 2020-06-23T18:40:06: [00_master] W:11 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/127640799 | syncs a few seconds ago (0.0% 0.0%)
0|eos-inde | 2020-06-23T18:40:06: [00_master] No blocks processed! Indexer will stop in 15 seconds!
0|eos-inde | 2020-06-23T18:40:11: [00_master] W:11 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/127640799 | syncs a few seconds ago (0.0% 0.0%)
0|eos-inde | 2020-06-23T18:40:11: [00_master] No blocks processed! Indexer will stop in 10 seconds!
0|eos-inde | 2020-06-23T18:40:16: [00_master] W:11 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/127640799 | syncs a few seconds ago (0.0% 0.0%)
0|eos-inde | 2020-06-23T18:40:16: [00_master] No blocks processed! Indexer will stop in 5 seconds!
0|eos-inde | 2020-06-23T18:40:21: [00_master] W:11 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/127640799 | syncs a few seconds ago (0.0% 0.0%)
0|eos-inde | 2020-06-23T18:40:21: [00_master] Reached limit for no blocks processed, stopping now...

It's weird: the state_history_plugin is enabled on my mainnet nodeos, and everything seems fine with the other dependencies.

history not imported

Hi EOS Rio!

Thank you for Hyperion.

We are currently setting up a test instance. The local EOS node for Hyperion gets the history from the production node.

We configured the project as instructed in Install.md, but the indexer is not picking up any messages. We debugged it.

The block count and chain id are correctly retrieved via the HTTP RPC client, but the "initialize_abi" message is never sent between processes, which looks like a requirement to start syncing from scratch.

What do you think?

Log output:

Workers: 5 | Read: 0 blocks/s | Consume: 0 blocks/s 
| Deserialize: 0 actions/s | Index: 0 docs/s | 0/0/7962680

Only getting part of the indices from Elasticsearch

eos v2.1.0
hyperion: v3.3.4-rc8

I started Hyperion with the versions above, but only get part of the indices:

$curl http://127.0.0.1:9200/_cat/indices
green  open eos-perm-v1  ywsxnOK_RvKv6CpHjpI-wg 2 0    2    0 12.8kb 12.8kb
green  open eos-block-v1 v87TZsXDQpaBfNEpou13PQ 2 0 4552 2081  2.4mb  2.4mb

After sending a transaction, I get A:0 in the indexer's log:

2021-11-30T07:25:26: [29 - 00_master] W:14 | R:2 | C:2 | A:0 | D:0 | I:2
2021-11-30T07:25:31: [29 - 00_master] W:14 | R:2 | C:2 | A:0 | D:0 | I:2
2021-11-30T07:25:36: [29 - 00_master] W:14 | R:2 | C:2 | A:0 | D:0 | I:2
2021-11-30T07:25:41: [29 - 00_master] W:14 | R:2 | C:2 | A:0 | D:0 | I:2

querystring.after should match format \"date-time\"

When querying the API, it's impossible to use the after or before parameters on /v2/history/get_actions. I get a JSON error:

{
  "statusCode": 400,
  "error": "Bad Request",
  "message": "querystring.after should match format \"date-time\", querystring.before should match format \"date-time\""
}

You can reproduce the bug by going to any Swagger UI and trying to add an after and/or before parameter in ISO 8601 format (as requested by the form).
Tried on https://eos.hyperion.eosrio.io/v2/docs/index.html and https://junglehistory.cryptolions.io/v2/docs/index.html with the format "2019-10-02T12:00:00", for example.

It doesn't work with a direct curl either:
curl -X GET "https://eos.hyperion.eosrio.io/v2/history/get_actions?limit=10&sort=desc&after=2019-10-01T12%3A00%3A00&before=2019-10-02T12%3A00%3A00" -H "accept: application/json"

Am I doing something wrong? Maybe the format is not ISO 8601?
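For context: the JSON Schema "date-time" format follows RFC 3339, which requires a UTC offset; "2019-10-02T12:00:00" has none, while "2019-10-02T12:00:00Z" does. Whether that is the cause here depends on the validator Hyperion uses, but a value built with toISOString is always compliant:

```typescript
// Date.prototype.toISOString always emits an RFC 3339 date-time
// with a trailing "Z", suitable for the `after`/`before` parameters.
const after = new Date("2019-10-01T12:00:00Z").toISOString();
const before = new Date("2019-10-02T12:00:00Z").toISOString();
console.log(after);  // 2019-10-01T12:00:00.000Z
console.log(before); // 2019-10-02T12:00:00.000Z

// Remember to URL-encode the value before placing it in a query string.
console.log(encodeURIComponent(after)); // 2019-10-01T12%3A00%3A00.000Z
```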

No blocks processed

2022-08-29T14:29:03: [00_master] --------- Hyperion Indexer 3.1.5 ---------
2022-08-29T14:29:03: [00_master] Using parser version 1.8
2022-08-29T14:29:03: [00_master] Chain: eos
2022-08-29T14:29:03: [00_master]
2022-08-29T14:29:03: -------------------
2022-08-29T14:29:03:  ABI SCAN MODE
2022-08-29T14:29:03: -------------------
2022-08-29T14:29:03: [00_master] Elasticsearch: 7.17.6 | Lucene: 8.11.1
2022-08-29T14:29:03: [00_master] Ingest client ready at http://127.0.0.1:9200/
2022-08-29T14:29:03: [00_master] Painless Update Script loaded!
2022-08-29T14:29:03: [00_master] Mapping added for @voteproducer
2022-08-29T14:29:03: [00_master] Updating index templates for eos...
2022-08-29T14:29:03: [00_master] 14 index templates updated
2022-08-29T14:29:03: [00_master] Finished creating indices!
2022-08-29T14:29:03: [00_master] Fetching last indexed block using the delta index...
2022-08-29T14:29:03: [00_master] Last indexed block (deltas): 1
2022-08-29T14:29:03: [00_master] Last indexed ABI: 1
2022-08-29T14:29:03: [00_master]  |>> First Block: 1
2022-08-29T14:29:03: [00_master]  >>| Last  Block: 265222522
2022-08-29T14:29:03: [00_master] Setting parallel reader [1] from block 1 to 5001
2022-08-29T14:29:03: [00_master] Loading indices...
2022-08-29T14:29:03: [00_master] ActionIndex: eos-action-v1-000001 | First: null | Last: null
2022-08-29T14:29:03: [00_master] DeltaIndex: eos-delta-v1-000001 | First: null | Last: null
2022-08-29T14:29:03: [00_master] ๐Ÿ“ฃ๏ธ  Deserialization errors are being logged in:
2022-08-29T14:29:03:  /home/ubuntu/hyperion-history-api/logs/eos/deserialization_errors.log
2022-08-29T14:29:04: [01_reader] Websocket connected!
2022-08-29T14:29:04: [00_master] received ship abi for distribution
2022-08-29T14:29:08: [00_master] W:12 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/265222521 | syncs a few seconds ago (0.0% 0.0%)
2022-08-29T14:29:08: [00_master] No blocks processed! Indexer will stop in -5 seconds!
2022-08-29T14:29:13: [00_master] W:12 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/265222521 | syncs a few seconds ago (0.0% 0.0%)
2022-08-29T14:29:13: [00_master] No blocks processed! Indexer will stop in -10 seconds!
2022-08-29T14:29:18: [00_master] W:12 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/265222521 | syncs a few seconds ago (0.0% 0.0%)
2022-08-29T14:29:18: [00_master] No blocks processed! Indexer will stop in -15 seconds!
2022-08-29T14:29:23: [00_master] W:12 | R:0 | C:0 | A:0 | D:0 | I:0 | 0/0/265222521 | syncs a few seconds ago (0.0% 0.0%)

Is there any problem?

Hyperion randomly stops reading new blocks in ABI cache mode

I regularly get logs like this:

0|Indexer  | Workers: 65 | Read: 1118.6 blocks/s | Consume: 1136.4 blocks/s | Deserialize: 0 actions/s | Index: 1.6 docs/s | 772427/772338/47117261
0|Indexer  | Workers: 65 | Read: 1412.8 blocks/s | Consume: 1395 blocks/s | Deserialize: 0 actions/s | Index: 2.8 docs/s | 779402/779402/47117261
0|Indexer  | Workers: 65 | Read: 1306.4 blocks/s | Consume: 1320.6 blocks/s | Deserialize: 0 actions/s | Index: 1.2 docs/s | 786005/785934/47117261
0|Indexer  | Workers: 65 | Read: 3001.6 blocks/s | Consume: 2987.4 blocks/s | Deserialize: 0 actions/s | Index: 2.4 docs/s | 800942/800942/47117261
0|Indexer  | Workers: 65 | Read: 2008.6 blocks/s | Consume: 2008.6 blocks/s | Deserialize: 0 actions/s | Index: 2 docs/s | 810985/810985/47117261
0|Indexer  | Workers: 65 | Read: 1622.6 blocks/s | Consume: 1622.6 blocks/s | Deserialize: 0 actions/s | Index: 0.4 docs/s | 819098/819098/47117261
0|Indexer  | Workers: 65 | Read: 9124.4 blocks/s | Consume: 9124.4 blocks/s | Deserialize: 0 actions/s | Index: 4.8 docs/s | 864720/864720/47117261
0|Indexer  | Workers: 65 | Read: 1369 blocks/s | Consume: 1369 blocks/s | Deserialize: 0 actions/s | Index: 0 docs/s | 871565/871565/47117261
0|Indexer  | Workers: 65 | Read: 0 blocks/s | Consume: 0 blocks/s | Deserialize: 0 actions/s | Index: 0 docs/s | 871565/871565/47117261
0|Indexer  | Workers: 65 | Read: 0 blocks/s | Consume: 0 blocks/s | Deserialize: 0 actions/s | Index: 0 docs/s | 871565/871565/47117261
0|Indexer  | Workers: 65 | Read: 0 blocks/s | Consume: 0 blocks/s | Deserialize: 0 actions/s | Index: 0 docs/s | 871565/871565/47117261
0|Indexer  | Workers: 65 | Read: 0 blocks/s | Consume: 0 blocks/s | Deserialize: 0 actions/s | Index: 0 docs/s | 871565/871565/47117261
0|Indexer  | Workers: 65 | Read: 0 blocks/s | Consume: 0 blocks/s | Deserialize: 0 actions/s | Index: 0 docs/s | 871565/871565/47117261
0|Indexer  | Workers: 65 | Read: 0 blocks/s | Consume: 0 blocks/s | Deserialize: 0 actions/s | Index: 0 docs/s | 871565/871565/47117261
0|Indexer  | Workers: 65 | Read: 0 blocks/s | Consume: 0 blocks/s | Deserialize: 0 actions/s | Index: 0 docs/s | 871565/871565/47117261
0|Indexer  | Workers: 65 | Read: 0 blocks/s | Consume: 0 blocks/s | Deserialize: 0 actions/s | Index: 0 docs/s | 871565/871565/47117261

Hyperion just suddenly stops reading new blocks. RabbitMQ queues are empty. Restarting helps, but this happens quite often.

This is my ecosystem.config.js

module.exports = {
    apps: [
        {
            name: "Indexer",
            script: "./launcher.js",
            node_args: ["--max-old-space-size=8192"],
            autorestart: false,
            kill_timeout: 3600,
            env: {
                AMQP_HOST: '127.0.0.1:5672',
                AMQP_USER: '<user>',
                AMQP_PASS: '<password>',
                ES_HOST: '<elasticsearch>:9200',
                NODEOS_HTTP: '<nodeos>:8888',
                NODEOS_WS: '<nodeos>:9090',
                START_ON: 0,
                STOP_ON: 0,
                REWRITE: 'false',
                BATCH_SIZE: 5000,
                LIVE_READER: 'false',
                LIVE_ONLY: 'false',
                FETCH_BLOCK: 'false',
                FETCH_TRACES: 'false',
                CHAIN: 'eos',
                CREATE_INDICES: 'false',
                PREVIEW: 'false',
                DISABLE_READING: 'false',
                READERS: 8,
                DESERIALIZERS: 10,
                DS_MULT: 4,
                ES_INDEXERS_PER_QUEUE: 4,
                ES_ACT_QUEUES: 2,
                READ_PREFETCH: 50,
                BLOCK_PREFETCH: 100,
                INDEX_PREFETCH: 500,
                ENABLE_INDEXING: 'true',
                PROC_DELTAS: 'true',
                INDEX_DELTAS: 'false',
                INDEX_ALL_DELTAS: 'false',
                ABI_CACHE_MODE: 'true'
            }
        },
        {
            name: 'API',
            script: "./api/api-loader.js",
            exec_mode: 'cluster',
            merge_logs: true,
            instances: 4,
            autorestart: true,
            exp_backoff_restart_delay: 100,
            watch: ["api"],
            env: {
                SERVER_PORT: '7000',
                SERVER_NAME: 'example.com',
                SERVER_ADDR: '127.0.0.1',
                NODEOS_HTTP: 'http://127.0.0.1:8888',
                ES_HOST: '127.0.0.1:9200',
                CHAIN: 'eos'
            }
        }
    ]
};

There's no log output in Indexer-error.log.

Generally the ABI cacher's performance fluctuates quite a bit. Sometimes it's fairly constant at 15,000 - 25,000 blocks per second. Sometimes it drops to a couple of hundred blocks/s, or even to zero blocks/s, within a couple of minutes. Sometimes restarting makes it run better, sometimes I just run into the same issue again.

Usually it performs well after starting up, and then the performance drops after some time.

The server has 72 vCPUs and 150 GB of RAM, so low hardware specs shouldn't be the issue. Elasticsearch and nodeos are running on their own servers.

Hyperion is up to date with the current master branch, the nodeos version is 1.6.3, and the setup followed the install instructions here.

Fix_missing_blocks script error

This fails with "UnicodeEncodeError: 'latin-1' codec can't encode characters in position 43-44: ordinal not in range(256)"

====================================================================================================
Searching for missing Blocks  
scripts/fix_missing_blocks/fix-missing-blocks.py:131: DeprecationWarning: The 'body' parameter is deprecated for the 'search' API and will be removed in a future version. Instead use API parameters directly. See https://github.com/elastic/elasticsearch-py/issues/1698 for more information
 result = es.search(index="proton-block-*", body=query)
scripts/fix_missing_blocks/fix-missing-blocks.py:247: DeprecationWarning: The 'body' parameter is deprecated for the 'search' API and will be removed in a future version. Instead use API parameters directly. See https://github.com/elastic/elasticsearch-py/issues/1698 for more information
 result = es.search(index="proton-block-*", body=query_body2(gte,lte,1000))
====================================================================================================
Completed Search  
====================================================================================================
1.Building replacement text for config file  
====================================================================================================
2.Updating the config file  
====================================================================================================
3.Config file is now updated for:  0 - 1000 
====================================================================================================
4.Running Re-indexing for:  0 - 1000 

2022-04-02T16:45:30: [2051214 - 00_master] Last indexed block (deltas): 121070391

2022-04-02T16:45:30: [2051214 - 00_master]  |>> First Block: 0

2022-04-02T16:45:30: [2051214 - 00_master]  >>| Last  Block: 1000

2022-04-02T16:45:30: [2051214 - 00_master] Setting parallel reader [1] from block 0 to 1000

2022-04-02T16:45:30: [2051214 - 00_master] Loading indices...

2022-04-02T16:45:31: [2051214 - 00_master] ActionIndex: proton-action-v1-000002 | First: 10000001 | Last: 20000000

2022-04-02T16:45:32: [2051214 - 00_master] ActionIndex: proton-action-v1-000003 | First: 20000001 | Last: 30000000

2022-04-02T16:45:32: [2051214 - 00_master] ActionIndex: proton-action-v1-000004 | First: 30000001 | Last: 40000000

2022-04-02T16:45:38: [2051214 - 00_master] ActionIndex: proton-action-v1-000005 | First: 40000001 | Last: 50000000

2022-04-02T16:45:39: [2051214 - 00_master] ActionIndex: proton-action-v1-000001 | First: 2 | Last: 10000000

2022-04-02T16:45:45: [2051214 - 00_master] ActionIndex: proton-action-v1-000006 | First: 50000001 | Last: 60000000

2022-04-02T16:45:51: [2051214 - 00_master] ActionIndex: proton-action-v1-000007 | First: 60000001 | Last: 70000000

2022-04-02T16:45:51: [2051214 - 00_master] ActionIndex: proton-action-v1-000008 | First: 70000001 | Last: 80000000

2022-04-02T16:45:51: [2051214 - 00_master] ActionIndex: proton-action-v1-000009 | First: 80000001 | Last: 80635816

2022-04-02T16:45:52: [2051214 - 00_master] ActionIndex: proton-action-v1-000013 | First: 120000001 | Last: 121070391

2022-04-02T16:45:52: [2051214 - 00_master] ActionIndex: proton-action-v1-000010 | First: 100000000 | Last: 100000000

2022-04-02T16:45:52: [2051214 - 00_master] ActionIndex: proton-action-v1-000011 | First: 100000001 | Last: 100005113

2022-04-02T16:45:52: [2051214 - 00_master] ActionIndex: proton-action-v1-000012 | First: 117308394 | Last: 120000000

2022-04-02T16:45:54: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000008 | First: 70000001 | Last: 80000000

2022-04-02T16:45:54: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000009 | First: 80000001 | Last: 80635816

2022-04-02T16:45:56: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000006 | First: 50000001 | Last: 60000000

2022-04-02T16:45:59: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000007 | First: 60000001 | Last: 70000000

2022-04-02T16:45:59: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000004 | First: 30000001 | Last: 40000000

2022-04-02T16:46:01: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000005 | First: 40000001 | Last: 50000000

2022-04-02T16:46:03: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000002 | First: 10000001 | Last: 20000000

2022-04-02T16:46:05: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000003 | First: 20000001 | Last: 30000000

2022-04-02T16:46:07: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000001 | First: 9 | Last: 10000000

2022-04-02T16:46:07: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000013 | First: 120000001 | Last: 121070391

2022-04-02T16:46:07: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000011 | First: 100000001 | Last: 100005113

2022-04-02T16:46:08: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000012 | First: 117308394 | Last: 120000000

2022-04-02T16:46:08: [2051214 - 00_master] DeltaIndex: proton-delta-v1-000010 | First: 100000000 | Last: 100000000

2022-04-02T16:46:08: [2051214 - 00_master] Indexer rewrite enabled (0 - 1000)

2022-04-02T16:46:08: [2051214 - 00_master] Delta streaming enabled!

2022-04-02T16:46:08: [2051214 - 00_master] Action trace streaming enabled!

Traceback (most recent call last):
 File "scripts/fix_missing_blocks/fix-missing-blocks.py", line 284, in <module>
   MagicFuzz(gt_lt_list)
 File "scripts/fix_missing_blocks/fix-missing-blocks.py", line 188, in MagicFuzz
   startRewrite(gt,lt)
 File "scripts/fix_missing_blocks/fix-missing-blocks.py", line 213, in startRewrite
   print(line)
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 43-44: ordinal not in range(256)

Our Hyperion Fork is Not Pulling From Our Docker Image

We forked the Hyperion repo and made some changes to get it to work with our blockchain. We created a new Docker image, but whenever we run Docker, our changes are not reflected. For example, we want it to pull our ABI repo, but it keeps pulling from the default one.

Wrong data type formatting

The bug is in the rendering of the following data types:
std::vector, eosio::public_key

Correct data view from nodeos get_actions:

"data": {
"from": "user1",
"to": "user2",
"iv": "b8d35e246bd5f3b03a0b56d21fad25dc",
"ephem_key": "EOS8X8RFwcinE4rV3WwpxabyjnoTSrAo8zzEFZgR96HGsY5WJ49Yu",
"cipher_text": "10dbea68b99dd78fef4d1ddb5da0a39f38d6ddda0021589658dd5176ba6c3948",
"mac": "1c4fd576a7930fc547e1f6ae0d8b14cdc91224264e9cfe243d06498e85ab5e1b"
}

Incorrect data view from Hyperion get_actions:
"data": {
"from": "user1",
"to": "user2",
"iv": "B8D35E246BD5F3B03A0B56D21FAD25DC",
"ephem_key": "PUB_K1_8X8RFwcinE4rV3WwpxabyjnoTSrAo8zzEFZgR96HGsY5ae5UTq",
"cipher_text": "10DBEA68B99DD78FEF4D1DDB5DA0A39F38D6DDDA0021589658DD5176BA6C3948",
"mac": "1C4FD576A7930FC547E1F6AE0D8B14CDC91224264E9CFE243D06498E85AB5E1B"
}

Expected: lowercase hex for bytes, and EOS key formatting for eosio::public_key

Branch:master
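As a client-side stopgap, the bytes fields can be normalized by lowercasing; the public key differs in encoding (PUB_K1_ vs legacy EOS prefix), not just case, so it would need a real re-encoding (e.g. eosjs's key conversion helpers), which is not shown here. A sketch using the iv value from the report:

```typescript
// Hyperion renders bytes fields as uppercase hex; nodeos uses lowercase.
// For bytes the difference is case only, so lowercasing restores the
// expected view. This does NOT apply to the public key field.
const fromHyperion = "B8D35E246BD5F3B03A0B56D21FAD25DC";
const fromNodeos = "b8d35e246bd5f3b03a0b56d21fad25dc";
console.log(fromHyperion.toLowerCase() === fromNodeos); // true
```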

get_missed_blocks generates erroneous results

Missing:

$ curl -s GET "https://meetone.eosn.io/v2/stats/get_missed_blocks?producer=dilidilif.m" | jq
{
  "query_time_ms": 18.539,
  "stats": {
    "by_producer": {}
  },
  "events": []
}

Wrong producer

$ curl -s GET "https://kylin.eosn.io/v2/stats/get_missed_blocks?producer=foo" | jq
{
  "query_time_ms": 3,
  "stats": {
    "by_producer": {
      "nationkylin1": 6
    }
  },
  "events": [
    {
      "@timestamp": "2020-05-29T11:11:11.775Z",
      "last_block": 106866070,
      "schedule_version": 320,
      "size": 6,
      "producer": "nationkylin1"
    }
  ]
}

With /v2/state/get_links, I get an unlinkauth-ed link in the response

After linking the action "update2" to the permission "default" of account "ab.test.1" via linkauth, I performed an unlinkauth.
I still get the link from /v2/state/get_links; only the timestamp changed to the unlinkauth time.
How can I know the link has been deleted from the permission?

{
    "query_time_ms": 0.205,
    "cached": true,
    "total": {
        "value": 4,
        "relation": "eq"
    },
    "links": [
        {
            "block_num": 37475879,
            "timestamp": "2020-08-02T11:36:47.500",
            "account": "ab.test.1",
            "permission": "default",
            "code": "ab",
            "action": "update2"
        }
      ........
    ]
}

deserializeNative >> signed_block >> Invalid argument

Hi,

I'm trying to run Hyperion with nodeos on the WAX mainnet chain and have an issue with the indexer: it connects fine to the node but doesn't seem to be able to process blocks.

I get a deserializer error, then a dump of hexadecimal, then the "No blocks processed!" error, as in issue #39.

Any hints?

2021-12-15T10:26:18:  /hyperion-history-api/logs/wax/deserialization_errors.log
2021-12-15T10:26:19: [01_reader] Websocket connected!
2021-12-15T10:26:19: [02_continuous_reader] Websocket connected!
2021-12-15T10:26:19: [00_master] received ship abi for distribution
2021-12-15T10:26:19: [03_deserializer] deserializeNative >> signed_block >> Invalid argument
2021-12-15T10:26:19: {
2021-12-15T10:26:19:   head: {
2021-12-15T10:26:19:     block_num: 156093695,
2021-12-15T10:26:19:     block_id: '094DCCFF173FD4920B50B07DC834DA1F6EA2895ABBE59782282E13E0A3F2256D'
2021-12-15T10:26:19:   },
2021-12-15T10:26:19:   last_irreversible: {
2021-12-15T10:26:19:     block_num: 156093364,
2021-12-15T10:26:19:     block_id: '094DCBB4939CCA484A44D1175BD6AF1D39389B6624E3CF3A404D79D1C2BDDB02'
2021-12-15T10:26:19:   },
2021-12-15T10:26:19:   this_block: {
2021-12-15T10:26:19:     block_num: 156093673,
2021-12-15T10:26:19:     block_id: '094DCCE93EAA3D10EF581DB9808501E17AA446BD89E60674ADCB9867F481FF8A'
2021-12-15T10:26:19:   },
2021-12-15T10:26:19:   prev_block: {
2021-12-15T10:26:19:     block_num: 156093672,
2021-12-15T10:26:19:     block_id: '094DCCE8ACACEC7E8DD7C618B10565B558CF0A1CE097DC5EF2EEE0C64DEE97E0'
2021-12-15T10:26:19:   },
2021-12-15T10:26:19:   block: [
2021-12-15T10:26:19:     'signed_block_v1',
2021-12-15T10:26:19:     {
2021-12-15T10:26:19:       timestamp: '2021-12-14T17:00:52.000',
2021-12-15T10:26:19:       producer: 'ledgerwiseio',
2021-12-15T10:26:19:       confirmed: 0,
2021-12-15T10:26:19:       previous: '094DCCE8ACACEC7E8DD7C618B10565B558CF0A1CE097DC5EF2EEE0C64DEE97E0',
2021-12-15T10:26:19:       transaction_mroot: 'CB132A7AF33AF8D4CDF03C69B4C0B327B13101B74AD53E900F8C0607A21BE50D',
2021-12-15T10:26:19:       action_mroot: '701A6CB7552BDA2FED34958C0FF269113ED781EDA80860C88F62E7C588B97AB0',
2021-12-15T10:26:19:       schedule_version: 389,
2021-12-15T10:26:19:       new_producers: null,
2021-12-15T10:26:19:       header_extensions: [],
2021-12-15T10:26:19:       producer_signature: 'SIG_K1_K5CKHkcKjzi5bkwCb49JnNt2c5m4HbMMcERDbXb8WqPSU7uVKC8wkTaSnc6zpQPfBbPg8EsUztEmCK9pMw3mNEW7P36WrF',
2021-12-15T10:26:19:       prune_state: 2,
2021-12-15T10:26:19:       transactions: [Array],
2021-12-15T10:26:19:       block_extensions: []
2021-12-15T10:26:19:     }
2021-12-15T10:26:19:   ],
2021-12-15T10:26:19: 321100009086037A3C9F0834283B62E9DDCD0700000000000000000000000000000001020101008053BC9483A95C342DB137842FA2BD77528DEB81C60389FD2FD9E5E4E43ED9D291E80F69945E9E3BAAD2A03707000000DF007C5D010000000130A9CBE6AAA41690315164FE0400000005048053BC
....
....
....
2021-12-15T12:32:59: [00_master] The worker #3 has disconnected
2021-12-15T12:33:03: [00_master] W:13 | R:2.2 | C:0 | A:0 | D:0 | I:0 | 0/0/156234312 | syncs a few seconds ago (0.0% 0.0%)
2021-12-15T12:33:03: [00_master] No blocks processed! Indexer will stop in 295 seconds!
2021-12-15T12:33:08: [00_master] W:13 | R:2 | C:0 | A:0 | D:0 | I:0 | 0/0/156234312 | syncs a few seconds ago (0.0% 0.0%)
2021-12-15T12:33:08: [00_master] No blocks processed! Indexer will stop in 290 seconds!
2021-12-15T12:33:13: [00_master] W:13 | R:2 | C:0 | A:0 | D:0 | I:0 | 0/0/156234312 | syncs a few seconds ago (0.0% 0.0%)
2021-12-15T12:33:13: [00_master] No blocks processed! Indexer will stop in 285 seconds!
2021-12-15T12:33:18: [00_master] W:13 | R:2 | C:0 | A:0 | D:0 | I:0 | 0/0/156234312 | syncs a few seconds ago (0.0% 0.0%)
2021-12-15T12:33:18: [00_master] No blocks processed! Indexer will stop in 280 seconds!

`hex_data` field in action is no longer encoded per the EOS encoding spec.

The route to get actions used to return the hex_data field as a hex string of the action data, encoded per the EOS encoding spec.

Now it returns a hex string that is the JSON encoding of the action data:

            act.action_trace.act.hex_data = Buffer.from(flatstr(JSON.stringify(action.act.data))).toString('hex');

The line of code in question is here.

How can we get the original, eos-specific encoding back?
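Since the line above encodes `hex_data` as the hex of the JSON-serialized action data, it can be decoded back to the parsed object without an ABI. Recovering the original Antelope binary encoding, however, requires re-serializing the parsed data against the contract's ABI with an ABI-aware serializer (e.g. eosjs' `Serialize.serializeActionData`); Hyperion no longer stores that form. A minimal sketch of the JSON round-trip (the transfer payload is a made-up example):

```javascript
// Sketch: reverse of Buffer.from(JSON.stringify(data)).toString('hex'),
// i.e. the encoding Hyperion now uses for hex_data.
function decodeHexData(hex) {
  return JSON.parse(Buffer.from(hex, 'hex').toString('utf8'));
}

// Hypothetical transfer payload, encoded the way Hyperion now encodes it:
const hexData = Buffer.from(
  JSON.stringify({ from: 'alice', to: 'bob', quantity: '1.0000 EOS', memo: '' })
).toString('hex');

const data = decodeHexData(hexData);
console.log(data.quantity); // "1.0000 EOS"
```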

resource_already_exists_exception on first pass / ABI scan

I get the following issue when attempting to start the Hyperion indexer configured for Proton mainnet:

2020-04-25T21:24:13: [00_master] --------- Hyperion Indexer 3.0.0 ---------
2020-04-25T21:24:13: [00_master] Using parser version 1.8
2020-04-25T21:24:13: [00_master] Chain: proton
2020-04-25T21:24:13: [00_master]
2020-04-25T21:24:13: -------------------
2020-04-25T21:24:13:  ABI SCAN MODE
2020-04-25T21:24:13: -------------------
2020-04-25T21:24:13: [00_master] Purging all proton queues!
2020-04-25T21:24:13: [00_master] Elasticsearch: 7.6.2 | Lucene: 8.4.0
2020-04-25T21:24:13: [00_master] Ingest client ready at http://127.0.0.1:9200/
2020-04-25T21:24:13: [00_master] Painless Update Script loaded!
2020-04-25T21:24:13: [00_master] Mapping added for @voteproducer
2020-04-25T21:24:13: [00_master] Updating index templates for proton...
2020-04-25T21:24:15: [00_master] 12 index templates updated
2020-04-25T21:24:15: [00_master] Creating index proton-action-v1-000001...
2020-04-25T21:24:45: ResponseError: resource_already_exists_exception
2020-04-25T21:24:45:     at IncomingMessage.<anonymous> (/opt/eosio/src/Hyperion-History-API/node_modules/@elastic/elasticsearch/lib/Transport.js:296:25)
2020-04-25T21:24:45:     at IncomingMessage.emit (events.js:327:22)
2020-04-25T21:24:45:     at endReadableNT (_stream_readable.js:1201:12)
2020-04-25T21:24:45:     at processTicksAndRejections (internal/process/task_queues.js:84:21) {
2020-04-25T21:24:45:   meta: {
2020-04-25T21:24:45:     body: { error: [Object], status: 400 },
2020-04-25T21:24:45:     statusCode: 400,
2020-04-25T21:24:45:     headers: {
2020-04-25T21:24:45:       'content-type': 'application/json; charset=UTF-8',
2020-04-25T21:24:45:       'content-length': '433'
2020-04-25T21:24:45:     },
2020-04-25T21:24:45:     warnings: null,
2020-04-25T21:24:45:     meta: {
2020-04-25T21:24:45:       context: null,
2020-04-25T21:24:45:       request: [Object],
2020-04-25T21:24:45:       name: 'elasticsearch-js',
2020-04-25T21:24:45:       connection: [Object],
2020-04-25T21:24:45:       attempts: 1,
2020-04-25T21:24:45:       aborted: false
2020-04-25T21:24:45:     }
2020-04-25T21:24:45:   }
2020-04-25T21:24:45: }

My Hyperion configs are below:

chains/proton.config.json

{
  "api": {
    "chain_name": "proton",
    "server_addr": "127.0.0.1",
    "server_port":  7000,
    "server_name": "127.0.0.1:7000",
    "provider_name": "EOS Detroit",
    "provider_url": "https://eosdetroit.io",
    "chain_logo_url": "https://bloks.io/img/chains/proton.png",
    "enable_caching": true,
    "cache_life": 1,
    "limits": {
      "get_actions": 1000,
      "get_voters": 100,
      "get_links": 1000,
      "get_deltas": 1000
    },
    "access_log": false,
    "enable_explorer": false
  },
  "settings": {
    "preview": false,
    "chain": "proton",
    "eosio_alias": "eosio",
    "parser": "1.8",
    "auto_stop": 300,
    "index_version": "v1",
    "debug": false,
    "rate_monitoring": true,
    "bp_logs": false,
    "bp_monitoring": false,
    "ipc_debug_rate": 60000,
    "allow_custom_abi": false
  },
  "blacklists": {
    "actions": [],
    "deltas": []
  },
  "whitelists": {
    "actions": [],
    "deltas": []
  },
  "scaling": {
    "batch_size": 10000,
    "queue_limit": 50000,
    "readers": 1,
    "ds_queues": 1,
    "ds_threads": 1,
    "ds_pool_size": 1,
    "indexing_queues": 1,
    "ad_idx_queues": 1,
    "max_autoscale": 4,
    "auto_scale_trigger": 20000
  },
  "indexer": {
    "start_on": 0,
    "stop_on": 0,
    "rewrite": false,
    "purge_queues": true,
    "live_reader": false,
    "live_only_mode": false,
    "abi_scan_mode": true,
    "fetch_block": true,
    "fetch_traces": true,
    "disable_reading": false,
    "disable_indexing": false,
    "process_deltas": true,
    "max_inline": 20
  },
  "features": {
    "streaming": {
      "enable": false,
      "traces": false,
      "deltas": false
    },
    "tables": {
      "proposals": true,
      "accounts": true,
      "voters": true,
      "userres": false,
      "delband": false
    },
    "index_deltas": true,
    "index_transfer_memo": true,
    "index_all_deltas": true
  },
  "prefetch": {
    "read": 50,
    "block": 100,
    "index": 500
  }
}

ecosystem.config.js

const {addApiServer, addIndexer} = require("./definitions/ecosystem_settings");

module.exports = {
    apps: [
        addIndexer('proton'), // Index chain name
        addApiServer('proton', 4) // API chain name, API threads number
    ]
};

connections.json

{
  "amqp": {
    "host": "127.0.0.1:5672",
    "api": "127.0.0.1:15672",
    "user": "admin",
    "pass": "[REDACTED]",
    "vhost": "hyperion"
  },
  "elasticsearch": {
    "host": "127.0.0.1:9200",
    "ingest_nodes": ["127.0.0.1:9200"],
    "user": "elastic",
    "pass": "[REDACTED]
  },
  "redis": {
    "host": "127.0.0.1",
    "port": "6379"
  },
  "chains": {
    "proton": {
      "name": "proton prod",
      "chain_id": "384da888112027f0321850a169f737c33e53b388aad48b5adace4bab97f437e0",
      "http": "http://127.0.0.1:8282",
      "ship": "ws://127.0.0.1:8887"
    }
  }
}
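This error usually means a previous, interrupted run already created `proton-action-v1-000001`, so the indexer's create call comes back with a 400. One possible workaround, assuming this is a fresh sync with no indexed data worth keeping, is to inspect and delete the stale index before restarting the indexer:

```shell
# Assumes a fresh sync where no indexed data needs to be preserved.
# List the existing proton indices:
curl -s "http://127.0.0.1:9200/_cat/indices/proton-*?v" -u elastic:<password>

# Delete the stale action index so the indexer can recreate it:
curl -X DELETE "http://127.0.0.1:9200/proton-action-v1-000001" -u elastic:<password>
```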

pathComponents.at is not a function

I requested the v1-compatible API and got an error:
curl --header 'Content-type: application/json' -X POST "http://127.0.0.1:7000/v1/chain/get_block" -d '{"block_num_or_id": "281400906"}'
The response is:
{ statusCode: 500, error: "Internal Server Error", message: "pathComponents.at is not a function" }
How can this be solved?
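`Array.prototype.at` was added in V8 9.2 / Node.js 16.6.0, so on an older Node runtime any `.at()` call throws exactly this TypeError; upgrading the Node version running the API server should resolve it. A quick check and an equivalent fallback (`pathComponents` here is a hypothetical example value, not Hyperion's actual variable):

```javascript
// Array.prototype.at exists only on Node >= 16.6 (V8 9.2). On older
// runtimes it is undefined, producing "pathComponents.at is not a function".
const pathComponents = ['v1', 'chain', 'get_block']; // hypothetical example

// Version-independent equivalent of pathComponents.at(-1):
function at(arr, i) {
  return arr[i < 0 ? arr.length + i : i];
}

console.log(at(pathComponents, -1)); // "get_block"
```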

Feature request: Emulate the /v1/trace_api/get_block API call

Hyperion does a good job at emulating other APIs. Here is another one that could be done:

curl -d '{"block_num": 115296023}' http://localhost:8888/v1/trace_api/get_block | jq

Results:

{
  "id": "06df47178961040c4b8e9e3f835b88fd4076c42523df1fad44629e847d5f57a2",
  "number": 115296023,
  "previous_id": "06df47165c20a2c5f2fd6dc9d5761575fd853e8476835df8eff6faf518311bbe",
  "status": "irreversible",
  "timestamp": "2020-04-13T13:46:18.000Z",
  "producer": "whaleex.com",
  "transactions": [
    {
      "id": "a086c5b0df89d99f103632b2b9105daaaaad88296a0ce9c2d1184792f616f91f",
      "actions": [
        {
          "receiver": "eosio",
          "account": "eosio",
          "action": "onblock",
          "authorization": [
            {
              "account": "eosio",
              "permission": "active"
            }
          ],
          "data": "53534e4c500f7598aa7c4dc6000006df471519087beca24e2b257ac20bbcdfa756acf2b98213f22c69354a2093a30a016618bd4f143cad42fe3f1d6f5a38e3b0350a79c384618fa7c37fab35d6fc3718b12962c28a21ab0b6ebe12c2bef93a7ba4bd88e5cafe2910255e21e71e4a9b0600000000"
        }
      ]
    },
    {
      "id": "17d617befee5367e039f2c76bc890a625788e92061fe96de570d64a610851de3",
      "actions": [
        {
          "receiver": "eosio.token",
          "account": "eosio.token",
          "action": "transfer",
          "authorization": [
            {
              "account": "smozpwowlvht",
              "permission": "active"
            }
          ],
          "data": "90db8e9cf2faa9c4301d456a524c9353010000000000000004454f5300000000053139373236"
...
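Since Hyperion already stores flattened actions carrying their transaction id and block number, an emulation of this endpoint could fetch all actions for a block and regroup them by transaction. A sketch of the regrouping step — the input field names (`trx_id`, `receiver`, `act.account`, `act.name`, `act.authorization`) are assumptions modeled on Hyperion's action documents, not a tested implementation:

```javascript
// Sketch: regroup flattened Hyperion-style actions into the
// trace_api "transactions" shape. Field names are assumptions.
function groupActionsByTransaction(actions) {
  const byTrx = new Map();
  for (const a of actions) {
    if (!byTrx.has(a.trx_id)) {
      byTrx.set(a.trx_id, { id: a.trx_id, actions: [] });
    }
    byTrx.get(a.trx_id).actions.push({
      receiver: a.receiver,
      account: a.act.account,
      action: a.act.name,
      authorization: a.act.authorization,
    });
  }
  return [...byTrx.values()];
}

// Example: two flattened actions (a transfer and its notification)
// from the same transaction collapse into one transaction entry.
const grouped = groupActionsByTransaction([
  { trx_id: 'aaa', receiver: 'eosio.token', act: { account: 'eosio.token', name: 'transfer', authorization: [] } },
  { trx_id: 'aaa', receiver: 'alice', act: { account: 'eosio.token', name: 'transfer', authorization: [] } },
]);
console.log(grouped.length); // 1
console.log(grouped[0].actions.length); // 2
```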
