
go-eth2-client's People

Contributors

0xtylerholmes, alrevuelta, avalonche, ciaranmcveigh5, corverroos, draganm, galrogozinski, gpsanant, mcdee, moshe-blox, olegshmuelov, pk910, samcm, savid, xenowits


go-eth2-client's Issues

Validator state is calculated, not using API status

I would like to understand better how the library determines the status of a validator. In our use case with Prysm, we are seeing a validator returned with status Pending_queued on Pyrmont even though it is already attesting. My expectation would be "Active_ongoing".

For example, 0xaf63f69ea465a0804dbabed347097228878cba8134c1d04156acc62ab0f3fbf0c1a8994f9521d251f13e69e42da9ac99 is active and has been for some time, yet the library returns its status as Pending_queued. This could certainly be user error, but I just wanted to check.

Zerolog logger isn't threadsafe

We cannot run our tests with -race since it detects a race inside go-eth2-client. This is due to the use of a global zerolog.Logger that is not threadsafe. This global logger is used concurrently in the http package.

To reproduce, run: go test -race ./... in this repo.
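For illustration only, here is a minimal standalone program exhibiting the same class of race: a package-level zerolog logger is reassigned by one goroutine while another logs through it. This is a sketch of the pattern, not code from the http package; run it with go run -race to see the detector fire.

    package main

    import (
        "os"
        "sync"

        "github.com/rs/zerolog"
    )

    // Package-level logger, mirroring the global-logger pattern.
    var log = zerolog.New(os.Stdout)

    func main() {
        var wg sync.WaitGroup
        wg.Add(2)
        go func() {
            defer wg.Done()
            // Unsynchronised write to the package-level variable...
            log = log.Level(zerolog.DebugLevel)
        }()
        go func() {
            defer wg.Done()
            // ...raced against a concurrent read when logging.
            log.Info().Msg("hello")
        }()
        wg.Wait()
    }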

Error: invalid value for inactivity score

I'm getting an invalid value for inactivity score error from time to time. A bit odd, since this value should be zero unless the network hits non-finality. I haven't managed to reproduce it on demand, however; it appears roughly every 30 minutes with an Infura endpoint.

On the other hand, shouldn't ParseUint use a bit size of 64 here, since it's a uint64 type according to the spec? Not sure if this is the issue, though.
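For illustration, a standalone snippet showing how the bit size affects ParseUint. InactivityScores are uint64 in the spec, so a bit size of 64 parses these values fine; the bitSize 8 call below is only there to demonstrate how a smaller size produces the same "value out of range" error as in the logs.

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        raw := "1600"
        // A too-small bit size fails: 1600 does not fit in a uint8.
        if _, err := strconv.ParseUint(raw, 10, 8); err != nil {
            fmt.Println("bitSize 8:", err) // value out of range
        }
        // Inactivity scores are uint64 per the spec, so parse with bitSize 64.
        score, err := strconv.ParseUint(raw, 10, 64)
        if err != nil {
            panic(err)
        }
        fmt.Println("bitSize 64:", score)
    }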

Some of the errors I got:

failed to parse altair beacon state: invalid value for inactivity score 0: strconv.ParseUint: parsing \"1600\": value out of range"
failed to parse altair beacon state: invalid value for inactivity score 0: strconv.ParseUint: parsing \"1612\": value out of range"
failed to parse altair beacon state: invalid value for inactivity score 0: strconv.ParseUint: parsing \"1640\": value out of range"
failed to parse altair beacon state: invalid value for inactivity score 0: strconv.ParseUint: parsing \"1712\": value out of range"
failed to parse altair beacon state: invalid value for inactivity score 0: strconv.ParseUint: parsing \"1732\": value out of range"
failed to parse altair beacon state: invalid value for inactivity score 0: strconv.ParseUint: parsing \"1732\": value out of range"
failed to parse altair beacon state: invalid value for inactivity score 0: strconv.ParseUint: parsing \"1820\": value out of range"

I will update as soon as I get more.

Caching of "static" information should time out

The http module caches what it considers to be static information. However, connections to clients can currently drop and reattach with the client's data having changed in the meantime, and these changes are not reflected in the http module's connection.

To address this, cached static information should be discarded and refreshed on a periodic basis. This would also cover situations where beacon nodes update their configuration and similar information without a restart.
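One possible shape for this is a simple TTL wrapper around each cached value, refreshed on read once it expires. This is a sketch only; the names and structure are illustrative, not the http module's code.

    package cache

    import (
        "sync"
        "time"
    )

    // ttlValue holds a cached value that is considered stale after ttl elapses.
    type ttlValue[T any] struct {
        mu      sync.Mutex
        value   T
        fetched time.Time
        ttl     time.Duration
        fetch   func() (T, error)
    }

    // Get returns the cached value, refreshing it from the client if it has expired.
    func (c *ttlValue[T]) Get() (T, error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if time.Since(c.fetched) > c.ttl {
            v, err := c.fetch()
            if err != nil {
                return c.value, err
            }
            c.value = v
            c.fetched = time.Now()
        }
        return c.value, nil
    }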

Add support for `produceBlockV3`

This is to track the story of supporting produceBlockV3 API. More context can be found here.

This feature is particularly important for dependent projects such as charon, to avoid over-engineering a "private" version of the API wrapper.

Thank you!

ResourceExhausted: grpc: received message larger than max

Hello,
I'm syncing a Prysm node into a database using chaind, and I get these errors persistently.
I tried to hack MaxCallRecvMsgSize, but no luck.

Dec 16 16:22:26 chaind[110813]: {"level":"warn","service":"proposerduties","impl":"standard","epoch":8,"error":"failed to fetch proposer duties: call to GetDuties() failed: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5557752 vs. 4194304)","time":"2020-12-16T16:22:26Z","message":"Failed to update proposer duties"}
Dec 16 16:27:56 chaind[110813]: {"level":"warn","service":"proposerduties","impl":"standard","epoch":9,"error":"failed to fetch proposer duties: call to GetDuties() failed: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5557752 vs. 4194304)","time":"2020-12-16T16:27:56Z","message":"Failed to update proposer duties"}
Dec 16 16:32:40 chaind[110813]: {"level":"warn","service":"proposerduties","impl":"standard","epoch":10,"error":"failed to fetch proposer duties: call to GetDuties() failed: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5557752 vs. 4194304)","time":"2020-12-16T16:32:40Z","message":"Failed to update proposer duties"}
Dec 16 16:38:04 chaind[110813]: {"level":"warn","service":"proposerduties","impl":"standard","epoch":11,"error":"failed to fetch proposer duties: call to GetDuties() failed: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5557752 vs. 4194304)","time":"2020-12-16T16:38:04Z","message":"Failed to update proposer duties"}
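For reference, in a Go gRPC client the relevant knob is MaxCallRecvMsgSize on the dial options. Something along these lines would raise the default 4 MiB receive limit; this is a sketch assuming direct control over how the Prysm connection is dialled, not a change that go-eth2-client currently exposes.

    package main

    import (
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func dial(address string) (*grpc.ClientConn, error) {
        // Raise the default 4 MiB receive limit to 128 MiB so large duty/state
        // responses are not rejected with ResourceExhausted.
        return grpc.Dial(address,
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(128*1024*1024)),
        )
    }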

Unable to parse blocks without version headers

The new block version parsing code assumes the presence of the Eth-Consensus-Version header, which according to the spec is required. However, not all providers send it back (e.g. QuickNode). It would be nice to fall back to the previous behaviour of partial unmarshalling to find the version when the header is absent.
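A fallback along these lines would keep working when the header is missing: use the Eth-Consensus-Version header if present, otherwise partially unmarshal the body to find the version field. This is a sketch; the struct and function names are illustrative, not the library's actual code.

    package blocks

    import (
        "encoding/json"
        "errors"
        "net/http"
    )

    // versionEnvelope partially unmarshals just enough of the response to find the version.
    type versionEnvelope struct {
        Version string `json:"version"`
    }

    // blockVersion returns the consensus version from the header if present,
    // falling back to the version field in the JSON body.
    func blockVersion(header http.Header, body []byte) (string, error) {
        if v := header.Get("Eth-Consensus-Version"); v != "" {
            return v, nil
        }
        var env versionEnvelope
        if err := json.Unmarshal(body, &env); err != nil {
            return "", err
        }
        if env.Version == "" {
            return "", errors.New("consensus version not found in header or body")
        }
        return env.Version, nil
    }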

Corrupted "Good" Capella beacon block data

The "Good" test beacon block data
have incorrect size for some of the body.deposits proofs

instead of 32 bytes the test proof data has 33 bytes.
e.g "0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20" --> 33 bytes

How to reproduce:

  1. Marshal the block to SSZ
  2. Unmarshal fails

I saw the issue for a Capella block, but it could affect all other block versions as well.
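A quick way to spot the bad fixtures is to validate the length of each deposit proof element before SSZ marshalling. This is a standalone sketch, independent of the repo's types.

    package main

    import "fmt"

    // checkProof verifies every element of a deposit proof is exactly 32 bytes,
    // which is what the SSZ schema expects.
    func checkProof(proof [][]byte) error {
        for i, node := range proof {
            if len(node) != 32 {
                return fmt.Errorf("proof element %d has %d bytes, want 32", i, len(node))
            }
        }
        return nil
    }

    func main() {
        bad := [][]byte{make([]byte, 33)} // 33-byte element, as in the corrupted test data
        fmt.Println(checkProof(bad))
    }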

[BUG] GET failed with status 429: {"message":"grpc: received message larger than max (70716954 vs. 4194304)

I'm using ethdo, but it calls go-eth2-client...

When trying to generate an offline file, I get a 429 error with this command

./ethdo validator credentials set --prepare-offline --connection (eth2 beacon node:3500)

Connections to remote beacon nodes should be secure. This warning can be silenced with --allow-insecure-connections
Error: failed to process: failed to obtain validators: failed to request validators: GET failed with status 429: {"message":"grpc: received message larger than max (70716973 vs. 4194304)","code":429}

Google seems to indicate that an option should be set:

I solved this problem by assigning the grpc.max_message_length when creating the gRPC stub

# create the gRPC stub
options = [('grpc.max_message_length', 100 * 1024 * 1024)]
channel = grpc.insecure_channel(server_url, options=options)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

Thanks for the pointer @Eloring ! I needed this option to get it to work:

options = [('grpc.max_receive_message_length', max_message_length)]

I traced the code as best I could, and I can see where the HTTP request is sent, but I can't see where these options would be set...

Keep URL prefix when calling events API

When using a URL like https://domain/prefix/, all the requests I've tested work fine except for the events one, which replaces the path with /eth/v1/events.

This is probably because the other GET/POST requests append the request URL to the base path, whereas the events request uses ResolveReference, which replaces the entire path because the events endpoint starts with a /.
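The behaviour is easy to see with net/url directly: ResolveReference with an absolute-path reference discards the /prefix/ segment, whereas a relative reference keeps it. This is a standalone sketch, not the library's code.

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        base, _ := url.Parse("https://domain/prefix/")

        // Absolute-path reference: the /prefix/ segment is dropped.
        abs, _ := url.Parse("/eth/v1/events?topics=head")
        fmt.Println(base.ResolveReference(abs)) // https://domain/eth/v1/events?topics=head

        // Relative reference: the prefix is preserved.
        rel, _ := url.Parse("eth/v1/events?topics=head")
        fmt.Println(base.ResolveReference(rel)) // https://domain/prefix/eth/v1/events?topics=head
    }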

Loading validator set does not work for prysm clients with default flags

Loading the validator set does not currently work with Prysm clients unless special flags are supplied.

The ValidatorsProvider.Validators function uses the /eth/v2/debug/beacon/states/{state_id} endpoint to fetch the full state.
I guess this was done because the regular /eth/v1/beacon/states/{state_id}/validators endpoint does not support SSZ encoding?

But the /debug/ APIs are not available on Prysm without supplying the --enable-debug-rpc-endpoints client flag.

It would be great to have a way to disable use of the /debug/ endpoint if it's not available :)

`prysmgrpc` doesn't implement `SignedBeaconBlockProvider`

The SignedBeaconBlockProvider interface was recently changed to return *spec.VersionedSignedBeaconBlock instead of *spec.SignedBeaconBlock.

The http package implements this change, but prysmgrpc still implements the previous interface.
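Until prysmgrpc catches up, a caller can check at runtime whether the connected service implements the updated provider by type-asserting against the interface described above. This is a sketch; the import paths and method signature follow the interface as described in this issue and may differ in later versions.

    package example

    import (
        "context"
        "errors"

        eth2client "github.com/attestantio/go-eth2-client"
        "github.com/attestantio/go-eth2-client/spec"
    )

    // versionedBlock fetches a block only if the connected service supports the
    // updated, versioned provider interface.
    func versionedBlock(ctx context.Context, service eth2client.Service, blockID string) (*spec.VersionedSignedBeaconBlock, error) {
        provider, ok := service.(eth2client.SignedBeaconBlockProvider)
        if !ok {
            // prysmgrpc currently lands here, as it still implements the previous interface.
            return nil, errors.New("client does not support versioned signed beacon blocks")
        }
        return provider.SignedBeaconBlock(ctx, blockID)
    }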

How can we call `SignedBeaconBlock` when connected to Prysm? Or is this still a work in progress?

Thanks!

Lighthouse does not accept multiple topics in event subscription in the current format

Hi,

While tinkering with the Events handler, I tried subscribing to a bunch of events, e.g. my topics array looked like this:

 ([]string) (len=4 cap=4) {
  (string) (len=4) "head",
  (string) (len=11) "chain_reorg",
  (string) (len=5) "block",
  (string) (len=11) "attestation"
 }

However, after a long timeout I finally got this error:

{"level":"error","service":"client","impl":"http","error":"could not connect to stream: Bad Request","time":"2021-12-31T14:37:39Z","message":"Failed to subscribe to event stream"}

After enabling trace I noticed this:

{"level":"trace","service":"client","impl":"http","url":"http://lighthouse_beacon_node:5052/eth/v1/events?topics=head&topics=chain_reorg&topics=block&topics=attestation","time":"2021-12-31T14:40:38Z","message":"GET request to events stream"}

Which matches the source:

reference, err := url.Parse(fmt.Sprintf("/eth/v1/events?topics=%s", strings.Join(topics, "&topics=")))

However, at least the Lighthouse beacon node that I'm running locally does not appreciate this style:

curl -v "http://192.168.1.1:5052/eth/v1/events?topics=block&topics=head"
*   Trying 192.168.1.1:5052...
* Connected to 192.168.1.1 (192.168.1.1) port 5052 (#0)
> GET /eth/v1/events?topics=block&topics=head HTTP/1.1
> Host: 192.168.1.1:5052
> User-Agent: curl/7.80.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 400 Bad Request
< content-type: application/json
< server: Lighthouse/v2.0.1-fff01b2+/x86_64-linux
< content-length: 90
< date: Fri, 31 Dec 2021 15:17:19 GMT
<
* Connection #0 to host 192.168.1.1 left intact
{"code":400,"message":"BAD_REQUEST: invalid query: Invalid query string","stacktraces":[]}

While it works fine if you only supply a single topic:

curl -v "http://192.168.1.1:5052/eth/v1/events?topics=block"
*   Trying 192.168.1.1:5052...
* Connected to 192.168.1.1 (192.168.1.1) port 5052 (#0)
> GET /eth/v1/events?topics=block HTTP/1.1
> Host: 192.168.1.1:5052
> User-Agent: curl/7.80.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: text/event-stream
< cache-control: no-cache
< server: Lighthouse/v2.0.1-fff01b2+/x86_64-linux
< transfer-encoding: chunked
< date: Fri, 31 Dec 2021 15:22:21 GMT
<
event:block
data:{"slot":"2845010","block":"0x0f3fc168f49846788fb5a397d5b75a9972917f7613ecf8f4d54f32d6b134469d"}

It also handles comma-separated topics just fine:

curl -v "http://192.168.1.1:5052/eth/v1/events?topics=head,block"
*   Trying 192.168.1.1:5052...
* Connected to 192.168.1.1 (192.168.1.1) port 5052 (#0)
> GET /eth/v1/events?topics=head,block HTTP/1.1
> Host: 192.168.1.1:5052
> User-Agent: curl/7.80.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: text/event-stream
< cache-control: no-cache
< server: Lighthouse/v2.0.1-fff01b2+/x86_64-linux
< transfer-encoding: chunked
< date: Fri, 31 Dec 2021 15:22:52 GMT
<
event:block
data:{"slot":"2845013","block":"0x68b08ab5edbe632b75dd048487be0ae2e031f19513f5f4b3f469803b0f42ae1e"}

event:head
data:{"slot":"2845013","block":"0x68b08ab5edbe632b75dd048487be0ae2e031f19513f5f4b3f469803b0f42ae1e","state":"0xdeb1b1c86cf4e7d59a3d554e38d62b43156690bb06567512b16937e88401ef30","current_duty_dependent_root":"0x28d2b8de90c26971978929f3ac808e627e51ae9e96d317ed6842941bbaa27666","previous_duty_dependent_root":"0xb26c6b4c0ecf01841e112a811918f90ca6943387c82497833006099c28824bd9","epoch_transition":false}

Builder API: Validator Registration

To be used by Obol's Charon client.

Validator clients have to register with a relay to gain access to the builder network. They are recommended to re-register every epoch.

The "validator registration" section reference in "Builder -- Honest Validator" has additional details

https://github.com/ethereum/builder-specs/blob/main/specs/validator.md#validator-registration

The specs for the payload that is sent with this registration can be found in the link below (ValidatorRegistrationV1 & SignedValidatorRegistrationV1)

https://github.com/ethereum/builder-specs/blob/main/specs/builder.md#validatorregistrationv1
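For reference, the registration payload in the builder spec is small. A Go sketch of the two containers might look like this; the field names follow the spec linked above, while the exact Go types used in this repo may differ.

    package builder

    // ValidatorRegistrationV1 mirrors the builder-specs ValidatorRegistrationV1 container.
    type ValidatorRegistrationV1 struct {
        FeeRecipient [20]byte // execution address to receive fees
        GasLimit     uint64
        Timestamp    uint64
        Pubkey       [48]byte // BLS public key of the validator
    }

    // SignedValidatorRegistrationV1 wraps the registration with the validator's signature.
    type SignedValidatorRegistrationV1 struct {
        Message   *ValidatorRegistrationV1
        Signature [96]byte // BLS signature over the registration message
    }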

Happy to draft up a rough PR using "BlindedBeaconBlock" references in the repo as a template

I have made a start at implementing the changes in #19.

Thanks

Return metadata fields for relevant endpoints

As per the latest beacon-APIs spec, some endpoints contain execution_optimistic and dependent_root fields as part of their response object. These fields were added as part of this PR.

As of now go-eth2-client doesn't support these fields in some of the relevant endpoints.

  • At Obol, while integrating charon with the Lodestar VC, we found that on startup, when querying for duties, it expects these fields to be present.
  • The dependent_root field is required to detect reorgs: validator clients compare the dependent_root returned in the getAttesterDuties beacon API response with the dependent roots received from SSE events on the head topic, and if the two differ they call getAttesterDuties again (a sketch of this check follows below).
  • The Lighthouse VC also expects this field to be present, especially in GET /eth/v1/beacon/blocks/{block_id}/root for signing sync committee messages.

In order to support all validator clients and to conform with the latest spec, we suggest returning these fields as part of the response of all the relevant endpoints.
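The dependent_root comparison described above is straightforward once the field is exposed. Here is a sketch of the check a validator client performs; the names are illustrative, not go-eth2-client API.

    package example

    // attesterDuties holds the duties for an epoch along with the dependent root
    // returned in the getAttesterDuties response.
    type attesterDuties struct {
        DependentRoot [32]byte
        // ... duties ...
    }

    // needsRefresh reports whether duties must be re-fetched: when the dependent
    // root from the duties response differs from the dependent root carried on the
    // SSE head event, a reorg has invalidated the duties.
    func (d *attesterDuties) needsRefresh(headDependentRoot [32]byte) bool {
        return d.DependentRoot != headDependentRoot
    }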

For more details:

Prysm ValidatorsByPubKey returns 0 balances for all but last pubkey

Given the following

if provider, isProvider := b.service.(ValidatorsProvider); isProvider {
    validators, err := provider.ValidatorsByPubKey(ctx, "head", pubkeys)
}

Only the first validator in the returned set has the correct balance set; all the remaining balances are set to 0.

On Pyrmont I tested with the following keys

b1aa1fbe5851d7477ba12042f05bf406771471a118252bb1d455a184af23a4f317d854668f683aba629dcd3f698ba7b7
a7e6da76277e0ab2fbfc69ce532fa71cf7ea977102042ac5ed9b4ea6adf1346a56bce68d8f731d682d6eeaa8cab06cc8
90c39607ff913c77b1d5565143d2d75a16e07ef32c943ea055aef73af66c2d430c0f59980041bd99b55bac67dd597cf4

and got

{
  "110293": {
    "index": "110293",
    "balance": "161027066386",
    "status": "Active_ongoing",
    "validator": {
      "pubkey": "0x90c39607ff913c77b1d5565143d2d75a16e07ef32c943ea055aef73af66c2d430c0f59980041bd99b55bac67dd597cf4",
      "withdrawal_credentials": "0x00ea4c7b221960571961b8e4b9dff539d650aaadbe85362f37879619dd5b872e",
      "effective_balance": "32000000000",
      "slashed": false,
      "activation_eligibility_epoch": "5094",
      "activation_epoch": "5174",
      "exit_epoch": "18446744073709551615",
      "withdrawable_epoch": "18446744073709551615"
    }
  },
  "110532": {
    "index": "110532",
    "balance": "0",
    "status": "Active_ongoing",
    "validator": {
      "pubkey": "0xa7e6da76277e0ab2fbfc69ce532fa71cf7ea977102042ac5ed9b4ea6adf1346a56bce68d8f731d682d6eeaa8cab06cc8",
      "withdrawal_credentials": "0x00575239c1b43b616da1bb476f3b0f730f24e867ec445d35a4aff4ae1898e358",
      "effective_balance": "32000000000",
      "slashed": false,
      "activation_eligibility_epoch": "5555",
      "activation_epoch": "5565",
      "exit_epoch": "18446744073709551615",
      "withdrawable_epoch": "18446744073709551615"
    }
  },
  "117132": {
    "index": "117132",
    "balance": "0",
    "status": "Active_ongoing",
    "validator": {
      "pubkey": "0xb1aa1fbe5851d7477ba12042f05bf406771471a118252bb1d455a184af23a4f317d854668f683aba629dcd3f698ba7b7",
      "withdrawal_credentials": "0x0077eff8b10549d5f8f5a0bd1a7a3e3bb679086489d5838bf27694647f85c3bd",
      "effective_balance": "32000000000",
      "slashed": false,
      "activation_eligibility_epoch": "16167",
      "activation_epoch": "16178",
      "exit_epoch": "18446744073709551615",
      "withdrawable_epoch": "18446744073709551615"
    }
  }
}

When passing one key at a time, I get the following results

{
  "117132": {
    "index": "117132",
    "balance": "192724520139",
    "status": "Active_ongoing",
    "validator": {
      "pubkey": "0xb1aa1fbe5851d7477ba12042f05bf406771471a118252bb1d455a184af23a4f317d854668f683aba629dcd3f698ba7b7",
      "withdrawal_credentials": "0x0077eff8b10549d5f8f5a0bd1a7a3e3bb679086489d5838bf27694647f85c3bd",
      "effective_balance": "32000000000",
      "slashed": false,
      "activation_eligibility_epoch": "16167",
      "activation_epoch": "16178",
      "exit_epoch": "18446744073709551615",
      "withdrawable_epoch": "18446744073709551615"
    }
  }
}
{
  "110532": {
    "index": "110532",
    "balance": "127042406111",
    "status": "Active_ongoing",
    "validator": {
      "pubkey": "0xa7e6da76277e0ab2fbfc69ce532fa71cf7ea977102042ac5ed9b4ea6adf1346a56bce68d8f731d682d6eeaa8cab06cc8",
      "withdrawal_credentials": "0x00575239c1b43b616da1bb476f3b0f730f24e867ec445d35a4aff4ae1898e358",
      "effective_balance": "32000000000",
      "slashed": false,
      "activation_eligibility_epoch": "5555",
      "activation_epoch": "5565",
      "exit_epoch": "18446744073709551615",
      "withdrawable_epoch": "18446744073709551615"
    }
  }
}
{
  "110293": {
    "index": "110293",
    "balance": "161027066386",
    "status": "Active_ongoing",
    "validator": {
      "pubkey": "0x90c39607ff913c77b1d5565143d2d75a16e07ef32c943ea055aef73af66c2d430c0f59980041bd99b55bac67dd597cf4",
      "withdrawal_credentials": "0x00ea4c7b221960571961b8e4b9dff539d650aaadbe85362f37879619dd5b872e",
      "effective_balance": "32000000000",
      "slashed": false,
      "activation_eligibility_epoch": "5094",
      "activation_epoch": "5174",
      "exit_epoch": "18446744073709551615",
      "withdrawable_epoch": "18446744073709551615"
    }
  }
}

Issue when no active multi providers

When there are no active providers in a multi client, doCall returns nil, nil, which causes unexpected behaviour.

We at Obol encountered this after adding support for multiple beacon nodes using the multi package. Most of our tests and setups still have only a single beacon node. We configure our eth2 clients with a 2s timeout. This results in sporadic timeouts from beacon nodes in the wild (most calls are very fast, but sometimes, we suspect around epoch transitions, calls take longer and time out). This results in the only client being disabled.

Since these timeouts are sporadic and subsequent calls will succeed, but we have only a single client, we suggest allowing "falling back" to inactive providers when no active providers are available. This will result in slower failures when all providers are actually down, but it will recover seamlessly if one of the "inactive" providers is actually active, as in our case. This is similar to AWS load balancers that "fail open" when all targets are unhealthy.
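A sketch of the "fail open" behaviour we are suggesting, with illustrative names rather than the multi package's internals: if no active provider succeeds, retry the call across the inactive ones instead of returning nothing.

    package multi

    import (
        "context"
        "errors"
    )

    // doCall tries the active clients first; if none succeed (for example after
    // sporadic timeouts disabled the only client), it falls back to the inactive
    // clients rather than returning nil, nil.
    func doCall[T any](ctx context.Context, active, inactive []func(context.Context) (T, error)) (T, error) {
        var zero T
        for _, call := range active {
            if res, err := call(ctx); err == nil {
                return res, nil
            }
        }
        // "Fail open": no active provider succeeded, so try the inactive ones.
        for _, call := range inactive {
            if res, err := call(ctx); err == nil {
                return res, nil
            }
        }
        return zero, errors.New("no providers available")
    }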

Some spec values are not checked prior to start

I'm adding this tool to some testnet testing infrastructure which uses different presets (minimal) and some custom config values. Because of this, some of the parameters assumed when parsing some of the objects are not valid. We should check the spec of the node and update these parameters before using endpoints that parse the beacon state.
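For illustration, a check along these lines would fetch the node's spec and size the parsers from it instead of assuming mainnet preset values. This is a sketch; the SpecProvider-style interface is defined locally here, and the actual library call and its return types may differ.

    package example

    import (
        "context"
        "fmt"
    )

    // specProvider is the minimal surface needed here: a map of the node's config values.
    type specProvider interface {
        Spec(ctx context.Context) (map[string]interface{}, error)
    }

    // slotsPerEpoch reads SLOTS_PER_EPOCH from the node's spec rather than
    // assuming the mainnet preset, so minimal-preset testnets parse correctly.
    func slotsPerEpoch(ctx context.Context, client specProvider) (uint64, error) {
        spec, err := client.Spec(ctx)
        if err != nil {
            return 0, err
        }
        v, ok := spec["SLOTS_PER_EPOCH"].(uint64)
        if !ok {
            return 0, fmt.Errorf("SLOTS_PER_EPOCH missing or of unexpected type %T", spec["SLOTS_PER_EPOCH"])
        }
        return v, nil
    }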

I will submit a PR with the workaround that I am currently using, but I am not sure it is the best route to accomplish this.
