ceremonyclient's Introduction

Quilibrium - Solstice

Quilibrium is a decentralized alternative to platform-as-a-service providers. This release is part of the Dusk release phases, which finalize with the full permissionless mainnet in version 2.0. Documentation for the underlying technology can be found at https://www.quilibrium.com/

Quick Start

Running production nodes from source is no longer recommended given build complexity. Please refer to our release information to obtain the latest version.

Running From Source

Builds are now a hybrid of Rust and Go, so you will need both Go 1.22 and the latest Rust toolchain with Cargo.
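A quick check of both toolchains before building (the expected Go version follows from the requirement above):

go version       # should report go1.22.x
cargo --version
rustc --version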

VDF

The VDF implementation is now in Rust, and requires GMP to build. On Mac, you can install GMP with brew (brew install gmp). On Linux, you will need to find the appropriate package for your distro.
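On Debian- and Ubuntu-based distros, for example, the development headers typically come from the libgmp-dev package (the package name may differ on other distributions):

sudo apt-get install libgmp-dev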

Install the go plugin for uniffi-rs:

cargo install uniffi-bindgen-go --git https://github.com/NordSecurity/uniffi-bindgen-go --tag v0.2.1+v0.25.0

Be sure to follow the PATH export given by the installer.

Build the Rust VDF implementation by navigating to the vdf folder and running ./generate.sh.
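Putting the steps together, a sketch of the VDF build sequence, assuming the repository is cloned at ~/ceremonyclient and cargo installed its binaries to the default ~/.cargo/bin:

export PATH="$HOME/.cargo/bin:$PATH"   # the PATH export suggested by the installer
cd ~/ceremonyclient/vdf
./generate.sh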

Node

Because of the Rust interop, be sure you have followed the above steps for the VDF before proceeding. Navigate to the node folder and run the following (making sure to update the path to your local copy of the repo):

CGO_LDFLAGS="-L/path/to/ceremonyclient/target/release -lvdf -ldl -lm" \
    CGO_ENABLED=1 \
    GOEXPERIMENT=arenas \
    go run ./... --signature-check=false

gRPC/REST Support

If you want to enable gRPC/REST, add the following entries to your config.yml:

listenGrpcMultiaddr: <multiaddr> 
listenRESTMultiaddr: <multiaddr>
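For example, to listen locally over TCP (values are illustrative; port 8337 appears elsewhere in this document as the gRPC port, and 8338 is an assumed choice for REST):

listenGrpcMultiaddr: /ip4/127.0.0.1/tcp/8337
listenRESTMultiaddr: /ip4/127.0.0.1/tcp/8338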

Please note: this interface, while read-only, is unauthenticated and not rate-limited. It is recommended that you only enable it if you are properly controlling access via a firewall, or only query via localhost.

Token Balance

In order to query the token balance of a running node, execute the following command from the node/ folder:

./node-$version-$platform -balance

The accumulated token balance will be printed to stdout in QUILs.

Note that this feature requires that gRPC support is enabled.

Community Section

This section contains community-built clients, applications, guides, etc.

Disclaimer: Because some of these may contain external links, do note that these are unofficial. Every dependency added imparts risk; if another project's GitHub account were compromised, for example, it could lead people down a dangerous or costly path. Proceed with caution as always, and refer to reliable members of the community to verify links before clicking or connecting your wallets.

1. The Q Guide - Beginners’ Guide

  • A detailed beginners' guide for how to set up a Quilibrium Node, created by @demipoet - link

Development

Please see the CONTRIBUTING.md file for more information on how to contribute to this repository.

License + Interpretation

Significant portions of Quilibrium's codebase depend on GPL-licensed code, mandating a minimum license of GPL; however, Quilibrium is licensed as AGPL to accommodate the scenario in which a cloud provider may wish to co-opt the network software. The AGPL allows such providers to do so, provided they are willing to contribute back the management code that interacts with the protocol and node software. To provide clarity, our interpretation applies to node provisioning and management tooling for deploying alternative networks, and not to applications which are deployed to the network, mainnet status monitors, or container deployments of mainnet nodes from the public codebase.

ceremonyclient's People

Contributors

0xluk, 0xozgur, 0xzoz, agostbiro, alchemydc, bayoumymac, branksypop, cassonmars, demipoet, fitblip, freekers, hhwill, littleblackcloud, mjessup, mscurtescu, ninj696, ohern24, paddingme, polo-obx, raykyri, scmart, sercancybervision, shawnharmsen, shyba, sirouk, talentbuilder, tjsturos, vectorisvector, xingqiwang, zephyrsailor

ceremonyclient's Issues

Frame number resets back to 1 without node crashing, stopping or restarting

I've observed that my node resets/drops back to frame number 1 without the node (application) itself crashing, stopping or restarting.
Logs are below. I'm running node version 1.4.9 on Ubuntu 22.04 LTS with 6 cores and 32GB RAM.

{"level":"info","ts":1710754361.5276842,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8459,"uncooperative_peers":47,"current_head_frame":5115}
{"level":"info","ts":1710754361.535362,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiD3hmtJI8gnUnZDZ6QA/B+VGl6By3YiIwpFE4S/j1TwRw=="}
{"level":"info","ts":1710754362.7055988,"caller":"master/master_clock_consensus_engine.go:372","msg":"slow bandwidth, scoring out","peer_id":"QmbJKxLDRv6P7K5NnRCBT7UYv2eXwFqDWF32TgBvv5axPW"}
{"level":"info","ts":1710754363.018633,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8458,"uncooperative_peers":48,"current_head_frame":5115}
{"level":"info","ts":1710754363.02434,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiDgOMRPpkwnokWMhfIulVCx5iRI/ZtVtkvGifd7oplQqA=="}
{"level":"info","ts":1710754364.0805676,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8457,"uncooperative_peers":49,"current_head_frame":5115}
{"level":"info","ts":1710754364.0873816,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiCpUZ/PPg0USN77T4WGflZwQ8vl+3V2deuxDLcD1Qfm6Q=="}
{"level":"info","ts":1710754364.5297134,"caller":"master/master_clock_consensus_engine.go:200","msg":"peers in store","peer_store_count":1894,"network_peer_count":447}
{"level":"info","ts":1710754366.0137274,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8456,"uncooperative_peers":50,"current_head_frame":5115}
{"level":"info","ts":1710754366.0199614,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiAv/HAyL9Flsw3UOBQaRiHbYvKn7u2SbZwfZz+Gwk88fA=="}
{"level":"info","ts":1710754369.7206352,"caller":"master/master_clock_consensus_engine.go:350","msg":"peer returned error","peer_id":"QmfWy1NofQLGMF6q8a17i5La3LDkfk4qr1LsPX36qHuLGm","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial context: failed to dial: failed to dial QmfWy1NofQLGMF6q8a17i5La3LDkfk4qr1LsPX36qHuLGm:\\n  * [/ip4/65.108.233.35/udp/8336/quic] dial backoff\\n  * [/ip4/65.108.233.35/udp/65487/quic] dial backoff\""}
{"level":"info","ts":1710754369.9970086,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8455,"uncooperative_peers":51,"current_head_frame":5115}
{"level":"info","ts":1710754370.0044847,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiAngF1c/S/6GX9SKphk7z/QqSe5XInecIOCPmyq93M0jw=="}
{"level":"info","ts":1710754371.250096,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8454,"uncooperative_peers":52,"current_head_frame":5115}
{"level":"info","ts":1710754371.257507,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiDXKrbqTGUCD+Sc+vK6dffF2hPyqrb24ZNYdnuGWlZJ0A=="}
{"level":"info","ts":1710754373.0870228,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8453,"uncooperative_peers":53,"current_head_frame":5115}
{"level":"info","ts":1710754373.0979981,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiBOzQz3PkCnoQs9Ec8NGrqemHgF0FWCYjFXUPYFZAIS4w=="}
{"level":"info","ts":1710754374.534998,"caller":"master/master_clock_consensus_engine.go:200","msg":"peers in store","peer_store_count":1899,"network_peer_count":456}
{"level":"info","ts":1710754374.562176,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8452,"uncooperative_peers":54,"current_head_frame":5115}
{"level":"info","ts":1710754374.5692165,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiCIPNntnue9HZi+CLjXy+XSMOTqmT8esBIMGPTLL2Zwjw=="}
{"level":"info","ts":1710754376.2222955,"caller":"master/master_clock_consensus_engine.go:372","msg":"slow bandwidth, scoring out","peer_id":"QmXqsn362E9bBgJasHPBLyxenD7Jn4FtVwKbYiH1gbCSwk"}
{"level":"info","ts":1710754376.6270893,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8451,"uncooperative_peers":55,"current_head_frame":5115}
{"level":"info","ts":1710754376.6348116,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiBfbC8WkKkrwPkvSb8zps5xBqQuAk1yNmbc/QXDS3m8LA=="}
{"level":"info","ts":1710754377.8331783,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8450,"uncooperative_peers":56,"current_head_frame":5115}
{"level":"info","ts":1710754377.8413193,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiAxbxCwVRd5swoTDN0WJuzBPBImMloeRVqczZauF6c7bg=="}
{"level":"info","ts":1710754379.916905,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8450,"uncooperative_peers":57,"current_head_frame":5115}
{"level":"info","ts":1710754379.926146,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiA9FLFmQFfiVgv42zPf00A1hiwsQ6dP3Ff4VYKhefPLPA=="}
{"level":"info","ts":1710754381.106154,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8449,"uncooperative_peers":58,"current_head_frame":5115}
{"level":"info","ts":1710754381.1137753,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiAWwVeeD4epbg8sJJrKSH7QzKn4l9AfkBAv4kSx6ZxBOA=="}
{"level":"info","ts":1710754382.3172874,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8448,"uncooperative_peers":59,"current_head_frame":5115}
{"level":"info","ts":1710754382.3238087,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiB4Jb14pMY7BiAyqck9/hKCLBZ13TpfIqq9RsUkvDQgHQ=="}
{"level":"info","ts":1710754383.5719333,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8447,"uncooperative_peers":60,"current_head_frame":5115}
{"level":"info","ts":1710754383.5800178,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiDsefmEM1h7Zt/JeKK1raWTb4rlsWY+Q3rHFMfaFdqt1g=="}
{"level":"info","ts":1710754384.5413003,"caller":"master/master_clock_consensus_engine.go:200","msg":"peers in store","peer_store_count":1902,"network_peer_count":450}
{"level":"info","ts":1710754384.7223754,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8447,"uncooperative_peers":61,"current_head_frame":5115}
{"level":"info","ts":1710754384.7299747,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiDVngikPhoWPFGQDaukzbPry2+VnuJMNl+4qtNH7qLhQg=="}
{"level":"info","ts":1710754386.5140018,"caller":"ceremony/consensus_frames.go:155","msg":"checking peer list","peers":8446,"uncooperative_peers":62,"current_head_frame":5115}
{"level":"info","ts":1710754386.5243154,"caller":"ceremony/consensus_frames.go:189","msg":"polling peer for new frames","peer_id":"EiB8q8flPd70x/91NCnQ+LgzDM4c8z4cM9BPFln58TqjlQ=="}
{"level":"info","ts":1710754392.835773,"caller":"ceremony/consensus_frames.go:373","msg":"received compressed sync frame","from":1,"to":17,"frames":17,"proofs":10}
{"level":"info","ts":1710754392.8358765,"caller":"ceremony/peer_messaging.go:452","msg":"processing frame","frame_number":1,"aggregate_commits":1}
{"level":"info","ts":1710754392.8359067,"caller":"ceremony/peer_messaging.go:468","msg":"found matching proof","frame_number":1,"commit_index":0}
{"level":"info","ts":1710754392.8491461,"caller":"ceremony/message_handler.go:309","msg":"got clock frame","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":1,"proof_count":1}
{"level":"info","ts":1710754393.9073637,"caller":"master/master_clock_consensus_engine.go:372","msg":"slow bandwidth, scoring out","peer_id":"QmQmKUNf6ujMDiBJs5ZBvwK44xsbFQyAGBof8RdDsgeYCk"}
{"level":"info","ts":1710754394.5518062,"caller":"master/master_clock_consensus_engine.go:200","msg":"peers in store","peer_store_count":1908,"network_peer_count":443}
{"level":"info","ts":1710754395.6866598,"caller":"ceremony/message_handler.go:327","msg":"clock frame was valid","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":1}
{"level":"info","ts":1710754395.6867433,"caller":"ceremony/peer_messaging.go:452","msg":"processing frame","frame_number":2,"aggregate_commits":1}
{"level":"info","ts":1710754395.6867588,"caller":"ceremony/peer_messaging.go:468","msg":"found matching proof","frame_number":2,"commit_index":0}
{"level":"info","ts":1710754395.6990714,"caller":"ceremony/message_handler.go:309","msg":"got clock frame","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":2,"proof_count":1}
{"level":"info","ts":1710754397.7966897,"caller":"ceremony/message_handler.go:327","msg":"clock frame was valid","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":2}
{"level":"info","ts":1710754397.7967722,"caller":"ceremony/peer_messaging.go:452","msg":"processing frame","frame_number":3,"aggregate_commits":1}
{"level":"info","ts":1710754397.7968047,"caller":"ceremony/peer_messaging.go:468","msg":"found matching proof","frame_number":3,"commit_index":0}
{"level":"info","ts":1710754397.8092442,"caller":"ceremony/message_handler.go:309","msg":"got clock frame","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":3,"proof_count":1}

panic

{"level":"error","ts":1710234119.712218,"caller":"ceremony/ceremony_execution_engine.go:583","msg":"error while materializing application from frame","error":"materialize application from frame: get outputs from clock frame: proto: cannot parse invalid wire-format data","errorVerbose":"proto: cannot parse invalid wire-format data\nget outputs from clock frame\nsource.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony/application.GetOutputsFromClockFrame\n\t/Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/application/ceremony_application.go:576\nsource.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony/application.MaterializeApplicationFromFrame\n\t/Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/application/ceremony_application.go:596\nsource.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).RunWorker\n\t/Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:581\nsource.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).Start.func1.1\n\t/Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:449\nruntime.goexit\n\t/opt/homebrew/Cellar/[email protected]/1.20.14/libexec/src/runtime/asm_arm64.s:1172\nmaterialize application from frame\nsource.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony/application.MaterializeApplicationFromFrame\n\t/Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/application/ceremony_application.go:598\nsource.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).RunWorker\n\t/Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:581\nsource.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).Start.func1.1\n\t/Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:449\nruntime.goexit\n\t/opt/homebrew/Cellar/[email protected]/1.20.14/libexec/src/runtime/asm_arm64.s:1172","stacktrace":"source.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).RunWorker\n\t/Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:583\nsource.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).Start.func1.1\n\t/Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:449"}

panic: materialize application from frame: get outputs from clock frame: proto: cannot parse invalid wire-format data
goroutine 359896 [running]:
source.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).RunWorker(0x14000150b40)
        /Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:587 +0x22bc
source.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).Start.func1.1()
        /Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:449 +0x20
created by source.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).Start.func1
        /Users/guotie/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:448 +0xc4
exit status 2

Node error Report: Out of memory: Killed process 1837 (node) total-vm:47604924KB, anon-rss: 29873432KB, file-rss:0KB, UID:0 pgtables: 75060KB oom_score_adj:0

Yesterday I followed the steps in this link https://drive.google.com/file/d/1atQ2Gb8vLzqxiS2cqRAp9ojFNDJup3TU/view to install the ceremony client, and the client ran successfully. Just now, I saw that the client was killed and threw an error:

Out of memory: Killed process 1837 (node) total-vm:47604924KB, anon-rss: 29873432KB, file-rss:0KB, UID:0 pgtables: 75060KB oom_score_adj:0

I think this may be a memory leak issue, or perhaps I made some mistake when I installed the client? Please help and look into it, many thanks.

My node installation environment:
Ubuntu 22.04 on VirtualBox
client version: latest
go version: 1.20.17

P2P issue

nvNjF695HatUvR1wktFdG5AEHic:\n * [/ip4/94.72.117.94/udp/8336/quic] dial backoff"}
{"level":"info","ts":1709911553.7084908,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmP8C7g9ZRiWzhqN2AgFu5onS6HwHzR6Vv1TCHxAhnCSnq:\n * [/ip4/65.108.194.84/udp/8336/quic] dial backoff"}
{"level":"info","ts":1709911553.7082407,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmYVaHXdFmHFeTa6oPixgjMVag6Ex7gLjE559ejJddwqzu:\n * [/ip4/51.15.18.247/udp/8336/quic] dial backoff"}
{"level":"info","ts":1709911553.708798,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmPBYgDy7snHon7PAn8nv1shApQBQz1iHb2sBBS8QSgQwW:\n * [/ip4/207.246.81.38/udp/8336/quic] dial backoff"}
{"level":"info","ts":1709911553.7089484,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmW6QDvKuYqJYYMP5tMZSp12X3nexywK28tZNgqtqNpEDL:\n * [/ip4/186.233.184.181/udp/8336/quic] dial backoff"}
{"level":"info","ts":1709911553.7090929,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmeqBjm3iX7sdTieyto1gys5ruQrQNPKfaTGcVQQWJPYDV:\n * [/ip4/204.186.74.46/udp/8316/quic] dial backoff"}
{"level":"info","ts":1709911553.7091136,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial Qmd233pLUDvcDW3ama27usfbG1HxKNh1V9dmWVW1SXp1pd:\n * [/ip4/204.186.74.47/udp/8317/quic] dial backoff"}
{"level":"info","ts":1709911553.7095754,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial Qmekz5obb9qCRP5CrZ4D8Tmabbr5mJf6mgBJHTaitrx7Fx:\n * [/ip4/185.209.178.115/udp/8336/quic] dial backoff"}
{"level":"info","ts":1709911553.7536309,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: no good addresses"}
{"level":"info","ts":1709911553.753755,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: no good addresses"}
{"level":"info","ts":1709911553.7538009,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: no good addresses"}
{"level":"info","ts":1709911553.7538571,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: no good addresses"}
{"level":"info","ts":1709911553.7575586,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: no good addresses"}
{"level":"info","ts":1709911553.7578814,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: no good addresses"}
{"level":"info","ts":1709911553.7580137,"caller":"p2p/blossomsub.go:336","msg":"error while connecting to dht peer","error":"failed to dial: no good addresses"}
{"level":"info","ts":1709911561.3822622,"caller":"master/master_clock_consensus_engine.go:174","msg":"peers in store","peer_store_count":18,"network_peer_count":0}
{"level":"info","ts":1709911571.3832698,"caller":"master/master_clock_consensus_engine.go:174","msg":"peers in store","peer_store_count":18,"network_peer_count":0}

Instructions For Adding A Bootstrap Peer

In order to coordinate peering in a simple location, this issue will serve to track peer additions. After generating a peer id, please report back with the multiaddr format of your node, e.g.:

/ip4/<ip addr>/udp/8336/quic/p2p/<peer id>

Please remember that bootstrap peers must have static IPs and should remain online as much as possible – these nodes will be the first-contact peers which other nodes connect to in order to discover the network graph, until we move into the phase where the DHT is no longer necessary.
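As a sketch of assembling that multiaddr: the node binary's -peer-id flag (listed in the --help output further down this page) prints the peer id, which is then appended to your public address (the IP below is illustrative):

./node-$version-$platform -peer-id
/ip4/203.0.113.10/udp/8336/quic/p2p/<peer id printed above>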

How to configure gRPC

When I want to check the balance and run the command GOEXPERIMENT=arenas go run ./... -balance, I get an error like this:

gRPC Not Enabled, Please Configure
exit status 1

but I really have configured gRPC.

I encountered this bug. The process exits.

{"level":"info","ts":1709319673.604675,"caller":"p2p/blossomsub.go:331","msg":"error while connecting to dht peer","error":"failed to dial: context deadline exceeded"}
signal: killed

Problem with running the node for the first time.

I cloned the repo from GitHub
and tried to run it with the command
go run ./...
I don't have a voucher for using it.

I got message:

:~/ceremonyclient/node# go run ./...
package source.quilibrium.com/quilibrium/monorepo/node
imports source.quilibrium.com/quilibrium/monorepo/node/app
imports source.quilibrium.com/quilibrium/monorepo/node/consensus
imports source.quilibrium.com/quilibrium/monorepo/node/execution
imports source.quilibrium.com/quilibrium/monorepo/node/protobufs
imports source.quilibrium.com/quilibrium/monorepo/nekryptology/pkg/core/curves
imports arena: build constraints exclude all Go files in /snap/go/10506/src/arena

My golang version is go1.22.0 linux/amd64; system: Ubuntu 22.04 Jammy Jellyfish.
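The error mentions the arena package, and the build instructions earlier in this README enable the arenas experiment explicitly, so a likely first step (a sketch, not a confirmed fix) is to re-run with that variable set:

GOEXPERIMENT=arenas go run ./...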

Node running for 3 days but balance at 0

Hello,

I've been running a node for about 3 days, first 1.3.0 and now v1.4.0 – Sunset, but the QUIL balance stays at 0.
How can I tell whether the node has synced?
It seems like I'm past peer discovery, but I don't see any mention of frames.

Logs:

                   Quilibrium Node - v1.4.0 – Sunset

Loading ceremony state and starting node...
{"level":"info","ts":1709315721.6114168,"caller":"node/main.go:244","msg":"generating difficulty metric"}
{"level":"info","ts":1709315744.309308,"caller":"node/main.go:266","msg":"generating entropy for commit/proof sizes"}
{"level":"info","ts":1709315744.3972023,"caller":"node/main.go:283","msg":"generating 16 degree commitment metric"}
{"level":"info","ts":1709315744.4691756,"caller":"node/main.go:292","msg":"generating 128 degree commitment metric"}
{"level":"info","ts":1709315745.0463583,"caller":"node/main.go:301","msg":"generating 1024 degree commitment metric"}
{"level":"info","ts":1709315749.89425,"caller":"node/main.go:310","msg":"generating 65536 degree commitment metric"}
{"level":"info","ts":1709316047.329166,"caller":"node/main.go:319","msg":"generating 16 degree proof metric"}
{"level":"info","ts":1709316047.4060285,"caller":"node/main.go:328","msg":"generating 128 degree proof metric"}

Any pointers?

Error 137

Hi @CassOnMars -

I'm attempting to run the sequencer in Docker, but it looks like I'm getting an Error 137. Any ideas?

Greg

could not emit stats

"could not emit stats","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 15.204.57.80:443: i/o timeout\"","stacktrace":"source.quilibrium.com/quilibrium/monorepo/node/consensus/ceremony.(*CeremonyDataClockConsensusEngine).Start.func2\n\t/root/ceremonyclient/node/consensus/ceremony/ceremony_data_clock_consensus_engine.go:396

I tried restarting the server several times and closing the firewall, but it doesn't work.

voucher in the repo

Who will redeem the voucher in this repo?
Maybe it's better to remove it and invalidate it by redeeming it before the actual redemption procedure is open to the public.

cheers

High CPU load

I'm running the ceremony client for the 1st time; this is a pretty decent server:

16-core W-2145 Xeon @ 3.7GHz
Ubuntu 22.04 64-bit / Linux 5.15.0-91-generic
128GB ECC DDR4 RAM
NVMe M.2 SSDs

Is this normal, or is it just because of the initial run? It's been running for about an hour so far.

Docker qclient cross-mint error

Normally, Docker node commands have a structure like the one below:

docker compose exec node qclient help
docker compose exec node qclient token help
docker compose exec node qclient token balance

When using the cross-mint function with this structure, it throws an error:

`docker compose exec node qclient cross-mint 0x0000000`
config directory doesn't exist: ../node/.config/

We need to run the command with the config flag:
docker compose exec node qclient cross-mint --config .config 0x0000000

@agostbiro @mscurtescu

no peers available, skipping sync

{"level":"warn","ts":1710232600.425793,"caller":"ceremony/consensus_frames.go:463","msg":"no peers available, skipping sync"}

I have no public IP address; does that matter?

Problem with fetching balance

I have a problem with fetching the balance. My Docker is running properly, see below. The repo is up to date.

~/ceremonyclient# docker-compose exec node go run ./... -balance
panic: error getting token info: rpc error: code = Unknown desc = get token info: get highest candidate data clock frame: item not found

goroutine 1 [running]:
main.main()
	/opt/ceremonyclient/node/main.go:101 +0x7f4
exit status 2

Node keeps failing while starting with "invalid table" error

My node keeps failing while restarting with the error below:

{"level":"info","ts":1709053362.2602189,"caller":"ceremony/ceremony_data_clock_consensus_engine.go:255","msg":"starting ceremony consensus engine"}
{"level":"info","ts":1709053362.2602262,"caller":"ceremony/ceremony_data_clock_consensus_engine.go:260","msg":"loading last seen state"}
{"level":"info","ts":1709053362.261019,"caller":"master/master_clock_consensus_engine.go:146","msg":"peers in store","peer_store_count":75,"network_peer_count":6}
panic: get latest data clock frame: pebble/table: invalid table 047146 (checksum mismatch at 0/14117670)

goroutine 360077 [running]:
source.quilibrium.com/quilibrium/monorepo/node/consensus/time.(*DataTimeReel).Start(0xc005cae000)
	/opt/ceremonyclient/node/consensus/time/data_time_reel.go:125 +0x2fd
source.quilibrium.com/quilibrium/monorepo/node/consensus/ceremony.(*CeremonyDataClockConsensusEngine).Start(0xc00002a000)
	/opt/ceremonyclient/node/consensus/ceremony/ceremony_data_clock_consensus_engine.go:261 +0x96
source.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).Start.func1()
	/opt/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:438 +0x33
created by source.quilibrium.com/quilibrium/monorepo/node/execution/intrinsics/ceremony.(*CeremonyExecutionEngine).Start
	/opt/ceremonyclient/node/execution/intrinsics/ceremony/ceremony_execution_engine.go:437 +0x198
exit status 2

Any suggestions for this issue?

frame number sync too slow

I have been running the node for an hour, but the frame_number is still:

{"level":"info","ts":1710235971.354947,"caller":"ceremony/ceremony_data_clock_consensus_engine.go:389","msg":"broadcasting peer info","frame_number":34}

Lost config.yml: how to get wallet rewards

In a recent release update, you can use keys.yml and config.yml to get wallet rewards. However, my config.yml file was accidentally lost. How do you get wallet rewards in this case?

Private keys are not generating

Creating config directory .config
Generating default config...
Generating random host key...
Generating keystore key...
Saving config...
Clearing test data...
Deduplicating and compressing clock frame data...
Loading ceremony state and starting node...

but keys.yml contains nothing:
cat keys.yml
null:

High CPU usage and potential memory leak?

Ola!

So my node has been running for a couple of days now and I just realized that my CPU and RAM + swap were at max usage for quite a while. I forgot to take a screenshot and in my haste restarted the node to see if it would happen again, so I'll send a screenshot once it gets to that point again. But I wanted to raise this concern in the meantime.

Is this normal behavior for it to gradually use all my resources to the max? I've seen this behavior previously when there was no cap on the number of peers that can be discovered in other Cosmos chains (or when the genesis file was too big), but I'm not sure whether that has anything to do with it here. Of course, there are many other things that could result in memory leaks.

Currently it has already started to use quite a lot of memory and CPU:

PS: I'm running the ceremony client node using the binary installed via go install from inside the /ceremonyclient/node folder, so it's the /root/go/bin/node file (I should have renamed that, by the way).


Oh yeah, specs:
64GB DDR4 RAM
Configured 32GB swap
2x 512 GB NVMe SSDs
AMD Ryzen 5 3600 CPU

Node crashed with 'signal: killed'

I'm running ceremonyclient v1.2.15 on Ubuntu 22.04.4 LTS (AMD64) with Go version 1.20.14.
My startup command is: GOEXPERIMENT=arenas go run ./...

After running for about 4 hours, the node crashed with 'signal: killed'. Please find below the tail of the logfile:

{"level":"info","ts":1709084085.4507408,"caller":"ceremony/peer_messaging.go:310","msg":"found first half of matching segment data","frame_number":726,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084085.4508023,"caller":"ceremony/peer_messaging.go:320","msg":"found second half of matching segment data","frame_number":726,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084085.4633043,"caller":"ceremony/broadcast_messaging.go:310","msg":"got clock frame","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":726,"proof_count":1}
{"level":"info","ts":1709084087.3208094,"caller":"ceremony/broadcast_messaging.go:328","msg":"clock frame was valid","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":726}
{"level":"info","ts":1709084087.3288972,"caller":"ceremony/peer_messaging.go:234","msg":"processing frame","frame_number":727,"aggregate_commits":1}
{"level":"info","ts":1709084087.3289354,"caller":"ceremony/peer_messaging.go:240","msg":"processing commit","frame_number":727,"commit_index":0}
{"level":"info","ts":1709084087.32895,"caller":"ceremony/peer_messaging.go:250","msg":"found matching proof","frame_number":727,"commit_index":0}
{"level":"info","ts":1709084087.3293443,"caller":"ceremony/peer_messaging.go:281","msg":"adding inclusion commitment","frame_number":727,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084087.3302824,"caller":"ceremony/peer_messaging.go:310","msg":"found first half of matching segment data","frame_number":727,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084087.3311589,"caller":"ceremony/peer_messaging.go:320","msg":"found second half of matching segment data","frame_number":727,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084087.3484845,"caller":"ceremony/broadcast_messaging.go:310","msg":"got clock frame","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":727,"proof_count":1}
{"level":"info","ts":1709084088.9954417,"caller":"ceremony/broadcast_messaging.go:328","msg":"clock frame was valid","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":727}
{"level":"info","ts":1709084088.9955335,"caller":"ceremony/peer_messaging.go:234","msg":"processing frame","frame_number":728,"aggregate_commits":1}
{"level":"info","ts":1709084088.995553,"caller":"ceremony/peer_messaging.go:240","msg":"processing commit","frame_number":728,"commit_index":0}
{"level":"info","ts":1709084088.9955657,"caller":"ceremony/peer_messaging.go:250","msg":"found matching proof","frame_number":728,"commit_index":0}
{"level":"info","ts":1709084088.995577,"caller":"ceremony/peer_messaging.go:281","msg":"adding inclusion commitment","frame_number":728,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084088.9955974,"caller":"ceremony/peer_messaging.go:310","msg":"found first half of matching segment data","frame_number":728,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084088.9956262,"caller":"ceremony/peer_messaging.go:320","msg":"found second half of matching segment data","frame_number":728,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084089.0091133,"caller":"ceremony/broadcast_messaging.go:310","msg":"got clock frame","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":728,"proof_count":1}
{"level":"info","ts":1709084089.632033,"caller":"master/master_clock_consensus_engine.go:146","msg":"peers in store","peer_store_count":197,"network_peer_count":56}
{"level":"info","ts":1709084089.8600786,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"QmaaaSTxEwZeBMzBPA1hALMmmtuqtx956WwT5trjWA8onm"}
{"level":"info","ts":1709084089.8615646,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"QmczHr69hnVW6ncp5LRa9GXuimewSpbqfAQ9pS5dxsMUk3"}
{"level":"info","ts":1709084090.7928653,"caller":"ceremony/ceremony_data_clock_consensus_engine.go:383","msg":"broadcasting peer info","frame_number":727}
{"level":"info","ts":1709084091.201388,"caller":"ceremony/broadcast_messaging.go:328","msg":"clock frame was valid","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":728}
{"level":"info","ts":1709084091.2025685,"caller":"ceremony/peer_messaging.go:234","msg":"processing frame","frame_number":729,"aggregate_commits":1}
{"level":"info","ts":1709084091.202639,"caller":"ceremony/peer_messaging.go:240","msg":"processing commit","frame_number":729,"commit_index":0}
{"level":"info","ts":1709084091.202685,"caller":"ceremony/peer_messaging.go:250","msg":"found matching proof","frame_number":729,"commit_index":0}
{"level":"info","ts":1709084091.2027197,"caller":"ceremony/peer_messaging.go:281","msg":"adding inclusion commitment","frame_number":729,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084091.2027764,"caller":"ceremony/peer_messaging.go:310","msg":"found first half of matching segment data","frame_number":729,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084091.202837,"caller":"ceremony/peer_messaging.go:320","msg":"found second half of matching segment data","frame_number":729,"commit_index":0,"inclusion_commit_index":0,"type_url":"types.quilibrium.com/quilibrium.node.application.pb.IntrinsicExecutionOutput"}
{"level":"info","ts":1709084091.25073,"caller":"ceremony/broadcast_messaging.go:310","msg":"got clock frame","address":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","filter":"AIAAAAAAAAAEAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAPAVyCbkWVDUUSKVMpVFIXF9cwmk0gg+V4aidqSi9eMAUybnbBt6X7JfvdvQg+hk=","frame_number":729,"proof_count":1}
{"level":"info","ts":1709084147.9083347,"caller":"master/master_clock_consensus_engine.go:146","msg":"peers in store","peer_store_count":195,"network_peer_count":55}
{"level":"info","ts":1709084189.8258126,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"QmdTL4K7B2FpNeN9NjosgNYn77BGdh58qtJfdCAjHoznia"}
{"level":"info","ts":1709084175.6170716,"caller":"ceremony/ceremony_data_clock_consensus_engine.go:383","msg":"broadcasting peer info","frame_number":728}
{"level":"info","ts":1709084190.0434618,"caller":"p2p/blossomsub.go:528","msg":"connected to peer","peer_id":"QmdTL4K7B2FpNeN9NjosgNYn77BGdh58qtJfdCAjHoznia"}
{"level":"info","ts":1709084190.0621026,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"QmVzi7qzCJWs4D7RZRfiLHDzKWc5v4Xm26DU9VnYm83bbu"}
{"level":"info","ts":1709084190.06297,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"QmPMw1ckZqxWXVB6mcT7VWfA6sTxaGBjgmERiGbeg3pBML"}
{"level":"info","ts":1709084190.0630567,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"QmdPpxqEfYA6xa6ryVFvaePNpPWgj9Hd9SUZCKgEL2nP2D"}
{"level":"info","ts":1709084190.0634227,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"QmbptJrsoGjRST86fGuXeD2W2yNQWPmwuJafjitiMNdAnc"}
{"level":"info","ts":1709084190.0635011,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"Qmauw3h6F42gm7PhwQdXayHWAvEJStS6Zdw2cR5w1bfTWC"}
{"level":"info","ts":1709084190.1134164,"caller":"p2p/blossomsub.go:528","msg":"connected to peer","peer_id":"Qmauw3h6F42gm7PhwQdXayHWAvEJStS6Zdw2cR5w1bfTWC"}
{"level":"info","ts":1709084190.1135147,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"QmRz9eXgfHamKFx5koCUESsoDqAcgPNUHD5iziP3iqafCm"}
{"level":"info","ts":1709084207.388689,"caller":"master/master_clock_consensus_engine.go:146","msg":"peers in store","peer_store_count":195,"network_peer_count":11}
{"level":"info","ts":1709084224.9965568,"caller":"master/master_clock_consensus_engine.go:146","msg":"peers in store","peer_store_count":195,"network_peer_count":11}
{"level":"info","ts":1709084233.512085,"caller":"p2p/blossomsub.go:519","msg":"found peer","peer_id":"QmXNpYMxrCUNv7fa9reoeLS3uK6ubYcUTJAypb6bbU74uP"}
{"level":"info","ts":1709084273.2635417,"caller":"master/master_clock_consensus_engine.go:146","msg":"peers in store","peer_store_count":195,"network_peer_count":10}
{"level":"info","ts":1709084288.297452,"caller":"ceremony/ceremony_data_clock_consensus_engine.go:383","msg":"broadcasting peer info","frame_number":728}
signal: killed

Is there a way to fix this error? It started after an update.

     ###############                                  ###############
       #################&                                ##############%
          #########################&&&#############        ###############
             ########################################%        ############
                 #######################################        ########
                      #############################                ##

                     Quilibrium Node - v1.3.0 – Dawn

panic: yaml: line 3: could not find expected ':'

goroutine 1 [running]:
main.main()
/root/ceremonyclient/node/main.go:141 +0xb96
exit status 2

After running the program for a while, ssh cannot connect.

After the program is started, everything is normal at first. After waiting for a while (the time varies), monitoring shows that the disk IO is completely saturated, CPU usage drops from the original ~50% directly to about 10%, and at the same time SSH cannot connect and I cannot log in. What is going on?
My machine configuration is: 8-core CPU, 16GB memory, 250GB SSD.

How can this be resolved?

The command grpcurl -plaintext -max-msg-sz 5000000 localhost:8337 quilibrium.node.node.pb.NodeService.GetPeerInfo | grep peerId | wc -l returned an error:
Code: ResourceExhausted
Message: grpc: received message larger than max (5359225 vs. 5000000)
How can this be resolved?
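Since the error itself reports the actual message size (5,359,225 bytes), one workaround sketch is simply to raise the -max-msg-sz limit above that value and re-run the same query:

grpcurl -plaintext -max-msg-sz 6000000 localhost:8337 quilibrium.node.node.pb.NodeService.GetPeerInfo | grep peerId | wc -l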

Issue: Balance flags connection refused error

panic: error getting token info: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 0.0.0.0:8337: connect: connection refused"

goroutine 1 [running]:
main.main()
/home/ubuntu/ceremonyclient/node/main.go:101 +0x745
exit status 2

I was receiving this error when editing my multiaddr; I tried with 127.0.0.1 and it did not work either.

I ran netstat and checked that port 8337 was not occupied:

sudo netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 507/systemd-resolve
tcp 0 0 127.0.0.54:53 0.0.0.0:* LISTEN 507/systemd-resolve
tcp6 0 0 :::22 :::* LISTEN 1/init

I'm working off a VPS (OVHCloud to be exact). Everything else seems to be working, and the VPS firewall was disabled on the OVH side.
Reaching out to get some help! Thanks.

Error while dialing

After upgrading the node to the latest version today, I get these errors constantly in a loop. Please advise.

{"level":"info","ts":1711586504.028091,"caller":"master/master_clock_consensus_engine.go:202","msg":"peers in store","peer_store_count":390,"network_peer_count":192}
{"level":"info","ts":1711586514.0286555,"caller":"master/master_clock_consensus_engine.go:202","msg":"peers in store","peer_store_count":390,"network_peer_count":196}
{"level":"info","ts":1711586524.029287,"caller":"master/master_clock_consensus_engine.go:202","msg":"peers in store","peer_store_count":390,"network_peer_count":202}
{"level":"info","ts":1711586524.7127566,"caller":"ceremony/consensus_frames.go:151","msg":"checking peer list","peers":372,"uncooperative_peers":11,"current_head_frame":644}
{"level":"info","ts":1711586524.7132041,"caller":"ceremony/consensus_frames.go:192","msg":"polling peer for new frames","peer_id":"EiBZ0sKtosHZzWn0huEDy29G8kMla3OFnd+MmLIU7TFK9w=="}
{"level":"error","ts":1711586524.713607,"caller":"ceremony/consensus_frames.go:219","msg":"could not get frame","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial context: failed to dial: no addresses\"","stacktrace":"source.quilibrium.com/quilibrium/monorepo/node/consensus/ceremony.(*CeremonyDataClockConsensusEngine).sync\n\t/root/Q/ceremonyclient/node/consensus/ceremony/consensus_frames.go:219\nsource.quilibrium.com/quilibrium/monorepo/node/consensus/ceremony.(*CeremonyDataClockConsensusEngine).collect\n\t/root/Q/ceremonyclient/node/consensus/ceremony/consensus_frames.go:272\nsource.quilibrium.com/quilibrium/monorepo/node/consensus/ceremony.(*CeremonyDataClockConsensusEngine).runLoop\n\t/root/Q/ceremonyclient/node/consensus/ceremony/ceremony_data_clock_consensus_engine.go:498"}

I updated using:

service ceremonyclient stop
cd ~/ceremonyclient
git fetch origin
git merge origin
cd ~/ceremonyclient/node
GOEXPERIMENT=arenas go clean -v -n -a ./...
rm /root/go/bin/node
ls /root/go/bin
GOEXPERIMENT=arenas  go  install  ./...
ls /root/go/bin
service ceremonyclient start

I'm using Go version 1.20.14.

Prior to the upgrade, the node was running fine.

arena

build source.quilibrium.com/quilibrium/monorepo/node: cannot load arena: malformed module path "arena": missing dot in first path element
I ran 'GOEXPERIMENT=arenas go run ./...', but it got an error. My Go version is go1.13.8 linux/amd64.

Multiple nodes with same key

Hi, is it possible to run multiple nodes on different systems with the same key or does it lead to a conflict?

Issue: Balance flag returns "item not found" error

I'm reaching out to request your assistance with an issue I've encountered while querying the balance on our node, which is running in a Docker container.

Despite confirming successful network connectivity with the following output:

/opt/ceremonyclient/node # nc -zv 0.0.0.0 8337
0.0.0.0 (0.0.0.0:8337) open
/opt/ceremonyclient/node #

Additionally, using the --db-console option indicates a CONNECTED status:

STATUS  CONNECTED

I'm facing an issue when attempting to execute the balance command:

/opt/ceremonyclient/node # go run ./... -balance
panic: error getting token info: rpc error: code = Unknown desc = get token info: get highest candidate data clock frame: item not found

goroutine 1 [running]:
main.main()
	/opt/ceremonyclient/node/main.go:101 +0x745
exit status 2
/opt/ceremonyclient/node #

Could you please advise if there's a specific configuration or step I might be missing?
Thank you!

Make a Documentation

The guide is quite hard to understand; you need a lot of research to understand what your team is saying. I hope there will be documentation that makes it easier to understand.

High Memory Usage

Hello @CassOnMars

After the 1.4.7 release, memory usage has increased dramatically. I have servers with 32GB of RAM and they are using 100% of memory. If swap is not enabled or not large enough, the OOM killer restarts the node.

Meanwhile, I realised that even machines with 64GB of RAM are also suffering...

Something is wrong with memory usage in the latest two versions.

FYI,
Thanks

Error occurs when running the node

When I run the command GOEXPERIMENT=arenas go run ./..., I get this error:

runtime/cgo

xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

I use go1.21.5 darwin/arm64 on a Mac M1.

How do I know the node is running correctly?

Hi, I have been running the node for about two months using the ./poor_mans_cd.sh script. Below is the running log. However, my balance is always zero. I have read the FAQ and understand that the node needs to be synced. The size of .config/store is about 50GB. How can I check if the node sync has finished? Thank you.

$ GOEXPERIMENT=arenas go run . -balance
Owned balance: 0 QUIL
Unconfirmed balance: 0 QUIL
$ du -hs .config/*
16K     .config/config.yml
4.0K    .config/keys.yml
4.0K    .config/MIGRATIONS
4.0K    .config/RELEASE_VERSION
4.0K    .config/REPAIR
4.0K    .config/SELF_TEST
52G     .config/store

Node sync problem

The node has been running for 2 days. No QUIL has accumulated.
The max frame number is always 34.
When I started the node, I got the errors below:
{"level":"info","ts":1710308599.9562888,"caller":"p2p/blossomsub.go:378","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmPBYgDy7snHon7PAn8nv1shApQBQz1iHb2sBBS8QSgQwW:\n * [/ip4/207.246.81.38/udp/8336/quic] timeout: no recent network activity"} {"level":"info","ts":1710308599.9562833,"caller":"p2p/blossomsub.go:378","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial Qmd233pLUDvcDW3ama27usfbG1HxKNh1V9dmWVW1SXp1pd:\n * [/ip4/204.186.74.47/udp/8317/quic] timeout: no recent network activity"} {"level":"info","ts":1710308599.956279,"caller":"p2p/blossomsub.go:378","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmZejZ8DBGQ6foX9recW73GA6TqL6hCMX9ETWWW1Fb8xtx:\n * [/ip4/144.76.104.93/udp/8336/quic] timeout: no recent network activity"} {"level":"info","ts":1710308599.9563026,"caller":"p2p/blossomsub.go:378","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmW6QDvKuYqJYYMP5tMZSp12X3nexywK28tZNgqtqNpEDL:\n * [/ip4/186.233.184.181/udp/8336/quic] timeout: no recent network activity"} {"level":"info","ts":1710308599.9564338,"caller":"p2p/blossomsub.go:378","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmYVaHXdFmHFeTa6oPixgjMVag6Ex7gLjE559ejJddwqzu:\n * [/ip4/51.15.18.247/udp/8336/quic] timeout: no recent network activity"} {"level":"info","ts":1710308600.3247917,"caller":"p2p/blossomsub.go:378","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmXbbmtS5D12rEc4HWiHWr6e83SCE4jeThPP4VJpAQPvXq:\n * [/ip4/75.166.197.187/udp/8336/quic] timeout: no recent network activity"} {"level":"info","ts":1710308600.3350616,"caller":"p2p/blossomsub.go:378","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmS7C1UhN8nvzLJgFFf1uspMRrXjJqThHNN6AyEXp6oVUB:\n * [/ip6/2001:41d0:8:823b::/udp/8336/quic] INTERNAL_ERROR (local): write udp6 [::]:44948->[2001:41d0:8:823b::]:8336: sendmsg: network is unreachable\n * [/ip4/5.39.66.59/udp/8336/quic] timeout: no recent network activity"}

After that there are no errors.
Is the node running normally?

balance checking error

When I checked the balance I got this error:

Input: root@vmi1703183:~/ceremonyclient/node# GOEXPERIMENT=arenas /root/go/bin/node -balance
Output: -bash: /root/go/bin/node: No such file or directory

The keys.yml is null

When I run the commands below, following the guide:

cd ~/ceremonyclient/node
GOEXPERIMENT=arenas go run ./...

The output is

Creating config directory .config
Generating default config...
Generating random host key...
Generating keystore key...
Saving config...
Clearing test data...
Loading ceremony state and starting node...
{"level":"info","ts":171620352

But the data in keys.yml is:

null

This bug was raised in issue-100; however, it seems that there is no available method to solve it.

Bug: still not progressing past frames below 5000 :(

Hi, so I'm currently on 1.4.3 and sadly, with two nodes, I'm not progressing any further. One of them I decided to completely reset; I shouldn't have done that. That one has now been stuck at frame 1572 for longer than a day. The other is still stuck at 4067.

error while connecting to dht peer

My node is up and running, and I can input commands to check the balance, but then this error keeps recurring.

{"level":"info","ts":1709386136.641862,"caller":"p2p/blossomsub.go:331","msg":"error while connecting to dht peer","error":"failed to dial: failed to dial QmS7C1UhN8nvzLJgFFf1uspMRrXjJqThHNN6AyEXp6oVUB:\n * [/ip6/2001:41d0:8:823b::/udp/8336/quic] dial backoff"}

How to get the peer-id

I'm running on an M1 Mac using Docker; how do I get the peer-id? The docs only show:
docker compose exec node qclient help
docker compose exec node qclient token help
docker compose exec node qclient token balance
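A sketch only, assuming the compose service is named node (as in the commands above) and that the node binary is available inside the container; it uses the -peer-id flag shown in the node --help output later on this page:

docker compose exec node node -peer-id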

wallet balance from node command.

Request Type: Feature/Enhancement.

Currently the node binary has the options below; however, a -get-balance feature would help operators get the balance details directly, versus using -db-console and looking at the bottom-right of the screen for balances.
This option should be able to give a straightforward response to an operator's query for the current wallet balance.

~/ceremonyclient/node$ ./node --help
Usage of ./node:
  -config string
        the configuration directory (default ".config")
  -db-console
        starts the node in database console mode
  -import-priv-key string
        creates a new config using a specific key from the phase one ceremony
  -peer-id
        print the peer id to stdout from the config and exit
