
pocket's Introduction

⚠️ ❗ Update, November 2023: This repo is not under active development. Please head over to poktroll for the latest ongoing implementation of Shannon (next version of the protocol) or pocket-core for the implementation of Morse (the current version of the protocol). ❗ ⚠️


Pocket

The official implementation of the V1 Pocket Network Protocol Specification.

*Please note that the V1 protocol is currently under development; see pocket-core for the version that is currently live on mainnet.*

Implementation

Official Golang implementation of the Pocket Network v1 Protocol.

Overview

Getting Started


Some relevant links are listed below. Refer to the complete ongoing documentation at the Pocket GitHub Wiki.

If you'd like to contribute to the Pocket V1 Protocol, start by:

  1. Get up and running by reading the Development Guide
  2. Find a task by reading the Contribution Guide
  3. Dive into any of the other guides or modules depending on where your interests lie

Guides

Architectures

Changelogs

Project Management Resources

Support & Contact

GPokT

You can also use our chatbot, GPokT, to ask questions about Pocket Network. At the time of writing, it may require you to provide your own LLM API token. If the deployed version of GPokT is down, you can deploy your own version by following the instructions here.


License

This project is licensed under the MIT License; see the LICENSE file for details.

pocket's People

Contributors

0xbigboss, adshmh, bryanchriswhite, deblasis, derrandz, dylanlott, ferreiratiago, gokutheengineer, guettomusick, gustavobelfort, h5law, innocent-saeed36, jacklaing, jasonyou1995, jessicadaugherty, luyzdeleon, okdas, olshansk, omahs, phthan0, prajjawalk, profishional


pocket's Issues

Add Build Tool

We need to integrate a tool that helps automate release-related work as much as possible and reduce the probability of making mistakes when delivering artifacts.

[P2P] Add Connection Handshake functionality + Identity Check

Description


Implement the connections handshake process per the p2p specification

Acceptance Criteria


  • .handshake() authenticates connections and adds them to the connection pool on success
  • .handshake() tracks which connections failed to authenticate, for security purposes (file logs?)
  • .handshake() takes part in both inbound and outbound socket opening

Deliverables


  • .handshake() internal method to authenticate connections

Owners: @derrandz

[Infra] Makefile to magefile migration

The Makefile is in place to make it easier and faster to iterate on build target experiments. As those targets are validated, they will settle into the magefile. Ideally, we'll start using the magefile for further experiments once the team becomes familiar with it; in the meantime, we know we need to do the migration (represented by the existence of this issue) and plan to address it as part of the Wiring milestone.

P2P: Add documentation under `p2p/README.md`

Objective

To document the p2p socket behavior and internals.

Deliverables

  • Document the goal of the socket module
  • Document why the socket module implements buffered IO and how it achieves it
  • Provide state diagrams explaining all possible states and state changes for a socket
  • Provide a flowchart diagram for the main IO routines: read and write
  • Provide library documentation on how to use the socket module

Owners

@derrandz

[P2P] TCP Connection Pooling (Least Recently Used Eviction, Most Recently Used Push Queue)

Objective

Achieve an efficient connection pool to minimize TCP roundtrips that come with establishing new connections.

Origin Document

When the connection pool is full and a new connection needs to be pooled, space has to be made, so an existing pooled connection must be evicted.

In order to use the connection pool efficiently, we need some pool re-ordering logic. One suggestion is to use something similar to an LRU (Least Recently Used) cache eviction algorithm: always queue the most recently used connection first, so that the last position in the queue holds the least recently used connection in the pool, which is the one to be evicted.

Deliverables

  • Implement a limited-size connection pool with re-ordering logic:
    • The pool must provide methods to retrieve specific connections by address
    • The pool must evict unused connections when new connections need to be pooled, to make space for them
    • The pool must not interrupt an ongoing pooled connection in favor of a to-be-made connection
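The re-ordering logic described above can be sketched with Go's container/list. The names (lruPool, Get, Put) are illustrative assumptions, and a real net.Conn is elided behind a placeholder:

```go
package main

import "container/list"

// entry is one pooled connection keyed by peer address.
type entry struct {
	addr string
	conn any // placeholder for a real net.Conn
}

// lruPool is a fixed-capacity pool that evicts the least recently used
// entry when a new connection must be admitted and the pool is full.
type lruPool struct {
	cap   int
	order *list.List // front = most recently used, back = least recently used
	byKey map[string]*list.Element
}

func newLRUPool(capacity int) *lruPool {
	return &lruPool{cap: capacity, order: list.New(), byKey: make(map[string]*list.Element)}
}

// Get retrieves a pooled connection by address and marks it most recently used.
func (p *lruPool) Get(addr string) (any, bool) {
	el, ok := p.byKey[addr]
	if !ok {
		return nil, false
	}
	p.order.MoveToFront(el)
	return el.Value.(*entry).conn, true
}

// Put pools a connection, evicting the least recently used one if the pool is full.
func (p *lruPool) Put(addr string, conn any) {
	if el, ok := p.byKey[addr]; ok {
		el.Value.(*entry).conn = conn
		p.order.MoveToFront(el)
		return
	}
	if p.order.Len() >= p.cap {
		oldest := p.order.Back() // least recently used entry
		p.order.Remove(oldest)
		delete(p.byKey, oldest.Value.(*entry).addr)
	}
	p.byKey[addr] = p.order.PushFront(&entry{addr: addr, conn: conn})
}
```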

Owners


author/owner: @derrandz

[Persistence] Implement proper SQL migration strategy (i.e. move SQL table schemas to .sql files)

@Olshansk
For this file (persistence/schema/account.go) and all the others, I wanted to add a point about migrations & schemas that I feel pretty strongly about before moving forward.

I looked at different SQL migration tools in Golang and, though I haven't picked a specific one, golang-migrate/migrate seems to be the most popular. However, instead of defining our tables in strings within .go files, we should follow the best practices for how to manage migrations (note that table creation is itself a migration).

Source: golang-migrate/migrate
tl;dr

  • We store these in .sql files
  • Each change will have a .up.sql and a .down.sql file
  • This will enable:
    • Cleaner & easier-to-read code (separation of Go and SQL)
    • Going back & forth during development
    • Easy tracking of changes / migrations when we actually do them on chain
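As a sketch of the golang-migrate convention (the file names and the account table columns below are illustrative assumptions, not the actual Pocket schema), a table-creation change would live in a pair of files:

```sql
-- 000001_create_account_table.up.sql
CREATE TABLE account (
  address TEXT   NOT NULL,
  height  BIGINT NOT NULL,
  balance TEXT   NOT NULL,
  PRIMARY KEY (address, height)
);

-- 000001_create_account_table.down.sql
DROP TABLE account;
```

Running migrations forward applies the .up.sql files in order; rolling back applies the matching .down.sql files in reverse.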

[Utility] Module First Iteration - Followups & Cleanup

Objective

Address all of the discussion and implementation points left in the primary PR: #47

Origin Document

Issue #24: V1 Utility Module First Iteration

Goals

Deliverables

  • Check off all of the boxes below
  • Add documentation or refactor code where applicable

Testing Methodology

$ make test_utility_types && make test_utility_module

Non-goals

  • Add any new functionality to the V1 utility module

Creator: @Olshansk
Co-Owners: @andrewnguyen22


Short-term refactor

  • Use semantic variable naming for some of the constants. Relatedly, we set ZeroInt when unpausing, but we do checks using a magic value (e.g. “height == 0”)
  • Given the “ethos” of basic types, why not use a uint64 to represent timestamps as an epoch timestamp instead of google.protobuf.Timestamp time?

Mid-term refactor

  • Consider having an “Actor” interface with common functions (Pause, Unpause, Edit, Stake, Unstake, etc…)
  • Change all the tests to use testify
    • E.g. Change if err != nil { t.Fatal() } to require.NoError(t, err)
    • @Olshansk can write a script to do this
  • Some small store-related helper functions (e.g. InsertApplication) are only used in one place. For the purpose of verbosity, why not move the business logic inline? Alternatively, maybe we can just lowercase them?
  • Some functions (like CalculateUnstakingHeight) are used across both the validator and fisherman. Should we have a `utility_context_actor_utils.go`-type file (obviously with a different name)? This could potentially be a bigger refactor too.

Long-term refactor

  • Avoid having isTesting (isDebug / isDevelop) if/else branches throughout the code, but try to use injection of handlers instead. Example: see persistence/pre_persistence/genesis.go.

Discussion - Utility

  • Who is the signer? In different places we have the Address / PublicKey, the Output / OutputAddress, and an optional signer. What is the difference between these three?
  • When / how are evidence and votes gossiped?
  • Discuss if we should use make / new / create for factory functions, just so we are all consistent
  • Various protos / functions specify both an address and a public_key. The former can be derived from the latter, so specifying or passing in both is an entry point for potential errors / discrepancy. E.g. app.proto
    • @Olshansk believes the simpler interface and lower probability of error is better than @andrewnguyen22's argument related to compute cost
  • In service_node.go, we validate the relay chain count but not the actual values within them. Why not?
  • Renaming output in various places to something clearer and more self-explanatory?
    • Also, pick either output or output_address (e.g. used in UnstakingActor) and use the same one everywhere
  • Why is height not a uint64?
  • What’s a pool?
  • Why not use enums for Code errors?
  • Remove Begin from all functions that start with this?
  • Percentage computations - why not create a helper for multiplication, division & sign check?
  • Should we update the README or changelog to explain why amounts are strings and percentages are ints?
  • Constants in utility/types/session.go look more like errors, but mixed with business logic constants. How should we separate them?

Discussion - General

AIs on @Olshansk :

  • Go through the list of topics above with the team
  • Show an example of how gov.go can be refactored and made simpler using mappers but still keeping the verbosity in place
  • Write a script to refactor the tests to use testify
  • Add changelog to consensus module
  • While everything is still fresh, add a README on “how to navigate this module”
  • Consolidate errors into one file in consensus.go (similar to this)

[P2P] V1 P2P First Iteration - Followups

Objective

Address all of the discussion and implementation points left in the primary PR: #46

Origin Document

Issue #24: V1 P2P Module - First Iteration

Goals

Deliverables

  • Address all outstanding TODOs in the P2P module
  • Address all of the open items below

Testing Methodology

$ go mod vendor && go mod tidy
$ make protogen_clean && make protogen_local
$ make mockgen
$ make test_all

Non-goals

  • Add any new functionality to the P2P module

Outstanding tasks

Short-term refactor

  • Consider creating a helper for s.closed <- struct{}{} called SignalClose so it's more semantically explicit as one reads the codebase
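A sketch of what such a helper could look like. The closer type is an assumption, and the non-blocking send is a design choice of this sketch (the real code may rely on a blocking send):

```go
package main

// closer wraps the closed-signal channel behind a semantically named helper,
// so call sites read SignalClose() instead of a bare channel send.
type closer struct {
	closed chan struct{}
}

// SignalClose notifies the owner of the socket that it has been closed.
// The non-blocking send makes repeated signals safe for the caller.
func (c *closer) SignalClose() {
	select {
	case c.closed <- struct{}{}:
	default: // already signaled; avoid blocking the caller
	}
}
```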

Mid-term refactor

  • Sync with @Olshansk on how we can use mockgen for socket_mock.go (see utility in consensus PR as a reference)
  • Add additional non-happy path test cases

Large refactor

  • Design & consolidate logger with the rest of the repository
  • Replace WireCodec with protobufs
  • #66

General tasks

Discuss

  • The write and read routines will both exit for the same reasons, so having just one of them block is sufficient. IMO we should wait on both (see this as a reference). If we don't, and simply assume they both exit for the same reason, it looks like an opportunity for a bug if one breaks but the other doesn't.

Sampling Protocol / Fisherman is gameable

Problem

The Fisherman's Sampling Protocol (according to the spec) can be manipulated by ServiceNodes. According to the spec, there is a Consistent Sampling Delay between every benchmark/sample of all the service nodes in a session. An attacker could monitor its incoming requests, figure out the delay, and only give service to the Fisherman instead of to all applications.

The aim of the Sampling Protocol is to be discreet, so ServiceNodes do not know which requests are from the Fisherman or an application, and they are forced to give equal service to all requests. However, this is not the case.


How to attack the current system

  1. Wait till two or more of your nodes are in the same session.
  2. Monitor all incoming requests and send them to a shared server
  3. Identify requests that came in at the same time
  4. Identify another set of requests that came in at the same time
  5. Subtract and determine the consistent sampling delay
  6. Ignore all other requests until now + the Consistent Sampling Delay has elapsed

Solution

Change the Consistent Sampling Delay to a Random Sampling Delay, created from a private source of randomness.

Origin Document

https://docs.pokt.network/v1/utility#3.3-fisherman-protocol

Creator: DragonDmoney (Pierre)

V1 P2P Module - First Iteration

Objective

Create the first iteration of the P2P Module and integrate it with the other modules within the codebase.

Origin Document

Implement the first iteration of TCP server and client behavior with read, write, and write-with-ack behaviors, in addition to basic RainTree broadcast (with no redundancy or daisy-chain cleanup).

The module should implement:

  • Lifecycle and Telemetry methods:
    • isReady
    • finished
    • log
    • setLogger
  • Network Behavior Methods:
    • listen
    • send
    • broadcast
    • request
    • respond
    • ping
    • pong
  • Network IO Methods:
    • open
    • inbound
    • outbound
    • close
    • read
    • write
    • poll
    • answer

Deliverables:

  • An end-to-end go test script
  • A 27-node stack simulating a near-real-life environment for e2e testing
  • Code documentation

Creator/Owner: @derrandz
Co-Owners: @andrewnguyen22 @oten91

[Tooling] CHANGELOG and Version Bump PreCommit Hook

Description

The v1 codebase is required to document all important changes in a CHANGELOG.md at the root of the repository, and in module-specific CHANGELOG.md files if the changes impact the module in question. In addition, the version should be bumped accordingly.

This issue is a perfect starter task for community contributions.

Origin Document

This issue was created as a starter task for community contribution, as suggested in the following conversation.

Goals

  • Add pre-commit logic to validate that CHANGELOGs and the version were updated accordingly before commit
  • Allow this to be disabled locally
  • Configure CI to always run this validation before PRs are merged

Non-Goals

  • Adding CI/CD to the V1 Codebase
  • Automating the population of the CHANGELOG based on git-diff results

Maintainers

Creator: @derrandz

V1 Utility Module First Iteration

Objective

Create the first iteration of the Utility Module and integrate it with the other modules within the codebase

Origin Document

Minimally implement the first iteration of the account-based state machine protocol.
The module should create the fundamental actors:

  • Accounts
  • Validators
  • Fishermen
  • Applications
  • Service Nodes

And implement the basic transaction functionality:

  • Send-Tx

And minimally satisfy the following interface:

CheckTransaction(tx []byte) error
GetTransactionsForProposal(proposer []byte, maxTransactionBytes int, lastBlockByzantineValidators [][]byte) (transactions [][]byte, err error)
ApplyBlock(Height int64, proposer []byte, transactions [][]byte, lastBlockByzantineValidators [][]byte) (appHash []byte, err error)
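Expressed as a Go interface, the signatures above could look like the following sketch. The interface name UtilityModule and the doc comments are assumptions; only the signatures come from the issue:

```go
package main

// UtilityModule is a hypothetical name for the minimal interface above.
type UtilityModule interface {
	// CheckTransaction validates a transaction before admitting it to the mempool.
	CheckTransaction(tx []byte) error
	// GetTransactionsForProposal selects transactions for a new block proposal.
	GetTransactionsForProposal(proposer []byte, maxTransactionBytes int, lastBlockByzantineValidators [][]byte) (transactions [][]byte, err error)
	// ApplyBlock executes a block's transactions and returns the resulting app hash.
	ApplyBlock(height int64, proposer []byte, transactions [][]byte, lastBlockByzantineValidators [][]byte) (appHash []byte, err error)
}
```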

The module may optionally create 'shells/TODOs' for the following transactions per actor

  • Stake
  • Unstake
  • Edit Stake
  • Pause
  • Unpause

Creator/Owner: @andrewnguyen22
Co-Owner: @Olshansk
Deliverables:

  • Utility Module First Iteration
  • How to build guide
  • How to use guide
  • How to test guide

Contributing Guide

Objective

Include a contribution guide on the project

Origin Document

  • Community members need a contribution guide in order to be able to participate in this project
  • Pocket Network core developers should follow the contributing guide to be able to work with community contributors

Owner: @guettomusick
Co-Owners: @luyzdeleon

V1 Pre2P (i.e. PreP2P) Module

Objective

A stop-gap implementation of the P2P module to unblock the implementation of Consensus & Utility in a localnet environment.

Origin Document

Add a Pre2P (i.e. PreP2P) implementation to allow for sending and broadcasting messages between Pocket Network actors and validator nodes. This is not a mock per se, but uses Golang's native networking packages to broadcast and send messages to other actors in the Pocket Network. Though it is a full implementation of P2P communication, it is not optimized for large networks, nor does it follow the P2P spec in any way.

In short, it'll be a port of the code that was used in consensus-pre-prototype to unblock work in the interim and allow for a very simple substitute for P2P in a localnet.

Creator: @Olshansk
Co-Owners: @andrewnguyen22 @derrandz
Deliverables:

  • Pre2P module implementation
  • Documentation on how to use, build and test

[Dependency Injection] Adopt a widely supported dependency injection framework

Objective

Refactor and migrate the V1 codebase to use a widely adopted and supported dependency injection framework in Go.

Origin Document

The current codebase has dependency injection-like paradigms that are simple but built in-house. As the codebase matures, more features will be necessary and more problems will be faced that have likely already been solved by existing frameworks. We should look at DI frameworks such as uber-go/dig, uber-go/fx, or others and see if there's one that fits our goals.

Goals

Deliverables

  • 1. Research and lay out a list of the best dependency injection frameworks in Go, with pros/cons, tradeoffs, and risks.
  • 2. After (1) is complete, present the findings and get approval from the core team on one of them.

Once (1) and (2) are done, move on to the following:

  • 3. A PR that migrates the V1 codebase to use a dependency injection framework
  • 4. Update all existing/relevant documentation where applicable
  • 5. Update the deps document with the rationale behind the selected dependency injection framework

Testing Methodology

Existing tests will likely need to be updated/refactored, but new ones will not need to be added. All existing tests should pass after the migration.

Non-goals

  • Do not build an in-house dependency injection framework

Creator: @Olshansk
Co-Owners: Anyone from the community

[P2P] Retry logic / Max Retries Logic / Max Timeouts

Objective


Implement Connections IO Retry Logic that can handle network partitions and hazards.

Origin Document


In any network, nodes go on and off line, packets are lost and connections are dropped.
We need to make sure that our network can handle these edge cases when they arise during critical operations such as transferring blocks or broadcasting for consensus rounds.

This impacts the two main aspects of connections IO:

  • The read and write methods of a connection should come with:
    • Operation Deadlines / Timeouts
    • Retry logic with Max Retries when deadlines are exceeded

Thus, during a read/write operation, two important things should happen to achieve IO safety:

  • Set the connection timeout: a (relatively large) time period after which, if no activity occurs, the connection is declared dead.
  • Put in place a retry mechanism such that when the connection times out, or an IO error is encountered, the read/write is attempted again, multiple times, up to retry_max.
  • If an attempt to read/write succeeds before retry_max is reached, the retry mechanism stops.

Operation Deadlines / Timeouts: a time period after which, if no data is received/sent, the operation at hand is canceled.

Retry and Max Retries: the number of retries an operation can undergo upon a timeout, up to a max. If the max is reached and the operation still times out, the operation is canceled. On the other hand, if the operation succeeds before the max is reached, then it has succeeded.

Goals


Deliverables

  • A PR to add Safe Network IO logic to our connection IO handlers:
    • Introduce IO-Safety measures to the write operation of connections
    • Introduce IO-Safety measures to the read operation of connections
    • Allow the key parameters of safety to be configurable:
      • Timeouts: the global timeout for an operation; if reached, the operation is failed.
      • Retry Timeouts: the buffer time between one retry and the next.
      • Max Retries: the max number of retries an operation can undergo before it is considered failed.

Testing Methodology

  1. Use make test_pre2p to run existing tests
  2. Use make test_pre2p_io_safety to run the IO-Safety test suite under io_test.go

Blockers

This effort requires that #86 be successfully implemented.

Owners

Author/Owner: @derrandz
Co-Owners(s): @Olshansk @andrewnguyen22

[P2P]: Convert P2P Wire Codec to Protobuf

Objective


Use protobuf as the one and only codec for communication, and deprecate the wire codec in favor of protobuf.

The baseline functionality of the wire codec should be satisfied with protobuf. The wire codec provides functionality that enables three important behaviors:

  • The ability to peek into the received payload's size without allocating memory, allowing validation logic to reject payloads that are too big.
  • The ability to determine if the received payload is an error.
  • The ability to determine the order of received packets (useful when data is chunked).

Origin Document


P2P uses wire_codec to guarantee connection metadata exchange before any send/receive takes place. The wire codec encodes the following information into the wire payload:

  • Size
  • Is Error?
  • Is EOF?
  • Is Encoded?
  • The Payload

It's important to use the wire_codec so that connections know how many bytes to read, the meaning of those bytes, and how to decode them. This is very important when we are dealing with an open connection that continually sends a series of messages.

We are interested in deprecating wire_codec in favor of a more standardized approach, specifically protobuf.

Goals


  • Deprecate the wire codec in favor of protobuf
    • Use protobuf to encode and decode all messages
    • Use a protobuf wrapper message with metadata fields to replace the wire codec's metadata
    • Refactor the read/receive operation to read up to a delimiter and use this delimiter to mark the end of a message (check the following bufio method)

Pre-requisites


The way protobuf decoding typically works is that:

  • memory is allocated prior to decoding
  • bytes are decoded into the message type whose memory was allocated.

To properly replace the wire codec and provide the same functionality, protobuf must provide a way to 'peek' into the data to retrieve the size without allocating memory first.

Previous research regarding this is available in the following document. This research did not consider the possibility of using Any.Pack and Unpack as a way to avoid allocating memory, so feel free to look into that.

An important issue to help with the research. (Please act on the actionables on the issue once the research is done.)

[P2P] E2E Tests for p2p

Objective

Achieve a truly e2e-tested p2p

Origin Document

P2P is unit-tested and integration-tested with the aid of mocks. We are interested in having a real-life simulation mechanism at our fingertips, one that we can create, destroy, monitor, and control, to test how p2p nodes perform and interact in a truly distributed environment with real nodes.

The e2e test should spin up containerized peers, have them perform random broadcasts and sends with assertions for each operation taking place, and report the test results.

Deliverable

  • A make test_p2p_e2e command to run a docker-compose enabled local e2e stack
  • Documentation with instructions on how to:
    • create the e2e test
    • destroy the e2e test
    • [optional] monitor the e2e test
    • [optional] enable manual control of the broadcast progression
    • [optional] enable automatic control of the broadcast progression
    • [optional] enable replay of the broadcast

These flags will help make the e2e test smoothly runnable and controllable in CI environments.

Owners

author/owner: @derrandz

[P2P] Standardize cryptographic conversion math

Objective

To standardize the mathematical conversion from ed25519 to curve25519 using Filippo's code

Origin Document

We are interested in somewhat standardizing the cryptographic conversion logic that converts from ed25519 to Curve25519.

This was further validated by finding an existing proposal to add this functionality to Go's standard cryptographic library in this issue.

Context

A primer on the matter is available in this blog post, written by a security researcher on Go's team.

Takeaways:

  • The Edwards curve (ed25519) and the Montgomery curve (curve25519) are birationally equivalent

  • (x, y) usually refers to points on the Edwards curve, while (u, v) refers to points on the Montgomery curve in crypto nomenclature

  • Since they are linked, RFC 7748 conveniently provides the formulas to map (x, y) Ed25519 Edwards points to (u, v) Curve25519 Montgomery points and vice versa:

    (u, v) = ((1+y)/(1-y), sqrt(-486664)*u/x)
    (x, y) = (sqrt(-486664)*u/v, (u-1)/(u+1))
  • Ed25519 public keys are encoded as a Y coordinate and a "sign" bit in place of the full x coordinate.

    • This means that for each X25519 public key, there are two possible secret scalars (k and -k) and two equivalent Ed25519 public keys (with sign bit 0 and 1, each the negative of the other).
      • Meaning, each Ed25519 public key maps to exactly one X25519 public key

Deliverables

  • A drop-in replacement standardized library for the custom conversion code present in shared/crypto/mont.go, using the standardized mathematical cryptographic operations from Filippo's code

Owners

@derrandz

[Persistence] V1 Persistence Foundation

Objective

A basic SQL-based implementation of the Persistence module specification to enable the development of the rest of the Pocket Node.

Origin Document

Pocket protocol persistence specification

Goals / Deliverables

  • Implementation of the persistence module
  • Actor schema specification
  • Actor query specification
  • Dockerized infrastructure to run a LocalNet with the new persistence module implementation
  • Deletion / deprecation of the PrePersistence module
  • Loading of some sort of state from local disk when a LocalNet node starts up
  • Unit tests, along with the 1st iteration of an accompanying unit-test library
  • Documentation
    • Module specific README
    • Module specific CHANGELOG
    • Module specific code architecture (text and/or diagram)
    • Instructions on how to run/test/debug the module implementation
    • Global documentation / references updated

Non-goals

  • A comprehensive and complete "block store" mechanism
  • Deployment of the node to a non-local environment
  • Data integrity verification and guarantees
  • Implementation/adoption of a Merkle Tree
  • Implementation/adoption of a Key-Value Store

Testing Methodology

LocalNet and Unit Tests. See ./docs/development/README.md for more details.

Owners: @andrewnguyen22 @Olshansk

[P2P] Add mechanism to handle large messages

Objective

An excerpt from the protobuf's documentation (techniques section)

💡 Streaming Multiple Messages

If you want to write multiple messages to a single file or stream, it is up to you to keep track of where one message ends and the next begins. The Protocol Buffer wire format is not self-delimiting, so protocol buffer parsers cannot determine where a message ends on their own. The easiest way to solve this problem is to write the size of each message before you write the message itself. When you read the messages back in, you read the size, then read the bytes into a separate buffer, then parse from that buffer.

Also from their website, same section:

Protocol Buffers are not designed to handle large messages. As a general rule of thumb, if you are dealing in messages larger than a megabyte each, it may be time to consider an alternate strategy.

Origin Document

With a maximum block size of 4MB, we need to be able to deliver such large messages. P2P has to put validation mechanisms in place that allow it to accept or decline a received payload depending on its size, to avoid bandwidth-consumption attacks and network congestion overall.

In order for us to achieve such behavior, we need a way to peek into the payload's metadata, such as its size, before we engage in parsing it.

Protobuf does not provide such random-access capability; it requires that memory be allocated and the message be parsed before peeking into it.

Deliverables

  • A wire-byte level protocol to ensure the splitting, sequencing and streaming of large messages.

Owners

@derrandz

[Consensus] Twins: BFT Systems Made Robust

Objective

Implement the Twins Test, originally authored by the Facebook Novi team, on top of HotPOKT to help guarantee safety against Byzantine attacks, as well as to capture bugs during development and DevNet deployments, prior to TestNet.

Origin Document

The original paper can be found here.

The Twins Test generates Byzantine unit tests that simulate 3 types of behaviour:

  • Leader equivocation
  • Double voting
  • Losing internal state

This was implemented atop DiemBFT, but the same specification can be applied to any other BFT algorithm. Intuitively, since DiemBFT and HotPOKT are both HotStuff-based algorithms, it should translate well.

Additional Resources

  1. As a potential point of reference, another open-source implementation of the Twins Test atop a HotStuff implementation in Go can be found here.

  2. The implementation of the consensus module can be found here with the existing unit tests accessible here.

  3. The consensus specification is available here.

Goals / Deliverables

  • Implementation: Library / framework implementing the Twins Test around HotPOKT
  • Tests: A few unit tests for a basic LocalNet
  • Documentation: Documentation (text and/or diagram) of the Test's functionality, source code structure, and implementation.
  • Follow up work: Github issues and/or milestones + TODOs for future work needed for exhaustive Byzantine behaviour testing.

Non-goals

  • A fully functional Twins Test on a remote DevNet environment
  • Exhaustive testing of all possible Byzantine combinations

Creator: @Olshansk
Co-Owners: ???

V1 Pre-Persistence Module

Objective

Have an in-memory database with the necessary persistence operations that happen in the application.

Origin Document

Add a pre-persistence implementation to mock needed storage ops. This mock should both unblock module developers and be utilized to demonstrate the storage needs of each module. This is meant to inform the development of the v1 persistence module while enabling integration of core modules.

Creator: @andrewnguyen22
Co-Owners: @iajrz
Deliverables:

  • Pre-Persistence Prototype
  • How to build guide
  • How to use guide
  • How to test guide

V1 Prototype Snapshot

Objective

Immutable code snapshot/reference of the 1st V1 prototype.

Origin Document

One of the goals in mid-February was to build a foundation for the V1 module integration. Since it was just a prototype, the team is aware of the following:

  • As V1 matures, the code will likely change quite a bit.
  • To submit the current prototype to mainline, it'll be split into a handful (i.e. ~5) of smaller PRs.
  • It is just a prototype, and there are a lot of missing gaps in the implementation.
  • A lot of the existing code paths may be unused and there is no guarantee that all the tests are passing.

In order to have a reference we can revert to, the snapshot of the code as of Feb 18th, 2022 will be submitted into a separate directory that can eventually be removed completely.

Deliverables

  • All the code used for the V1 end-to-end block demo
  • Notes taken during the final integration meeting
  • Notes/instructions on how to test

Context

Note that the goal of this issue is not a complete prototype, but rather a record of the first functional end-to-end block.

For example, the following will be done in subsequent iterations:

  1. Removing unused code
  2. Replacing pre_p2p with p2p
  3. Fixing the consensus testing framework.
  4. Etc...

Creator: @olshansky
Co-Owners: @andrewnguyen22 @derrandz @luyzdeleon

[Persistence] Optimize gov params schema

Optimize params schema for better de-duplication of data while maintaining fast lookups

NOTE: The fundamental issue with the current 'naive' design is that it recreates the entire parameters list every time a single parameter is edited. However, it is also important to note that parameters are expected to be updated infrequently.

Per an offline suggestion from @iajrz, the following schema was proposed:

param values table
|------------+------------|
| field name | field type |
|------------+------------|
| height     | bigint     |
| param_id   | int        |
| value      | string     |
|------------+------------|


params table
|------------+------------|
| field name | field type |
|------------+------------|
| id         | int        |
| value_type | int        |
| param_name | string     |
|------------+------------|

My modification to it would be:


CREATE TYPE val_type AS ENUM ('STRING', 'BIGINT', 'SMALLINT');

CREATE TABLE params (
  name    TEXT      NOT NULL,
  height  BIGINT    NOT NULL,
  enabled BOOLEAN   NOT NULL,
  type    val_type  NOT NULL,
  value   TEXT      NOT NULL,
  PRIMARY KEY (name, height)
);

CREATE TABLE flags (
  name    TEXT      NOT NULL,
  height  BIGINT    NOT NULL,
  enabled BOOLEAN   NOT NULL,
  type    val_type  NOT NULL,
  value   TEXT      NOT NULL,
  PRIMARY KEY (name, height)
);

We can have very similar tables for flags and params. Note how the primary key is a composite of the flag/param name and the height, along with an enabled/disabled bool, so that (for example) within one snapshot of the state we can predefine the behaviour of a flag that only applies for some number of blocks.
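Under this schema, the effective value of a parameter at height H is the enabled row with the greatest height not exceeding H. A minimal in-memory sketch of that lookup logic (the type and function names below are illustrative, not from the codebase):

```go
package main

import "fmt"

// paramRow mirrors one row of the proposed params table.
type paramRow struct {
	Name    string
	Height  int64
	Enabled bool
	Value   string
}

// effectiveValue returns the value of the named param at height h:
// the enabled row with the greatest height <= h, if any exists.
func effectiveValue(rows []paramRow, name string, h int64) (string, bool) {
	var best *paramRow
	for i := range rows {
		r := &rows[i]
		if r.Name != name || r.Height > h || !r.Enabled {
			continue
		}
		if best == nil || r.Height > best.Height {
			best = r
		}
	}
	if best == nil {
		return "", false
	}
	return best.Value, true
}

func main() {
	rows := []paramRow{
		{"max_tx_bytes", 0, true, "4096"},
		{"max_tx_bytes", 100, true, "8192"},
	}
	v, ok := effectiveValue(rows, "max_tx_bytes", 50)
	fmt.Println(v, ok) // the height-0 row is still in effect at height 50
}
```

The same lookup maps directly onto a SQL query selecting the enabled row with MAX(height) at or below the target height.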

Version Number

Versions should be semver 2 compatible, which is aligned with golang's module versioning guidelines.

I think we should start with version "0.99.0", a number that points toward v1, stays below 1.x (denoting "in development"), and is far enough from v0's version number to differentiate it.

[Infra] Configuration Loader

At present we are loading configurations from files exclusively. At some point we'll start adding flags to override specific configuration values.

A configuration loader should be written so that flags are auto-generated from the Go configuration structure, in much the same way configuration files can be automatically loaded by matching the structure to the file contents.

This is a low-burning item at the moment, since the configuration is in flux, but it is part of the wiring milestone.

V1 Prototype: Application entrypoint and shared module

Objective

Add to the project the initial application entrypoint and the shared module with the common interfaces to all modules of the app.

Origin Document

Pocket is going to use an application-specific bus to coordinate the integration of the application's modules. This requires a set of common-use interfaces and structures, and the aim of this issue is to provide those.
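A minimal sketch of what such a bus and shared module interface could look like (all names here are illustrative; the real interfaces live in shared/modules and may differ):

```go
package main

import "fmt"

// Module is the common lifecycle interface every module would implement.
type Module interface {
	Start() error
	Stop() error
	SetBus(Bus)
}

// Bus is the application-specific bus that wires modules together and
// carries events between them.
type Bus interface {
	PublishEventToBus(event string)
	GetBusEvent() string
}

type bus struct{ events chan string }

func NewBus() Bus { return &bus{events: make(chan string, 64)} }

func (b *bus) PublishEventToBus(e string) { b.events <- e }
func (b *bus) GetBusEvent() string        { return <-b.events }

func main() {
	b := NewBus()
	b.PublishEventToBus("block_committed")
	fmt.Println(b.GetBusEvent())
}
```

Each module holds a reference to the bus, so inter-module communication goes through one integration point rather than direct module-to-module calls.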

Creator: @luyzdeleon
Co-Owners: @Olshansk @andrewnguyen22 @derrandz

Deliverables:

  • Modular code and interfaces to implement the application-specific bus
  • Infrastructure (e.g. Dockerfiles) to run a localnet
  • Tooling (CLI, commands, Makefile) to start localnet
  • A quick design document specifying the design

Non-goals:

  • Any "real" blockchain functionality

V1 Consensus Module - First Iteration

Objective

Create the first iteration of the Consensus Module and integrate it with the other modules within the codebase.

Origin Document

Minimally implement the first iteration of HotPOKT, the State Machine Replication engine for Pocket V1. The original specs for the consensus protocol can be found at github.com/pokt-network/pocket-network-protocol/tree/main/consensus.

From a functionality perspective, the consensus module should be able to do the following: drive the blockchain to block creation through inter-module communication.

Below is a non-exhaustive list of deliverables and non-goals for the first iteration of the consensus module.

Goals

Deliverables

  • A basic implementation of Basic Hotstuff
  • A CLI client to help drive/start/stop localnet
  • A functional leader election mechanism
  • Modules for VRF leader election
  • Functional PaceMaker module
  • Starting consensus from a genesis file
  • A basic foundation for a consensus unit testing framework
  • Basic documentation on how to use and test the module
  • Workaround/passthrough DKG and Threshold Signature Mechanisms

Non-goals

  • Functional State Sync
  • Exhaustive testing for liveness & safety
  • Functional leader election
  • Starting consensus from a pre-synched state
  • Updated version of the consensus spec
  • Complete documentation of the codebase
  • Functional DKG and Threshold Signatures

Owner: @Olshansk
Co-owner: @andrewnguyen22

[Cleanup] Style consistency with explicit return values

Example:
In GetPoolAmount, we have named return values but still return values explicitly.
In GetAllPools, we have named return values that we make use of.

There is variation and inconsistency, so I prefer that we stick with one style (whichever that is). My preference is to avoid named return values per the suggestion here: https://dave.cheney.net/practical-go/presentations/gophercon-israel.html#_avoid_named_return_values

Note: just leaving this one comment but applies to all the actors.
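To illustrate the two styles side by side (the signatures below are hypothetical, not the actual GetPoolAmount):

```go
package main

import "fmt"

// Mixed style flagged above: named return values that are nevertheless
// returned explicitly, so the names add noise without adding meaning.
func getPoolAmountMixed(pools map[string]uint64, name string) (amount uint64, err error) {
	amount, ok := pools[name]
	if !ok {
		return 0, fmt.Errorf("pool %q not found", name)
	}
	return amount, nil
}

// Preferred style per the linked guidance: unnamed returns with explicit
// values, which reads the same at every return site.
func getPoolAmount(pools map[string]uint64, name string) (uint64, error) {
	amount, ok := pools[name]
	if !ok {
		return 0, fmt.Errorf("pool %q not found", name)
	}
	return amount, nil
}

func main() {
	pools := map[string]uint64{"FeeCollector": 100}
	a, _ := getPoolAmount(pools, "FeeCollector")
	fmt.Println(a)
}
```

Both compile and behave identically; the issue is purely about picking one convention and applying it everywhere.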

Originally posted by @Olshansk in #42 (comment)

There is a backlogged protocol hour for this discussion exactly.

The actionable is to refactor all functions to follow a consistent style rule. This spans all modules.

[Telemetry] Add support for telemetry clients other than prometheus

Description

Provide support for multiple telemetry clients.

Origin Document

Could you create a starter task github issue that says "Add support for telemetry clients other than prometheus" with links or references to these code examples?

I think it's "easy enough" and eye-catching enough that a lot of infra folks would look at it when they're getting onboarded onto the codebase

Originally posted by @Olshansk in #95 (comment)

Changes

In shared/telemetry/module.go, specifically in the Create function, we would like to:

  1. Provide support for multiple telemetry clients by implementing a switch statement that creates the telemetry client specified in the config.

  2. Implement the new client(s)
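A hedged sketch of what that switch could look like (the provider names, interface, and types below are illustrative; the real interface lives in shared/telemetry/module.go and may differ):

```go
package main

import "fmt"

// TelemetryModule is a stand-in for the shared telemetry interface.
type TelemetryModule interface{ Name() string }

type prometheusClient struct{}

func (prometheusClient) Name() string { return "prometheus" }

type noopClient struct{}

func (noopClient) Name() string { return "noop" }

// Create picks a telemetry client based on the configured provider,
// mirroring the switch proposed for the Create function.
func Create(provider string) (TelemetryModule, error) {
	switch provider {
	case "prometheus":
		return prometheusClient{}, nil
	case "noop", "":
		return noopClient{}, nil
	default:
		return nil, fmt.Errorf("unsupported telemetry provider: %s", provider)
	}
}

func main() {
	m, _ := Create("prometheus")
	fmt.Println(m.Name())
}
```

Adding a new backend then only requires a new case plus the client implementation, without touching callers.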

Owners

N/A (Open for picking)

[P2P] Buffered writes with Chunks Ordering + Buffered Reads with Bytes Reordering

Objective


Implement bytes chunking/re-ordering for data streams.

Origin Document


When dealing with large payloads, the p2p module should be able to split the payload into a series of ordered chunks and stream them to the recipient. If the order of receipt is flipped, the recipient can re-order the chunks based on their metadata, thus retrieving the full payload once the stream has ended or an end-of-message delimiter is encountered in the stream.
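A minimal sketch of the chunking and re-ordering idea (the metadata here is just a sequence number; a real implementation would also handle stream delimiters, checksums, and errors):

```go
package main

import (
	"fmt"
	"sort"
)

// chunk carries its sequence number so the recipient can reorder.
type chunk struct {
	Seq  int
	Data []byte
}

// split breaks a payload into fixed-size ordered chunks before send.
func split(payload []byte, size int) []chunk {
	var out []chunk
	for i, seq := 0, 0; i < len(payload); i, seq = i+size, seq+1 {
		end := i + size
		if end > len(payload) {
			end = len(payload)
		}
		out = append(out, chunk{Seq: seq, Data: payload[i:end]})
	}
	return out
}

// reassemble sorts chunks by sequence number and concatenates them,
// recovering the payload even if chunks arrived out of order.
func reassemble(chunks []chunk) []byte {
	sort.Slice(chunks, func(i, j int) bool { return chunks[i].Seq < chunks[j].Seq })
	var out []byte
	for _, c := range chunks {
		out = append(out, c.Data...)
	}
	return out
}

func main() {
	cs := split([]byte("hello, pocket"), 4)
	cs[0], cs[1] = cs[1], cs[0] // simulate out-of-order receipt
	fmt.Println(string(reassemble(cs)))
}
```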

Deliverables


  • A small IO library to be used in the read and write loops of the pooled connections, to:
    • provide chunking functionality for large payloads before send
    • provide re-ordering and unchunking functionality for large payloads on receipt

Dependencies:


This effort requires that #86 be successfully implemented.

Owners


author/owner: @derrandz

[Persistence] Module First Implementation

Owner @andrewnguyen22
Co-Owners: @iajrz @Olshansk

End Date May 1, 2022

Objective

Create the first iteration of the Persistence module and integrate it with the other modules within the github.com/pokt-network/pocket codebase

Deliverables

Minimally implement the first iteration of the persistence layer.

  • Create clean interfaces and MVP schemas for the P2P dataset, Consensus dataset, Mempool dataset, and the State dataset
  • Use PostgreSQL as the first 'Database Engine'
  • Minimally implement the State Versions Deduplication Strategy
  • Minimally implement the Immutable State Schema (Patricia Merkle Tree) and AppHash functionality
  • Integrate the DB Engine into the development infrastructure stack
  • Integrate the persistence module with the other modules, deprecating pre-persistence

[P2P] Consolidate the socket module with the rest of the repository by implementing `Module` interface for the p2p socket.

Objective

To unify modules' style across the repository.

Origin Document

The socket component in the p2p module is an internal p2p component; however, following in the footsteps of Consensus, we will implement the Module interface for the socket, as was done with the leader election and sortition components in Consensus.

We are sacrificing access modification (keeping these components bounded to their modules and private) to achieve code uniformity.

Deliverables

  • Update the p2p/socket.go to implement the Module interface at shared/modules/module.go

Owners

@derrandz

[Automation] CI for pushed / merged branches

Objective

We've identified the need for a CI solution as the first step in building infrastructure and automation for pocket v1, since everything else will depend on it being in place.

Origin Document

TBD, some rough notes for the next steps included in the onboarding doc: https://www.notion.so/Infrastructure-Dmitry-Onboarding-d7af22f439a74104892bec35d92651b7

Goals / Deliverables

  • Automatically running unit / integration tests
  • Automated artifact creation
    • Docker image
      • Debug/Development image with delve, troubleshooting CLI tools, etc.
      • Production image that only includes binary and very necessary tools (if any)
    • Support multiple architectures (amd64, arm64)
    • Investigate if we can support ubuntu/alpine (musl) based images

Non-goals

  • List of goals that this issue won't address

Owner: @okdas
Co-Owners: TBD
Deliverables: TBD

[Telemetry] V1 Telemetry Foundation

Objective

Build a foundation for Pocket V1 Telemetry with a couple of use cases.

Origin Document

We are interested in gaining insight into the application correctness of pocket nodes, so we want to introduce telemetry to the v1 codebase to monitor the performance and behaviour of both RainTree and HotPOKT as a starting point, and develop more as we go forward.


Goals

Deliverables

  • (1) Telemetry infrastructure
  • (2) Telemetry module foundation
  • (3) V1 Metrics - RainTree: verify correctness of p2p implementation
  • (4) V1 Metrics - HotPOKT: verify correctness of consensus implementation
  • V1 Logs - Modules: togglable and filterable
  • V1 Logs - Levels: togglable and filterable

Testing Methodology

How to properly test this issue

Non-goals

  • List of goals that this issue won't address

Owner: @derrandz
Co-Owners: @okdas @Olshansk

[Pre2P] Refine the P2P module listening and IO behaviour

Objective

Refine the Pre2P module's listening and IO behaviour.

Origin Document

The Pocket Network V1 P2P Specification, supported by this explanation of RainTree is partially implemented in the pre2p/raintree branch atop the Pre2P module at the time of creating this issue.

While #80 is close to being done, #85 was opened to implement the redundancy layer.

Per offline discussions within the core team, we have decided to make the Pre2P module the primary P2P module, but to transfer all the IO learnings from P2P atop it. The existing implementation can be found here: milestone/v1-prototype...integration/module/p2p-simplified-over-9000.

However, there are some major missing components which need to be added per a message from @derrandz:

Listening and IO behavior in Pre2P has consequences:

1. We accept connections sequentially, and we handle them in the same way, with no async handling (not suitable for our use case)
2. We read off connections expecting the sender/writer to close immediately once done sending, i.e. the sender is doing fire-and-dump style communication (no connection pooling logic atm)

These observations mean:

we won't be able to achieve graceful shutdown with open connections, accept incoming connections concurrently, or implement ACK/wait-on-ACK behavior (and this makes e2e tests difficult to write)

You could say those were the main differences relative to P2P (albeit the approaches are different), as p2p does the following:

1. P2P Pools connections as default behavior and allows for graceful shutdowns.
2. P2P offers functionality to fire and forget with the Write method if you don't need the "connection pool". Send will pool or use a pooled connection. (e.g: the broadcast relies on Write)

The highlighted points are crucial behaviors that the p2p module has to have to allow for handshakes, sending messages and expecting ACKs (or syncing in the future) 

Goals

Deliverables

  • Non-tangible deliverables
    • Transfer learnings from the initial P2P implementation to Pre2P
    • Identify and describe the missing gaps in the Pre2P implementation when it comes to IO and connection pooling in this PR
  • List out the features that need to be transferred (e.g. P2P pooling, async message handling) and the difficulty of adding each to Pre2P
  • Implement the list of features above
  • Update the README.md (to be added in the pre2p/raintree soon) describing the added / modified code layout of Pre2P module with the new modifications

Testing Methodology

  1. Use make test_pre2p to run existing tests
  2. Update the test suite in raintree_utils_test.go and add raintree_redundancy_layer_test.go in the same package with new tests
  3. [Optional] Use LocalNet as described in docs/development/README.md
  4. [Optional] Using Telemetry (if ready) to validate the results from (2)

Non-goals

  • Scaling LocalNet to many nodes
  • Resolving tech debt or optimizing existing code
  • Replacing P2P with Pre2P

Creator: @Olshansk
Co-Owners: @derrandz

[Consensus] HotPOKT Validator Signature Aggregation

Objective

Replace the use of Ed25519 in the first iteration of HotPOKT which uses lists of signatures for block proposals and voting with BLS signatures. This will be closer to the theoretical design of a Hotstuff-based consensus mechanism.

Origin Document

The Pocket protocol consensus specification can be found here.

The current consensus interface can be found here with the shared interface available here.

During our initial research, we found out that k-of-n threshold signature mechanisms are still relatively early from both a design and implementation perspective, but feasible later in the R&D cycle. This aims to prepare us for the next milestone.

Goals / Deliverables

  • Replace the Ed25519 asymmetric key mechanism with BLS for validator signature aggregation
  • Design modules/interfaces/libraries that prepare the codebase for both DKG and k-of-n threshold signing mechanisms in the future
  • Replace round-robin leader election by leveraging the existing VRF and cryptographic sortition algorithm for leader election
  • Testing
    • Add unit tests where appropriate
    • Determine the viability of implementing the “Twins: BFT Systems Made Robust” test
      • Implement it if viable at this stage
  • Documentation
    • Update the module-specific CHANGELOG
    • Update the module-specific README
    • Update the global documentation & references
    • Add details on how to run/test/debug the new functionality
    • A state diagram documenting the interaction between different components / libraries / interfaces
    • If possible: a sequence diagram of the functionality added
  • Identify future work
    • Document small issues / TODOs in the code for future work
    • Document which parts of the spec are implemented and which are mocked to track “spec coverage”

Testing Methodology

LocalNet and Unit Tests. See ./docs/development/README.md for more details.

Non-goals

  • Implement a DKG mechanism
  • Implement a threshold signature mechanism
  • Modifications to the existing spec

Owners: @Olshansk

[P2P] RainTree Redundancy Layer Implementation

Objective

Implement and test RainTree's redundancy layer on top of the Pre2P module.

Origin Document

The Pocket Network V1 P2P Specification, supported by this explanation of RainTree is partially implemented in the pre2p/raintree branch atop the Pre2P module at the time of creating this issue, but without the redundancy and cleanup layers.

This implementation has already been started and is available in the pre2p/raintree_redundancy branch at the time of writing this thread in #80.

The initial implementation of the redundancy layer code is available in the following commit: 756b0a0.

Goals

Deliverables

  • Fully implement the redundancy layers of RainTree, as described in the docs referenced in the Origin Document section:
    • 1. ACK/Adjust/Resend
    • 2. Redundancy layer
    • 3. Daisy Chain clean-up
  • Update raintree_utils_test.go to add support for
    • Dead / faulty nodes
    • Partial visibility of the network
  • Update the README.md (to be added in the pre2p/raintree soon) describing the added / modified code layout of Pre2P module with the new modifications

Testing Methodology

  1. Use make test_pre2p to run existing tests
  2. Update the test suite in raintree_utils_test.go and add raintree_redundancy_layer_test.go in the same package with new tests
  3. [Optional] Use LocalNet as described in docs/development/README.md
  4. [Optional] Using Telemetry (if ready) to validate the results from (2)

Non-goals

  • Scaling LocalNet to many nodes
  • Resolving tech debt or optimizing existing code
  • Replacing P2P with Pre2P

Creator: @Olshansk
Co-Owners: @andrewnguyen22

[Consensus] First Iteration - Followups & Cleanup

Objective

Address all of the discussion and implementation points left in the primary PR: #48

Origin Document

Issue #28: V1 Consensus Module - First Iteration

Goals

Deliverables

  • Address all outstanding TODOs in the Consensus module
  • Address all of the open items below

Testing Methodology

Unit Tests

End-to-end unit testing

# Update/generate protobufs
$ make protogen_clean && make protogen_local

# Update/generate mocks
$ make mockgen

# Run pacemaker unit tests
$ make test_pacemaker

# Run hotstuff unit tests
$ make test_hotstuff
$ EXTRA_MSG_FAIL=true make test_hotstuff

# Run all unit tests
$ make test_all

Localnet testing

Delete any previous docker state

$ make docker_wipe

First Shell:

$ make compose_and_watch

Second Shell:

$ make client_start
$ make client_connect
> ResetToGenesis
> PrintNodeState # Check committed height is 0
> TriggerNextView
> PrintNodeState # Check committed height is 1
> TriggerNextView
> PrintNodeState # Check committed height is 2
> TogglePacemakerMode # Check that it’s automatic now
> TriggerNextView # Let it rip!

Non-goals

  • Add any new functionality to the Consensus module

Outstanding tasks

Short-term refactor

  • Remove temporary vars used for utility integration (e.g. maxTxBytes, emptyByzValidators, etc…)

Mid-term refactor

  • Sync with @Olshansk on how we can use mockgen for socket_mock.go (see utility in consensus PR as a reference)
  • Consolidate various files with persistence and utility, including but not limited to:
    • Validator
    • Block
    • State
    • Genesis

Large refactor

  • Discuss, design & implement the next step / iteration of the blockchain debug client (see cosmos and others as a references)
  • Add support for dynamic validator sets

Discuss

  • Provide a rationale as to why we’re using "github.com/stretchr/testify/require" and add it to the deps README
  • Sync with @andrewnguyen22 on the difference between blockHash and appHash

Documentation

  • Add a proper design doc

Next Steps on Research and Implementation

  • Development
    • Add a lot more unit tests
    • Implement proper leader election
    • Need to work with @derrandz to implement state sync
    • Move over all tests into shared directory
  • Research
    • Create a hotstuff research group
    • Research other projects using HotStuff (which variation, how it is tested, how it is benchmarked, how it is documented, etc...)
    • Investigate debug clients & tooling from other projects
    • Investigate benchmark tooling from other clients or build own
    • Find a solution for threshold signatures in large validator sets and implement it!
    • Understand how other projects deal with NodeIds
    • Discuss the issue of LockedQC and HighQC with @andrewnguyen22

V1 Prototype Integration

Objective

First prototype build of the V1 project.

Origin Document

The first fully end to end integrated V1 version including the following modules:

  • Application: Provide an entrypoint to the application and start the pocket process.
  • Application Specific Bus and Shared Module: The main integration point of the application, where the pocket process gets configured and initialized.
  • Persistence: Have an in-memory database with the necessary persistence operations that happen in the application.
  • Utility: A first prototype of the mempool and the implementation of the Send transaction be played against the state machine persisted using the Persistence module.
  • Consensus: Pacemaker, round-leader election and basic hotstuff sub-module.
  • P2P: Initialize the P2P module with a known set of peers, being able to broadcast messages to those peers and also being able to send direct messages to a particular peer.

Creator: @luyzdeleon
Co-Owners: @andrewnguyen22 @Olshansk @derrandz
Deliverables:

  • Initial prototype documentation per module
  • How to build guide
  • How to use guide
  • How to test guide

V1 Repurpose #31 to migrate the P2P Code to the TLD

Description


Following a slight misunderstanding on my side, I went ahead and put all the p2p module code under the prototype directory in the repository, imitating #27 (which was intended only for snapshotting purposes) to avoid conflicts.

After the review and a discussion with the team, we saw that it was more fitting and appropriate for the p2p module to impact the TLD directly.

Acceptance Criteria


  • A module conforming to the module architecture established for V1
  • Runnable passing tests for all the concerns of the module (unit and integration)
  • Integration with the development client

Deliverables


  • a P2P module under ${TLD}/p2p conforming to the v1 module architecture containing:
    • Network IO and Socket capabilities: Read/Write/Poll/Answer
    • Network behavior and communication capabilities: Send/Broadcast/Ping/Pong/Handshake
    • Network structure capabilities: Raintree
  • Unit tests for each networking concern/capability
    • network_test.go (communication)
    • socket_test.go (io and socket)
    • raintree_test.go (structure)
  • A development client version spinning up 5 pocket nodes with this p2p module.

Owners: @derrandz

V1 Prototype: Persistence Module

Objective

V1 Prototype persistence module

Origin Document

The V1 persistence prototype should allow for the following functionality:

  • Creation of a "state machine" specific context with persistence operations
  • A "commit-like" DB which can be rolled back in case an operation fails
  • Interfaces to the different persistence operations required
  • Genesis state initialization

Creator: @luyzdeleon
Co-Owners: @andrewnguyen22 @derrandz @Olshansk
Deliverables:

  • Code
  • Documentation
  • Tests

V1 PrePersistence Module First Iteration - Followups

Objective

Address all of the discussion and implementation points left in the primary PR: #42

Origin Document

Issue #23: V1 Pre-Persistence Module

Goals

Deliverables

  • Address all outstanding TODOs in the PrePersistence module
  • Address all of the open items below

Testing Methodology

$ go mod vendor && go mod tidy
$ make protogen_clean && make protogen_local
$ make mockgen
$ make test_all

Non-goals

  • Add any new functionality to the pre-persistence module

Outstanding tasks

Mid-term refactor

  • Consolidate genesis.go, test_state.go and gov.go across pre-persistence, consensus and shared modules
  • @Olshansk Refactor (using an automated script) tests to use the testify library. E.g. require.Equal(t, expectedBalance, actualBalance, "message")
  • @Olshansk to write a script and present how gov.go can be refactored and modularized

Discussion / Documentation

  • Explain / document how save points work (today and in the future)
  • Explain / document how GetAllAccounts works
  • Discuss how this code block in app.go works
    if height == m.Height {
        db := m.Store()
        it = db.NewIterator(&util.Range{
            Start: AppPrefixKey,
            Limit: PrefixEndBytes(AppPrefixKey),
        })
    } else {
        key := HeightKey(height, AppPrefixKey)
        it = m.Parent.GetCommitDB().NewIterator(&util.Range{
            Start: key,
            Limit: PrefixEndBytes(key),
        })
    }

Creator: @Olshansk
Co-Owners: @andrewnguyen22 @iajrz

Make it Run

The app needs an entry point. The two most prominent conventions are (1) a series of subdirectories under cmd/, each named after the executable it generates and containing a main.go file, and (2) a single entry point at the root of the repository.

Given the structure we have, we'll opt for a cmd/pocket/main.go file as the entry point.
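A minimal sketch of what cmd/pocket/main.go could look like (the flag name, default path, and startup message are illustrative; the real entrypoint would construct and start the node):

```go
package main

import (
	"flag"
	"fmt"
)

// run is where the real entrypoint would construct and start the node;
// here it only reports what it would do.
func run(configPath string) string {
	return fmt.Sprintf("starting pocket node with config %s", configPath)
}

func main() {
	configPath := flag.String("config", "config.json", "path to the node config file")
	flag.Parse()
	fmt.Println(run(*configPath))
}
```

With this layout, `go build ./cmd/pocket` produces a binary named after the subdirectory, which is the main benefit of the cmd/ convention.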

[Utility] Local Proof of Stake

Objective

Build on and improve the first implementation of the utility module to have a functional local proof of stake network, including staking/unstaking/slashing/transferring assets.

This will set the foundation for relays in a future milestone, but is out of scope of this work.

Origin Document

Pocket protocol utility specification defines the utility specification.

The current utility interface can be found here with the shared interface available here.

Goals / Deliverables

  • Code Health
    • Extract common actor functionality into general interfaces
    • Reduce the code complexity and code footprint so it is more approachable and maintainable
    • Identify and document / cleanup parts of the code that are unclear to new readers
  • CLI
    • Update the existing CLI (or create a new one) to trigger utility-related commands (e.g. stake, unstake, send, pause, etc…)
  • Interface
    • Define interfaces or libraries for top-level functionality components (e.g. Update UtilityContext, GetHeight)
    • Update Persistence where necessary to have end-to-end functionality
  • Testing
    • Add unit tests where appropriate
    • Design & scope a DSL for protocol actors to perform certain actions (stake/unstake/send/pause/etc...)
      • Write a specific list of end-to-end scenarios using the actors
      • Create scenarios using the DSL where the actions are fuzzed
  • Documentation
    • Update the module specific CHANGELOG
    • Update the module specific README
    • Update the global documentation & references
    • Add details on how to run/test/debug the new functionality
    • [M2?] A state diagram documenting the interaction between different components / libraries / interfaces
    • [M2?] If possible: a sequence diagram of the functionality added
  • Identify future work
    • Document small issues / TODOs in the code for future work
    • [M2?] Document which parts of the spec are implemented and which are mocked to track “spec coverage”

Non-goals

  • Complete implementation of the entire utility specification (e.g. geozones, relay volume validation, test score implementation, etc…)
  • Design and/or build a complete RPC spec
  • Modifications to the existing spec
  • Full Integration testing
  • Any functionality related to performing a relay

Testing Methodology

LocalNet and Unit Tests. See ./docs/development/README.md for more details.

Owners: @andrewnguyen22 @Olshansk

[Persistence] Consolidate common behaviour between `Pool` and `Account` into a shared interface

Objective

Remove redundant code used by the Pool and Account actors in the persistence module introduced in #73.

This is a good starter task opportunity to get acquainted with the codebase in the persistence module.

Origin Document

From the utility specification, a Pool is described as:

	A ModulePool is a particular type that though similar in structure to an Account, the
	functionality of each is quite specialized to its use case. These pools are maintained by
	the protocol and are completely autonomous, owned by no actor on the network. Unlike Accounts,
	tokens are able to be directly minted to and burned from ModulePools. Examples of ModuleAccounts
	include StakingPools and the FeeCollector

Similar to how the functionality of Fisherman, ServiceNode, Validator and Application was consolidated via persistence/schema/protocol_actor.go and persistence/schema/base_actor.go, the functionality of Pool and Account can also be shared to remove redundant code.
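A sketch of the kind of consolidation intended (the interface and field names are illustrative, not the actual persistence schema types): both types satisfy one shared interface, so code that only needs address/balance semantics no longer needs to be duplicated.

```go
package main

import "fmt"

// AccountLike captures the behaviour shared by Pool and Account.
type AccountLike interface {
	GetAddress() string
	GetBalance() uint64
}

type Account struct {
	Address string
	Balance uint64
}

func (a Account) GetAddress() string { return a.Address }
func (a Account) GetBalance() uint64 { return a.Balance }

type Pool struct {
	Name    string
	Balance uint64
}

func (p Pool) GetAddress() string { return p.Name } // pools are keyed by name
func (p Pool) GetBalance() uint64 { return p.Balance }

// totalSupply works over either type via the shared interface.
func totalSupply(accts []AccountLike) uint64 {
	var sum uint64
	for _, a := range accts {
		sum += a.GetBalance()
	}
	return sum
}

func main() {
	fmt.Println(totalSupply([]AccountLike{
		Account{Address: "a1", Balance: 100},
		Pool{Name: "FeeCollector", Balance: 50},
	}))
}
```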

Goals

Deliverables

  • Extract common functionality of Pool and Account into a common interface
  • At a minimum, the following files will need to be affected: persistence/schema/account.go, persistence/account.go, persistence/test/account_test.go

Testing Methodology

  • $ make test_persistence
  • $ make test_all
  • Run a LocalNet following the instructions in docs/development/README.md

Non-goals

  • Adding new functionality to the persistence module

Creator: @Olshansk
Owner: @DragonDmoney
Co-Owners: @andrewnguyen22

[P2P] Pre2P RainTree

Objective

Implement the RainTree algorithm described here on top of the Pre2P module.

Origin Document

This issue was created after the work in #80 was mostly complete, but is recorded for completeness.

Goals

Deliverables

  • Implementation
  • Tests
  • Documentation

[Optional] Testing Methodology

  • Add new unit tests
  • Create a new test suite for running RainTree tests (similar to HotPOKT)
  • Verify that everything works on the 4 node LocalNet framework described here

Non-goals

  • Migrate Pre2P to P2P (to be done in a different issue)
  • Implement all the components of a fully functional P2P layer (peer discovery, redundancy, complete configs, async IO, etc...)

Creator: @Olshansk
Co-Owners: @andrewnguyen22 @derrandz
