peernetofficial / core
Core library. Use this to create a new Peernet application.
License: MIT License
Provide an interface for 3rd party packet level debugging.
It probably makes sense to fork github.com/libp2p/go-reuseport into a sub-directory. This ensures that dependencies don't disappear in the long term and that this project remains independent. It also prevents third parties from adding malicious code.
When only listening on link-local IPs, do not set the IPvX_LISTEN flag.
Example from a peer running in VirtualBox. In this case it listens on IPv4, and the IPv6 addresses it listens on are link-local, which means that IPv6_LISTEN should not be set (currently it is).
Listen Address Multicast IP out External Address
10.0.2.15:112 255.255.255.255, 10.0.2.255
[fe80::4dfa:1bb0:1184:d81d]:112 ff05::112
[fe80::4cc5:f7de:a9f2:d1fa]:112 ff05::112
There is a critical bug in the UDT receiving code which breaks delivery to the virtual connection.
It appears that (upstream) packets received out of order fail to be queued properly.
IPv4 addresses in the 16-byte structure are mistakenly considered IPv6 addresses.
Minor change: instead of always calling publicKey.SerializeCompressed(), store peerID permanently in memory.
Creating this issue to keep track of future potential MTU issues.
Linux has this artificial default limitation https://manpages.ubuntu.com/manpages/bionic/man7/udp.7.html:
By default, Linux UDP does path MTU (Maximum Transmission Unit) discovery. This means the
kernel will keep track of the MTU to a specific target IP address and return EMSGSIZE when
a UDP packet write exceeds it. When this happens, the application should decrease the
packet size. Path MTU discovery can be also turned off using the IP_MTU_DISCOVER socket
option or the /proc/sys/net/ipv4/ip_no_pmtu_disc file; see ip(7) for details. When turned
off, UDP will fragment outgoing UDP packets that exceed the interface MTU. However,
disabling it is not recommended for performance and reliability reasons.
This may or may not be an issue on Android as well: https://groups.google.com/g/android-ndk/c/UXvR_yCaH0Q
We'll have to do real life testing to see if this affects Peernet. In theory it could mean that packets above 1472 bytes payload could be dropped. While regular peer messages are likely to be below that limit, file transfer (depending on the UDT implementation) could exceed it.
The log indicates a race condition between shutdown and incoming data packets. Almost at the same time it receives the incoming data and the shutdown message. This is a nasty race condition that needs to be fixed.
Log from receiver:
UDT incoming 64 bytes
(listener) sending handshake(request) (id=1879022832)
UDT send outgoing 64 bytes
UDT incoming 64 bytes
(id=918363991) sending handshake(-1) (id=1879022832)
UDT send outgoing 64 bytes
UDT incoming 32 bytes
udtSocketRecv.ingestData incoming sequence {486181003} (expected {486181003})
(id=918363991) sending ack (id=1879022832)
(id=918363991) sending ack (id=1879022832)
UDT incoming 254 bytes
socket shutdown (type=4)
UDT send outgoing 20 bytes
UDT incoming 16 bytes
* Indicated file size = 238. Target transfer size = 238
* Read 0 bytes (target 238), error: Connection closed
Error downloading file: Connection closed
UDT send outgoing 40 bytes
virtualPacketConn.writeForward termination signal
Log from sender:
header success! now read at offset 0 limit 238 (filesize 238)
(id=1879022832) sending data (id=918363991)
data transfer status 0 bytes 238: <nil>
UDT send outgoing 32 bytes
(id=1879022832) sending data (id=918363991)
(id=1879022832) sending shutdown (id=918363991)
UDT send outgoing 254 bytes
UDT send outgoing 16 bytes
UDT incoming 20 bytes
UDT incoming 40 bytes
Create a separate peerList for root peers. They will guarantee connectivity even in the case of fake-peer attacks.
Share the blockchain with other peers. Indicate blockchain height/version number in the outgoing messages and create the new blockchain get messages.
Create a temporary blacklist for nodes that do not respond in the given timeframe.
Include both IPv4/IPv6 addresses of peers in Response message (peer records) for efficiency. To be first changed in the protocol/whitepaper.
Deduplicate the last X packets per peer. This makes sure that broadcasts are deduplicated on arrival.
There are a bunch of almost endless loops that need fixing; they max out the CPU. One example is udtSocketSend.goSendEvent in the case of sendStateIdle. There might be others; to be verified!
To be determined which key-value database will be used to store blockchain data.
Requirements:
Candidates and their pros/cons:
We need to keep better track of why a UDT transfer might be closed. The virtualPacketConn.Close function only provides the limited information that it was closed by the downstream protocol, which is not good enough.
Global error codes for transfer termination need to be established.
Add IPv4-only/IPv6-only bit flags for requesting only IPv4 or only IPv6 peers.
Improve sharing IPv4/IPv6 crossover for improved connectivity.
Adding a nonce to be verified in responses could make sense to protect against cache poisoning and similar attacks. Details to be worked out.
Deleting files on the blockchain may currently result in orphaned RecordTypeTagData records. This should be fixed.
The traverse message is right now only a working placeholder. Instead of sending the actual message embedded, it currently uses createVirtualAnnouncement to create a virtual Announcement message.
The same applies to receiving: it does not forward the message to the regular processing code (a message flag isRelayed should be introduced).
Is there any reason to keep this external dependency github.com/btcsuite/btcd/btcec? We could fork it into a sub-folder.
Add a /file/read API to immediately read a file from a remote peer.
Support Content-Range which is important for video streaming:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Range
Store the user's blockchain in a self.blockchain file, which is a key-value database. The initial key-values shall be:
height = count of blocks, 0 if none
version = version number of the blockchain
headersig = signature of height+version combined to prevent tampering
Each block shall be addressed by its block number, i.e. the key for block 0 is '0'.
Add ongoing peer exchange via regular announcement messages
The termination race condition from #45 originates from the fact that the shutdown packet does not (?) contain a packet sequence. If it did, the receiver could be guaranteed to wait for all data packets with a lower sequence number to arrive before shutting down.
If the lingering code actually worked, it would mitigate this.
Place this core package under the MIT license.
For some reason after a while it loses connectivity?
Listen Address Multicast IP out External Address
172.30.62.173:112 255.255.255.255, 172.30.63.255
[fe80::e092:5fa6:9e8f:52c9]:112 ff05::112
[fe80::54a8:c13b:e83a:2ab7]:112 ff05::112
Peer ID Sent Received IP Flags RTT
02b270f6fdac85e76df0d2f7374f33a620ede82542ff7cd62d6934b4c069921322 303 258 N/A R N/A
0382728d11096efb211de8a9b7bb90f8e248be5572d4f5449f58a2af881b22dbe9 297 252 N/A R N/A
03174f370cb6d6f361d0511565b6b456a82c3d16b53d6b63b227d76a4f0f2abd2c 302 256 N/A R N/A
0219a6e643a6825e98378922e7a1114000c47e07acb9b9446acd5fd0efa46d1f2a 1058 1300 N/A N/A
0286aa74ac8203fcb4e673d6bd40b5244453f5f7c11a82ced259790de83d02e2f7 299 255 N/A N/A
02fd417b78326cde1a619070cb6780e2949d95999af8173fb661e7ac22167379d8 293 249 N/A N/A
-------- Node e902e722cf5699dbfb5be2b68e5486b6162664a6364e0cf8b3882c7bdba9e602 Outgoing Ping --------
Receiver Peer ID: 02b270f6fdac85e76df0d2f7374f33a620ede82542ff7cd62d6934b4c069921322
--------
-------- Node e902e722cf5699dbfb5be2b68e5486b6162664a6364e0cf8b3882c7bdba9e602 Outgoing Ping --------
Receiver Peer ID: 02b270f6fdac85e76df0d2f7374f33a620ede82542ff7cd62d6934b4c069921322
--------
-------- Node e902e722cf5699dbfb5be2b68e5486b6162664a6364e0cf8b3882c7bdba9e602 Outgoing Announcement --------
Receiver Peer ID: 02b270f6fdac85e76df0d2f7374f33a620ede82542ff7cd62d6934b4c069921322
--------
-------- Node e902e722cf5699dbfb5be2b68e5486b6162664a6364e0cf8b3882c7bdba9e602 Outgoing Ping --------
Receiver Peer ID: 02b270f6fdac85e76df0d2f7374f33a620ede82542ff7cd62d6934b4c069921322
--------
Wireshark indicates incoming and outgoing messages:
Did the listener perhaps encounter an error and exit the listening loop? To be verified.
Edit: The listener is attached to port 112 on all IPs according to netstat. Must be something in between.
Add UPnP support.
Introduce connection level statistics. This will help for debugging.
Introduce records with the type RecordTypeDelete. Honor delete records in UserBlockchainListFiles.
There's a bad loop somewhere in the receive path that busy-waits for incoming data.
Add fields:
Keeping track of important links to UDT implementations.
Specs via https://udt.sourceforge.io/doc.html:
Protocol of choice: UDT
UDT seems to fit the use case (file transfer) squarely. QUIC might be overkill and uTP might have undesired side effects (quote from the spec: "This effectively makes uTP yield to any TCP traffic").
The trick will be to fork the implementation from odysseus654 and, instead of using new UDP connections, use the existing Peernet protocol as the transport layer.
Before starting with the actual implementation, a new file transfer message must be defined in Peernet with a small header. The header must include:
Evict peers from the peerList:
Feature request: Provide a virtual folder in Windows Explorer.
The UDT library is too trusting of arbitrary input by remote peers, easily causing almost endless loops and exhaustion of memory, essentially causing a denial of service (DoS).
One example is receiving an out-of-order packet with a sequence number far ahead of (or behind) the expected one, causing a huge loss list to be created.
PIPs will allow anyone to suggest improvements to the protocol or the reference implementation of such. PIPs may extend functionality, provide clarity or give general guidelines for Peernet client developers.
We can learn from the Bitcoin community and their BIP structure https://github.com/bitcoin/bips who have an established track record.
To prevent flooding, keep track of contacted peers in cmdResponseBootstrapFindSelf. It probably makes sense to blacklist any contacted peer for 10 minutes or an hour; especially during bootstrap, the same peers get returned all the time.
Note that an incoming packet from a peer should automatically remove any associated blacklist entries. This is important in case the peer is soon removed (due to a full peer list) but then needed again shortly afterwards (within minutes) for querying data.
Add a format field to the blockchain header in case the format changes and needs to be upgraded.
Todo: Automatically add firewall exclusions if listen IPs are not defined in the config.
Also document necessary firewall settings.
Implement UDP hole punching via new message.
Create a new endpoint to delete the account. This should do the following:
Right now there are a couple of low-level send functions:
sendAllNetworks (used by contactArbitraryPeer)
peer.sendConnection
peer.send
By using either a virtual peer structure or adding a flag to PeerInfo, these send functions could be merged.
Add a feature to fragment files into chunks similar to how torrents work.
Currently only the hash of the entire file is stored on the blockchain.
The file record type on the blockchain supports metadata tags, we can use that to create a new tag "file fragments" which will be a list of hashes for each fragment.
We can look into the implementation of torrent clients and directly into .torrent files what common fragmentation strategies are.
The code to store the fragment hashes (or just the merkle tree?!) will go here: https://github.com/PeernetOfficial/core/blob/master/blockchain/Block%20Record%20File.go
Merkle Tree
Torrent files support a merkle tree: https://en.wikipedia.org/wiki/Torrent_file#Merkle_trees
Do we want that? In that case only the root hash needs to be stored.
We should probably do some calculations to see how large files (potentially up to TBs) would impact the file record size. We have a soft limit of 64 KB.
Torrents and Piece Length
The Wikipedia article mentions a common piece length of 256 KB:
piece length—number of bytes per piece. This is commonly 2^18 B = 256 KiB = 262,144 B.
For a 2 TB file that would mean 8,388,608 pieces. Blake3 digest size is 32 bytes, resulting in at least 256 MB of hash data.
A 2 GB file would require 256 KB of hash data, which is still substantial considering the target block size (smaller is better; it should fit in a single UDP packet).
Interesting related discussion: https://www.reddit.com/r/torrents/comments/dzxfz1/2019_whats_ideal_piece_size/
https://wiki.vuze.com/w/Torrent_Piece_Size mentions "All in all, a torrent should have around 1000-1500 pieces" providing this table:
When deleting files, the distance to RecordTypeTagData records may change. This means metadata may then be referenced incorrectly.
The solution is to refactor the entire block if a file is deleted (completely decode the block, at least all file and tag records, then re-encode it).
Detect network change:
There is no idiomatic way to detect a network change. Enumerating via net.Interfaces() and monitoring the results will have to do the trick.
There is some code out there like https://github.com/play175/wifiNotifier (and fork https://github.com/stenya/wifiNotifier) but that has some restrictions (only Windows and Darwin), needs some serious testing and would only be an addon.
Keeping track of limitations of the current implementation. These limitations may be addressed later on.
blockchainIterateDeleteRecord isn't 100% efficient. It does not delete orphaned records or double records. However, this could (and probably should) be addressed in a different place, when writing a record.
Blockchain vs Merkle Tree?
Current concept: Any operation other than append (such as replace or delete) causes a blockchain version number increase. This means the entire blockchain needs to be recalculated and redistributed. This could be, if there are frequent changes, rather expensive.
Potential future concept: The version numbers inside blocks do not need to change; they retain the version number at creation. Is the previous block hash field of any value? (open question) Instead of relying on (entire) blockchain versioning, the Merkle tree (root hash) would be versioned. This way individual blocks can be replaced or deleted without affecting other blocks. This of course brings other complexity: the Merkle tree needs to be maintained and (at least the root hash) distributed on every update.
This needs more research and thought. Introducing or combining a Merkle Tree for blocks could help with some things, but brings in additional complexity.
Alternative: Instead of recalculating the entire blockchain on each replace/delete, it could just recalculate the part starting at the block that changed. This means allowing blocks in a single blockchain with multiple (increasing) version numbers. Fetching (updating) of someone else's blockchains could be done top down until the previous block hash matches.
Provide a callback (or channel?) to intercept log messages.
No RTT is available if a connection is established from one side and pings keep coming only from that side.
The ports I/E (internal/external) will be known (via the incoming Announcement message) but the RTT will not.
This does not mean there is a problem; only that no Announcement or Ping was sent out. The only implication is in Kademlia eviction, which considers the RTT field. If the RTT is unavailable for an existing peer but available for a new one, the existing peer will be kicked out, which can be considered a good choice since no outgoing request was ever needed.
In case the RTT should be measured, a simple outgoing ping will do the trick.