
tentacle's Introduction

Tentacle


Overview

This is a minimal implementation of a multiplexed p2p network based on yamux that supports mounting custom protocols.

Architecture

  1. Data stream transmission
+----+      +----------------+      +-----------+      +-------------+      +----------+      +------+
|user| <--> | custom streams | <--> |Yamux frame| <--> |Secure stream| <--> |TCP stream| <--> |remote|
+----+      +----------------+      +-----------+      +-------------+      +----------+      +------+
  2. Code implementation

All data is passed through futures channels; yamux splits the actual tcp/websocket stream into multiple substreams, and the service layer wraps each yamux substream into a protocol stream.
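As a rough illustration of what "mounting a custom protocol" looks like, here is a minimal sketch loosely modeled on the simple example. The exact names (MetaBuilder, ServiceBuilder, ServiceProtocol, ProtocolHandle) and whether the handler methods are sync or async vary between versions, so treat this as an assumption-laden outline rather than the exact API:

```rust
use tentacle::{
    builder::{MetaBuilder, ServiceBuilder},
    bytes::Bytes,
    context::{ProtocolContext, ProtocolContextMutRef},
    secio::SecioKeyPair,
    service::ProtocolHandle,
    traits::{ServiceHandle, ServiceProtocol},
};

// Handler for one custom protocol: every yamux substream negotiated for
// this protocol id is wrapped into a protocol stream and dispatched here.
struct PHandle;

impl ServiceProtocol for PHandle {
    fn init(&mut self, _context: &mut ProtocolContext) {}

    fn connected(&mut self, _context: ProtocolContextMutRef, version: &str) {
        println!("custom protocol opened, negotiated version {}", version);
    }

    fn received(&mut self, _context: ProtocolContextMutRef, data: Bytes) {
        println!("received {} bytes", data.len());
    }
}

// Service-level handle: session open/close and errors are reported here.
struct SHandle;
impl ServiceHandle for SHandle {}

fn main() {
    // Describe the custom protocol (id, handler factory)...
    let meta = MetaBuilder::new()
        .id(1.into())
        .service_handle(|| ProtocolHandle::Callback(Box::new(PHandle)))
        .build();

    // ...then mount it on the service, which owns the secio and yamux layers.
    let _service = ServiceBuilder::default()
        .insert_protocol(meta)
        .key_pair(SecioKeyPair::secp256k1_generated())
        .build(SHandle);
}
```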

Detailed introduction: 中文/English

Note: It is not compatible with libp2p.

Status

The API of this project is basically usable. However, we still need more tests. PRs are welcome.

The code in the protocols/ directory is no longer maintained and is kept only for reference.

Usage

From cargo

[dependencies]
tentacle = { version = "0.4.0" }

Example

  1. Clone
$ git clone https://github.com/nervosnetwork/tentacle.git
  2. On one terminal:

Listen on 127.0.0.1:1337

$ RUST_LOG=simple=info,tentacle=debug cargo run --example simple --features ws -- server
  3. On another terminal:
$ RUST_LOG=simple=info,tentacle=debug cargo run --example simple
  4. Now you can see some data interaction information on the terminal.

You can see more detailed examples in these three repos:

Run in browser and test

  1. Set up a ws server:
$ cd tentacle && RUST_LOG=info cargo run --example simple --features ws -- server
  2. Set up a browser client:
$ cd simple_wasm/www && wasm-pack build
$ npm install && npm run start

All wasm code is generated from the book.

  3. Use a browser to visit http://localhost:8080/

  4. Now you can see the connection on the server workbench or in the browser's console.

Other Languages

Implementations in other languages

Why?

When I used rust-libp2p, I ran into some hard-to-diagnose problems, and it was difficult to tell whether the fault was in my code or in the library itself, so it seemed better to implement one myself.

tentacle's People

Contributors

aimeedeer, chanhsu001, dependabot-support, doitian, driftluo, jjyr, keroro520, kingwel-xie, liya2017, pencil-yao, quake, rtzoeller, thewawar, thomasdezeeuw, umiro, yangby-cryptape, yejiayu, zonyitoo


tentacle's Issues

Need a clearer error definition

The handle_error in trait ServiceHandle uses io::Error as its error type.
It would be better to have a clearer error definition to distinguish what has happened in the p2p framework.
For example, we need a special kind of error type to signal dialing a node that is already connected.
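For illustration only, a clearer error type might look roughly like the sketch below; the variant names are invented here and are not part of the current API:

```rust
use std::io;

// Hypothetical stand-in for tentacle's real session id type.
type SessionId = usize;

// Illustrative error enum: distinguishes framework-level conditions
// instead of collapsing everything into io::Error.
#[derive(Debug)]
enum P2pError {
    /// Dialed a node that is already connected.
    RepeatedConnection(SessionId),
    /// Handshake or protocol-select failure.
    HandshakeFailed(String),
    /// Plain transport-level I/O error.
    Io(io::Error),
}

fn main() {
    let err = P2pError::RepeatedConnection(1);
    println!("example error: {:?}", err);
}
```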

Release p2p 0.1

  • A name for crate.io
  • English introduction #50
  • English module doc
  • Are there any APIs that need to be modified, or places that need large-scale refactoring? @TheWaWaR
  • Support /dns4/example/tcp/1337 to dial and listen #43

An easy-to-use client mode

Hi there,

First of all, thanks for the lib.
I'm able to connect to different peers, then send messages from one to another.
One thing I couldn't achieve so far is to use the underlying stream (or substream) to forward data via io::copy(); would this be possible?

Thank you.

Bugs when setting the service timeout to 0 or a very large value

In ServiceBuilder, if the timeout is set to 0:

        let mut app_service = ServiceBuilder::default()
            .key_pair(key_pair)
            .timeout(std::time::Duration::new(0, 0))
            .build(AppServiceHandle);

The dialing always fails.

If the timeout is set to a very large value, e.g.,

        let mut app_service = ServiceBuilder::default()
            .key_pair(key_pair)
            .timeout(std::time::Duration::new(u64::MAX, 0))
            .build(AppServiceHandle);

The process panics with the error:

thread 'main' panicked at 'overflow when adding duration to instant', library/std/src/time.rs:368:33
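A small sketch of how the overflow could be avoided; this is a generic std-only illustration, not tentacle code:

```rust
use std::time::{Duration, Instant};

fn main() {
    let timeout = Duration::new(u64::MAX, 0);

    // `Instant + Duration` panics on overflow, which is exactly the panic in
    // the report. `checked_add` lets us clamp to a finite far-future deadline
    // instead of panicking.
    let deadline = Instant::now()
        .checked_add(timeout)
        .unwrap_or_else(|| Instant::now() + Duration::from_secs(365 * 24 * 60 * 60));

    println!("resolved deadline: {:?}", deadline);
}
```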

Unable to open protocol randomly, especially when the client only has limited CPU resources.

Description

Unable to open protocol randomly, especially when the client only has limited CPU resources.

This issue was introduced by PR 288: fix: fix some msg left on buffer.

// For receive events from sub streams (for clone to new stream)
event_sender: Sender<StreamEvent>,
// For receive events from sub streams
event_receiver: Receiver<StreamEvent>,
// The control information must be sent successfully and will not cause memory problems
unbound_event_sender: UnboundedSender<StreamEvent>,
unbound_event_receiver: UnboundedReceiver<StreamEvent>,

Reason

  • [1] When the client creates a Session and fetches messages from it, the following code runs:

    is_pending &= self.recv_unbound_events(cx)?.is_pending();
    is_pending &= self.recv_events(cx)?.is_pending();

  • [2] When the client wants to open a protocol, the following code runs concurrently with [1]:

    let task = async move {
    let handle = match control.open_stream().await {
    Ok(handle) => handle,
    Err(e) => {
    debug!("session {} open stream error: {}", id, e);
    return Err(io::ErrorKind::BrokenPipe.into());
    }
    };
    client_select(handle, proto_info).await
    };
    self.select_procedure(task);

    Stepping into those functions, we can see that two frames will be sent:

    [3] First:

    let frame = Frame::new_window_update(flags, self.id, delta);
    self.unbound_event_sender
    .unbounded_send(StreamEvent::Frame(frame))
    .map_err(|_| Error::SessionShutdown)

    [4] Second:

    socket.send(proto_info.encode()).await?;

    if let Err(e) = self.event_sender.try_send(event) {

  • The above two pieces of code run concurrently.

    If line 627 of [1] runs before line 178 in [3], and line 115 in [4] runs before line 628 of [1],
    the client will send the Data frame before the WindowUpdate frame.

    So the server will drop the Data frame without doing anything.

    } else {
    // TODO: stream already closed ?
    false
    }

    Then the server will open the protocol based on the WindowUpdate frame:

    debug!("[{:?}] Accept a stream id={}", self.ty, stream_id);

    But the server will never reply to the client.

    let (raw_remote_info, socket) = socket.into_future().await;
    let mut remote_info = match raw_remote_info {
    Some(info) => match ProtocolInfo::decode(&info?) {
    Some(info) => info,
    None => return Err(io::ErrorKind::InvalidData.into()),
    },

    So, at last, a ProtocolSelectError(Elapsed()) will be thrown.
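The race can be illustrated with a simplified, synchronous model of the two channels. This is not tentacle code, just an illustration of how draining the unbounded channel before the bounded one can still emit Data before WindowUpdate within a single poll pass:

```rust
use futures::channel::mpsc;

fn main() {
    // Bounded channel for data frames, unbounded channel for control frames,
    // mirroring event_sender / unbound_event_sender above.
    let (mut data_tx, mut data_rx) = mpsc::channel::<&str>(8);
    let (ctrl_tx, mut ctrl_rx) = mpsc::unbounded::<&str>();

    // One poll pass of the session starts: the unbounded side is checked
    // first, but nothing has been queued yet.
    assert!(ctrl_rx.try_next().is_err());

    // Meanwhile the open-stream task enqueues both frames: WindowUpdate on
    // the unbounded channel [3], then Data on the bounded channel [4].
    ctrl_tx.unbounded_send("WindowUpdate").unwrap();
    data_tx.try_send("Data").unwrap();

    // The same poll pass now reaches the bounded channel and forwards Data,
    // while WindowUpdate is only picked up on the next pass. On the wire the
    // server therefore sees Data before WindowUpdate and drops it.
    assert_eq!(data_rx.try_next().unwrap(), Some("Data"));
    assert_eq!(ctrl_rx.try_next().unwrap(), Some("WindowUpdate"));
    println!("frame order on the wire: Data, then WindowUpdate");
}
```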

Reproduce

  • Append ::std::thread::sleep(::std::time::Duration::from_millis(300)); after line 627 in [1].

  • Append ::tokio::time::delay_for(::tokio::time::Duration::from_millis(100)).await; after line 328 in [2].

  • Run CKB Integration Tests Spec PoolReconcile.

    The following code will panic frequently.

        info!("Connect node0 to node1");
        node0.connect(node1);
    
        waiting_for_sync(nodes);

Reduce log level

  • ERROR tentacle::session stream send back error: SendError("...")
  • ERROR tentacle::session session send to sub stream error: send failed because receiver is gone
  • WARN tentacle::service timer is shutdown
  • WARN tentacle::substream protocol [2] close because of extern network
  • WARN tentacle_secio::codec::secure_stream send message error: Os { code: 104, kind: ConnectionReset, message: "Connection reset by peer" }

0.4 tentacle breaking change proposal

tokio/bytes upgrade to 1.0

PR #293

TargetSession/TargetProtocol change

Currently, the multi variants of these two enum types force users to save all known session/protocol ids and specify them exactly; otherwise they cannot broadcast selectively, and this is not a good design.

There are two proposals (mutually exclusive), sketched below:

  1. Add a variant like Exclude(HashSet<ID>): it can remove some known options without requiring knowledge of all the options (the sending sequence can still be controlled manually)
  2. Change Multi(Vec<SessionId>) to Filter(Box<Fn(ID) -> bool + Send + 'static>), which also achieves the goal of flexibility (but the sending sequence can't be controlled manually)
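A rough sketch of the two proposed shapes; the enum names and the other variants here are placeholders, not the actual tentacle definitions:

```rust
use std::collections::HashSet;

// Placeholder for tentacle's real SessionId type.
type SessionId = usize;

// Proposal 1: keep explicit id lists but add an Exclude variant, so callers
// can skip a few known sessions without enumerating every live one.
#[allow(dead_code)]
enum TargetSessionExclude {
    All,
    Single(SessionId),
    Multi(Vec<SessionId>),
    Exclude(HashSet<SessionId>),
}

// Proposal 2: replace Multi with a predicate; maximally flexible, but the
// caller can no longer control the sending order manually.
#[allow(dead_code)]
enum TargetSessionFilter {
    All,
    Single(SessionId),
    Filter(Box<dyn Fn(SessionId) -> bool + Send + 'static>),
}

fn main() {
    let skip: HashSet<SessionId> = [1, 2].into_iter().collect();
    let _a = TargetSessionExclude::Exclude(skip);
    let _b = TargetSessionFilter::Filter(Box::new(|id| id % 2 == 0));
}
```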

Remove the deprecated event output mode

PR #293

Remove flatbuffers codec

PR #293

Extract core crate

Separate the runtime encapsulation and the transport layer into their own library, with tentacle as a high-level framework built on top.

Based on this design, it may be possible to implement a tentacle client to cover #205

Low priority; it will not break the API and can be done after the 0.4 release.

MultiAddr problem

With the use of Multiaddr, the upstream maintainers themselves have found some problems: parts of the specification are missing or inaccurate, and there are proposals to fix them by various means.

Parity, while the proposal has not yet been accepted, has added non-standard protocols such as x-parity-ws for practical needs.

Now we have to rethink what our real needs are for this increasingly complex specification, and whether we really want to go further and further down the path of non-standard or ever more complex standards.

ref: multiformats/multiaddr#87
ref: libp2p/rust-libp2p#1093

Tls reconnect error

I found a situation: in the tls mod, using a correct TLS certificate, the client can reconnect to the server many, many times. But after any bad node uses a wrong certificate and causes a TLS handshake error, the good node can never reconnect again.

To reproduce the error, I wrote a unit test: https://github.com/Pencil-Yao/tentacle/blob/tls-reconnect-err/tentacle/tests/test_tls_reconnect.rs. In the test, node0 and node1 are fine, node2 uses a wrong certificate, and test_tls_reconnect_wrong cannot complete.

about secio

I was looking into rust-libp2p, but I soon realized it is quite hard to figure out what they are doing, as the code reads like hell: overwhelming abstraction, stacked state machines...

Later I happened to find this repo, which is much smaller and kind of neat. However, after reading the code, I believe there is something wrong in your secio implementation.

  1. secio should be message oriented. In your code, it is not: messages will be combined to be as big as the input buffer provided.
  2. I don't really get why secio has channels. The code could be very simple if SecureStream implemented AsyncRead/AsyncWrite; in that case we would not have to pass frames over a channel, and the task running SecureStream.next could be eliminated (see the sketch below).

I'm quite new to Rust, but I wonder whether async code in Rust has to be written like this: poll inside poll with state machines? I tend to believe code can be written in a better way now that we have formal async/await; otherwise it is a huge burden for Rust developers to write and maintain code like this.
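To make point 2 concrete, here is a minimal sketch of a SecureStream that implements tokio's AsyncRead/AsyncWrite directly. The cipher step is elided and the calls simply delegate to the inner connection, so this is only an outline of the shape, not secio's actual code:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use tokio::io::{AsyncRead, AsyncWrite, ReadBuf};

// Hypothetical wrapper: expose the secure transport directly as
// AsyncRead/AsyncWrite instead of pumping frames through channels.
struct SecureStream<T> {
    inner: T,
}

impl<T: AsyncRead + Unpin> AsyncRead for SecureStream<T> {
    fn poll_read(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<std::io::Result<()>> {
        // A real implementation would read one encrypted frame, decrypt it,
        // and copy the plaintext into `buf` (message oriented).
        Pin::new(&mut self.inner).poll_read(cx, buf)
    }
}

impl<T: AsyncWrite + Unpin> AsyncWrite for SecureStream<T> {
    fn poll_write(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<std::io::Result<usize>> {
        // A real implementation would encrypt `buf` into exactly one frame here.
        Pin::new(&mut self.inner).poll_write(cx, buf)
    }

    fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
        Pin::new(&mut self.inner).poll_flush(cx)
    }

    fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
        Pin::new(&mut self.inner).poll_shutdown(cx)
    }
}
```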

Of course I can make a pull request later if you like, just let me know...

Handle connection errors both when reading and writing

Connection errors are handled when reading data:

Poll::Ready(Some(Err(err))) => {
debug!("sub stream codec error: {:?}", err);
match err.kind() {
ErrorKind::BrokenPipe
| ErrorKind::ConnectionAborted
| ErrorKind::ConnectionReset
| ErrorKind::NotConnected
| ErrorKind::UnexpectedEof => self.dead = true,
_ => {
self.error_close(cx, err);
}
}
Poll::Ready(None)
}

But not when writing:

if let Err(err) = self.send_data(cx) {
// Whether it is a read send error or a flush error,
// the most essential problem is that there is a problem with the external network.
// Close the protocol stream directly.
debug!(
"protocol [{}] close because of extern network",
self.proto_id
);
self.output_event(
cx,
ProtocolEvent::Error {
id: self.id,
proto_id: self.proto_id,
error: err,
},
);
self.dead = true;
}
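One way the write path could mirror the read path is sketched below as a standalone illustration (not the actual substream code): connection-level error kinds mark the stream dead quietly, everything else is surfaced as an error event.

```rust
use std::io::{Error, ErrorKind};

// Sketch: classify a write-side error the same way the read side does.
// Connection-level errors would just mark the substream dead; anything else
// is reported upward (represented here by returning it to the caller).
fn classify_write_error(err: Error) -> Result<(), Error> {
    match err.kind() {
        ErrorKind::BrokenPipe
        | ErrorKind::ConnectionAborted
        | ErrorKind::ConnectionReset
        | ErrorKind::NotConnected
        | ErrorKind::UnexpectedEof => Ok(()), // benign disconnect: no error event
        _ => Err(err), // unexpected error: surface it, e.g. as ProtocolEvent::Error
    }
}

fn main() {
    let benign = Error::from(ErrorKind::BrokenPipe);
    let unexpected = Error::from(ErrorKind::Other);
    assert!(classify_write_error(benign).is_ok());
    assert!(classify_write_error(unexpected).is_err());
    println!("write errors classified like read errors");
}
```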

Future task loop forever

With heavy task loads and poor machine performance, this problem occurs in the original code. #150 has resolved it, but the test cannot reproduce the problem.

We may bump to version 0.2 as soon as possible

  • The 0.1 version has errors that are not reported to the user #54
  • There is a problem with the error type in use #54
  • Lack of defense against fd attacks; a timeout error is needed #54 #57
  • Some users want event stream output instead of callbacks; 0.2 should support both programming modes at the same time #58
  • Let the user control turning protocols on and off, breaking change #62
  • May need to support each protocol having its own codec, without requiring the same structure (trait object) #63
  • Add a poll API to all protocol handles, to allow users to customize stream tasks in the handle #65

release 0.2.0-alpha.1

  • Abstract transport layer #76
  • There may be other problems; let's wait and see as things evolve.

[yamux] How to use tokio-yamux when client streams are created continually?

Hi, I'm trying to use tokio-yamux in my project, but I have encountered a problem.

In the example I can find the code below:

tokio::spawn(async move {
            loop {
                match session.next().await {
                    Some(_) => (),
                    None => break,
                }
            }
        });

I know this acts like an engine for the client to send messages out, but the session is moved into the async block and we cannot use it anymore, so all the streams of this session would have to be opened before spawning the async block.

But in my scenario, the yamux client is a proxy and a stream is opened for each new connection coming to the proxy, so I cannot open all streams before spawning the async block. I tried to work around this by calling other methods on the session to make sure the client's events get sent out, but I found that only by calling next() can the client send events out.

So is there any way to solve my issue?

thanks very much
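Based on how the control handle is used elsewhere in this document (control.open_stream().await), one likely answer is to take the control handle before moving the session into the driver task, then open streams on demand. The exact names here (Session::new_client, session.control(), Config::default()) are assumptions about the tokio-yamux API, so this is a sketch rather than a verified answer:

```rust
use futures::StreamExt;
use tokio::net::TcpStream;
use tokio_yamux::{Config, Session};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let socket = TcpStream::connect("127.0.0.1:1337").await?;
    let mut session = Session::new_client(socket, Config::default());

    // Take a control handle *before* the session is moved into the driver task.
    let mut control = session.control();

    // The "engine": this loop pumps all frames of the underlying connection.
    tokio::spawn(async move {
        while session.next().await.is_some() {}
    });

    // Streams can now be opened on demand, e.g. once per proxied connection.
    match control.open_stream().await {
        Ok(_stream) => println!("opened a new substream"),
        Err(e) => eprintln!("open stream error: {:?}", e),
    }
    Ok(())
}
```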

The stream is not closed when the underlying connection is closed?

Hi, I'm using tokio-yamux in my project, and this tokio-yamux helps me a lot, thanks for this excellent lib.

I have encountered a problem.

I opened some streams, and I copy the I/O from my served connections to those streams and also from those streams to my served connections.
I have tested an exceptional case: when I close the server side of yamux, I expect the copying tasks to stop, but in reality they keep running.

So my question is: how do I stop reads or writes on a yamux stream when the underlying connection is closed?
I tried to add control.close() in the "next call task", but it is still no use.

tokio::spawn(async move {
                loop {
                    match agent_session.next().await {
                        Some(_) => (),
                        None => {
                            debug!("closing session");
                            health_control.close().await;
                            break;
                        }
                    }
                }
            });

Is there something I overlooked? Or are there any other suggestions for this use case?

Do we support the wasm platform yet?

Runtime compatible

  • yamux timer (#268 )
  • tentacle timer and spawn(#269 )

Browser compatible

  • websocket abstract encapsulation(#274 )

Cryptographic algorithms compatible

Essential
  • Secp256k1(#273 )
  • Other hash algorithms, such as sha256 etc..(#273 )
  • Hmac Sha256/Sha512(#273 )
Optional: prioritize support for one of these major categories
  • ECDH private key negotiation
    • EcdhP256
    • EcdhP384
    • X25519(maybe we should add support for this algorithm)(#273 #271 )
  • AEAD symmetric encryption algorithm
    • Aes128Gcm
    • Aes256Gcm
    • ChaCha20Poly1305(#273 )
