
tls's Introduction

Tokio TLS

Overview

This repository is the home of the tokio-native-tls crate:

(tokio-rustls now lives in the rustls org.)

Getting Help

First, see if the answer to your question can be found in the Tutorials or the API documentation. If the answer is not there, there is an active community in the Tokio Discord server that would be happy to try to answer your question. Lastly, if that doesn't work, try opening an issue with the question.

Contributing

🎈 Thanks for your help improving the project! We are so happy to have you! We have a contributing guide to help you get involved in the Tokio project.

Related Projects

In addition to the crates in this repository, the Tokio project also maintains several other libraries, including:

  • tokio: A runtime for writing reliable, asynchronous, and slim applications with the Rust programming language.

  • tracing (formerly tokio-trace): A framework for application-level tracing and async-aware diagnostics.

  • mio: A low-level, cross-platform abstraction over OS I/O APIs that powers tokio.

  • bytes: Utilities for working with bytes, including efficient byte buffers.

Supported Rust Versions

Tokio is built against the latest stable, nightly, and beta Rust releases. The minimum version supported is the stable release from three months before the current stable release version. For example, if the latest stable Rust is 1.29, the minimum version supported is 1.26. The current Tokio version is not guaranteed to build on Rust versions earlier than the minimum supported version.

License

This project is licensed under the MIT license.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Tokio by you, shall be licensed as MIT, without any additional terms or conditions.


tls's Issues

rustls seems to have unexpected behavior when used in tokio::io::copy

Hi, I'm building a proxy. The server side accepts a TlsStream (from the client side) and makes a TCP connection to the target website. Then we just need to copy everything we get from the TlsStream to the TcpStream, and collect the response vice versa.
I was initially using tokio::io::copy for this task. However, I encountered a weird bug: my client never received nearly half of the response from the target website, and the connection just hung until timeout. I then replaced the io::copy with my simplified version:

// (didn't work; same behavior as tokio::io::copy)
pub async fn copy_tcp<R: AsyncRead + Unpin, W: AsyncWrite + Unpin>(
    r: &mut R,
    w: &mut W,
) -> Result<()> {
    let mut buf = [0; 2048];
    loop {
        let len = r.read(&mut buf).await?;
        if len == 0 {
            return Ok(());
        }
        // write_all instead of a bare write, which may be partial
        w.write_all(&buf[..len]).await?;
    }
}

This didn't work either, until I added a flush after the write call:

pub async fn copy_tcp<R: AsyncRead + Unpin, W: AsyncWrite + Unpin>(
    r: &mut R,
    w: &mut W,
) -> Result<()> {
    let mut buf = [0; 2048];
    loop {
        let len = r.read(&mut buf).await?;
        if len == 0 {
            return Ok(());
        }
        w.write_all(&buf[..len]).await?;
        w.flush().await?; // <-- the added line
    }
}

I found that in tokio-tls/rustls,

/// Note: that it does not guarantee the final data to be sent.
/// To be cautious, you must manually call `flush`.

we are required to call flush manually.
While in tokio::io::copy,
https://github.com/tokio-rs/tokio/blob/c306bf853a1f8423b154f17fa47926f04eecd9b4/tokio/src/io/util/copy.rs#L69-L74
only when EOF is seen does it call poll_flush.

So, should we avoid using tokio::io::copy for this purpose?

test fails with only feature http1 enabled

One test gated on the http1 feature fails like this when run with cargo test --no-default-features --features http1:

error[E0599]: no method named `build` found for struct `ConnectorBuilder` in the current scope
   --> src/connector/builder.rs:271:14
    |
27  | pub struct ConnectorBuilder<State>(State);
    | ------------------------------------------ method `build` not found for this
...
271 |             .build();
    |              ^^^^^ method not found in `ConnectorBuilder<WantsProtocols2>`

Apparently that test additionally requires feature2, or needs to be rewritten somehow.

Running an example results in an error


    Finished dev [unoptimized + debuginfo] target(s) in 0.04s
     Running `/data/projects/tls/target/debug/server '127.0.0.1:12345' -c /etc/letsencrypt/live/xx.com/cert.pem -k /etc/letsencrypt/live/xx.com/privkey.pem -e`
thread 'main' panicked at 'removal index (is 0) should be < len (is 0)', library/alloc/src/vec/mod.rs:1378:13
stack backtrace:
   0: rust_begin_unwind
             at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/panicking.rs:515:5
   1: core::panicking::panic_fmt
             at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/core/src/panicking.rs:92:14
   2: alloc::vec::Vec<T,A>::remove::assert_failed
             at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/alloc/src/vec/mod.rs:1378:13
   3: alloc::vec::Vec<T,A>::remove
             at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/alloc/src/vec/mod.rs:1383:13
   4: server::main::{{closure}}
             at ./src/main.rs:61:34
   5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/core/src/future/mod.rs:80:19
   6: tokio::park::thread::CachedParkThread::block_on::{{closure}}
             at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.12.0/src/park/thread.rs:263:54
   7: tokio::coop::with_budget::{{closure}}
             at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.12.0/src/coop.rs:106:9
   8: std::thread::local::LocalKey<T>::try_with
             at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/thread/local.rs:399:16
   9: std::thread::local::LocalKey<T>::with
             at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/std/src/thread/local.rs:375:9
  10: tokio::coop::with_budget
             at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.12.0/src/coop.rs:99:5
  11: tokio::coop::budget
             at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.12.0/src/coop.rs:76:5
  12: tokio::park::thread::CachedParkThread::block_on
             at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.12.0/src/park/thread.rs:263:31
  13: tokio::runtime::enter::Enter::block_on
             at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.12.0/src/runtime/enter.rs:151:13
  14: tokio::runtime::thread_pool::ThreadPool::block_on
             at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.12.0/src/runtime/thread_pool/mod.rs:77:9
  15: tokio::runtime::Runtime::block_on
             at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.12.0/src/runtime/mod.rs:463:43
  16: server::main
             at ./src/main.rs:67:5
  17: core::ops::function::FnOnce::call_once
             at /rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Call to TlsAcceptor.accept is not terminating if we send plain text traffic from client

I am trying this echo server example with TlsAcceptor:
https://github.com/tokio-rs/tls/blob/master/tokio-native-tls/examples/echo.rs

And I am sending some plain-text traffic to this sample echo server like the following:

curl -v "http://127.0.0.1:12345"

Of course, sending an HTTP request to an echo server does not make much sense, but I am just checking that the call to tls_acceptor.accept should fail if it receives non-SSL traffic. On a Linux machine I receive the following output, with an error on the server:

accept connection from 127.0.0.1:48271
thread 'tokio-runtime-worker' panicked at 'accept error: Ssl(Error { code: ErrorCode(1), cause: Some(Ssl(ErrorStack([Error { code: 336027804, library: "SSL routines", function: "SSL23_GET_CLIENT_HELLO", reason: "http request", file: "s23_srvr.c", line: 414 }]))) }, X509VerifyResult { code: 0, error: "ok" })', src/main.rs:33:68
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

However, on Mac OS X (11.4) the call to tls_acceptor.accept(socket).await.expect("accept error") never terminates, and even the curl command keeps waiting.

I just want to handle the scenario where plain-text traffic is received on the server instead of SSL traffic, in which case tls_acceptor should error out. Let me know if I am missing something here.

I am using Rust 1.52.1 with the following Cargo dependencies:

tokio = { version = "1", features = ["full"] }
tokio-native-tls = "0.3.0"
native-tls = "0.2.7"

How to use tokio-rustls new api?

There is no ClientConfig::new() any more; how can ClientConfig::builder() be used to get the same result?

let mut config = ClientConfig::new();
config.root_store.add_server_trust_anchors(&webpki_roots::TLS_SERVER_ROOTS);
let config = TlsConnector::from(Arc::new(config));
let dnsname = DNSNameRef::try_from_ascii_str("www.rust-lang.org").unwrap();

let stream = TcpStream::connect(&addr).await?;
let mut stream = config.connect(dnsname, stream).await?;
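For reference, a rough equivalent under the rustls 0.20 builder API might look like the sketch below (assuming tokio-rustls 0.23 and webpki-roots 0.22; not an authoritative answer):

```rust
use std::sync::Arc;

use tokio::net::TcpStream;
use tokio_rustls::rustls::{self, ClientConfig, OwnedTrustAnchor, RootCertStore};
use tokio_rustls::TlsConnector;

// Sketch of the rustls 0.20 builder API; error handling mostly elided.
async fn connect(addr: &str) -> std::io::Result<()> {
    // Populate the root store from webpki-roots; trust anchors must be
    // converted into OwnedTrustAnchor values under the new API.
    let mut root_store = RootCertStore::empty();
    root_store.add_server_trust_anchors(webpki_roots::TLS_SERVER_ROOTS.0.iter().map(|ta| {
        OwnedTrustAnchor::from_subject_spki_name_constraints(
            ta.subject,
            ta.spki,
            ta.name_constraints,
        )
    }));

    let config = ClientConfig::builder()
        .with_safe_defaults()
        .with_root_certificates(root_store)
        .with_no_client_auth();

    let connector = TlsConnector::from(Arc::new(config));
    // rustls::ServerName replaces the old DNSNameRef type.
    let dnsname = rustls::ServerName::try_from("www.rust-lang.org").unwrap();

    let stream = TcpStream::connect(addr).await?;
    let mut _stream = connector.connect(dnsname, stream).await?;
    Ok(())
}
```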

TLS streams are not guaranteed to be "split" (full-duplex) safe

Currently every TLS stream can be split into a reading half and a writing half using tokio::io::split. This operation is, however, not always guaranteed to be correct and safe to perform:

The reason here is that with TLS reading and writing on a stream are not guaranteed to be decoupled, since TLS streams also transfer control data besides application data. Due to this, performing a read operation on the TLS stream might trigger a write operation on the socket (e.g. for sending a key update or alert), and the other way around.

This property can break assumptions that tokio/tokio-tls and the libraries make at the moment. As one example:

  • The user performs a read, which triggers an alert to be sent to the peer
  • The TLS library performs a write on the underlying socket, which reports a blocked status. It forwards that blocked status to the application.
  • The task yields and waits to get woken up based on read readiness, which might not happen, since the task that last registered for write readiness might be woken up instead. So the reading half would be stuck.
  • The last point happens if the read task doesn't install its Waker for write readiness. If it instead identifies being write blocked and overwrites the write Waker, the situation is not necessarily better: now a concurrent write operation might be starved instead, since its Waker is gone.

What exactly will or won't happen is a property of the TLS library, and might therefore be handled completely differently by rustls, openssl, schannel, and security-framework. Therefore it's rather hard to describe what exactly "could go wrong" if someone tries to split a TLS stream.

With rustls, reading and writing to actual sockets is currently handled purely by tokio-tls, so it's the easiest to reason about. Here any read or write operation in the wrapper simply doesn't seem to care whether rustls wants to perform IO in the opposite direction. That might not lead to Waker stealing and tasks getting starved, but I guess it could lead to a delay in sending certain updates. @ctz might know more about whether this is problematic.

With native-tls + openssl it is very likely that it will perform IO in the opposite direction and handle readiness notifications wrongly. https://dzone.com/articles/using-openssl-with-libuv provides some information on what needs to be done to handle those cases, but I think native-tls doesn't do it, since it purely forwards all TLS calls to openssl, which uses the fd to perform IO. From there on, things rely on mio to report readiness, which only cares about the socket's read/write state and not the TLS stream's read/write state.

How can this be fixed?

It's unfortunately not easy. To avoid people running into starved streams, one solution is to get rid of tokio::io::split and highlight in the docs that streams support only one common Waker for both read and write operations. But that doesn't make streams full-duplex.

I think to really enable full-duplex the following things could be done:

  • Make sure the TLS libraries don't directly perform IO; they just get fed incoming data and buffer outgoing data (or write it to a buffered writer). If that buffer is full, they exercise backpressure on the caller. That is roughly what rustls is already doing.
  • Besides the application's read and write tasks, there is a TLS IO task which makes sure that if there is any buffered outgoing data (produced via either a read or a write operation), that data gets written to a socket. Only that task deals with OS readiness notifications. If sufficient data has been flushed, it wakes up the potentially blocked application write task. If new data from the socket has been received and decoded via the TLS library, it wakes up the read task.

This is also roughly the model that other multiplexing solutions, like HTTP/2, use in their implementations. The downside is obviously the need for spawning a task, and the potential overhead of task switching, which can degrade performance, especially if a multithreaded runtime is used, which would require synchronization between the application tasks and the TLS IO task.
It also makes the solution less runtime-agnostic, and makes it harder to force-cancel ongoing IO.

use features to make tokio-rustls compatible with `futures` interface

I have forked this repo and made a commit (2fb62ea) adding a new feature called "use-futures" that makes this library compatible with the futures interface and the async-std library.

The downside of this change is that it involves several blocks of duplicated code, due to the different naming in the AsyncWrite trait, where poll_shutdown is called poll_close in the futures crate. This could make the code base harder to maintain. Moreover, testing this crate becomes more cumbersome too, since it should involve testing under two different async frameworks (for the moment, there are still two tests under tokio that I failed to migrate to async-std).

The upside of such a change is obviously that it enables users of the futures interface to benefit from updates in this repository, and also enables them to contribute to this library.

The diff in Cargo.toml looks like this:

 tokio = "1.0"
 rustls = { version = "0.20", default-features = false }
 webpki = "0.22"
-
+futures = { version = "0.3", optional = true }
 [features]
 default = ["logging", "tls12"]
+use-futures = ["futures"]
 dangerous_configuration = ["rustls/dangerous_configuration"]
 early-data = []
 logging = ["rustls/logging"]
 tls12 = ["rustls/tls12"]

 [dev-dependencies]
 tokio = { version = "1.0", features = ["full"] }
+tokio-async-std = "1"
 futures-util = "0.3.1"
 lazy_static = "1"
 webpki-roots = "0.22"

Feature: Implement into_split for TlsStream

TlsStreams should support splitting into two separate structs for reading and writing (and re-uniting them into a TlsStream struct again). This is already implemented for TCP streams: TcpStream::into_split

This feature would be very nice to have when using Tokio with the Actor Model. Splitting the stream into halves is necessary in order to have them in separate threads (actors).

rustls: unable to do session resumption with api.devicecheck.apple.com

There are two streams in the following pcap files, the first to appleid.apple.com, the second to api.devicecheck.apple.com, both use session_ticket extension.
out.pcap.gz

The second failed while the first succeeded. The main difference seems to be that in stream one, packet 6 contains Server Hello + Change Cipher Spec + Encrypted Handshake Message, but in stream two, packet 23 only contains Server Hello + Change Cipher Spec; the missing Encrypted Handshake Message arrives later in packet 27, which seems to be ignored by rustls, as it replies with an alert message and closes the connection before packet 27.

tokio-rustls version 0.23.2, rustls version 0.20.4

Stalled event loop since rustls 0.20

Under certain conditions, handshakes start blocking and clog the event loop.

I'm still debugging this with @djc, but I figured I'd open an issue to possibly block the release of tokio-rustls 0.23.

Reproduction:

  • Run the tokio-rustls example server w/ #[tokio::main(flavor = "current_thread")] (for faster reproduction)
    • cargo run -- 127.0.0.1:2314 -c ../../tests/end.cert -k ../../tests/end.rsa (from the tokio-rustls/examples/server directory)
  • Run the https://testssl.sh/ suite: testssl.sh 127.0.0.1:2314

For me, this stalls at the first test. It appears to be able to complete 2-3 handshakes and then can't handle any more.

Investigation: `LazyConfigAcceptor` potentially causes a spin loop

Hey!

Thank you for this library! We are in the process of migrating our server from rustls v0.19 to rustls v0.20 and tokio-rustls v0.23.1.

In our server, we accept TLS connections over tokio::net::TcpStreams, lazily loading the certificates along the way using the LazyConfigAcceptor. During tests of the new version, we have seen the server go into spin loops pretty frequently (maxing out the CPU), and profiles recorded using perf point to the Future implementation of LazyConfigAcceptor, precisely to this read:

if let Err(err) = ready!(Pin::new(io).poll_read(cx, &mut buf)) {

Running some quick tests, I believe the spin loop might be caused by a zero-length read from the socket, which LazyConfigAcceptor does not handle properly (although it should). rustls happily accepts zero-length data, it seems:

use rustls::server::Acceptor;

fn main() {
    let mut acceptor = Acceptor::new().unwrap();

    let arr: &mut &[u8] = &mut &[][..];
    let x = acceptor.read_tls(arr).unwrap();

    assert_eq!(x, 0);
    assert!(acceptor.accept().unwrap().is_none());
}

... but it will also cause its Acceptor to return Ok(None) below, forcing the loop in line 258 to go round and round and round and ...
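A minimal sketch of a fix, assuming the poll loop described above: treat a zero-length read as the peer closing the connection and return an error instead of polling again. handle_read_result is a hypothetical helper that illustrates just the check, not the actual LazyConfigAcceptor code:

```rust
use std::io;

// Sketch: inside the accept loop, a zero-length read means the peer
// closed the connection before completing the ClientHello. Returning
// an error breaks the would-be spin loop.
fn handle_read_result(bytes_read: usize) -> io::Result<()> {
    if bytes_read == 0 {
        return Err(io::Error::new(
            io::ErrorKind::UnexpectedEof,
            "connection closed before TLS handshake completed",
        ));
    }
    Ok(())
}
```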

Handshake fails when using a buffered stream

I'm not sure this is the correct project to file this issue against; please redirect me if need be.

I'm trying to use TLS over Tor, which provides a buffered stream in order to reduce the number of messages on the network. The issue is that when handshaking, no flush is called on the stream, but the peer is expected to reply to the written TLS header. This isn't an issue when using a socket directly, as every write actually sends the data, but it becomes one when wrapping the socket in a BufWriter (or when using a Tor stream).

I was able to trigger it in the tokio-rustls tests by changing, in do_handshake, the good stream into a BufWriter::new(Good(server)).
tokio-native-tls is a bit harder to trick, as it simply hangs fetch_google when using .connect("google.com", BufWriter::new(socket)), but not the other tests; I don't know why. FYI: I'm running Linux, so openssl is my backend library.

I didn't manage to put a PR together, but I'm happy to try a bit more if given some pointers on the best way to approach it.

rsa_private_keys() returns an empty vector

I found this problem with rustls::pemfile::rsa_private_keys(). I ran some tests, and it looks like this function returns an empty vector for a buffer created from a private key generated with:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -keyout private.key -out certificate.crt

To test my problem, create a private key with the command above and run this test:

#[test]
fn test_rsa(){
    let file = std::fs::File::open("private.key").unwrap();
    let rd = &mut std::io::BufReader::new(file);
    let vector = tokio_rustls::rustls::internal::pemfile::rsa_private_keys(rd).unwrap();
    assert!(vector.len() > 0);
}

For me at least, it fails the assertion of having at least one member:

assertion failed: vector.len() > 0

I'm new to this, but based on the example, I don't think it should behave this way.
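A likely explanation: modern openssl req writes the key in PKCS#8 format (-----BEGIN PRIVATE KEY-----), while rsa_private_keys only matches PKCS#1 blocks (-----BEGIN RSA PRIVATE KEY-----), so the vector comes back empty. A sketch that tries both formats, assuming the standalone rustls-pemfile crate in place of the deprecated internal pemfile module:

```rust
use std::fs::File;
use std::io::BufReader;

// Sketch: try PKCS#8 first, falling back to PKCS#1, since `openssl req`
// emits "BEGIN PRIVATE KEY" (PKCS#8) by default on modern OpenSSL.
fn load_keys(path: &str) -> std::io::Result<Vec<Vec<u8>>> {
    let mut reader = BufReader::new(File::open(path)?);
    let keys = rustls_pemfile::pkcs8_private_keys(&mut reader)?;
    if !keys.is_empty() {
        return Ok(keys);
    }
    // Fall back to PKCS#1 for keys written as "BEGIN RSA PRIVATE KEY".
    let mut reader = BufReader::new(File::open(path)?);
    rustls_pemfile::rsa_private_keys(&mut reader)
}
```

Alternatively, converting the key once with openssl rsa -in private.key -out private-pkcs1.key should produce a PKCS#1 file that rsa_private_keys accepts.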

tokio-tls: creating server with *.pem certificate

Version

├── tokio-tls v0.2.1
│   ├── futures v0.1.25 (*)
│   ├── native-tls v0.2.2 (*)
│   └── tokio-io v0.1.12 (*)

├── hyper v0.12.25
│   ├── bytes v0.4.12 (*)
│   ├── futures v0.1.25 (*)
│   ├── ...
│   ├── tokio v0.1.17
│   │   ├── bytes v0.4.12 (*)
│   │   ├── futures v0.1.25 (*)

Platform

Windows 10 64-bit

Subcrates

tokio-tls

Description

I'm trying to create a hyper server that uses tokio-tls, but I have *.pem and *-key.pem files instead of a *.p12 archive. How can I achieve this? From what I can see, there's a way of creating a Certificate, but nothing shows me what to do with it after creating it.

Some people are of the view that p12 is deprecated/legacy; is that the case? (FiloSottile/mkcert#58 (comment))
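Newer native-tls releases can build an Identity directly from PEM data via Identity::from_pkcs8, which avoids the *.p12 archive entirely. A sketch (file names are placeholders, and the key must be in PKCS#8 PEM form):

```rust
use native_tls::Identity;

// Sketch: load a PEM certificate chain and a PKCS#8 PEM private key.
// The file names here are placeholders.
fn load_identity() -> Result<Identity, Box<dyn std::error::Error>> {
    let cert = std::fs::read("cert.pem")?;
    let key = std::fs::read("cert-key.pem")?;
    Ok(Identity::from_pkcs8(&cert, &key)?)
}
```

The resulting Identity can then be passed to native_tls::TlsAcceptor::builder as usual.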

DTLS support

Title. I’m currently looking at supporting a CoAP+DTLS+UDP connector client, for which it would be nice if there were async-native DTLS wrappers for AsyncRead+AsyncWrite streams. (Context: ruma/lb#3)

tokio-rustls: bring back writev

Since Tokio 0.3 removed support for vectored IO, tokio-rustls will no longer use writev when available (see #29). When tokio-rs/tokio#3135 is resolved, and Tokio re-adds support for vectored IO, it would be great to get writev support back in tokio-rustls.

In an ideal world, the solution with the new API in Tokio would no longer require specialization, so vectored IO could be supported on the stable compiler as well.

Cut release with rustls 0.20 support

Hello!

Thanks for your work! I'm trying to orchestrate bumping actix to use rustls 0.20, and this seems to be a blocking dependency.

I see that there's already work to implement rustls 0.20. May I have an update on:

  1. If there's any ETA for a version bump for rustls 0.20?
  2. If not, are there any significant concerns blocking a release?

I'm just looking to document the progress for my own PRs, so please don't feel the need to rush out a release or anything like that; that's not my intent.

Thanks!
Eddie

Handshake failures under high concurrency

When I run a stress test using wrk, a few connection handshakes fail.
The details are as follows:
wrk sends 20k request packets to test service A, and test service A forwards the packets to test service B. For a few connections, the client side of test service A considers the handshake successful, but the server side of test service B considers the handshake from test service A failed. The client and server of test service A both use tokio-rustls, and test service B also uses tokio-rustls.
The wrk command is as follows:
wrk -t 300 -c 3000 -d 1m -T 10 --script=xxx.lua --latency https://***/ssssss

Feature request: Support Rustls

Can we add support for Rustls, so that we can switch between the OS-native implementations and the Rust-native implementation?

tokio-rustls stream never ends

I have a relatively basic server that uses split() to get a ReadHalf and WriteHalf. The ReadHalf, when read using copy(), never seems to end.

Submit an advisory about tokio-tls deprecation

It might be useful to submit an advisory to the advisory DB that tokio-tls is no longer maintained and that people should upgrade to tokio-native-tls instead. At least for projects that aren't upgrading to tokio 0.3 (or will take a long time to do so), it might be good to get some kind of notification that the upgrades are coming under a different crate name.

TCP Recv-Q accumulation on the TLS server

I wrote a TLS server, but occasionally the program gets stuck after a few days of real operation.
The TCP listening port is 50011.
I found that Recv-Q is always 1025.

$ ss -ltnp
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 1025 0 0.0.0.0:50011 0.0.0.0:* LISTEN -
tcp6 0 0 :::111 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
tcp6 0 0 :::5900 :::* LISTEN -

$ netstat -anp |grep 50011 |grep CLOSE_WAIT
878

The server code:

impl DiscoveryServer {
    pub async fn new(s: &Settings, n: &Arc<RwLock<HashMap<String, NodeDescription>>>) -> Self {
        log::info!(
            "create discovery server listener on {:?}",
            format!("{}:{}", "0.0.0.0", s.server.listen_port)
        );
        DiscoveryServer {
            tcp_socket: new_listener(format!("{}:{}", "0.0.0.0", s.server.listen_port), false)
                .await
                .unwrap(),
            settings: s.clone(),
            nodes: n.clone(),
        }
    }

    pub async fn start(self) -> ResultType<()> {
        log::info!("start discovery server");
        let tls_acceptor = new_tls_acceptor();
        tokio::spawn(async move {
            loop {
                match self.tcp_socket.accept().await {
                    Ok((stream, addr)) => {
                        let acceptor = tls_acceptor.clone();
                        let res_servers = self.nodes.clone();
                        let res_cities = self.settings.config_item.city_list.clone();

                        tokio::spawn(async move {
                            match TlsFrameStream::from(stream, acceptor).await {
                                Ok(mut tls_stream) => {
                                    if let Some(Ok(bytes)) = tls_stream.next_timeout(MESSAGE_TIMEOUT).await {
                                        if let Ok(msg_in) = DiscoveryMessage::parse_from_bytes(&bytes) {
                                            match msg_in.union {
                                                Some(discovery_message::Union::request(req)) => {
                                                    log::info!("msg from client:{}, request:{}", addr, req);
                                                    handle_request(&res_servers, res_cities, tls_stream, req).await;
                                                }
                                                _ => {
                                                    log::warn!("unknown union type from msg_in, type:{:?}", msg_in.union);
                                                }
                                            }
                                        }
                                    }
                                },
                                Err(e) => log::error!("error accept client, err: {}", e),
                            }
                        });
                    }
                    Err(err) => {
                        log::error!("error accept tcp socket, err: {}", err);
                    }
                }
            }
        });
        Ok(())
    }
}

TLS wrapper:

pub fn load_certs(filename: &str) -> Vec<rustls::Certificate> {
    let certfile = File::open(filename).expect("cannot open certificate file");
    let mut reader = BufReader::new(certfile);
    rustls_pemfile::certs(&mut reader)
        .unwrap()
        .iter()
        .map(|v| rustls::Certificate(v.clone()))
        .collect()
}

pub fn load_private_key(filename: &str) -> rustls::PrivateKey {
    let keyfile = File::open(filename).expect("cannot open private key file");
    let mut reader = BufReader::new(keyfile);

    loop {
        match rustls_pemfile::read_one(&mut reader).expect("cannot parse private key .pem file") {
            Some(rustls_pemfile::Item::RSAKey(key)) => return rustls::PrivateKey(key),
            Some(rustls_pemfile::Item::PKCS8Key(key)) => return rustls::PrivateKey(key),
            None => break,
            _ => {}
        }
    }

    panic!(
        "no keys found in {:?} (encrypted keys not supported)",
        filename
    );
}

pub fn lookup_ipv4(host: &str, port: u16) -> SocketAddr {
    let addrs = (host, port).to_socket_addrs().unwrap();
    for addr in addrs {
        if let SocketAddr::V4(_) = addr {
            return addr;
        }
    }

    unreachable!("Cannot lookup address");
}

fn make_client_config(
    ca_file: &str,
    certs_file: &str,
    key_file: &str,
) -> Arc<rustls::ClientConfig> {
    let cert_file = File::open(&ca_file).expect("Cannot open CA file");
    let mut reader = BufReader::new(cert_file);

    let mut root_store = RootCertStore::empty();
    root_store.add_parsable_certificates(&rustls_pemfile::certs(&mut reader).unwrap());

    let suites = rustls::DEFAULT_CIPHER_SUITES.to_vec();
    let versions = rustls::DEFAULT_VERSIONS.to_vec();

    let certs = load_certs(certs_file);
    let key = load_private_key(key_file);

    let config = rustls::ClientConfig::builder()
        .with_cipher_suites(&suites)
        .with_safe_default_kx_groups()
        .with_protocol_versions(&versions)
        .expect("inconsistent cipher-suite/versions selected")
        .with_root_certificates(root_store)
        .with_single_cert(certs, key)
        .expect("invalid client auth certs/key");
    Arc::new(config)
}

fn make_server_config(certs: &str, key_file: &str) -> Arc<rustls::ServerConfig> {
    let client_auth = NoClientAuth::new();
    let suites = rustls::ALL_CIPHER_SUITES.to_vec();
    let versions = rustls::ALL_VERSIONS.to_vec();

    let certs = load_certs(certs);
    let privkey = load_private_key(key_file);

    let mut config = rustls::ServerConfig::builder()
        .with_cipher_suites(&suites)
        .with_safe_default_kx_groups()
        .with_protocol_versions(&versions)
        .expect("inconsistent cipher-suites/versions specified")
        .with_client_cert_verifier(client_auth)
        .with_single_cert_with_ocsp_and_sct(certs, privkey, vec![], vec![])
        .expect("bad certificates/private key");

    config.key_log = Arc::new(rustls::KeyLogFile::new());
    config.session_storage = rustls::server::ServerSessionMemoryCache::new(256);
    Arc::new(config)
}

pub async fn new_tls_stream(
    domain: &str,
    addr: std::net::SocketAddr,
    ca_file: &str,
    cert_file: &str,
    key_file: &str,
) -> ResultType<ClientTlsStream<TcpStream>> {
    let config = make_client_config(&ca_file, &cert_file, &key_file);

    let connector = TlsConnector::from(config);

    let tcp_stream = TcpStream::connect(&addr).await?;
    let domain = rustls::ServerName::try_from(domain)
        .map_err(|_| io::Error::new(io::ErrorKind::InvalidInput, "invalid dnsname"))
        .unwrap();
    let tls_stream = connector.connect(domain, tcp_stream).await?;
    Ok(tls_stream)
}

pub fn new_tls_acceptor() -> TlsAcceptor {
    let config = make_server_config(CERT.server_cert_file, CERT.server_key_file);
    TlsAcceptor::from(config)
}

pub struct TlsFrameStream {
    pub client_stream: Option<ClientTlsStream<TcpStream>>,
    pub server_stream: Option<ServerTlsStream<TcpStream>>,
    peer_addr: SocketAddr,
}

impl TlsFrameStream {
    pub async fn from(stream: TcpStream, acceptor: TlsAcceptor) -> ResultType<Self> {
        let addr = stream.peer_addr()?;
        let tls_stream = match acceptor.accept(stream).await {
            Ok(tls_stream) => tls_stream,
            Err(e) => {
                return Err(anyhow!("accept stream failed, error: {:?}", e));
            }
        };

        Ok(TlsFrameStream {
            client_stream: None,
            server_stream: Some(tls_stream),
            peer_addr: addr,
        })
    }

    pub async fn new_for_client(server_addr: SocketAddr, ms_timeout: u64) -> ResultType<Self> {
        let tls_stream = super::timeout(
            ms_timeout,
            new_tls_stream(
                "localhost",
                server_addr,
                CERT.ca_file,
                CERT.client_cert_file,
                CERT.client_key_file,
            ),
        )
        .await??;

        Ok(TlsFrameStream {
            client_stream: Some(tls_stream),
            server_stream: None,
            peer_addr: server_addr,
        })
    }

    #[inline]
    pub async fn next(&mut self) -> Option<Result<BytesMut, Error>> {
        let mut bytes = BytesMut::with_capacity(DEFAULT_BUFFER_SIZE);
        // Propagate read errors to the caller instead of panicking.
        if let Some(stream) = self.client_stream.as_mut() {
            return Some(stream.read_buf(&mut bytes).await.map(|_| bytes).map_err(Error::from));
        }
        if let Some(stream) = self.server_stream.as_mut() {
            return Some(stream.read_buf(&mut bytes).await.map(|_| bytes).map_err(Error::from));
        }
        None
    }

    #[inline]
    pub async fn send(&mut self, msg: &impl Message) -> ResultType<()> {
        self.send_raw(msg.write_to_bytes()?).await
    }

    #[inline]
    pub async fn send_raw(&mut self, msg: Vec<u8>) -> ResultType<()> {
        // Propagate write errors to the caller instead of panicking.
        if let Some(stream) = self.client_stream.as_mut() {
            stream.write_all(&msg).await?;
        } else if let Some(stream) = self.server_stream.as_mut() {
            stream.write_all(&msg).await?;
        }
        Ok(())
    }

    #[inline]
    pub async fn next_timeout(&mut self, ms: u64) -> Option<Result<BytesMut, Error>> {
        if let Ok(res) =
            tokio::time::timeout(std::time::Duration::from_millis(ms), self.next()).await
        {
            res
        } else {
            None
        }
    }

    pub async fn shutdown(&mut self) -> ResultType<()> {
        log::info!("shutdown connection {:?}", self.peer_addr);
        if let Some(stream) = self.client_stream.as_mut() {
            stream.shutdown().await?;
        } else if let Some(stream) = self.server_stream.as_mut() {
            stream.shutdown().await?;
        }
        Ok(())
    }
}

impl Drop for TlsFrameStream {
    fn drop(&mut self) {
        if let Err(e) = block_on(self.shutdown()) {
            log::error!("close connection {:?} failed, reason: {:?}", self.peer_addr, e);
        }
    }
}

Is something wrong with my code?

Better documentation for `write`

It's better to put this note into the module level docs.

I am using this crate with the tokio TcpStream. Since this example doesn't flush after write, I assumed we don't need to flush in this crate either.

Thanks!

test fails without feature tls12

One test requires the tls12 feature; without it, the build fails like this:

error[E0425]: cannot find value `TLS12` in module `rustls::version`
  --> tests/badssl.rs:46:53
   |
46 |         .with_protocol_versions(&[&rustls::version::TLS12])
   |                                                     ^^^^^ help: a static with a similar name exists: `TLS13`
   |
  ::: /tmp/tmp.oYhCcK9rbh/registry/rustls-0.20.6/src/versions.rs:31:1
   |
31 | pub static TLS13: SupportedProtocolVersion = SupportedProtocolVersion {
   | --------------------------------------------------------------------- similarly named static `TLS13` defined here

Redirect stdio over tokio_native_tls in Rust

Hi

Sorry if this is not the right place to ask this question. I am trying to replicate ncat's "-e" function to redirect stdio to a remote ncat instance.

I was able to do this with tokio::net::TcpStream by using dup2 and then executing the command. I am not sure how this can be done over tokio_native_tls, since TlsStream does not seem to expose the file descriptor that dup2 needs.

Is there a way that stdio can be redirected over TLS?

[tokio-rustls] Offer a `readable()` method like TcpStream

The rustls (0.20.x) documentation states that the number of plaintext bytes which can currently be read can be obtained at any time by calling process_new_packets.

Presumably, this would allow providing a readable() method (like the one on TcpStream).

The use case here is an application which will have north of one thousand mostly idle connections. Keeping receive buffers around for each of them to pass to read_buf is very inefficient: Instead, it is desirable to wait for the socket to actually become readable before allocating a buffer to receive into.

The tokio-native-tls/examples/echo failed to compile

error[E0277]: the trait bound tokio::net::TcpStream: tokio::io::async_read::AsyncRead is not satisfied
--> src/main.rs:33:54
|
33 | let mut tls_stream = tls_acceptor.accept(socket).await.expect("accept error");
| ^^^^^^ the trait tokio::io::async_read::AsyncRead is not implemented for tokio::net::TcpStream

error[E0277]: the trait bound tokio::net::TcpStream: tokio::io::async_write::AsyncWrite is not satisfied
--> src/main.rs:33:54
|
33 | let mut tls_stream = tls_acceptor.accept(socket).await.expect("accept error");
| ^^^^^^ the trait tokio::io::async_write::AsyncWrite is not implemented for tokio::net::TcpStream

error[E0599]: no method named read found for struct tokio_tls::TlsStream<tokio::net::TcpStream> in the current scope
--> src/main.rs:38:18
|
38 | .read(&mut buf)
| ^^^^ method not found in tokio_tls::TlsStream<tokio::net::TcpStream>
|
::: /Users/tianjia/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/tokio-tls-0.3.1/src/lib.rs:60:1
|
60 | pub struct TlsStream(native_tls::TlsStream<AllowStd>);
| ------------------------------------------------------------
| |
| doesn't satisfy _: AsyncReadExt
| doesn't satisfy _: AsyncRead
|
= note: the method read exists but the following trait bounds were not satisfied:
tokio_tls::TlsStream<tokio::net::TcpStream>: AsyncRead
which is required by tokio_tls::TlsStream<tokio::net::TcpStream>: AsyncReadExt

error[E0599]: no method named write_all found for struct tokio_tls::TlsStream<tokio::net::TcpStream> in the current scope
--> src/main.rs:49:18
|
49 | .write_all(&buf[0..n])
| ^^^^^^^^^ method not found in tokio_tls::TlsStream<tokio::net::TcpStream>
|
::: /Users/tianjia/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/tokio-tls-0.3.1/src/lib.rs:60:1
|
60 | pub struct TlsStream(native_tls::TlsStream<AllowStd>);
| ------------------------------------------------------------
| |
| doesn't satisfy _: AsyncWriteExt
| doesn't satisfy _: AsyncWrite
|
= note: the method write_all exists but the following trait bounds were not satisfied:
tokio_tls::TlsStream<tokio::net::TcpStream>: AsyncWrite
which is required by tokio_tls::TlsStream<tokio::net::TcpStream>: AsyncWriteExt

State of the tokio-tls crate

Could you please clarify what is the state of tokio-tls and difference from the current crate?

I was looking for a way to get a peer certificate for my TLS server and haven't found a way to do so with the tokio-tls which I'm using right now. But in the current crate, you merged a PR to support this in PR #6. But this crate wasn't released yet.

Now I'm confused about which version I should use going forward. Do you have any plans to backport the changes from #6 to tokio-tls? Or, as an alternative, to release tokio-native-tls instead? Any estimate on the release date if so?

Really appreciate any information. Thanks!

HTTP upgrade on the same port

I run into this issue quite often: I host an HTTPS server on a non-standard port, say 3000, but when I connect to it I get an error because I have to specify https://, otherwise it will attempt to use HTTP by default.

So I've been using a lot of different frameworks that use tokio-rustls, and I am wondering where this could be implemented. Is this in scope for tokio-rustls, or should it be implemented further up the chain?

Illegal SNI hostname received [49, 48, 46, 48, 46, 48, 46, 52]

I am getting Illegal SNI hostname received [49, 48, 46, 48, 46, 48, 46, 52] when hooking up a server to a legacy system.

This is my configuration:

let tls_cfg = {
    // Load public certificate.
    let server_cert = X509::stack_from_pem(server_cert.as_bytes()).unwrap();

    let mut server_certs: Vec<Certificate> = Vec::new();
    for x509 in server_cert {
        let certificate = tokio_rustls::rustls::Certificate(x509.to_der().unwrap());
        server_certs.push(certificate);
    } 

    // Load private key.
    let server_key = pem_parser::pem_to_der(server_key);
    let server_key = tokio_rustls::rustls::PrivateKey(server_key);                
    // Do not use client certificate authentication.
    let mut cfg = ServerConfig::builder()
        .with_safe_defaults()
        .with_no_client_auth()
        .with_single_cert(server_certs, server_key)
        .unwrap();

    // Configure ALPN to accept HTTP/2, HTTP/1.1 in that order.
    //cfg.alpn_protocols = vec![b"h2".to_vec(), b"http/1.1".to_vec()];
    sync::Arc::new(cfg)
};
let acceptor = tokio_rustls::TlsAcceptor::from(tls_cfg);
let client_stream = acceptor.accept_with(client_stream, session).await.unwrap();

Is there a way to ignore that illegal SNI message, since I cannot change the code that generates it?

native-tls TLS streams are not guaranteed to adhere to AsyncWrite due to TLS libraries incorrectly reporting the amount of consumed bytes

The AsyncWrite::poll_write method is defined to return

  • Poll::Ready(Ok(n)) if n bytes of data have been immediately written
  • Poll::Pending means that no data was written from the buffer provided

This means that if a call to poll_write returns Ready(Ok(3)), it should be guaranteed that 3 bytes have been written and the remaining buf.len() - 3 bytes have not been touched. Based on that assumption, an application could present a different buffer in the next poll_write call, and expect content from that buffer to be correctly transmitted.

This behavior is currently not guaranteed if the native-tls implementations are utilized, due to how some TLS libraries are behaving in case of non-blocking IO:

When the TLS library's public write API is called, it isn't yet aware how many bytes can actually be written into the socket later on. Therefore it will at this point try to copy as many bytes as possible (e.g. N) from the user-supplied buffer into its internal buffer, and create and encrypt a TLS record. At this point M bytes have been copied, creating a TLS record of size M+K. After this is done, the TLS library tries to flush the created record to the socket. This might however lead to a partial write (e.g. of M+K-5, which equals N-5).

Now the TLS library could theoretically return N as the return value of the write call, since all data has been buffered. However, if it did that, the application wouldn't be aware that not all data was written, and wouldn't know that, e.g., a new epoll registration is necessary. Therefore some libraries return a value < N to the application, in order to make sure the write call is repeated once the socket becomes ready for writing again. On the next write the first set of bytes is skipped, since those have already been copied into a record, and only new bytes after a certain offset are actually taken into account.

This means that if an application presents different data on the next write call, a certain amount of bytes at the beginning of the buffer might be ignored/skipped, and only the remaining buffer would be written.

If an application relies on being able to dynamically change the data before each poll_write call, it might end up with a corrupted stream. This could e.g. happen with a HTTP/2 library which creates frames in an on-demand fashion before each poll_write call, and where an event between 2 poll_write calls - like closing a Stream - could lead to the previous data being discarded in favor of newer data.

TLS libraries therefore sometimes document that presenting the same data on each write is necessary. E.g. openssl provides the following warning for SSL_write:

When a write function call has to be repeated because SSL_get_error(3) returned SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE, it must be repeated with the same arguments. The data that was passed might have been partially processed. When SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER was set using SSL_CTX_set_mode(3) the pointer can be different, but the data and length should still be the same.

The same was reported for s2n.

Applications which purely make use of write_all APIs should not encounter any issues.

The rustls version of tokio-tls does not have this problem, since it always reports that all bytes have been copied from the input buffer into the TLS buffer. It might therefore not be able to report that some bytes are "stuck" in the TLS session and haven't been flushed, but since it has a dedicated poll_flush method, that is expected and OK.

Potential fixes

  • Only offer async fn write_all(&[u8]) for native TLS instead of AsyncWrite
  • Improve documentation to highlight the issue

Add changelog

Hi!
There doesn't seem to be a changelog in this repository.
It would be easier for us devs if there was one :)
Thanks!

TLS error when trying to write a payload of size >= 5 KB

   version: TLSv1_3,
   payload: Alert(
       AlertMessagePayload {
           level: Fatal,
           description: InternalError,
       },
   ),
}

We get this error when we try to publish a package of large size.

Any ideas what may be wrong here?
