http-rs / async-h1
Asynchronous HTTP/1.1 in Rust
Home Page: https://docs.rs/async-h1
License: Apache License 2.0
@jbr reported that the following request:
GET / HTTP/1.0\r\nX-Real-IP: 127.0.0.1\r\nX-Forwarded-For: 127.0.0.1\r\nHost: localhost:8090\r\nX-NginX-Proxy: true\r\nConnection: close\r\nUser-Agent: curl/7.64.1\r\nAccept: */*\r\n\r\n
generates the following error:
Unsupported HTTP version 0
It seems like the error message should mention that we don't support HTTP/1.0.
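For context, httparse reports the version as the minor number of `HTTP/1.x`, so the `0` in the message is the minor version of an HTTP/1.0 request. A minimal std-only sketch (a hypothetical helper, not async-h1's actual code) of mapping that number to a clearer error:

```rust
// Hypothetical sketch: the parser exposes the request version as the minor
// number of `HTTP/1.x` (0 for HTTP/1.0, 1 for HTTP/1.1).
fn check_version(minor: u8) -> Result<(), String> {
    match minor {
        1 => Ok(()),
        0 => Err("unsupported HTTP version 1.0; only HTTP/1.1 is supported".to_string()),
        n => Err(format!("unsupported HTTP version 1.{}", n)),
    }
}

fn main() {
    // An HTTP/1.0 request line now yields a self-explanatory message.
    println!("{:?}", check_version(0));
}
```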
How can I detect from a handler that the underlying request has been canceled? Other server implementations such as hyper drop the handler future on request cancellation, but it seems async-h1 doesn't.
How to reproduce:
use async_std::net::TcpStream;
use http_types::{Response, StatusCode};

struct DropGuard;

impl Drop for DropGuard {
    fn drop(&mut self) {
        println!("dropped!")
    }
}

// Take a TCP stream, and convert it into sequential HTTP request / response pairs.
async fn accept(stream: TcpStream) -> http_types::Result<()> {
    println!("starting new connection from {}", stream.peer_addr()?);
    async_h1::accept(stream.clone(), |_req| async move {
        let _g = DropGuard;
        println!("got request!");
        async_std::task::sleep(std::time::Duration::from_secs(3)).await;
        println!("sending response!");
        let mut res = Response::new(StatusCode::Ok);
        res.insert_header("Content-Type", "text/plain");
        res.set_body("Hello world");
        Ok(res)
    })
    .await?;
    Ok(())
}
In a terminal, run curl localhost:8080 and press Ctrl+C before the 3-second sleep completes.
What you see is:
got request!
...(3 seconds later)
sending response!
dropped!
what you would see if the future was dropped is:
got request!
...(on request cancel, less than 3 seconds later)
dropped!
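The behavior being asked for relies on a general property of Rust futures: dropping an in-flight future immediately runs Drop on any state it captured. A self-contained sketch of that mechanism (plain std, no executor; the no-op waker is only scaffolding so the future can be polled by hand):

```rust
use std::future::Future;
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{Context, RawWaker, RawWakerVTable, Waker};

static DROPPED: AtomicBool = AtomicBool::new(false);

struct DropGuard;

impl Drop for DropGuard {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

// A no-op waker so the future can be polled by hand, without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Returns (guard dropped after first poll?, guard dropped after future dropped?).
fn demo() -> (bool, bool) {
    DROPPED.store(false, Ordering::SeqCst);
    let mut fut = Box::pin(async {
        let _g = DropGuard; // created on first poll, like the guard in the handler
        std::future::pending::<()>().await; // stands in for the 3-second sleep
    });
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    assert!(fut.as_mut().poll(&mut cx).is_pending());
    let before = DROPPED.load(Ordering::SeqCst);
    drop(fut); // this is what a "cancel on disconnect" server would do
    (before, DROPPED.load(Ordering::SeqCst))
}

fn main() {
    let (before, after) = demo();
    println!("dropped before cancel: {}, after cancel: {}", before, after);
}
```

So the machinery exists; the question is whether async-h1's accept loop ever drops the handler future when the peer disconnects.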
Question up front: is it possible to use this crate with tokio?
I'm interested in building a library for using the Spotify API; the existing rspotify seems to be tied to tokio through its dependence on reqwest. I'd like this library to be executor-independent, and looking through some recent discussions led me to this crate in particular.
I was initially a bit disappointed to see that there was no client_tokio example or something similar, given the claim that async-h1 doesn't care about the runtime it's running on. After trying to make such an example myself, I ran into what seems to be a bigger issue: because tokio's TcpStream doesn't implement futures::io::AsyncRead (it implements tokio::io::AsyncRead instead), I think the orphan rule prevents async-h1 from working with tokio.
Am I crazy, or is it currently impossible to use this crate with tokio?
EDIT: After further research, it appears this can be accomplished using a compatibility wrapper; there's an example here: https://github.com/Nemo157/futures-tokio-compat/blob/master/src/lib.rs (which seems to have the additional benefit of not allocating for the Pin, meaning this should be zero-cost?). I'm planning to keep working on it. If I finish an example, is that something this project would be interested in adding to its examples?
I would like to use this crate to parse HTTP messages out of any impl futures::io::AsyncRead + futures::io::AsyncWrite + Send + Sync + Unpin type (specifically a type that could be either a &mut TcpStream or a &mut TlsStream from the async_tls crate). However, I have hit the roadblock that your decoders and encoders are not part of your public API, so I am unable to use them for single-shot HTTP message reading or writing from an existing stream, and I have found no other crates capable of doing this (somehow?) - this one seemed the closest.
Furthermore, the trait bounds on the input of your decoders seem too strict: currently decode requires IO: Read + Write + Clone + Send + Sync + Unpin + 'static. I believe it should be able to accept IO: Read + Write + Unpin. The Clone may be unnecessary, as cloning is done both here https://docs.rs/async-h1/2.0.0/src/async_h1/server/mod.rs.html#58 and here https://docs.rs/async-h1/2.0.0/src/async_h1/server/decode.rs.html#24. I may be wrong about the trait bounds, though, as I have not tested this.
Similar to actix's ConnectionInfo::host, we should expose peer_addr, local_addr and host for responses, and remote for requests.
This is a requirement for implementing http-rs/tide#462 and #99. Thanks!
I'm running into an assert!(...) here, which should probably just return an error instead: https://docs.rs/async-h1/2.1.2/src/async_h1/client/decode.rs.html#32 (I think that's what the TODO there is suggesting.)
In this case, I know that the server I'm querying may not be up yet -- I'm issuing requests in a loop with a timeout, for the purpose of waiting for the server to come up all the way.
Hi there,
Please consider this more of a question than a bug report. I am trying to implement a TLS listener with tide, which requires me to implement a ToListener/Listener. For this to work I need to be able to implement a similar pattern as found in the tide unix listener such as:
fn handle_unix<State: Clone + Send + Sync + 'static>(app: Server<State>, stream: UnixStream) {
    task::spawn(async move {
        let local_addr = unix_socket_addr_to_string(stream.local_addr());
        let peer_addr = unix_socket_addr_to_string(stream.peer_addr());
        let fut = async_h1::accept(stream, |mut req| async {
            req.set_local_addr(local_addr.as_ref());
            req.set_peer_addr(peer_addr.as_ref());
            app.respond(req).await
        });
        if let Err(error) = fut.await {
            log::error!("async-h1 error", { error: error.to_string() });
        }
    });
}
The issue that I'm having is that async_h1::accept requires trait bounds of Clone, futures::io::AsyncRead and futures::io::AsyncWrite. I have noticed, though, that tokio_openssl and tokio-native-tls both implement only tokio::io::AsyncRead and AsyncWrite, which are not compatible with the futures::io versions. Also, neither tokio-openssl's nor tokio-native-tls's resulting TlsStream is Clone.
So I think my question is:
Anyway, thanks very much for your time :)
We should use the Trailers type from http-types, ref http-rs/http-types#57. The patch is not done yet because we're missing a Sender half, but once that's done we should migrate this repo to it as well.
async_h1::accept panics with the following endpoint and request:
|req| async move {
    let mut res = Response::new(StatusCode::Ok);
    res.set_body(req);
    Ok(res)
}
POST / HTTP/1.1
content-length: 0
Right now we hard-code certain values for the server, including the max number of requests and the keep-alive timeout. We should have an API that allows these to be configured.
Something I've been talking about with @rylev (and I believe @dignifiedquire as well) is to create a single Error / ErrorKind type as part of http-types, that can in turn be reused by all other packages on top (async-h1, http-service, tide, etc.)
This would make it easier to implement those crates, would greatly reduce code duplication between them, and ideally would include a way to have a "catch-all" mode that can even be used for middleware.
Software | Version(s)
---|---
async-h1 | master branch
Rustc | rustc 1.41.0-nightly (7dbfb0a8c 2019-12-10)
Operating System | Linux 4.19.77 x86_64
The body of the request should have been read into the buffer.
Parsing the body of the request in the server example causes the compiler to error out with a bunch of lifetime errors. I have attached the output from cargo containing the entire error output; I didn't copy and paste it here because it's long.
Additionally, calling let method = req.method(); or let headers = req.headers(); inside the async block causes similar lifetime issues. I have pushed up a branch containing a reproducible example; the code in question is as follows:
async fn accept(addr: String, stream: TcpStream) -> Result<(), async_h1::Exception> {
    server::accept(&addr, stream.clone(), stream, |req| {
        async {
            let mut body = vec![];
            req.read_to_end(&mut body).await?;
            let resp = Response::new(StatusCode::Ok)
                .set_header("Content-Type", "text/plain")?
                .set_body_string("Hello".into())?;
            Ok(resp)
        }
    })
    .await
}
Moving the conversation from #1 here; @dignifiedquire proposed we build an HTTP client like so:
async fn main() -> throws {
    let tcp_stream = net::TcpStream::connect("127.0.0.1:8080").await?;
    let (tcp_reader, tcp_writer) = &mut (&tcp_stream, &tcp_stream);
    let http_stream = client::connect(tcp_reader);
    // let's make 10 requests over the same connection
    for i in 0..10 {
        // send a request
        let body = Body::from(format!("hello chashu {}", i));
        let mut req = client::encode(Request::new(body));
        tcp_writer.write_all(req).await?;
        // read the response
        let res = http_stream.next().await?;
        println!("Response {}: {:?}", i, res);
    }
}
I've found that the client example fails with nginx on a Raspberry Pi 4B. It's OK with local nginx and with httpd on the Raspberry Pi. I slightly modified the client example for my environment:
use async_h1::client;
use async_std::net::TcpStream;
use http_types::{Error, Method, Request, Url};

#[async_std::main]
async fn main() -> Result<(), Error> {
    // Address for my Raspberry Pi 4B.
    // On the Raspberry Pi, the server is running via `docker run -d -p 80:80 nginx`.
    let stream = TcpStream::connect("192.168.10.50:80").await?;
    let peer_addr = stream.peer_addr()?;
    println!("connecting to {}", peer_addr);
    for i in 0usize..2 {
        println!("making request {}/2", i + 1);
        let url = Url::parse(&format!("http://{}/", peer_addr)).unwrap(); // Changed URL to /
        let req = Request::new(Method::Get, url);
        let res = client::connect(stream.clone(), req).await?;
        println!("{:?}", res);
        // dbg!(res.body_string().await); // Works fine if this line is uncommented
    }
    Ok(())
}
And when it runs:
❯ cargo run --example client
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/examples/client`
connecting to 192.168.10.50:80
making request 1/2
Response { status: Ok, headers: Headers { headers: {HeaderName("server"): [HeaderValue { inner: "nginx/1.17.10" }], HeaderName("connection"): [HeaderValue { inner: "keep-alive" }], HeaderName("content-type"): [HeaderValue { inner: "text/html" }], HeaderName("accept-ranges"): [HeaderValue { inner: "bytes" }], HeaderName("last-modified"): [HeaderValue { inner: "Tue, 14 Apr 2020 14:19:26 GMT" }], HeaderName("etag"): [HeaderValue { inner: "\"5e95c66e-264\"" }], HeaderName("date"): [HeaderValue { inner: "Sat, 09 May 2020 08:36:04 GMT" }], HeaderName("content-length"): [HeaderValue { inner: "612" }]} }, version: None, sender: Some(Sender { .. }), receiver: Receiver { .. }, body: Body { reader: "<hidden>", length: Some(612) }, local: TypeMap }
making request 2/2
Error: invalid HTTP version
It failed on the second request.
// dbg!(res.body_string().await); // Works fine if this line is uncommented
As I commented above, it succeeds if the body of the first response is read. After some investigation, I've found that async-h1 reads the first response's body as the second response's header and fails. Still, I don't know why it fails only with nginx on the Raspberry Pi and succeeds with other servers.
Most of the chunked encoder logic has already been written when we were implementing the server, but we need to hook it up to the client as well.
Relevant part of the RFC - in short, if the server receives an Expect header set to 100-continue (case-insensitive), the client won't send the body of the request until the server responds with an intermediate response (which can't have a body) of 100 Continue; if a full response is sent instead, the client won't send the body at all.
Most clients will have a timeout and send the body anyway if the server doesn't respond with 100 Continue, but that's not guaranteed (the timeout could be very, very long), so a deadlock could theoretically happen.
tl;dr: we're currently always accepting Expect: 100-continue headers, where we should be enabling header validation before we send back an intermediate 100 status code.
The 100 Continue status exists in order to signal to a client that the request has been understood and all the headers are correct. This enables validating things like encoding, offsets, and authentication before proceeding to transfer a significant amount of data.
The validation of these headers should be defined by end-users, as they are the ones who know which combination of headers is acceptable.
async-h1 is currently hardcoded to always reply with 100 to the Expect: 100-continue header. This means end-users don't have a chance to validate headers before proceeding.
Lines 132 to 144 in 3a368bd
We should provide some way for end-users to validate headers before sending back the 100 Continue response. I'm not quite sure how to do this, but one option would be to only send 100 Continue when the first chunk of the request body is requested. That would indicate that the framework has successfully parsed the headers and is now ready to receive the body.
If a Response is returned before the Request body has been read, it would likely carry a non-100 status code, and the client would know not to send the request body. So semantically I think that would be the right way to go?
If at all possible I think we should avoid exposing the Expect: 100-continue semantics to end-users, since the back-and-forth dance is quite complex, needs to work out of the box, and doesn't fit well with the req/res model we use in http-rs.
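For reference, the server-side detection itself is tiny; the hard part is the sequencing described above. A hedged sketch of the check (using a toy name/value-pair header representation, not http-types' actual Headers type):

```rust
// Sketch: decide whether a request expects a `100 Continue` interim response.
// Both the header name and its value must be compared case-insensitively.
fn expects_continue(headers: &[(String, String)]) -> bool {
    headers.iter().any(|(name, value)| {
        name.eq_ignore_ascii_case("expect") && value.trim().eq_ignore_ascii_case("100-continue")
    })
}

fn main() {
    let headers = vec![("Expect".to_string(), "100-Continue".to_string())];
    println!("{}", expects_continue(&headers)); // prints "true"
}
```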
In the decode function, if 0 bytes are read, that means we've hit the end of the stream. In that case no error was encountered, but we also didn't parse an HTTP request, so we need to return something like Ok(None).
In the server example, we then need to handle that case: if server::decode returns Ok(None), we should exit the task instead of sending an (unsolicited) response with server::encode.
This reduces the number of errors reported by autocannon to 0-2 errors per 10-second benchmark run. I've no idea what the remaining errors are.
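A synchronous, std-only sketch of the shape being proposed (a stand-in, not the real decode signature): a clean EOF before any bytes maps to Ok(None), while a complete head maps to Ok(Some(..)).

```rust
use std::io::Read;

// Sketch: read until the end of the head (`\r\n\r\n`), distinguishing
// "peer closed before sending anything" (Ok(None)) from a real request.
fn read_head(mut io: impl Read) -> std::io::Result<Option<Vec<u8>>> {
    let mut buf = Vec::new();
    let mut chunk = [0u8; 1024];
    loop {
        let n = io.read(&mut chunk)?;
        if n == 0 {
            // 0 bytes read: end of stream. No error, but no request either.
            return Ok(if buf.is_empty() { None } else { Some(buf) });
        }
        buf.extend_from_slice(&chunk[..n]);
        if buf.windows(4).any(|w| w == b"\r\n\r\n") {
            return Ok(Some(buf));
        }
    }
}

fn main() {
    // An immediately-closed connection: the task should just exit.
    println!("no request: {}", read_head(&b""[..]).unwrap().is_none());
}
```

With this shape, the accept loop can break out of its iteration on None instead of encoding an unsolicited response.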
I think https://github.com/yoshuawuyts/async-h1/blob/master/examples/server.rs#L16-L17 and the following Stream implementations can now be removed; async-std's TcpStream now implements Clone.
Using the test_chunked_echo test (it should also work with fixed-length bodies), with the following request:
POST / HTTP/1.1
content-type: text/plain
content-length: 11
aaaaabbbbb
refs: #114
It seems like a useful function to have when one can't use async_h1::accept.
An Error always has an associated StatusCode. From this, I assumed that errors are converted into HTTP responses when using the accept function.
However, that does not seem to be the case. The errors are just propagated, which results in the connection closing without a response:
Line 75 in 5606a9d
Should this be handled automatically by accept, or is this the job of the user? If the latter, then what is the advantage of the StatusCode requirement for errors?
In order for us to implement http-rs/tide#462 we should require passing the peer addr into the server constructor as a string. Knowing the peer_addr is overall useful, and was one of the things we were missing in Hyper.
Reading headers in this fashion will grow the receive buffer without bound as long as peers never send \r\n\r\n.
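A std-only sketch of the fix: cap the buffered head at an assumed limit (8 KiB here; 8-16 KiB is a common default in other HTTP servers) and fail instead of growing forever.

```rust
use std::io::{self, Read};

// Assumed cap, chosen for illustration; async-h1 would pick its own limit.
const MAX_HEAD_SIZE: usize = 8 * 1024;

// Sketch: buffer the request head, but refuse to grow without bound
// when the peer never sends the terminating `\r\n\r\n`.
fn read_head_bounded(mut io: impl Read) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    let mut byte = [0u8; 1];
    while io.read(&mut byte)? == 1 {
        buf.push(byte[0]);
        if buf.ends_with(b"\r\n\r\n") {
            return Ok(buf);
        }
        if buf.len() > MAX_HEAD_SIZE {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "request head too large"));
        }
    }
    Err(io::Error::new(io::ErrorKind::UnexpectedEof, "connection closed mid-head"))
}

fn main() {
    let flood = vec![b'a'; 100_000]; // a peer that never sends the terminator
    println!("rejected: {}", read_head_bounded(&flood[..]).is_err());
}
```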
I'm using rustc nightly version rustc 1.47.0-nightly (bf4342114 2020-08-25). Clone the repo and run cargo test; the error below is shown:
Compiling version_check v0.9.2
Compiling cfg-if v0.1.10
Compiling proc-macro2 v1.0.24
Compiling unicode-xid v0.2.1
Compiling syn v1.0.48
Compiling typenum v1.12.0
Compiling libc v0.2.80
Compiling memchr v2.3.4
Compiling cache-padded v1.1.1
Compiling futures-io v0.3.7
Compiling futures-core v0.3.7
Compiling log v0.4.11
Compiling getrandom v0.1.15
Compiling fastrand v1.4.0
Compiling event-listener v2.5.1
Compiling autocfg v1.0.1
Compiling pin-project-lite v0.1.11
Compiling subtle v2.3.0
Compiling waker-fn v1.1.0
Compiling parking v2.0.0
Compiling once_cell v1.4.1
Compiling serde_derive v1.0.117
Compiling proc-macro-hack v0.5.19
Compiling byteorder v1.3.4
Compiling vec-arena v1.0.0
Compiling lazy_static v1.4.0
Compiling async-task v4.0.3
Compiling serde v1.0.117
Compiling percent-encoding v2.1.0
Compiling ppv-lite86 v0.2.9
Compiling matches v0.1.8
Compiling opaque-debug v0.2.3
Compiling const_fn v0.4.2
Compiling ryu v1.0.5
Compiling atomic-waker v1.0.0
Compiling pin-utils v0.1.0
Compiling tinyvec v0.3.4
Compiling maybe-uninit v2.0.0
Compiling slab v0.4.2
Compiling serde_json v1.0.59
Compiling itoa v0.4.6
Compiling cpuid-bool v0.1.2
Compiling opaque-debug v0.3.0
Compiling anyhow v1.0.33
Compiling base64 v0.12.3
Compiling data-encoding v2.3.0
Compiling http-types v2.6.0
Compiling httparse v1.3.4
Compiling ansi_term v0.11.0
Compiling stable_deref_trait v1.2.0
Compiling infer v0.2.3
Compiling remove_dir_all v0.5.3
Compiling difference v2.0.0
Compiling concurrent-queue v1.2.2
Compiling async-mutex v1.4.0
Compiling simple-mutex v1.1.5
Compiling unicode-bidi v0.3.4
Compiling form_urlencoded v1.0.0
Compiling generic-array v0.14.4
Compiling standback v0.2.11
Compiling time v0.2.22
Compiling cookie v0.14.2
Compiling crossbeam-utils v0.7.2
Compiling pretty_assertions v0.6.1
Compiling async-dup v1.2.2
Compiling unicode-normalization v0.1.13
Compiling async-channel v1.5.1
Compiling futures-lite v1.11.2
Compiling kv-log-macro v1.0.7
Compiling polling v2.0.2
Compiling nb-connect v1.0.2
Compiling num_cpus v1.13.0
Compiling quote v1.0.7
Compiling rand_core v0.5.1
Compiling rand_chacha v0.2.2
Compiling idna v0.2.0
Compiling crossbeam-queue v0.2.3
Compiling byte-pool v0.2.2
Compiling rand v0.7.3
Compiling async-io v1.1.10
Compiling async-executor v1.3.0
Compiling blocking v1.0.2
Compiling digest v0.9.0
Compiling block-cipher v0.7.1
Compiling universal-hash v0.4.0
Compiling crypto-mac v0.8.0
Compiling block-buffer v0.9.0
Compiling aead v0.3.2
Compiling aes-soft v0.4.0
Compiling polyval v0.4.1
Compiling hmac v0.8.1
Compiling sha2 v0.9.1
Compiling hkdf v0.9.0
Compiling ghash v0.3.0
Compiling async-global-executor v1.4.3
Compiling aes v0.4.0
Compiling aes-gcm v0.6.0
Compiling tempfile v3.1.0
Compiling time-macros-impl v0.1.1
Compiling async-attributes v1.1.1
Compiling thiserror-impl v1.0.21
Compiling async-std v1.6.5
Compiling time-macros v0.1.1
Compiling thiserror v1.0.21
Compiling duplexify v1.2.2
Compiling async-test v1.0.0
Compiling url v2.1.1
Compiling serde_urlencoded v0.7.0
Compiling serde_qs v0.7.0
Compiling async-h1 v2.1.3 (/Users/jm/github/async-h1)
error[E0308]: mismatched types
--> src/chunked/decoder.rs:527:38
|
527 | let sender = Sender::new(s);
| ^ expected struct `async_channel::Sender`, found struct `async_std::sync::Sender`
|
= note: expected struct `async_channel::Sender<http_types::Trailers>`
found struct `async_std::sync::Sender<_>`
error[E0308]: mismatched types
--> src/chunked/decoder.rs:553:38
|
553 | let sender = Sender::new(s);
| ^ expected struct `async_channel::Sender`, found struct `async_std::sync::Sender`
|
= note: expected struct `async_channel::Sender<http_types::Trailers>`
found struct `async_std::sync::Sender<_>`
error[E0308]: mismatched types
--> src/chunked/decoder.rs:583:38
|
583 | let sender = Sender::new(s);
| ^ expected struct `async_channel::Sender`, found struct `async_std::sync::Sender`
|
= note: expected struct `async_channel::Sender<http_types::Trailers>`
found struct `async_std::sync::Sender<_>`
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0308`.
error: could not compile `async-h1`.
To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed
Any clue how to fix that?
Given the following input:
const REQUEST: &'static str = concat![
    "GET / HTTP/1.1\r\n",
    "host: example.com\r\n",
    "user-agent: curl/7.54.0\r\n",
    "content-type: text/plain\r\n",
    "transfer-encoding: chunked\r\n",
    "\r\n",
    "453\r\n",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "\r\n",
    "0",
    "\r\n",
    "\r\n",
];
The body is decoded into:
[src\chunked\encoder.rs:172] String::from_utf8(src[0..msg_len].to_vec()).unwrap() = "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\r\n0\r\n\r\n"
It looks like the trailing \r\n0\r\n is not being removed. The failing test can be run from the zero-test branch.
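For comparison, a correct chunked decode stops at the terminal zero-size chunk and excludes all of the framing from the body. A minimal std-only sketch over an in-memory buffer (chunk extensions and trailers are deliberately not handled):

```rust
// Sketch of chunked-body decoding over an in-memory buffer. Returns None on
// malformed or truncated input.
fn decode_chunked(mut input: &[u8]) -> Option<Vec<u8>> {
    let mut body = Vec::new();
    loop {
        // The chunk-size line is hexadecimal, terminated by CRLF.
        let pos = input.windows(2).position(|w| w == b"\r\n")?;
        let size = usize::from_str_radix(std::str::from_utf8(&input[..pos]).ok()?.trim(), 16).ok()?;
        input = &input[pos + 2..];
        if size == 0 {
            // Terminal chunk: the `0\r\n\r\n` framing is NOT part of the body.
            return Some(body);
        }
        body.extend_from_slice(input.get(..size)?);
        input = input.get(size + 2..)?; // skip the chunk data and its trailing CRLF
    }
}

fn main() {
    let body = decode_chunked(b"5\r\nhello\r\n0\r\n\r\n").unwrap();
    println!("{}", String::from_utf8(body).unwrap()); // prints "hello"
}
```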
cc/ @dignifiedquire
Currently client::decode internally wraps the passed AsyncRead in a BufReader. So if the server sends some data immediately after the HTTP response that is not part of the response body, that data will be lost in the BufReader's inner buffer.
My use case is that I have a server that switches to a custom binary protocol after an initial HTTP request/response exchange, where the server sends some data after sending a 200 OK response, for example. Is this something that could be supported by this crate? One solution would be to make the parameter to decode an AsyncBufRead, so that the caller has access to the BufReader and can use the unread data in the buffer as needed. This would be similar to the API offered by the Go standard HTTP library, which is used in the Go implementation of the same protocol - ReadResponse.
Hi,
We're working on some server code on our end and use Wiremock to test our code.
Our server supports receiving multiple headers with the same name, as does this crate.
However, we've observed our tests failing when trying to verify all headers with the same name are received.
I've narrowed this down to these lines:
Lines 79 to 81 in 6273556
I believe the line in the for loop should be:
req.append_header(header.name, std::str::from_utf8(header.value)?);
Wiremock uses async-h1 as its lightweight server, and as a result we can't validate our expectations. Changing this locally compiles and passes all of your tests, but I believe decode is not tested. I'm working on a PR, and will open it once I write a test case that covers my change.
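The insert-vs-append distinction is easy to demonstrate with a toy header map (a hypothetical structure for illustration, not http-types' Headers): append keeps every value for a repeated name, which is what decode needs for headers like Set-Cookie.

```rust
use std::collections::HashMap;

// Toy header map: names are case-insensitive, values are kept in arrival order.
#[derive(Default)]
struct Headers(HashMap<String, Vec<String>>);

impl Headers {
    // `append` keeps earlier values for the same name...
    fn append(&mut self, name: &str, value: &str) {
        self.0.entry(name.to_ascii_lowercase()).or_default().push(value.to_string());
    }

    // ...while `insert` replaces them (the bug described above).
    fn insert(&mut self, name: &str, value: &str) {
        self.0.insert(name.to_ascii_lowercase(), vec![value.to_string()]);
    }

    fn get_all(&self, name: &str) -> &[String] {
        self.0.get(&name.to_ascii_lowercase()).map(Vec::as_slice).unwrap_or(&[])
    }
}

fn main() {
    let mut h = Headers::default();
    h.append("Set-Cookie", "a=1");
    h.append("set-cookie", "b=2");
    println!("{}", h.get_all("SET-COOKIE").len()); // prints "2"
}
```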
There are many HTTP response headers that we should be sending back to the client. Here is a list of headers that we may want to support for every single HTTP request. Note: this list may be incomplete; new headers should be added when discovered, and removed when deemed unnecessary.
What if instead of doing this:
while let Some(stream) = incoming.next().await {
    task::spawn(async {
        let stream = stream?;
        println!("starting new connection from {}", stream.peer_addr()?);
        let stream = Stream::new(stream);
        server::connect(stream.clone(), stream, |_| {
            async { Ok(Response::new(StatusCode::Ok)) }
        })
        .await
    });
}
We wouldn't do the io::copy
call internally, but instead would return the stream of responses:
while let Some(stream) = incoming.next().await {
    task::spawn(async {
        let stream = stream?;
        println!("starting new connection from {}", stream.peer_addr()?);
        let stream = Stream::new(stream);
        let res = server::connect(stream.clone(), |_| async {
            Ok(Response::new(StatusCode::Ok))
        });
        io::copy(res.await?, stream).await
    });
}
This would follow the principle of "never take a writer" and chain slightly better. This is probably less important right now, as we're just trying to make things work, but I think it'd be overall a bit nicer in the long term. In particular, things like async-tls would be used on both sides of the closure and wouldn't need to be passed in, which seems like a big plus.
The only issue is that io::copy doesn't know how to handle BufReader + BufWriter, so we may need to expose a new function for that.
cc/ @rylev
Looking at the HTTP client decode implementation, it looks like multiple occurrences of the same HeaderName in the response are not handled properly. More specifically, should append_header be called at
Line 177 in 1dab1d4
For instance, in cases where the response contains multiple individual Set-Cookie headers, it seems like in the current implementation only the first cookie will get added to the Response.
Currently we have a stateless encoder + decoder protocol: server::encode encodes stuff, and server::decode decodes stuff. However, if we want to introduce HTTP keep-alive semantics, we'll have to move over to a stateful protocol.
I'd been considering for a while that this may be needed, not least to be able to reuse buffers. But this makes it a more pressing issue, as it seems our benchmarks may be limited by the number of connections we establish.
Note: I just finished sketching the API, and I think we can split "buffer reuse" from "keepalive". It doesn't necessarily need to become entirely stateful, because iterating over a single connection (in the case of HTTP/1.1) is entirely serial.
use async_h1::{server, Body};
use async_std::prelude::*;
use async_std::{net, task, io};

#[async_macros::main]
async fn main() -> Result<(), async_h1::Exception> {
    let listener = net::TcpListener::bind(("127.0.0.1", 8080)).await?;
    println!("listening on {}", listener.local_addr()?);
    // The outer iter is parallel, the inner iter is serial.
    listener.incoming().par_stream().try_for_each(|stream| {
        server::connect(stream).try_for_each(|conn| {
            let req = conn.recv().await?;
            println!("request: {:?}", req);
            let res = http::Response::new("hello chashu");
            conn.send(res).await?;
            Ok::<(), async_h1::Exception>(())
        }).await?;
        Ok::<(), async_h1::Exception>(())
    }).await?;
    Ok(())
}
TcpStream object, because that means we couldn't determine at runtime whether to use async-h1, async-h2, etc. It would also tie us directly to a single runtime, which mimics Hyper's design and gets us into trouble.
Ref: http-rs/tide#623
When http-types is publicly available we can introduce CI.
We should have some sort of logic for when the chunked stream ends prematurely. Right now we just end the stream and leave it to later steps to (maybe?) catch the issue: https://github.com/http-rs/async-h1/pull/69/files#r392666474
A minimal repro using http_client for clarity:
#[async_std::main]
async fn main() -> http_types::Result<()> {
    let url = http_types::url::Url::parse("https://httpbin.org/stream/100")?;
    let req = http_types::Request::new(http_types::Method::Get, url);
    let client = http_client::h1::H1Client::new();
    use http_client::HttpClient;
    let response = client.send(req).await?;
    async_std::io::copy(response, async_std::io::stdout()).await?;
    Ok(())
}
This hangs until the request times out, which on my machine is exactly one minute after the last byte is received.
I have a use case where I need to keep the connection (the RW type in connect) open, to send more data on it after an initial HTTP request/response exchange. This currently doesn't seem to be possible, since the connect function takes ownership of the connection. I know that async-std's TcpStream implements Clone, but the same is not the case for async-tls's TlsStream.
Is this use case supported by this crate? How would I work around this?
I'm using schemathesis (just the worst name) to run OpenAPI tests against my Tide server.
A number of these tests outright fail with connection resets by the server and obtuse errors on the server side.
Easy reproduction for exactly what I'm doing would be:
pip install schemathesis
RUST_LOG=debug tide-serve -b localhost:8000
~/.local/bin/schemathesis run ./openapi.yaml -O PutObject --base-url=http://localhost:8000/ --show-errors-tracebacks
Observe numerous client-side failures such as:
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Running a dummy server with python -m SimpleHTTPServer doesn't result in these connection resets, however.
Is there a plan for HTTP/1.0 to be supported? It's legacy, but still used by some bots and tools.
Unsupported HTTP version 1.0
In our chunked encoder impl we try to index outside of the permitted length, which causes a panic. We should investigate what is causing this and fix it. First steps toward this were taken in #75, so that our chunked encoding logic is easier to follow. Thanks!
We should have tests for:
thanks!
All checks that require looking into headers should work regardless of the actual casing of those headers. Currently we check for HTTP headers without making sure we ignore the case of the characters, but HTTP headers are case-insensitive.
An example of this is here
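In std Rust the fix is usually a one-liner via eq_ignore_ascii_case. A sketch of a connection-close check done case-insensitively (toy name/value pairs, not async-h1's internals):

```rust
// Sketch: checking `Connection: close` must ignore whatever casing the peer used.
fn wants_close(headers: &[(&str, &str)]) -> bool {
    headers.iter().any(|(name, value)| {
        name.eq_ignore_ascii_case("connection") && value.eq_ignore_ascii_case("close")
    })
}

fn main() {
    // A case-sensitive comparison would miss both of these.
    println!("{}", wants_close(&[("Connection", "Close")]));
    println!("{}", wants_close(&[("CONNECTION", "close")]));
}
```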
I'm not sure if you're interested in this approach, but I swapped out usage of async-std for futures (and futures-timer): hwchen@41f57c6. I saw that there was mention of using a core feature in async-std; just using futures might simplify things even further?
Anyway, my thought is to use futures as much as possible, as a clear indicator of async-ecosystem compatibility. I'm sure there are many design decisions and trade-offs I don't know about. If you don't feel like discussing, feel free to close; I won't take it personally :).
While working on the announcement post I realized the http client API could probably receive a similar ergonomic improvement as the server API:
use async_h1::client;
use async_std::io::{self, Read, Write};
use async_std::net::TcpStream;
use http_types::{Method, Request, Url};

#[async_std::main]
async fn main() -> http_types::Result<()> {
    // open a tcp connection to a host
    let stream = TcpStream::connect("127.0.0.1:8080").await?;
    let peer_addr = stream.peer_addr()?;

    // create a request
    let url = Url::parse(&format!("http://{}/foo", peer_addr))?;
    let req = Request::new(Method::Get, url);

    // send the request and print the response
    let res = client::connect(stream, req).await?;
    println!("{:?}", res);
    Ok(())
}
The plan would be to replace the individual encode / decode steps with a single accept function which operates on both. Streams can still be wrapped the same way we do it in the server impl: by wrapping the TcpStream before passing it into accept.
I used async-std and surf in unleash-client-rust recently, and trying to document the minimum supported Rust version for it is turning out harder than expected ;(. async-h1 seems to be using latest-everything features, and that's probably fine (async is new, etc.), but it would be nice to know what the actual policy is going forward. Just a one-liner in README.md.
Thanks!
(rust-lang/rust#65721 is what caught me when trying to make 1.40 the MSRV, which is used by async-h1.)
async-h1 depends on async-std with the unstable feature; that in turn pulls in wasm dependencies via futures-timer, then gloo-timers. Please consider adjusting the feature-flag dependencies to avoid pulling in gloo-timers and its pile of wasm-specific dependencies on non-wasm platforms.
We should copy the impl from https://crates.io/crates/httpdate (or use it as a dependency; unsure which is better).
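For reference, the formatting half fits in a few dozen lines of std-only Rust. A sketch using Howard Hinnant's civil-from-days algorithm (httpdate also handles parsing, which this omits):

```rust
// Sketch of RFC 7231 `IMF-fixdate` formatting (e.g. "Sun, 06 Nov 1994 08:49:37 GMT"),
// std-only. Date math via Howard Hinnant's civil-from-days algorithm.
fn http_date(secs_since_epoch: i64) -> String {
    const DAYS: [&str; 7] = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"];
    const MONTHS: [&str; 12] = [
        "Jan", "Feb", "Mar", "Apr", "May", "Jun",
        "Jul", "Aug", "Sep", "Oct", "Nov", "Dec",
    ];
    let days = secs_since_epoch.div_euclid(86_400);
    let secs = secs_since_epoch.rem_euclid(86_400);
    // Convert a day count (epoch = 1970-01-01) to a civil y/m/d date.
    let z = days + 719_468;
    let era = z.div_euclid(146_097);
    let doe = z.rem_euclid(146_097);
    let yoe = (doe - doe / 1_460 + doe / 36_524 - doe / 146_096) / 365;
    let doy = doe - (365 * yoe + yoe / 4 - yoe / 100);
    let mp = (5 * doy + 2) / 153;
    let d = doy - (153 * mp + 2) / 5 + 1;
    let m = if mp < 10 { mp + 3 } else { mp - 9 };
    let y = yoe + era * 400 + if m <= 2 { 1 } else { 0 };
    let weekday = (days + 4).rem_euclid(7) as usize; // 1970-01-01 was a Thursday
    format!(
        "{}, {:02} {} {} {:02}:{:02}:{:02} GMT",
        DAYS[weekday], d, MONTHS[(m - 1) as usize], y,
        secs / 3_600, (secs % 3_600) / 60, secs % 60
    )
}

fn main() {
    println!("{}", http_date(784_111_777)); // the RFC's example date
}
```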
Related to #50: https://github.com/dignifiedquire/http-client/blob/h1/src/h1.rs#L44 contains host parsing that we should include as part of async-h1. The only custom code left in http-client/async-h1 would then be matching http/https to a TLS stream.