mozilla / neqo
Neqo, an implementation of QUIC in Rust
Home Page: https://firefox-source-docs.mozilla.org/networking/http/http3.html
License: Apache License 2.0
This changes the tests significantly. It exposes the tests to timing variance that the fixed value didn't. If we can fix the value here, that will help a lot.
I like that this now uses now() instead of 0 in a lot of places, but I think that we want a constant function. In doing the 0-RTT code, I need a lot more control over the way that we manage timers and tying this to the system clock is unworkable.
I know that it's hard to get a concrete Instant instance, but maybe we can use Once for picking that point in time.
Yes, it's a bikeshed, but it seems like some things are gated on resolving this. Here are some possibilities I'll throw out there:
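One possibility along these lines (a sketch; `test_now` is an invented name, and on older Rust `lazy_static` or `std::sync::Once` would do the same job as `OnceLock`): capture a single shared `Instant` lazily, so every test measures time relative to the same fixed baseline instead of the live system clock.

```rust
use std::sync::OnceLock;
use std::time::Instant;

/// Hypothetical test helper: the first caller captures an `Instant`,
/// and every later caller gets that same baseline, so timer math is
/// deterministic relative to it rather than tied to the system clock.
fn test_now() -> Instant {
    static BASELINE: OnceLock<Instant> = OnceLock::new();
    *BASELINE.get_or_init(Instant::now)
}
```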
As of January 1 2019, Mozilla requires that all GitHub projects include this CODE_OF_CONDUCT.md file in the project root. The file has two parts:
If you have any questions about this file, or Code of Conduct policies and procedures, please see Mozilla-GitHub-Standards or email [email protected].
(Message COC001)
I think I just made it send STOP_SENDING, but no, it should be a connection error.
We appear to only send an Initial from the client once. We should probably try a few times.
#45 needed some NSS changes and these were taking a long time to make it through review, so @martinthomson put in a temp hack. This should be undone once the needed NSS changes are approved, but I don't want to hold up #45 any longer.
In h3, it's possible that the qpack encoder stream could get blocked. When that happens, we will want to send literals rather than block. If we block on the encoder stream, we also have to block the request stream. In order to test this, we need a way to ask the stream how much flow control credit it has.
This should also check the connection-level flow control.
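A minimal sketch of the shape such a query might take (the function name and both parameters are invented): the credit a stream can actually use is bounded by both its own flow-control allowance and the connection-level one, so the answer is the minimum of the two.

```rust
/// Hypothetical query: the number of bytes a stream may actually send
/// is the smaller of its stream-level credit and the connection-level
/// credit still available.
fn avail_credit(stream_credit: u64, conn_credit: u64) -> u64 {
    stream_credit.min(conn_credit)
}
```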
huffman_decode_helper.rs is pretty big. In theory, this is to help make the decoding process more efficient, but we don't have any evidence that a simpler design is significantly less efficient. These files are impossible to review, so they really have to justify their existence pretty well.
We shouldn't accept STREAM frames in Initial packets, just as an example.
In #45 I added transport parameters to the "resumption token" that the client hands back to applications when resuming so that the transport can know what its bounds are for 0-RTT. However, HTTP doesn't do that.
quicwg/base-drafts#2790 points out that SETTINGS and session tickets don't always arrive in the right order for this to happen. That probably means withholding resumption tokens until both arrive. That implies that there might need to be another state involved or some sort of notification arrangement so that applications know when a token is available. Right now, the crypto and transport pieces don't have any way to signal the availability of a resumption token, but that might need to be added.
transport send stream uses SliceDeque to get a single slice of bytes to send, even though the buffer is circular and therefore can sometimes require two slices if the buffer wraps. (It does this with virtual memory magic.)
This is cool but probably not worth the added dependency. We should rewrite to use a regular VecDeque. Either the users of next_bytes must handle two slices, or maybe we're OK with next_bytes not returning the entire range of available bytes if it wraps (which could lead to two smaller stream frames instead of one, perhaps?).
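For reference, `VecDeque` already exposes the possibly-wrapped buffer as a pair of slices via `as_slices()`, so a `next_bytes` built on it would either return the first slice alone or hand both to the caller. A small sketch (the buffer contents are arbitrary):

```rust
use std::collections::VecDeque;

fn main() {
    let mut buf: VecDeque<u8> = VecDeque::with_capacity(8);
    buf.extend(b"abcdef");
    for _ in 0..4 {
        buf.pop_front(); // consume "abcd", moving the head forward
    }
    buf.extend(b"ghij"); // likely wraps to the start of the allocation
    // `as_slices` returns (front, back); `back` is empty when the
    // contents happen to be contiguous.
    let (first, second) = buf.as_slices();
    let all: Vec<u8> = first.iter().chain(second).copied().collect();
    assert_eq!(all, b"efghij".to_vec());
}
```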
like #15 but for server role.
This would allow us to avoid wasting bytes on padding.
This is very tricky, though. We don't currently determine what our 0-RTT parameters are until after we've generated the ClientHello and constructed a packet for it. In order to do this, I think that we would need to have the set_resumption_token() function also have TLS generate the first CRYPTO frames. Then we'd have access to all of the 0-RTT state before generating any packets with process(). I think that this all just "works", but we'll need to be careful not to generate the ClientHello twice.
We're currently transitioning state and sending an event but we should also probably be cleaning up streams, and maybe sending events that they have been closed/reset as well? Refer to the spec for desired behavior.
In addition to the interface in #31, we should add a means of forcing the transport to send an appropriate blocked frame.
Right now, the transport has some provisions for 0-RTT failing, but HTTP does not. It needs to throw everything out and start over. If we consider HTTP to have exclusive use of the transport, then it doesn't need to worry too much about getting an incompatible ALPN value, but we might need to worry about which of the different HTTP ALPN values are chosen.
@martinthomson any more?
I think the server might need to be a little smarter to deal with the case where the connection's send buffer (64 KiB) is full.
Disabled running clippy on some files in neqo-crypto, see lib.rs. It would be great to re-enable it, but I don't have enough knowledge of the code to determine if Clippy is finding things that should be fixed or not.
see -transport 10.2.
It's looking a little LISPy, so maybe make the calculation steps a little more explicit.
This currently fails:
RUST_LOG=trace ./target/debug/neqo-http3-client http://test.privateoctopus.com:4433/ --db ./neqo-crypto/db
but the client is still waiting for something instead of exiting. The ConnectionClose should be exposed in some (both?) of the event APIs so the client code can see it and do the right thing?
The recovery spec (A.8) does say something about "This algorithm may result in the timer being set in the past, particularly if timers wake up late. Timers set in the past SHOULD fire immediately."
Should we just be ensuring the caller re-calls us, or should we go ahead and assert this never happens?
Originally posted by @agrover in #39
Well, for our API, we can't pass out a negative Duration for the delay time, so if loss recovery sets a timer in the past, we will probably clamp to 0 (I hope we don't underflow). In that case, I think that it is our responsibility to drive the state forward before we return.
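A sketch of the clamping (the helper name is invented): `Instant::checked_duration_since` returns `None` rather than underflowing when the deadline has already passed, so we can map that to a zero delay and let the caller fire the timer immediately. `saturating_duration_since` would express the same thing in one call.

```rust
use std::time::{Duration, Instant};

/// Hypothetical helper: how long until `deadline`, clamped to zero if
/// the timer was set in the past (no underflow, no panic).
fn delay_until(deadline: Instant, now: Instant) -> Duration {
    deadline
        .checked_duration_since(now)
        .unwrap_or(Duration::from_secs(0))
}
```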
We currently have some APIs in the Http3Connection code that are exclusively for the client, and others that are just for the server. Although they share a great deal of common code, I'd like to raise the question of whether these should be distinct at the API level -- an Http3Client class and an Http3Server class.
@martinthomson says: It would be good to have a test that verified that we were getting "blocked" messages appropriately.
See #45 review comments.
BTW, lazy_static is already used in Gecko, so we shouldn't hesitate to use it in Neqo if it's useful.
Both branches do the same thing.
if loss_time == 0 {
    loss_time = packet_space.loss_time;
    pn_space = *space;
} else if packet_space.loss_time != 0 && packet_space.loss_time < loss_time {
    loss_time = packet_space.loss_time;
    pn_space = *space;
}
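Since the two branches assign exactly the same values, the condition can collapse into a single `if`. A standalone sketch of the equivalent logic (the function name and the `(space, loss_time)` tuple shape are invented for illustration; 0 means "unset"):

```rust
/// Pick the packet-number space with the earliest non-zero loss time,
/// mirroring the quoted loop with the duplicate branches merged.
fn earliest_loss_time(spaces: &[(usize, u64)]) -> (usize, u64) {
    let mut pn_space = 0;
    let mut loss_time = 0;
    for &(space, space_loss_time) in spaces {
        if loss_time == 0 || (space_loss_time != 0 && space_loss_time < loss_time) {
            loss_time = space_loss_time;
            pn_space = space;
        }
    }
    (pn_space, loss_time)
}
```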
@ddragana what do you think?
For testing purposes, it would be convenient if there were an automatable way to shut down the test server. @ddragana, maybe you could say what you think the most useful way might be? We can't send a SIGINT via kill() because... it's not cross-platform, is that why?
If we don't prevent buffering, we could create a deadlock. By blocking writes, applications (see h3) can abort writes when a stream is blocked, which is important if there is an inter-stream dependency.
Right now, all writes are buffered up to TX_STREAM_BUFFER, which means that deadlocks are quite possible.
We probably need a server class at the transport layer that manages routing of packets to different connections, based on connection ID.
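A hedged sketch of what that routing might look like (all names here are invented; `Conn` stands in for the real Connection type): the server keys per-connection state by the destination connection ID of each incoming datagram.

```rust
use std::collections::HashMap;

/// Stand-in for the real per-connection state.
struct Conn {
    id: Vec<u8>,
}

/// Hypothetical server-side router: maps destination connection IDs
/// to connections and dispatches incoming datagrams accordingly.
struct Server {
    connections: HashMap<Vec<u8>, Conn>,
}

impl Server {
    fn route(&mut self, dcid: &[u8]) -> Option<&mut Conn> {
        // Unknown DCIDs would instead go to new-connection handling
        // (Initial processing, Retry, or a stateless reset).
        self.connections.get_mut(dcid)
    }
}
```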
RUST_BACKTRACE=1 ./target/debug/neqo-client http://127.0.0.1:4433/6600000 --db ./neqo-crypto/db
causes:
thread 'main' panicked at 'attempt to multiply with overflow', /builddir/build/BUILD/rustc-1.34.2-src/src/libcore/num/mod.rs:3516:24
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
1: std::sys_common::backtrace::_print
2: std::panicking::default_hook::{{closure}}
3: std::panicking::default_hook
4: std::panicking::rust_panic_with_hook
5: std::panicking::continue_panic_fmt
6: rust_begin_unwind
7: core::panicking::panic_fmt
8: core::panicking::panic
9: core::num::<impl u64>::pow
at /builddir/build/BUILD/rustc-1.34.2-src/src/libcore/num/mod.rs:3516
10: neqo_transport::connection::LossRecovery::set_loss_detection_timer
at neqo-transport/src/connection.rs:2787
11: neqo_transport::connection::LossRecovery::on_packet_sent
at neqo-transport/src/connection.rs:2633
12: neqo_transport::connection::Connection::output_path
at neqo-transport/src/connection.rs:1276
13: neqo_transport::connection::Connection::output
at neqo-transport/src/connection.rs:1160
14: neqo_transport::connection::Connection::process_output
at neqo-transport/src/connection.rs:934
15: neqo_http3::connection::Http3Connection::process_output
at neqo-http3/src/connection.rs:347
16: neqo_client::process_loop
at neqo-client/src/main.rs:152
17: neqo_client::client
at neqo-client/src/main.rs:261
18: neqo_client::main
at neqo-client/src/main.rs:291
19: std::rt::lang_start::{{closure}}
at /builddir/build/BUILD/rustc-1.34.2-src/src/libstd/rt.rs:64
20: std::panicking::try::do_call
21: __rust_maybe_catch_panic
22: std::rt::lang_start_internal
23: std::rt::lang_start
at /builddir/build/BUILD/rustc-1.34.2-src/src/libstd/rt.rs:64
24: main
25: __libc_start_main
26: _start
with PTO 807073 and pto_count 134 immediately prior.
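The panic location (`core::num::<impl u64>::pow`) is consistent with an exponential PTO backoff of the form 2^pto_count: with pto_count at 134, the result does not fit in a u64. A sketch of the failure mode and the usual fixes, `checked_pow` or clamping the exponent (the cap of 20 here is an arbitrary illustration, not neqo's value):

```rust
fn main() {
    let pto_count: u32 = 134;
    // 2^134 does not fit in a u64, so `pow` panics in debug builds.
    // `checked_pow` reports the overflow instead:
    assert_eq!(2u64.checked_pow(pto_count), None);
    // ...and `saturating_pow` clamps to u64::MAX:
    assert_eq!(2u64.saturating_pow(pto_count), u64::MAX);
    // In practice the backoff exponent would also be capped well below 64.
    let capped = 2u64.pow(pto_count.min(20));
    assert_eq!(capped, 1u64 << 20);
}
```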
dragana: I will try to do this by Monday, but the time is too short.
The qpack algorithm needs optimization.
Right now, generators are just a little dynamic -- there are three up until the connection enters Closing state, and then the three get replaced by the one CloseGenerator.
This could be improved by being a little more dynamic and fine-grained. For example, StreamGenerator currently iterates through streams, and each stream presents available data from lowest to highest offset, so retransmit ranges go first. Between streams, though, new data for one stream could go out before retransmitted bytes for another, which is unfortunate.
This just all needs a rethink and refactor.
// Packets with packet numbers before this are deemed lost.
let lost_pn = self
    .space_mut(pn_space)
    .largest_acked
    .saturating_sub(PACKET_THRESHOLD);
qdebug!(
    [self]
    "detect lost packets - time={}, pn={}",
    lost_send_time,
    lost_pn
);
let packet_space = self.space_mut(pn_space);
let mut lost = Vec::new();
for (pn, packet) in &packet_space.sent_packets {
    // Mark packet as lost, or set time when it should be marked.
    if *pn <= packet_space.largest_acked {
        if packet.time_sent <= lost_send_time || *pn <= lost_pn {
            qdebug!("lost={}", pn);
            lost.push(*pn);
        }
    }
}
I just need to think about whether using 1, 2, or 3 as example values for .largest_acked all do the correct thing. I have a hunch there's a bug if largest_acked saturates to 0 and pn is then small.
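The hunch can be checked concretely (assuming PACKET_THRESHOLD is 3, the recovery draft's kPacketThreshold): with largest_acked = 1, lost_pn saturates to 0, so the `*pn <= lost_pn` test matches packet 0 even though it is only one packet behind the largest acknowledged one, not PACKET_THRESHOLD behind.

```rust
fn main() {
    const PACKET_THRESHOLD: u64 = 3;
    let largest_acked: u64 = 1;
    let lost_pn = largest_acked.saturating_sub(PACKET_THRESHOLD);
    assert_eq!(lost_pn, 0);
    // Packet 0 satisfies `*pn <= lost_pn` and would be marked lost,
    // despite being fewer than PACKET_THRESHOLD packets behind.
    let pn: u64 = 0;
    assert!(pn <= lost_pn);
}
```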
Maybe just make one internally?
This probably needs a separate class: as a server you want to hold this globally (for all connections), but for a client you want to have this per-connection.
There are lots of constants here, like Duration::from_micros(94_609). The test would be far more readable with named constants; separating these into T1, T2, or STEP1, STEP2 and so forth would help a lot.
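For instance (only the 94_609 µs literal comes from the test; the STEP names and the derived second value are the suggestion, not existing code):

```rust
use std::time::Duration;

// Hypothetical named constants replacing magic durations in the test.
const STEP1: Duration = Duration::from_micros(94_609);
const STEP2: Duration = Duration::from_micros(2 * 94_609);

fn main() {
    // Advancing the mock clock now reads as a sequence of named steps.
    assert_eq!(STEP1 + STEP1, STEP2);
}
```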
Originally posted by @martinthomson in #39