Flow control in Yamux and HTTP/2
The Yamux flow control mechanism is very similar to HTTP/2's flow control. This is no surprise, given that Yamux was inspired by the early SPDY efforts.
In both Yamux and HTTP/2 the WindowUpdate message is an integral part of the flow control mechanism.
> To prevent the streams from stalling, window update frames should be sent regularly. Yamux can be configured to provide a larger limit for window sizes.

https://github.com/hashicorp/yamux/blob/master/spec.md#flow-control
> Flow control is based on WINDOW_UPDATE frames. Receivers advertise how many octets they are prepared to receive on a stream and for the entire connection. This is a credit-based scheme.

https://tools.ietf.org/html/rfc7540#section-5.2.1
In HTTP/2 it is up to the receiver to decide when to send a WINDOW_UPDATE frame. If I understand the short Yamux specification correctly, the same applies to Yamux.
> HTTP/2 defines only the format and semantics of the WINDOW_UPDATE frame (Section 6.9). This document does not stipulate how a receiver decides when to send this frame or the value that it sends, nor does it specify how a sender chooses to send packets. Implementations are able to select any algorithm that suits their needs.

See https://tools.ietf.org/html/rfc7540#section-5.2.1
For a general overview of HTTP/2 flow control I can recommend "HTTP/2 in Action" [1]. Chapter 7.2 on the topic can be (pre-)viewed on the publisher's website.
WindowUpdate message strategies
HTTP/2 implementations can use WindowUpdate messages to implement various (possibly advanced) flow control strategies. One example of a simple strategy is the nghttp2 library, which sends a WindowUpdate message once it (the receiver) has received and consumed more than half of the flow control window.
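A minimal sketch of that rule, assuming a hypothetical `FlowState` type (the names are illustrative, not part of nghttp2's or Yamux's actual API); here the update is triggered once half or more of the window has been consumed:

```rust
/// Receiver-side flow control state for a single stream (illustrative).
struct FlowState {
    max_window: u32, // advertised receive window, e.g. 256 * 1024
    consumed: u32,   // bytes received and consumed since the last WindowUpdate
}

impl FlowState {
    /// Called after the application has read `n` bytes off the stream.
    /// Returns Some(credit) when a WindowUpdate carrying `credit` bytes
    /// should be sent back to the sender, None otherwise.
    fn on_read(&mut self, n: u32) -> Option<u32> {
        self.consumed += n;
        // nghttp2-style rule: update once half or more of the window
        // has been received and consumed.
        if self.consumed * 2 >= self.max_window {
            Some(std::mem::take(&mut self.consumed))
        } else {
            None
        }
    }
}
```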
Today with WindowUpdateMode::OnRead
This Yamux implementation sends a WindowUpdate message once (a) the window credit has been fully depleted and (b) the read buffer is empty, i.e. all bytes have been consumed. See the implementation for details.
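For comparison, the current rule can be sketched like this (again with illustrative names, not the actual rust-yamux internals):

```rust
/// Receiver-side state under a rule like WindowUpdateMode::OnRead (illustrative).
struct OnReadState {
    window: u32,     // remaining credit the sender may still use
    buffered: usize, // received bytes not yet read by the application
    max_window: u32, // full window size, e.g. 256 * 1024
}

impl OnReadState {
    /// Decide, after a read, whether to send a WindowUpdate:
    /// only when (a) the window is fully depleted and
    /// (b) the read buffer is empty.
    fn window_update(&mut self) -> Option<u32> {
        if self.window == 0 && self.buffered == 0 {
            self.window = self.max_window;
            Some(self.max_window)
        } else {
            None
        }
    }
}
```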
Comparison
Imagine the following simplified scenario:
A sender S is communicating with a receiver R. S wants to send 1 MB in multiple chunks to R. R uses a receive window of 256 KB (Yamux default). S and R are connected on some network inducing both a delay and a bandwidth constraint.
Algorithm 1 (Yamux today): Send a WindowUpdate message once (a) the window has been fully depleted by the sender and (b) the receiver has consumed all buffered bytes.
Once S has depleted its window, having sent 256 KB to R, S is blocked and has to wait for a WindowUpdate message to be sent by R. This message is only sent by R once the full 256 KB have been received and consumed. Thus every 256 KB the sender is blocked, having to wait a whole round-trip (for all of its data to arrive as well as for the WindowUpdate to be received).
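As a back-of-the-envelope estimate of that cost (the concrete RTT used below is an assumption for illustration, and this ignores bandwidth limits):

```rust
/// Rough stall overhead of Algorithm 1: the sender waits about one
/// round-trip after every depleted window except the final one
/// (after the last window there is no more data to send).
fn stall_overhead_ms(total_bytes: u64, window: u64, rtt_ms: u64) -> u64 {
    (total_bytes / window).saturating_sub(1) * rtt_ms
}
```

For 1 MB over a 256 KB window with an assumed 100 ms RTT this yields three stalls, i.e. roughly 300 ms of added transfer time.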
Algorithm 2 (nghttp2): Send a WindowUpdate message once half or more of the window has been received and consumed.
While S sends the first 256 KB of data, R receives and consumes chunks of that data in parallel. Instead of waiting for the window to be fully depleted by S and fully consumed by R, R sends a small WindowUpdate message (crediting the number of bytes consumed of the current window) as soon as half or more of the window has been depleted and consumed. This WindowUpdate will likely arrive at S before S depletes the entire window, and thus S never stalls.
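To make the comparison concrete, here is a toy discrete-time simulation of the scenario. The per-tick bandwidth and the delay are my own illustrative assumptions (chosen so that the bandwidth-delay product is half the window), not measurements of any real Yamux deployment:

```rust
#[derive(Clone, Copy, PartialEq)]
enum Strategy {
    OnRead,     // Algorithm 1: update when window depleted and all consumed
    HalfWindow, // Algorithm 2: update when half or more consumed
}

/// Number of ticks the sender spends stalled while transferring `total`
/// bytes through a `window`-byte receive window. Assumes `total` and
/// `window` are multiples of the per-tick bandwidth, and that the
/// receiver consumes data as soon as it arrives.
fn stalled_ticks(strategy: Strategy, total: u64, window: u64) -> u64 {
    let bandwidth: u64 = 32 * 1024; // bytes the sender can emit per tick
    let delay: u64 = 2;             // one-way delay in ticks (RTT = 4)

    let mut credit = window;        // sender-side remaining window
    let mut to_send = total;        // bytes not yet sent
    let mut consumed_total = 0u64;  // bytes the receiver has consumed
    let mut consumed_since_update = 0u64;
    let mut data_arrivals: Vec<(u64, u64)> = Vec::new();   // (tick, bytes)
    let mut update_arrivals: Vec<(u64, u64)> = Vec::new(); // (tick, credit)
    let mut stalls = 0;

    let mut t: u64 = 0;
    while consumed_total < total {
        // Data sent `delay` ticks ago arrives and is consumed at once.
        for &(when, n) in &data_arrivals {
            if when == t {
                consumed_total += n;
                consumed_since_update += n;
                let send_update = match strategy {
                    Strategy::OnRead => consumed_since_update == window,
                    Strategy::HalfWindow => 2 * consumed_since_update >= window,
                };
                if send_update {
                    update_arrivals.push((t + delay, consumed_since_update));
                    consumed_since_update = 0;
                }
            }
        }
        // WindowUpdates sent `delay` ticks ago reach the sender.
        for &(when, n) in &update_arrivals {
            if when == t {
                credit += n;
            }
        }
        // The sender transmits as much as bandwidth and credit allow.
        let n = bandwidth.min(credit).min(to_send);
        if n > 0 {
            credit -= n;
            to_send -= n;
            data_arrivals.push((t + delay, n));
        } else if to_send > 0 {
            stalls += 1; // blocked waiting for a WindowUpdate
        }
        t += 1;
    }
    stalls
}
```

Under these assumptions the half-window strategy keeps the sender's credit from ever reaching zero, while the deplete-then-update strategy stalls for roughly one round-trip per window.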
Summary
Long story short: to prevent senders from being blocked every time they have sent RECEIVE_WINDOW (256 KB) bytes, I am suggesting that Yamux adopt the same WindowUpdate strategy as nghttp2, namely to send a WindowUpdate message once half or more of the window has been received and consumed. An early benchmark with a patched Yamux, using an adapted version of @twittner's bench tool, looks promising. I still have to do more testing on a high-latency network.
What do people think? Let me know if I am missing something. Happy for any pointers to similar discussions in the past.
[1] Pollard, Barry. HTTP/2 in Action. Manning, 2019.