httpwg / http2-spec
Working copy of the HTTP/2 Specification
Home Page: https://httpwg.org/
The REFUSED_STREAM error code on RST_STREAM has an unexplained, yet potentially important property. This error code is a clear signal to the receiver that the framing layer did not pass any frames to an upper layer for processing.
This allows a client to use this error code as a signal that the HTTP request it made can be safely retried, even if that request would not ordinarily permit retries. That is, a client (or intermediary) can safely retry a non-idempotent request.
This gets around the retry prohibition in http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-22#section-6.3.1
This needs to be better explained, perhaps with a new section in the HTTP usage section.
See thread starting at http://lists.w3.org/Archives/Public/ietf-http-wg/2013JanMar/1353.html
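The retry rule described above can be sketched as follows. This is a hypothetical illustration, not text from the draft: the function name is invented, and the numeric code shown for REFUSED_STREAM is illustrative.

```python
# Sketch of the retry logic described above: a client that receives
# RST_STREAM with REFUSED_STREAM knows the server's framing layer passed
# nothing to an upper layer, so even a non-idempotent request is safe to
# retry. The numeric error code and function name are illustrative.

REFUSED_STREAM = 0x7

def is_safely_retryable(method: str, rst_error_code: int) -> bool:
    """Return True if the request may be retried without risk of duplication."""
    if rst_error_code == REFUSED_STREAM:
        # The server guarantees the request was never processed.
        return True
    # Otherwise fall back to ordinary HTTP semantics: only idempotent
    # methods may be retried automatically.
    return method in ("GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE")
```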
Clearly, HEADERS can be used to support HTTP/1.1 trailers, or even mid-response additional headers. Trailers do have their uses, but are poorly supported. We need to say something about them, but what will that be?
Since trailers are non-critical, we could remove support entirely. Or we could allow arbitrary numbers of HEADERS frames with some rules about which ones can be ignored safely. Or something in between.
We need to give advice on how proxies, etc. should handle Expect/continue interactions.
Eliot Lear was given an action item in Tokyo to propose text for a registry of tokens that could be used to identify things to upgrade to, in places like the Upgrade "dance", NPN, DNS, etc.
This is fork of issue #38 to focus on discussing "defaulting to no-push."
The thinking is that client and server endpoints should have different minimum values and default values for SETTINGS_MAX_CONCURRENT_STREAMS.
From issue #20 we understand that SETTINGS_MAX_CONCURRENT_STREAMS is directional and a client can advertise a value of 0 to prevent the server from issuing Push Streams.
The suggestion is that the default value should also be 0. That is, the communication defaults to no-push. If the client is sophisticated then it can advertise a non-zero value in its initial SETTINGS frame to allow the server to push.
This is a tracking ticket for issues related to the mechanism for upgrading from HTTP/1.x.
As per our charter, this is:
A negotiation mechanism that is capable not only of choosing between HTTP/1.x and HTTP/2.x, but also of binding HTTP URLs to other transports (for example).
While protocol portability is widely regarded as a good thing, the idea that HTTP/2.0 might use a different substrate than TCP is a dangerous one. It is likely that many design choices will be made based on the assumption that the protocol runs atop TCP and not some other transport protocol.
We should decide if the HTTP/2.0 specification is written exclusively for TCP or whether we want to pay lip service to protocol agnosticism. Note that exclusivity does not preclude the use of other protocols in a later specification, it just removes one axis of freedom in the design.
The following references aren't used and should be removed:
RFC2285, RFC4366, TLSNPN
We're waiting for a proposal from Will Chan with respect to session-level flow control.
The prioritisation mechanism has been flagged for discussion; may not be immediate, as we need more deployment experience.
Make it clear that streams and stream identifiers cannot be reused.
Current text:
SETTINGS_MAX_CONCURRENT_STREAMS allows the sender to inform
the remote endpoint the maximum number of concurrent streams
which it will allow. By default there is no limit. For
implementors it is recommended that this value be no smaller
than 100.
It's the same text as we've had in SPDY/2, and our SPDY/4 draft is the
same. This wording is technically correct, but it does not
particularly emphasize that the limit is directional. I can imagine
first time readers misinterpreting it.
In HTTP/1.x, trailers are negotiated via hop-by-hop headers (and therefore have hop-by-hop semantics). Should this continue in HTTP/2.x?
Without TE: Trailers, we could effectively get rid of hop-by-hop headers in HTTP/2. E.g., the Connection header could be forced to be dropped, and not forced to be processed. All semantics of hop-by-hop headers are forced into the framing layer.
Some implementers have expressed interest in defining a variety of payload frame sizes, e.g., to assist with doing sendfile().
This is a tracking ticket for issues related to Server Push. From our charter:
As part of the HTTP/2.0 work, the following issues are explicitly called out for consideration [...] Server push (which may encompass pull or other techniques)
@grmocg suggested that his header compression (#2) would include the ability to continue header blocks across frames. We should track this as a separate issue.
Important consideration is the way that header blocks mutate session state (for header compression). Interleaving of continued header frames will cause issues if we don't address this.
We should also consider whether this is a general facility or not.
This is a tracking ticket for issues related to HTTP/2.0 header compression.
See also:
http://trac.tools.ietf.org/wg/httpbis/trac/wiki/HTTP2Compression
An intermediary that naively converts HTTP/2.0 to HTTP/1.1 (or vice versa) might allow header values that open vulnerabilities. E.g., encoding a newline into a header value.
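A defensive check along these lines might look as follows. This is a sketch of the kind of validation a translating intermediary could apply, not text from any draft; the function name is invented, and the token pattern follows the HTTP/1.1 field-name grammar.

```python
import re

# Sketch of the check an HTTP/2.0 -> HTTP/1.1 translating intermediary
# might apply: reject header names/values that would change meaning when
# re-serialised onto an HTTP/1.1 connection, e.g. CR/LF/NUL in a value
# (which would smuggle an extra header line) or non-token characters in
# a name. The function name is invented for illustration.

_TOKEN = re.compile(r"^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$")  # HTTP/1.1 token

def safe_for_http1(name: str, value: str) -> bool:
    if not _TOKEN.match(name):
        return False                      # e.g. a name containing a space
    if any(c in value for c in "\r\n\0"):
        return False                      # would inject a new header line
    return True
```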
This issue is to track whether the client should advertise its settings (e.g. contents of SETTINGS frame) as part of the Upgrade GET (e.g. as HTTP/1.1 headers in the GET request).
Normally, the first HTTP/2.0 frame the client emits is the SETTINGS frame. This means the server will receive the client's settings before getting the SYN_STREAM from the client. However, in the Upgrade Dance the server receives the GET and has to respond with a 101 HTTP/1.1 response followed by the HTTP/2.0 SYN_REPLY.
The ugliness here is that the server will be in a situation where it has to send the SYN_REPLY (and possibly DATA frames and possibly start push streams) without knowing the client's settings. This means the server may blow the client’s flow control buffers, or emit a pushed stream even though the client is incapable of processing pushed streams, etc.
If we include the settings as part of the initial Upgrade GET, the server can take the client's settings into account before it sends any HTTP/2.0 frames.
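One way to carry settings on the Upgrade request is to serialise the SETTINGS payload into a header field. The sketch below uses the approach HTTP/2 eventually standardised in RFC 7540 (an HTTP2-Settings header carrying the base64url-encoded payload, each entry a 16-bit identifier plus 32-bit value); at the time of this issue the exact format was still open, so treat the identifiers and layout as illustrative.

```python
import base64
import struct

# Sketch of carrying the client's SETTINGS on the Upgrade GET. The wire
# layout (16-bit identifier + 32-bit value per entry, base64url-encoded
# without padding) is what RFC 7540 later standardised as the
# HTTP2-Settings header; identifier values here are illustrative.

SETTINGS_MAX_CONCURRENT_STREAMS = 0x3
SETTINGS_INITIAL_WINDOW_SIZE = 0x4

def http2_settings_header(settings: dict) -> str:
    payload = b"".join(struct.pack(">HI", ident, value)
                       for ident, value in sorted(settings.items()))
    return base64.urlsafe_b64encode(payload).rstrip(b"=").decode("ascii")

# Sent alongside the upgrade, e.g.:
#   Upgrade: HTTP/2.0
#   HTTP2-Settings: <value returned above>
```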
Replace with "Work in Progress"
Need a write-up of push promise frame and its usage.
I think that this is probably not contentious. However, I'm not sure then how to cancel a push without waiting for it to start. Because pushes currently use SYN_STREAM, it's possible to reject using RST_STREAM. That is not possible with a mere promise.
Currently, the header block allows 32 bits for the header field name. This seems excessive.
(see subject)
For frame types, settings, and error codes.
HTTP/2 currently allows for multiple sets of headers. However, it was asserted in Tokyo that data on the same stream could race the header blocks, so seeing a data frame on a stream is not necessarily an indication that the headers are complete.
See http://lists.w3.org/Archives/Public/ietf-http-wg/2013JanMar/1182.html:
http://greenbytes.de/tech/webdav/draft-ietf-httpbis-http2-01.html#Authentication:
"There are four options for proxy authentication, Basic, Digest, NTLM
and Negotiate (SPNEGO). The first two options were defined in RFC2617
[RFC2617], and are stateless. The second two options were developed by
Microsoft and specified in RFC4559 [RFC4559], and are stateful;
otherwise known as multi-round authentication, or connection
authentication."
As far as I can tell, RFC4559 does not actually define an NTLM auth
scheme. If it did, we'd need to add it to
http://greenbytes.de/tech/webdav/draft-ietf-httpbis-authscheme-registrations-latest.html.
(And yes, I know that there's a NTLM scheme used in practice, I just
don't see it defined by RFC4559).
Later on:
"Unfortunately, the stateful authentication mechanisms were implemented
and defined in a such a way that directly violates RFC2617 - they do not
include a "realm" as part of the request. This is problematic in
HTTP/2.0 because it makes it impossible for a client to disambiguate two
concurrent server authentication challenges."
If these schemes need HTTP/2.0-specific fixes, these should be defined
in a separate document, updating RFC4559. Optimally, we can get rid of
the whole section.
We need some magic that clearly indicates that HTTP/2 is being spoken on the wire. It should fail fast when sent to a reasonable set of HTTP/1.1 servers.
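For illustration, the connection preface that RFC 7540 eventually adopted satisfies exactly this requirement: it resembles an HTTP/1.1 request with an unknown method and version, so an HTTP/1.1 server rejects it quickly instead of hanging. A minimal server-side sketch (the function name is invented):

```python
# Sketch of a server-side check for connection "magic". The byte string
# is the preface RFC 7540 eventually adopted: it parses as an HTTP/1.1
# request with an unrecognised method and version, so a reasonable
# HTTP/1.1 server fails fast rather than waiting for more input.

CONNECTION_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def speaks_http2(first_bytes: bytes) -> bool:
    """True if the connection opened with the HTTP/2 preface."""
    return first_bytes.startswith(CONNECTION_PREFACE)
```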
We need to re-consider the section on cross protocol attacks. The statement that is made is no longer true. The final answer will depend on the outcome of #1.
RFC 6455, section 10.3 cites the following paper:
[TALKING] Huang, L-S., Chen, E., Barth, A., Rescorla, E., and C.
Jackson, "Talking to Yourself for Fun and Profit", 2010,
http://w2spconf.com/2011/papers/websocket.pdf.
This attack ultimately led to the WebSocket protocol adopting a masking scheme. This needs to be considered.
Set max stream limit to 0 for that endpoint.
We need to resolve whether an Upgrade is always necessary, or whether clients can start the HTTP/2.0 session immediately if they have prior knowledge about server capabilities.
Requiring upgrade might be necessary if the set of intermediaries or servers involved is potentially heterogeneous. For example, a phased upgrade of a load-balanced server farm might result in some servers being HTTP/2.0-capable and others not.
There is a lot of guidance in the draft that is specific to browsers. This guidance needs to be made more generic, or removed, if appropriate.
Discussion raised the issue of what can and cannot be cached when resources are pushed.
From an HTTP/1.1 caching perspective, a pushed resource could be considered analogous to responses where Content-Location != effective request URI. We need to consider how these resources can be cached.
This also needs to carefully cover the effect on Vary header fields. The current draft specifies that request header fields for pushed resources are inherited from the request that triggered the push. A cache would have to pull details from that original request.
Since this opens Pandora's box, we might also consider cacheability of other resources when Content-Location and Cache-Control header fields are present.
Say that the settings frame is mandatory to send first, as proposed in Tokyo.
HTTP/2.0 requires that implementations support a minimum frame size of 8192 bytes. The draft does not specify how an implementation is expected to learn that its peer has limited frame sizes other than by trial and error. Using RST_STREAM causes the error to be discovered after the problem has been encountered.
It's also not possible to use RST_STREAM to reject a too-large frame that is not bound to a specific stream.
This could be indicated in the SETTINGS frame.
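A sketch of what SETTINGS-based advertisement would enable: the sender can reject an oversized frame as a connection-level error before it is ever emitted or processed, which also covers frames not bound to a specific stream. The setting mirrors what RFC 7540 later defined as SETTINGS_MAX_FRAME_SIZE; the error class and function here are assumptions.

```python
# Sketch of enforcing a peer-advertised frame-size limit from SETTINGS
# instead of discovering the limit via RST_STREAM after the fact. The
# setting mirrors RFC 7540's SETTINGS_MAX_FRAME_SIZE; FrameSizeError and
# check_frame_size are invented for illustration.

DEFAULT_MAX_FRAME_SIZE = 8192   # the draft's mandatory minimum

class FrameSizeError(Exception):
    """Connection-level error: frame exceeds the advertised limit."""

def check_frame_size(length: int, peer_max: int = DEFAULT_MAX_FRAME_SIZE) -> None:
    if length > peer_max:
        # A connection-level error works even when the frame is not
        # bound to a specific stream (the RST_STREAM gap noted above).
        raise FrameSizeError(f"frame of {length} bytes exceeds limit {peer_max}")
```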
The following attributes - which are normally attributed to HTTP/2.0 requests - are not available for the pre-Upgrade request in HTTP/1.1. These require defaults:
We also need to determine the status of the stream, which probably needs to be half-closed from the server side.
The StreamErrorHandler section is fairly widely used, but the session-level flow control changes for WINDOW_UPDATE revealed two issues:
Content-Length is largely only needed as entity metadata in HTTP/2.0. It does provide one limited function: learning the complete size of a resource before the entire message has been received. (This is the behavior explicitly relied upon for POST, based on browser information only; for example, node.js always sends chunked encoding unless explicitly overridden.)
Since compression is applied by the framing layer, there's an ambiguity in the spec with respect to what value Content-Length is given. If the data frames are compressed at the framing layer, the pre-compression size is possibly, but not certainly, the size that is reported in Content-Length.
Accept Limit vs Initial Limit
Currently an endpoint advertises what it is capable of accepting:
• When a client sends SETTINGS_MAX_CONCURRENT_STREAMS = 123, it is saying that it will accept up to 123 concurrent pushed streams.
• When a server sends SETTINGS_MAX_CONCURRENT_STREAMS = 123, it is saying that it will accept up to 123 concurrent HTTP request streams.
Are there scenarios for an endpoint to advertise what it is capable of issuing? For example, is it useful for a server to know that a client will issue at most 123 concurrent HTTP request streams? Or is it useful for a client to know that a server will issue at most 123 concurrent push streams? If the answer is "no", then we can avoid complicating the protocol.
Limit Values
There is a race condition where the client can issue more streams to the server before the server can advertise its accept limit to the client. Note that a race condition in the reverse path is not possible because a client must issue a SYN_STREAM before the server can push anything, which means it can definitely send the initial SETTINGS frame before emitting the first SYN_STREAM. (And in the future, it will be mandatory for the client to send the SETTINGS frame upon connection.) Furthermore, there is no clear rationale for the value of SETTINGS_MAX_CONCURRENT_STREAMS to “be no smaller than 100”. To offer clearer requirements, the following is suggested:
A server MUST be able to handle at least 8 concurrent streams initiated by the client. A server MUST NOT advertise a value less than 8. A client MUST generate a session error if it receives a value less than 8 from the server. The default value emitted by servers is 8. The default value emitted by clients is 0. It is recommended that servers pick a much larger value to allow parallelism.
This ensures that there is a minimum value so that we don’t fall into a race hole but is large enough so that client is not bottlenecked on RTT for the initial requests. A default client-side of 0 means the communication defaults to no-push. That is, a smart client has to proactively advertise a non-zero value for the server to enable push.
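The proposed rules can be sketched as a client-side check. All constants come from the suggested text above (minimum 8, server default 8, client default 0 for no-push); the SessionError class and function name are invented for illustration.

```python
# Sketch of the proposed limit rules: servers must accept at least 8
# concurrent client-initiated streams and must not advertise less; a
# client treats a smaller advertisement as a session error. The client
# default of 0 means push is off unless explicitly enabled.

MIN_SERVER_LIMIT = 8
SERVER_DEFAULT = 8   # streams the server will accept from the client
CLIENT_DEFAULT = 0   # pushed streams the client will accept: no-push

class SessionError(Exception):
    pass

def validate_server_advertisement(value: int) -> int:
    """Client-side check of the server's SETTINGS_MAX_CONCURRENT_STREAMS."""
    if value < MIN_SERVER_LIMIT:
        raise SessionError(f"server advertised {value}, minimum is {MIN_SERVER_LIMIT}")
    return value
```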
In Tokyo, there was discussion of whether it would be useful to allow opaque data in RST_STREAM and GOAWAY. Waiting for a full proposal.
Rather than specify a flow control algorithm, we should simply discourage its use.
Mark, I think that we resolved this one at the interim. I'd just like to confirm with you before proceeding.
Right now, routing data (in particular, :scheme, :host and :path) appear as headers along with the rest.
This means that the recipient needs to parse through the header collection to find them -- potentially at the end.
Different ways of addressing this have been proposed; e.g., requiring them to be at the top of the header block, or serialising them in different fields.
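One of the proposed approaches, requiring the routing fields to lead the header block, can be sketched as follows. The pseudo-field names are the draft's; the parsing helper is invented for illustration.

```python
# Sketch of the "routing fields first" proposal: a recipient can route a
# request after reading only a known prefix of the header block, instead
# of scanning the whole collection. The helper is illustrative only.

ROUTING_FIELDS = (":scheme", ":host", ":path")

def split_routing(headers):
    """Return (routing_dict, remaining_headers), raising if the routing
    fields do not all appear at the front of the block."""
    routing = {}
    i = 0
    while i < len(headers) and headers[i][0] in ROUTING_FIELDS:
        routing[headers[i][0]] = headers[i][1]
        i += 1
    if set(routing) != set(ROUTING_FIELDS):
        raise ValueError("routing fields must lead the header block")
    return routing, headers[i:]
```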
From the SPDY draft:
"A SETTINGS frame [...]. When the server is the sender, the sender can request that configuration data be persisted by the client across SPDY sessions and returned to the server in future communications."
The concern here is that this provides another mechanism by which servers are able to track clients.
See also http://lists.w3.org/Archives/Public/ietf-http-wg/2012OctDec/0495.html
As discussed in Tokyo.
RFC1738: [PROPOSED STANDARD] obsoleted by RFC4248 RFC4266
RFC4366: [PROPOSED STANDARD] obsoleted by RFC5246 RFC6066
draft-agl-tls-nextprotoneg-01: Alternate version available: 04
As discussed in Tokyo:
Redundant, given HEADERS.
See:
http://www.w3.org/mid/CABkgnnU5he8x=v+UvV8Oe7mS-3FnMtLmjaz_xk+Ns84LzCpvwQ@mail.gmail.com
SPDY has a flow control mechanism; we need to review this and consider whether we need flow control at all, whether this is the right approach, whether we should accommodate pluggable flow control, etc.
Gabriel et al have done a draft discussing initial considerations and proposing a framework for the discussion:
http://tools.ietf.org/html/draft-montenegro-httpbis-http2-fc-principles
In the Speed+Mobility draft, we removed CREDENTIAL because:
CREDENTIAL: This is removed from HTTP Speed+Mobility because we
believe it is not compatible with options such as TLS SNI. For
this proposal, a session MUST only target one origin as described
in [RFC6454].
Concerns were also raised in "CREDENTIAL really needed?" (https://groups.google.com/forum/?fromgroups#!searchin/spdy-dev/credential/spdy-dev/WazzPBFbdpk/yayPrNTehYYJ). Based on the responses, it appears that CREDENTIAL was an experimental feature not used in SPDY/3, but intended to be replaced with a different design in the future; therefore, it could safely be deprecated or ignored.
I propose that CREDENTIAL be removed from the HTTP/2.0 draft.