
Comments (34)

bengreenstein commented on July 22, 2024

Following discussion at an unconference session at BlinkOn, below is a modified proposal:

What is this?

This is a proposal for two new HTTP request headers and an extension to the Network Information API to convey the HTTP client's network connection speed.

  • The first header provides an estimate of the current round trip time (RTT) measured at the transport layer.
  • The second header provides an estimate of network bandwidth provided by the transport layer.
  • The API extension provides the same information as the headers.

The headers will be sent together, and will be sent only to hosts that opt in to receiving them.

Why do we care?

Web servers and proxies benefit, because they can tailor content to network constraints. For example, on very slow networks, a simplified version of the page can be provided to improve document load and first paint times. Likewise, higher-bandwidth activities like video playback can be made available only on faster networks. Users benefit by being offered only that content that they can consume given network constraints, which results in faster paints, and less frustration.

Goals

The goal of the headers and API is to provide network performance information, as perceived by the client, in a format that's easy to consume and act upon. The headers convey the bandwidth and latency constraints of the network. (Below we provide guidance as to how these map to levels of service supported by the network.) The headers aim to allow proxies and web servers to make performance-based decisions even on a main frame request. The API extension aims to make it easy to make speed-related decisions from within JavaScript.

Non-goals

We do not plan to convey the actual connection type used, because it is already available via the Network Information API's downlinkMax and its mapping to underlying connection technology, and it is not as actionable as providing the performance of the connection. E.g., a Wi-Fi or 4G connection can be slower than a typical 2G connection at times. We also do not aim to convey the flakiness (or variability) of the connection quality.

Headers

Network speed is provided as estimates of current transport RTT and network bandwidth.

network-rtt = "Network-RTT" ":" delta-milliseconds  
delta-milliseconds = 1\*DIGIT
network-bw = "Network-BW" ":" kbps-value  
kbps-value = 1\*DIGIT  
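
For illustration, a request from a client measuring a 150 ms transport RTT and roughly 800 kbps of available bandwidth would carry (values are hypothetical):

Network-RTT: 150
Network-BW: 800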

As a guide, below are mappings of RTT and bandwidth to typical cellular generation performance.

| Cellular generation | Typical transport RTT (ms) | Typical bandwidth (kbps) | Explanation |
| --- | --- | --- | --- |
| 2G | 2800 | 40 | The network is so slow that it can only support very small transfers, such as text-only or highly simplified pages. |
| 2.5G | 1500 | 75 | The network supports loading images, but not video. |
| 3G | 200 | 400 | The network supports loading high-resolution images, feature-rich Web pages, audio, and SD video. |
| 4G | 80 | 1600 | The network supports HD video and real-time video conferencing. |

The above table was generated from observations by Chrome over different connection types and network technologies (e.g., EDGE, GPRS). Observations are agnostic to variations in the characteristics of network technologies of different countries.

The Network-RTT and Network-BW headers have numeric, continuous values, which limits the applicability of the Vary response header (RFC 7234) with them.

When is the header sent?

The headers are sent together, and only after an explicit per-origin opt-in. The opt-in is via a response header. The browser should, but is not guaranteed to, retain the preference across browsing sessions.

allow-network-speed = "Allow-Network-Speed" ":" boolean  
boolean = "True" | "False"

Origins may also use an equivalent HTML meta element with http-equiv attribute ([W3C.REC-html5-20141028]).
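
Assuming the meta form mirrors the header (the exact syntax is not spelled out in this proposal), the opt-in might look like:

<meta http-equiv="Allow-Network-Speed" content="True">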

In the future, origins will be able to avoid sending the opt-in header by declaring the opt-in in their Origin Policy.

Network Information API extension

The Network Information API provides read only attributes that contain the connection type (e.g., wifi, cellular, ethernet) and the maximum downlink speed of the underlying connection technology in Mbit/s, as well as a way for workers to register for connection change events. In addition, the API will be extended to provide RTT and bandwidth estimates:

partial interface NetworkInformation : EventTarget {
  readonly attribute Milliseconds rtt;
  readonly attribute Megabit downlink;
};

The rtt and downlink attributes provide the same values that are provided by the Network-RTT and Network-BW headers, except that the downlink attribute is in Mbit/s to be consistent with downlinkMax, whereas the header is in kbps to avoid the use of floating point. Implementations should provide null when an rtt or downlink estimate is not available.
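
For example, a page could read the estimates directly; a minimal sketch, assuming the extension above is available:

// Sketch: read the proposed network estimates, guarding against missing support.
const connection = navigator.connection;
if (connection && connection.rtt != null && connection.downlink != null) {
  console.log(`RTT: ${connection.rtt} ms, downlink: ${connection.downlink} Mbit/s`);
} else {
  // Estimates unavailable (or API unsupported); fall back to a default experience.
}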

Browsers can compute the RTT and bandwidth estimates by computing the median of the RTT and bandwidth observations across all transport-layer sockets. When observing bandwidth, browsers should employ heuristics to determine that the network is being used to capacity.
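
A minimal sketch of that aggregation, using a hypothetical array of per-socket samples (real implementations also weight samples, as discussed later in this thread):

// Sketch: collapse per-socket RTT observations into a single estimate via the median.
function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

median([90, 120, 150, 400]);  // 135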


n8schloss commented on July 22, 2024

Hey all, we've been using the JS portion of this API for a bit now, so I wanted to quickly chime in with our experience. We're using the JS API, and specifically effective connection type, in our performance logging, and have seen a nice correlation between effective connection type and performance. There are clear bands that seem to match nicely with effective connection type. We're excited for the headers part of this API so we can start shipping different product experiences (i.e., better quality videos on good connections) based on connection quality.

In terms of effective connection type vs. raw number usage: currently we're mostly using effective connection type, but that's because we're only doing simple logging estimates. When it comes to shipping different product experiences, I expect we'll mostly rely on the raw rtt/downlink numbers rather than effective connection type. I think that's one of the best aspects of this API: we get a great, simple field to quickly do some rough analysis, and for more detailed things the raw numbers are still exposed.
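
A sketch of that split, assuming the proposed attributes (the logging helper and the 5 Mbit/s / 100 ms cutoffs are made up for illustration):

// Sketch: coarse bucket for logging, raw numbers for product decisions.
const { effectiveType, rtt, downlink } = navigator.connection;

logPerformanceSample({ bucket: effectiveType });  // logPerformanceSample is hypothetical

// Product decision on the raw estimates, e.g., whether to serve HD video.
const useHdVideo = downlink != null && rtt != null && downlink >= 5 && rtt <= 100;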


igrigorik commented on July 22, 2024

I worry that there are a fair number of features like this where the browser should ideally send information about the client state to the server -- examples include pixel ratio, screen size, time zone, etc. I'd love to see us come up with a really solid idiom for "this is a piece of information that the browser can tell you, we'd like to send it only if necessary"

We spent a lot of time discussing this in the context of Client Hints, and the resolution and guidance from the HTTP WG was to use separate headers. Yes, it may consume a few more bytes on the wire, but on the upside you at least have a chance of caching and dictionary re-use. The names for the headers don't need to be large either, so the actual byte difference is very small -- said differently, any packing you come up with can be replicated with separate headers and small delta overhead. I propose we keep it simple and stick with terse, separate headers.


bmaurer commented on July 22, 2024

Yeah, totally fine with separate headers. My suggestion here is a clear, consistent API for opt-in headers. Client Hints is a great example of this. The idiom there is that you pass the Accept-CH header with the list of client hint headers that you want, and those headers are returned to you. If you say Accept-CH: X, Y you will get headers X and Y. This is extensible -- more headers can easily be added to this pattern.

OTOH the currently proposed API doesn't feel consistent. You say Allow-Network-Speed and you get headers Network-RTT and Network-BW. This spec also defines a caching mechanism, which is valuable, but not in a way that could be reused across other parameters.

It seems like the key issue here is that Accept-CH has no means of being cached for future document requests. One of the primary use cases of this API is for the server to send different main-page content based on the network bandwidth.

What about doing the following:

  1. Make NetBW and NetRTT client hints.
  2. Make client hints cacheable.

Accept-CH-For-Document:

Would cache the Accept-CH list for future fetch()s with destination == document.

Not only would this be more consistent, I actually think it'd solve a pretty concrete problem that we face, which is that many of the client hints are most useful on the main document (for example, the way FB works today we're pretty dependent on knowing the DPR during the request).
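
Under this suggestion, the exchange might look like the following (Accept-CH-For-Document is the proposed extension, not an existing header, and the hint names are illustrative):

HTTP/1.1 200 OK
Accept-CH-For-Document: NetRTT, NetBW

A later navigation to the same origin (fetch destination == document) would then carry:

NetRTT: 150
NetBW: 800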


igrigorik commented on July 22, 2024

Landed the JS interface that exposes effective rtt, downlink, and type: 0653980. You can see it live @ https://wicg.github.io/netinfo/ - ptal and let me know if you spot any issues.

We'll tackle header advertisement + CH integration in a separate PR.


igrigorik commented on July 22, 2024

@n8schloss thanks for the great feedback! With Accept-CH-Lifetime landing this week in Chrome Canary, we can now tackle the second portion of this issue, which is the RTT, Downlink, and ECT headers.


jokeyrhyme commented on July 22, 2024

So the OS / browser keeps some sort of sliding-window of RTTs, and uses this to provide a single average/estimate RTT to JavaScript and/or to an HTTP server via headers?

What happens in the train-tunnel scenario, where a request is initially started under ideal network conditions, but those conditions immediately worsen for the rest of the request? In this scenario, what would the user experience if the app / proxy / server uses information that is based on past performance to deliver content that is ill-suited for current network conditions?


jokeyrhyme commented on July 22, 2024

It occurs to me that a device could use travel speed/direction and a map of poor coverage (or something to that effect) to enhance/augment the accuracy of historical RTTs. So a device could sense that it was travelling towards an area with poor coverage, and the APIs / headers would indicate this.

It seems to me that accurate portrayal of network conditions is a very complex subject, which makes it such a fun and exciting topic to discuss. :)


n8schloss commented on July 22, 2024

This is amazing! The info provided here won't be perfect in a train-tunnel scenario, but that's okay; this is so far ahead of what we have right now. Being able to understand the user's bandwidth on the first request is going to be pretty big in terms of the improvements that we can make to performance and performance logging.


bmaurer commented on July 22, 2024

The headers are sent together, and only after an explicit per-origin opt-in. The opt-in is via a response header. The browser should, but is not guaranteed to, retain the preference across browsing sessions.

I'd like to suggest that we try to make it so that the information is sent to origins that have not provided any indication as to whether they want the information. Ideally I think you want this flow:

  1. The first time you talk to an origin you get the header. You are also told this is the first request
  2. If you want to get networking data, you say so. You don't repeat this on following requests (so you avoid the header bloat)
  3. After N days the cached state could expire (so as to allow sites to change their opt-in status)

I think this would be a nice pattern for any kind of case where we send data about the state of the browser


bengreenstein commented on July 22, 2024

@jokeyrhyme: Yes, the browser maintains an estimate. For the train-tunnel scenario, browsers can be a little smarter. Chrome, for example, will decrease the weight of older RTT samples if newer ones were taken at a very different signal strength. Using location is an interesting idea, but would require careful consideration of privacy and of privacy/performance tradeoffs.

@n8schloss: Awesome!

@bmaurer: I considered your protocol before writing the proposal. I think the hard part is that the browser would then need to keep a list of every origin it has ever communicated with. Not impossible, just a PITA. Also, I think having a header that explicitly turns off the network headers is more useful than a timeout. We could also do both. Wdyt?


jokeyrhyme commented on July 22, 2024

@bmaurer

You don't repeat this on following requests (so you avoid the header bloat)

HTTP/2.0 has header compression with a compression dictionary. I feel that it would be a shame to make user agent implementations more complex due to perceived "bloat" in an old (but admittedly very popular) protocol.


bmaurer commented on July 22, 2024

HTTP/2.0 has header compression with a compression dictionary. I feel that it would be a shame to make user agent implementations more complex due to perceived "bloat" in an old (but admittedly very popular) protocol.

You'll still need to send the headers on the first request on that socket, namely the one where you actually render your main page.

I worry that there are a fair number of features like this where the browser should ideally send information about the client state to the server -- examples include pixel ratio, screen size, time zone, etc. I'd love to see us come up with a really solid idiom for "this is a piece of information that the browser can tell you, we'd like to send it only if necessary"


igrigorik commented on July 22, 2024

Gotcha, thanks Ben, all that makes sense.

  • There is nothing special about "making X a client hint"... we can simply define the headers we want and specify that the opt-in can be communicated via Accept-CH, as it is an extensible mechanism.
  • Making the Accept-CH opt-in cacheable: that would be nice, as it would benefit other hints as well. In theory Origin Policy should (indirectly) solve this, but the timelines for that are not clear. /cc @mikewest


tarunban commented on July 22, 2024

The updated protocol based on the feedback here, and discussion with @igrigorik and @bengreenstein :

  • Browser will include NQ headers only if the server has opted-in to receiving the NQ hints via Accept-CH mechanism.
  • Chromium will start caching Accept-CH opt-ins on the disk across browser restarts.
  • On each response, Chromium will update the opt-ins for the corresponding server. So, if the opt-ins in the response headers are different from what the browser remembers, then the browser will update its values for the opt-ins. This provides a way for the server to opt-out from receiving the hints.


bmaurer commented on July 22, 2024

Awesome, sounds like a great change.

Will not sending an Accept-CH on a future request be an explicit opt-out? It may be hard to get all images, etc. to send the header.

I also still wonder if it makes sense to be able to scope Accept-CH to specific fetch destinations (e.g., only document for bandwidth). Even if HPACK reduces the networking cost, it seems like sites could start getting an excessive number of headers and increase the processing cost on the server, etc. Maybe I'm just worrying about it too much though.


tarunban commented on July 22, 2024

Yes, in the current proposal, not sending an Accept-CH on a future request will be considered an explicit opt-out. What are the other possible ways of opting out? Would opting out N days after no opt-in is received be a better option?

I think restricting the header to certain content types is feasible, but it is not clear which content types should be whitelisted. E.g., a case can be made for images and media content to be whitelisted too.


bmaurer commented on July 22, 2024

I think it'd be better to require an explicit Accept-CH: clear

I think it'd make sense to use fetch's destination as the way to restrict to specific types. E.g., you only want to send DPR to image requests.


igrigorik commented on July 22, 2024

Will not sending an Accept-CH on a future request be an explicit opt-out? It may be hard to get all images, etc. to send the header.

How does adding a new header make it any easier, in comparison to omitting it? The benefit to "omit" = clear is that there is no extra distinction between "I never used it and don't care" and "I used it but don't care now".

I think it'd make sense to use fetch's destination as the way to restrict to specific types. E.g., you only want to send DPR to image requests.

That's not true. HTML, CSS, and JS can all be optimized based on DPR; many sites do exactly that.


bmaurer commented on July 22, 2024

How does adding a new header make it any easier, in comparison to omitting it? The benefit to "omit" = clear is that there is no extra distinction between "I never used it and don't care" and "I used it but don't care now".

Because it can be extremely hard to ensure that 100% of all requests on a given domain contain a header. If any request from facebook.com can blow away the CH setting, then it could get very difficult to debug how that happened. I'd also be fine with opting out after N days of not seeing any Accept-CH headers, where N is large (say 15-30).

That's not true. HTML, CSS, and JS can all be optimized based on DPR; many sites do exactly that.

Right, my point is that you'd say "I want DPR for fetch destinations document and image"


tarunban commented on July 22, 2024

Another update: After some discussion, it has been decided that the network quality headers will be sent only on HTTPS connections.


yoavweiss commented on July 22, 2024

A few questions:

  1. From the discussion:

Bandwidth is more important for FB, RTT more important for Salesforce

Can someone that attended the session elaborate on the actual use-cases for RTT info? (The use-case for effective bandwidth seems pretty clear to me...)

  2. I'd like to make sure we're taking into consideration the fact that adapting based on 2 hints would significantly increase cache variance, and AFAIU even if Key is adopted and implemented, it doesn't have "and" semantics that enable us to vary the cache on multiple ranges from different headers. (/cc @mnot)

If there are use-cases for adaptation based on both RTT and bandwidth separately, maybe there's room for those as well as the Effective-Connection-Type signal that was in the original proposal. Since we're talking about an opt-in anyway, servers can indicate which hint is interesting for them, and Vary (and in the future Key) based on that.

  3. @tarunban and/or @bengreenstein - is it possible to get some more details on how the network-bw value is calculated? Will an algorithm for that be part of the spec, or is it possible that it'd vary between UAs in similar conditions (to enable the algorithm to evolve and improve in the future)?


tarunban commented on July 22, 2024

  1. RTT is generally a good enough predictor of page load performance. RTT prediction (compared to bandwidth) is also easier to implement and better defined, and its accuracy is generally higher.
  2. This is a good idea. I will add ECT back here. Thanks.
  3. I believe the algorithm should not be a part of the spec, since estimating bandwidth is a pretty open-ended problem and there is a lot of scope for improvement. Here is the current algorithm used in Chromium: https://docs.google.com/document/d/1eBix6HvKSXihGhSbhd3Ld7AGTMXGfqXleu1XKSFtKWQ/edit#bookmark=id.lxtaomk8d17p


tarunban commented on July 22, 2024

Updated proposal based on the feedback so far:

What is this?

This is a proposal for three new HTTP request headers and an extension to the Network Information API to convey the HTTP client’s network connection speed.

  • The first header provides an estimate of the current round trip time (RTT) measured at the transport layer.
  • The second header provides an estimate of network bandwidth provided by the transport layer.
  • The third header provides an effective connection type (ECT) derived from the RTT and bandwidth estimates.
  • The API extension provides the same information as the headers.

The headers will be sent together, and will be sent only to hosts that opt in to receiving them via HTTP client hints.

Why do we care?

Web servers and proxies benefit, because they can tailor content to network constraints. For example, on very slow networks, a simplified version of the page can be provided to improve document load and first paint times. Likewise, higher-bandwidth activities like video playback can be made available only on faster networks. Users benefit by being offered only that content that they can consume given network constraints, which results in faster paints, and less frustration.

Goals

The goal of the headers and API is to provide network performance information, as perceived by the client, in a format that’s easy to consume and act upon. The headers convey the bandwidth and latency constraints of the network. (Below we provide guidance as to how these map to levels of service supported by the network.) The headers aim to allow proxies and web servers to make performance-based decisions even on a main frame request. The API extension aims to make it easy to make speed-related decisions from within JavaScript.

Non-goals

We do not plan to convey the actual connection type used, because it is already available via the Network Information API's downlinkMax and its mapping to underlying connection technology, and it is not as actionable as providing the performance of the connection. E.g., a Wi-Fi or 4G connection can be slower than a typical 2G connection at times. We also do not aim to convey the flakiness (or variability) of the connection quality.

Headers

Network speed is provided as estimates of the current transport RTT and network bandwidth, plus an effective connection type (ECT) enum that indicates the connection type whose typical performance is most similar to that of the network currently in use.

network-rtt = "Network-RTT" ":" delta-milliseconds
delta-milliseconds = 1*DIGIT
network-bw = "Network-BW" ":" kbps-value
kbps-value = 1*DIGIT
network-ect = "Network-ECT" ":" ect-value
ect-value = "slow2g" | "2g" | "3g" | "4g"

The effective connection type (ECT) should be determined using a combination of transport-layer round trip time and bandwidth estimates. The table below describes the initial mapping from RTT and bandwidth to ECT.

| ECT | Minimum transport RTT (ms) | Maximum bandwidth (kbps) | Explanation |
| --- | --- | --- | --- |
| slow2g | 1900 | 50 | The network is so slow that it can only support very small transfers, such as text-only or highly simplified pages. |
| 2g | 1300 | 70 | The network supports loading images, but not video. |
| 3g | 200 | 700 | The network supports loading high-resolution images, feature-rich Web pages, audio, and SD video. |
| 4g | 0 | — | The network supports HD video and real-time video conferencing. |

The above table was generated from observations by Chrome over different connection types and network technologies (e.g., EDGE, GPRS). Observations are agnostic to variations in the characteristics of network technologies of different countries.

The Network-RTT and Network-BW headers have numeric, continuous values, which limits the applicability of the Vary response header (RFC 7234) with them. Both RTT and bandwidth will be rounded up to the nearest 25 ms (or 25 kbps) to protect against fingerprinting attacks. Network-ECT categorizes the network quality as one of four enum values, which makes it possible for content providers to Vary based on ECT.
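
A sketch of that quantization, assuming "rounded up" means ceiling to the next multiple of 25:

// Round a raw estimate (ms or kbps) up to the next multiple of 25.
const quantize = (value) => Math.ceil(value / 25) * 25;
quantize(137);  // 150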

When are the headers sent?

The headers are sent after an explicit per-origin opt-in. The opt-in is via the Client Hints mechanism defined here. The browser should, but is not guaranteed to, retain the opt-ins across browsing sessions. In particular, the browser may clear the opt-ins based on user actions (e.g., clearing cookies or browsing history). Three new hints will be added:

Accept-CH: network-rtt

Accept-CH: network-bw

Accept-CH: ect

To opt out of receiving the network quality hints, the origin should stop sending Accept-CH, which causes the browser to stop sending the hints.

Origins may also use an equivalent HTML meta element with http-equiv attribute (W3C.REC-html5-20141028).

The headers will be sent only over HTTPS connections.
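
Putting the opt-in flow together, an exchange over HTTPS might look like this (illustrative values):

First response from the origin:
Accept-CH: network-rtt, network-bw, ect

Subsequent requests to that origin:
Network-RTT: 250
Network-BW: 600
Network-ECT: 3g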

Network Information API extension

The Network Information API provides read only attributes that contain the connection type (e.g., wifi, cellular, ethernet) and the maximum downlink speed of the underlying connection technology in Mbit/s, as well as a way for workers to register for connection change events. In addition, the API will be extended to provide RTT, bandwidth and effective connection type estimates:

partial interface NetworkInformation : EventTarget {
  readonly attribute Milliseconds rtt;
  readonly attribute Megabit downlink;
  readonly attribute EffectiveConnectionType effectiveType;
};

The rtt, downlink, and effectiveType attributes provide the same values that are provided by the Network-RTT, Network-BW, and Network-ECT headers, except that the downlink attribute is in megabits per second to be consistent with downlinkMax, whereas the header is in kilobits per second to avoid floats. Implementations should provide null when an rtt or downlink estimate is not available. effectiveType takes the values slow2g, 2g, 3g, and 4g, mirroring the values in the header.

Browsers can compute the RTT and bandwidth estimates by computing the median of the RTT and bandwidth observations across all transport-layer sockets. When observing bandwidth, browsers should employ heuristics to determine that the network is being used to capacity.


bmaurer commented on July 22, 2024

It's unlikely FB will be able to make use of this API if the absence of an Accept-CH header clears the cache. Ensuring full coverage across all of our endpoints gets really tricky.


bmaurer commented on July 22, 2024

The headers are sent after an explicit per-origin opt-in. The opt-in is via the Client Hints mechanism defined here. The browser should, but is not guaranteed to, retain the opt-ins across browsing sessions. In particular, the browser may clear the opt-ins based on user actions (e.g., clearing cookies or browsing history). Three new hints will be added:

Should probably read that the browser must clear hints when clearing cookies.

Accept-CH: network-rtt, Accept-CH: network-bw, Accept-CH: ect

Should be consistent about the net prefix. Also, consider shortening to net-rtt, etc.


rmarx commented on July 22, 2024

I want to make sure I understand the "When are the headers sent?" and the decision to make it opt-in only.

AFAIU this means that for a totally new origin (nothing cached, no CH and no resources), the origin will get no netinfo for the first request (e.g., mydomain.com/index.html, vids1.mydomainvids.com/videoA.mp4) and thus cannot send optimized content for the "first load". Arguably, it is exactly this first load that could benefit from this information the most, since no other optimizations (e.g., resource caching, service workers) are in place yet. Working with separate origins for additional content (e.g., video, image CDNs) possibly makes this even worse.

I feel like @bmaurer was saying something similar in his comment, but the discussion then moved more to "should we cache the CH headers" instead of "should we send the info for the first request as well?". Earlier versions also had things like "Browsers should by default only send these headers when ECT is slow-2g or 2g" and talked about "Origin Policy", which have been left out of the latest version. Possibly there are good reasons for taking this approach, but from the content in this thread and the BlinkOn doc, these are not 100% clear to me.

I would be partial to sending this info for all "first loads" (new origin, nothing cached, only on HTTPS). Only if the server answers with Accept-CH would we send it for subsequent requests as well. Servers that don't support it would just ignore the extra initial headers. This might require some custom caching approach (ignoring these specific headers when caching the initial response?), but I'm not sure how that would work or what the other impact would be.


jokeyrhyme commented on July 22, 2024

Can we dive into the rationale for having to opt-in via headers and having an origin cache?

If these headers are only transmitted over HTTPS, then don't we automatically benefit from some level of compression?

~80% of all browsers support HTTP/2 now: http://caniuse.com/#feat=http2

So, these headers are either:

  • 0 extra bytes if the user's privacy preference disallows them

  • an extra 10 or so bytes over HTTP/1 without encryption / compression

  • potentially just a few extra bytes over HTTP/1 where HTTPS compression is enabled

  • just a few extra bytes over HTTP/2 due to header compression and session dictionary

Unless there's an enormous saving to be had, may I propose that we reduce the complexity a great deal and make it more useful for more websites by removing the opt-in round-trip and thus no longer requiring implementations to have an origin cache?


tarunban commented on July 22, 2024

@bmaurer : See #47 which discusses changes needed in Accept-CH spec.

@rmarx and @jokeyrhyme : I think the problem is not just the overhead of the network quality headers. There are many other client hints that browsers can send, and the list may expand in the future. Sending all the client hints by default is not scalable.

The origin-policy spec can hopefully solve this problem in future (We need to put that back in the spec here. I will work on that).


igrigorik commented on July 22, 2024

From #46 (comment).

The rtt, downlink, and effectiveType attributes provide the same values that are provided by the Network-RTT, Network-BW, and Network-ECT headers, except that the downlink attribute is in megabits per second to be consistent with downlinkMax, whereas the header is in kilobits per second to avoid floats. Implementations should provide null when an rtt or downlink estimate is not available. effectiveType takes the values slow2g, 2g, 3g, and 4g, mirroring the values in the header.

Above sgtm, and a related question:

We fire events to notify [1] applications of changes to connection.{downlinkMax, type}. Presumably, we would want to do the same for the new attributes (rtt, downlink, effectiveType), to allow developers to avoid polling manually. Except, how often are these values updated? Does the current implementation have thresholds that we should consider? This question is a close cousin of #30.

[1] http://wicg.github.io/netinfo/#handling-changes-to-the-underlying-connection
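
For context, a sketch of how the existing change-event pattern would extend to the new attributes (the exact firing semantics are what's being discussed here):

// Sketch: listen for connection changes instead of polling.
navigator.connection.addEventListener('change', () => {
  const { rtt, downlink, effectiveType } = navigator.connection;
  console.log(`connection changed: ${effectiveType}, ${rtt} ms, ${downlink} Mbit/s`);
});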


tarunban commented on July 22, 2024

About the thresholds: one way is to update the values only if the difference between the new value and the old value is at least X units, AND the percentage difference between the new and old values is at least Y%.

For example, for creating new net log entries, Chromium NQE uses X = 100 ms (for RTT) or 100 kbps (for throughput), and Y = 20% (relevant Chromium code here). We can tweak the values of X and Y and use a similar approach here too, as sketched below.
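
A sketch of that hysteresis check, using the Chromium values above (the function name is made up):

// Sketch: report a new estimate only when it moved enough in both absolute and relative terms.
// x = 100 (ms for RTT, kbps for throughput); y = 20 (percent).
function shouldUpdate(oldValue, newValue, x = 100, y = 20) {
  const absoluteDelta = Math.abs(newValue - oldValue);
  const percentDelta = (absoluteDelta / oldValue) * 100;
  return absoluteDelta >= x && percentDelta >= y;
}

shouldUpdate(400, 520);  // true: changed by 120 units and 30%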

IMO, allowing the thresholds to be set by the listeners would make both the browser implementation and the listener implementation too complex, with probably no significant benefit.


igrigorik commented on July 22, 2024

@tarunban thanks for the pointer. Curious, how were the current thresholds determined? Do you have a sense for how often that logic triggers? My goal here is to avoid generating unnecessary noise for developers; e.g., we don't want to be firing these events every 30s.


tarunban commented on July 22, 2024

The thresholds were determined using local experiments. With those thresholds, for a new connection, I see a few triggers in the first 30 seconds as NQE adapts. For a connection that we have seen before or after the first page load for a new connection, there should be ~1 trigger every couple of minutes.


msramek commented on July 22, 2024

Hi!

I have been looking into the spec a bit today, and I'm not entirely on board with waving off the fingerprinting concerns with the explanation that the information can always be computed manually.

This is true when we're concerned with fingerprinting against a single origin - without this API, it will take some time to get RTT and downlink estimates, and they might not be as precise, but yes, you can do so.

However, for cross-origin fingerprinting, two origins that do their own manual measurements will likely get somewhat different results, and possibly results different from the single-origin case, as each only sees half the bandwidth. Yet with this API, if they call it at the same time, they will get precisely the same value.

I wonder if we should address that, e.g. by:

  • Making it harder for them to call the API at the same time (e.g., a very high refresh time, or a randomly delayed response)
  • Proving that the real-life distribution of RTT and downlink is such that the vast majority of users would end up in a small number of 25 ms (kbps) buckets, which would mitigate the problem for the majority of users
  • Using global estimates early in the page lifetime, and local estimates (this page's traffic only) later; etc.
