
web-environment-integrity's People

Contributors

bakkot, yoavweiss


web-environment-integrity's Issues

"device identifier" could be per-origin; would that alleviate concerns?

Per the proposal:

We strongly feel the following data should never be included:
A device ID that is a unique identifier accessible to API consumers

I absolutely agree with this as phrased, since it would trivially allow tracking across origins.

However, you can imagine doing hash(origin || per-device ID) and including that instead. That would not allow any cross-origin tracking. And it would enable the server to do rate-limiting for physical devices, which would be extremely useful (though it wouldn't entirely obviate the usefulness of having some other rate-limiting indicator built in).

Would you consider such a thing to be verboten the way a cross-origin ID is?

(The design would need to be slightly more complicated in that the attester would need to include the origin as well, so it couldn't be spoofed just by faking the origin to the attester and thereby getting a new unique ID.)
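For concreteness, a minimal sketch of that scheme using Web Crypto (the deviceId secret is hypothetical and would be held by the attester, not exposed to pages):

// Hypothetical per-origin device ID: hash a per-device secret together
// with the requesting origin, so the result is stable per origin but
// unlinkable across origins.
async function perOriginDeviceId(deviceId, origin) {
  const data = new TextEncoder().encode(deviceId + "|" + origin);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return btoa(String.fromCharCode(...new Uint8Array(digest)));
}

The same device yields different, unlinkable IDs for https://a.example and https://b.example, while staying stable per origin for rate-limiting.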

Secure Context only

I'd like to propose that we only support WEI when the user agent is in a Secure Context. This ensures that the origin cannot be spoofed, which is what we rely on for things like partitioning.

This hopefully isn't particularly contentious given this is considered a default spec design principle.
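As a sketch of the intended behaviour, this is essentially the check a user agent would perform before exposing the API (isSecureContext shown for illustration):

// Refuse to attest outside a secure context.
if (!window.isSecureContext) {
  throw new DOMException(
    "Web Environment Integrity requires a secure context",
    "SecurityError"
  );
}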

can this proposal facilitate integrity of the running script?

We don't have a specific request for what to include (or not include) in the attestation. Instead, our goal is to be able to bootstrap trust in other browser APIs. For example, we would like it to be hard for an attested environment to be under control of automation without that being indicated by the navigator.webdriver property. If the attestation can guarantee to us that the browser built-ins are original and being faithfully evaluated, it would allow the set of actual attested values to be small and remain stable over time – two very desirable properties. Anything else that needs to be trusted could be added through regular browser APIs without continually extending the APIs added by this proposal.

We realise that it might be difficult to achieve this property. For example, the integrity decision would have to account for certain kinds of extensions that replace or modify platform built-ins. Additionally, to account for proxied or otherwise modified responses that change the script being run, the attestation would need to include evidence of absence of tampering with the running script, such as a hash of its contents. This info would not be included opaquely within the integrity decision because, unlike with typical integrity properties like Secure Boot, self-signed certs, or use of debugging APIs, the attester does not know how to evaluate the integrity of the web application itself.

So with all that in mind, it looks like what we would like to be included in the attestation is:

  • attester identity
  • origin
  • request-specific content binding
  • platform info, including:
    • operating system identity
    • user agent identity
  • the integrity decision
  • evidence of absence of tampering with the running script
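To make that concrete, a hypothetical payload carrying these fields might look like the following (all field names are illustrative, not proposed spec text):

// Illustrative attestation payload shape; names are invented.
const attestationPayload = {
  attester: "attester.example",            // attester identity
  origin: "https://relying-party.example", // origin
  contentBinding: "sha256:...",            // request-specific content binding
  platform: {
    os: "ExampleOS 1.2",                   // operating system identity
    userAgent: "ExampleBrowser/1.0",       // user agent identity
  },
  integrity: true,                         // the integrity decision
  scriptHash: "sha256:...",                // evidence of script integrity
};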

A note about the absence of integrity info for the document or other subresources: once you have established integrity of the platform and your script, the script can be trusted to make integrity assertions about the document, at whatever point in the document lifecycle is appropriate.

/cc @bakkot

Proposing CBOR for the token format

It's no secret from the explainer that I think we should agree to go with CBOR/COSE. I'm writing an issue to formally propose this.

Thinking about low latency environments like ads that will require lots of requests, something like JSON would be pretty heavy.

WebAuthn already relies on CBOR and COSE, which have both been standardised. WebAuthn also has attestation, so there is some overlap there already. It would be easier for attesters to follow.

Added to this, Apple App Attest already makes use of this as well (ref). This all seems to be pretty strong data in favor of CBOR/COSE.

The nice thing about CBOR is it already has many implementations available and is easy to deserialize to a JSON object.
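As a quick illustration of that last point, here is a decoding sketch assuming Node.js and the cbor npm package (any CBOR implementation would do):

// Decode a CBOR attestation token into a plain JavaScript object.
import cbor from "cbor";

// attestationToken is the ArrayBuffer returned by the API.
const payload = cbor.decodeFirstSync(Buffer.from(attestationToken));
console.log(JSON.stringify(payload)); // maps straight to JSON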

Thoughts?

What does this proposed API provide?

I read the proposal and it's not clear to me how this provides anything that the fetch or XMLHttpRequest APIs don't already provide, unless there's some undocumented user information that's being transmitted to the attester. It's also not clear how the attester's key becomes trusted. I see three options:

  1. There are user agent-provided attesters, which means this proposal forces vendor lock-in because each vendor's public key will need to be individually trusted.
  2. There are website-provided attesters, which means this proposal offers no features that XMLHttpRequest or fetch can't already do, unless there's some form of user data being transmitted to the attester (again, the spec is very vague about what gets sent).
  3. Attesters can be configured by the user, which means this is an API to get an arbitrary chunk of usually meaningless data.

Whichever of these is intended, the proposal should at minimum:

  • Document what information is sent to the attester and how the attester verifies that information is "correct" before signing a payload
  • If this information is sensitive, document why it's okay to send it to the attester when it wouldn't be okay to expose it to the website directly
  • Document how a malicious actor would be stopped by this proposed security feature

RFC 8890 Compliance

Hello. I'm very concerned about this specification.

For example, this API will show that a user is operating a web client on a secure Android device.

This means, for example, that someone like me, who enjoys having a rooted phone, would be locked out of the web. This seems like a pretty bold & infernal twist to the world: that users can only access the web from computers they cannot control.

This specification defies the broadly known & accepted RFC 8890, The Internet is For End Users. https://datatracker.ietf.org/doc/html/rfc8890 It creates a system which, instead of being "user-focused", locks users out, preferring instead to give sites the power to shape & define access. There is a big "negative end-user impact" to these changes, in that only limited classes of computing will be able to participate in these parts of the web.

Google Bordering on Antitrust

Creating an explicit web standard to lock out browsers you didn't certify is going to put Google on the wrong end of a losing antitrust lawsuit, and make the lives of all the engineers involved miserable in the process.

Do yourself a favor and abandon this proposal, before you have to spend the next few years of your life going through depositions and subpoenas.

investigating the causes of unexpected integrity attestation failures

Up until this point, I've assumed the integrity decision was a single pass/fail bit. But in the case of failure, as we've discussed in previous CG calls, it is handy to have some way to investigate why an integrity failure happened for a transaction (or possibly many transactions) that we otherwise believe to have come from a trustworthy device. What could we do to be more informative about failures? Can the integrity decision contain some information about the reason for failure?

Provide an example of validating tokens

Once we've given more details about how tokens look, such as the cryptographic requirements, we should provide a non-normative example of how validation will work for relying parties.
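As a placeholder until then, the shape of such an example might be roughly this, where verifyCose and fetchAttesterKeys are hypothetical helpers since the cryptographic details aren't fixed yet:

// Non-normative sketch of relying-party validation.
async function validateToken(tokenBytes, expectedOrigin, expectedBinding) {
  // verifyCose returns the decoded payload on a valid signature, else null.
  const payload = await verifyCose(tokenBytes, await fetchAttesterKeys());
  if (payload === null) return false;                           // bad signature
  if (payload.origin !== expectedOrigin) return false;          // wrong site
  if (payload.contentBinding !== expectedBinding) return false; // replayed token
  return payload.integrity === true;                            // the verdict
}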

Do not listen to ad blocking addicts, please ship this ASAP

Ad blocking virus is currently spreading and we all know it makes people unhappy, because people need to watch at least 250 ads per day to be happy. Ad blockers interfere with this and it's not hard to see their detrimental effects on mental health.

Ads make people SO HAPPY, they literally LOVE THEM, but unfortunately that horrible ad blocking addiction prevents those people from watching them. We all know that's a disaster which needs to be addressed immediately.

In order to break free from the ad blocking addiction, users need help and that help should be provided by their devices.
Fortunately, mobile devices are well-built for this purpose - their locked bootloader, TrustZone, Android/iOS system design, SafetyNet are the technologies that give us all the hope we need.

But even on mobile devices, currently the addiction still finds a way - it forces victims to install alternative browsers such as Firefox (shame on you) which even promote (!) the worst extension ever made - uBlock Origin (criminal).

While content providers and advertising platforms all have good intentions, currently the Web platform lacks a critical API that would make it possible to reliably verify whether the user runs in an environment that's sufficiently protected against all the evil that ad blocking is AND from any attempt to extract the website code in order to attack the protection mechanism.

This API is EXACTLY the thing we all need. Thanks to this we can serve the real website code only after authenticating the platform. This way the code would be fully shielded from any kind of reverse engineering which could result in this horrible ad blocking virus again finding a way to prevent the user from experiencing true happiness.

Be aware though, my friends, that there's one more thing we need to tackle - web debugging.
Website authors MUST have the ability to disable debugging, otherwise the environment is not fully free from ad blocking influence.

So:

  1. Ship this ASAP
  2. Remove Chrome remote debugging on Android

Maybe just disable attaching a remote debugger if the attestation API was used?

What error codes should be thrown?

There are many reasons why an attestation token could fail to be retrieved. Each distinct exception type could, of course, enable more fingerprinting. For instance, simply saying the device does not support attestation with something like NotSupportedError could give more data than a user agent that doesn't support WEI at all.

OptOutError seems like it could potentially be used to punish users, so I propose we do not throw it. Same for NotAllowedError. I'll investigate how other specs deal with this.

I'm also proposing we just throw NotFoundError in most cases. Perhaps TimeoutError could be useful if developers should know to try again later?
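Under that proposal, calling code would look roughly like this (assuming the explainer's navigator.getEnvironmentIntegrity entry point; the helper functions are hypothetical):

try {
  const attestation = await navigator.getEnvironmentIntegrity(contentBinding);
  // ...send attestation.attestationToken to the server...
} catch (e) {
  if (e.name === "TimeoutError") {
    scheduleRetry();              // hypothetical backoff helper
  } else {
    proceedWithoutAttestation();  // NotFoundError and everything else
  }
}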

This idea is bad and you should feel bad

You're going to destroy the ability for browser automation tools to function so that you can keep shoving Google Ads into our eye holes and blocking scrapers from collecting data? Congrats on creating yet another effort to murder the open web and presenting it as a concern for users who want to visit "expensive" websites. You should be embarrassed and ashamed. 2023 and Google wants to bring back ActiveX, smh.

What should the attester identity be?

We are going to report an attester identity. What string should be returned?

For example, if my attester company name is "trustworthy source", should I just return that?

I'd like to propose that the attester identity be a domain. Web developers ultimately need to request information from some domain to check if the verdict returned is valid. This would help with this process and also be a stable target.

This also accounts for little details like companies deciding to change their names etc.

A con I can think of is domain expiry.

What does everyone think?

This will make the web stupid, and it will be wholly ineffective

Bootstrapping trust and remote attestation are very hard problems, and require careful control of every layer of the system from the silicon up.

On the other hand, displaying ads (or serving up DRM'ed media, for that matter) pretty much requires being willing to go wherever the eyeballs are.

The two are hopelessly in tension with each other.

You're gonna build some crummy DRM obfuscation, it's going to be a massive annoyance, somebody's gonna implement it in a slipshod way on an easily hackable system, and ad-block-wielding Linux-using chaos monkeys like me are going to figure out how to spoof it. 🤷🏻‍♂️


I'm one of the maintainers of OpenConnect, the multiprotocol VPN client; we've figured out the wire protocol for every ekstra speshul proprietary tunnelling protocol on the planet, and I've also reverse engineered and reimplemented the extraordinarily dumb obfuscated protocol for provisioning RSA soft-tokens (see dlenski/rsa_ct_kip).

Include content binding hash type in the EnvironmentIntegrity response?

To avoid an attestation being used for the wrong website, we're thinking that it makes sense to have the user agent hash the content binding with the origin.

Eg: SHA256(contentBinding + ";" + origin)

This means that an origin can't sneakily request another origin to pass on an attestation.

I'm thinking the hashing scheme used by the user agent should be reported as part of the EnvironmentIntegrity.

Eg:

<xmp class="idl">
  [Exposed=Window]
  interface EnvironmentIntegrity {
    readonly attribute ArrayBuffer attestationToken;
    readonly attribute DOMString contentBindingScheme;
    DOMString encode();
    object toJSON();
  };
</xmp>
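For reference, the user-agent-side computation sketched above is straightforward with Web Crypto:

// SHA-256 over contentBinding + ";" + origin, as proposed above.
async function hashContentBinding(contentBinding, origin) {
  const data = new TextEncoder().encode(contentBinding + ";" + origin);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return new Uint8Array(digest);
}

The contentBindingScheme attribute would then report the scheme in use, e.g. "SHA-256", so relying parties know how to recompute the hash.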

What does everyone think?

Questions of Necessity for Proposal

After reading through this, it feels as if this is being proposed because of developers who are not experienced enough to avoid client-side validation bypasses, or who otherwise trust too much that what the client says is correct. What is the actual use case for this, besides allowing lax application development to feel secure while creating an Animal Farm-esque internet landscape in which all browsers are equal but some browsers are More Equal than others?

Proposal entirely skips how attestors are supposed to actually function?

As far as I can tell, the proposal leaves how attestors should actually attest to anything as an open question, but I'm having trouble seeing any possible way these could work.

The only descriptions we have are that "the attesters will typically come from the operating system (platform) as a matter of practicality" and comparisons to App Attest and Play Integrity. The issue, though, is that

  1. The operating system is not trustworthy either for the purposes of this proposal
  2. The other comparisons are on closed platforms, sometimes with dedicated hardware for detecting tampering

How do you propose this system would actually function without

  • Destroying general computation on all desktops
  • Every client sending massive amounts of data at all times to multiple attestation authorities (and verifying the information from those computers is legit, somehow)

The attestor cannot run on the client device, as the device is untrusted and could be either modifying the attestation software or feeding it false information, and running it outside of the client device seems like a privacy and security nightmare.

So... locking down browsers and OSs that only support DRM wasn't enough?

So, now that we've locked out browsers that won't kowtow to DRM requirements, and thus locked users out of systems on fully libre OSs and stacks... now we want to lock users out of more content because they use fully libre OSs and stacks?

Thankfully, I'll be able to make architecture decisions to reject any and all solutions that try to implement this from any environment I'm in professionally. Thanks for the heads up of technology to blacklist from all enterprise environments I touch.

Will accessibility tools be blocked or banned?

People with disabilities use devices that automate inputs or parse and read outputs from the browser's API, such as:

  • On screen keyboards
  • Screen readers
  • Controller to keyboard mapping

These inputs also tend to be automated by third-party applications for things like auto-type input or scripting. How does this proposal plan to attest that a web browser is operated by a human if all human interaction comes through these easily scriptable APIs?

Will accessibility tools be blocked or banned? Does that mean people with disabilities aren't human enough?

User Research?

I try to ask this question on every new web standard proposal - which real users have you spoken to about this?

I don't mean "which personas have you concocted?" I also don't mean "which business units have you talked to?" Nor do I mean "have you discussed this with other Googlers?"

Real users. Those people who both read and write websites. Have you asked any of Google's user researchers to research the users that this will affect?

Have you dreamed up the platonic idea of a problem you can solve? Or have you tried to understand the messy reality that exists with conflicting user desires?

Please - I beg of you - do some actual user research and then publish it.

Otherwise this proposal is nothing more than navel-gazing.

holdbacks diminish the value of the proposal and don't protect browser diversity

As you've mentioned in the explainer (and as I have already expressed in the CG call), the holdback strategy for protecting browser diversity would significantly reduce the usefulness of this proposal for the use cases we care about, and we feel strongly that it should not be used.

For the use case of credential stuffing, for example, we have one opportunity to collect signal data before making a determination about whether to allow an authentication attempt. Assuming the holdback strategy is to artificially fail the integrity decision, we would still need to collect our typical signal data for every visitor, and only rely on the attestation as a new secondary signal. If instead the holdback strategy is for the attestation API to fail, we would still need to collect our typical signal data for the users affected by the holdback. Those users would be subjected to much more scrutiny than necessary. Neither of these strategies would allow us to stop using fingerprinting strategies entirely, even on devices which support this API, which should be a goal of this proposal.

In addition to our belief that holdbacks limit the usefulness of this proposal, we feel that holdbacks don't meaningfully make a difference for the problems they're intended to address. It's our understanding that holdbacks are meant to allow for less-popular or less-featureful browsers to still be used across the parts of the web they support without artificial discrimination. But it is already the case that certain web applications, including the ones protected by F5, deem many (mostly older) browsers unfit for sensitive transactions. Typical reasons include maximum TLS version support, cipher suite support, or susceptibility to critical vulnerabilities which compromise their ability to maintain confidentiality. But even more simply, web applications can and do just discriminate against disfavoured browsers via User-Agent string. And finally, we don't feel that this API significantly increases implementation burden for new browsers relative to the already massive number of APIs and features that are considered a baseline for web compatibility today.

/cc @bakkot

Confusion between client and user authentication

If you basically replaced all occurrences of “web browser” with “person”, what is the result?

At the end of the day, the arguments and purposes of the proposal involve the authenticity of a real person behind the request. The involvement of client details seems like an incredible distraction from the elephant in the room, which is: WHO is making the request and WHY? That’s what matters. I don’t believe this is a solution for that.

Given the ethical problems with establishing that in certain cases, there is no way authenticating a client environment will be anything other than a heuristic that itself can be further abused, by both Google and bad actors. At its best, it is a false sense of authenticity.

At its worst, this very much seems like an effort for certain browsers to dominate the web and to increase the perceived value of ads (in a rather trivial way, in the sense that ads’ perceived value is complete fantasy legitimized by pseudoscience, magical thinking, or, even worse, real science).

Who determines what Attesters are acceptable?

This proposal is about creating a way to issue attestation tokens. Currently at least two pieces of information are proposed to be signed by the attester: a verdict on trustworthiness, and an attester identity.

How do new browsers ever get trusted? If we rely on each site operator to determine which attesters to trust, what hope is there for other browsers to ever get off the ground? It seems unlikely that smaller browsers will gain much recognition here. This seems to favor only extremely large entrenched forces.

Feature Request: Whitelisted Websites

Rather than go through this long process, can we skip attestation and just ship Chrome with an approved list of websites users are allowed to visit? We need to prioritize sites that encourage transactions. Perhaps organizations can pay to be on the whitelist. We want to avoid letting people have access to anything that blocks ads but we may need to take a step further and only whitelist sites that won't possibly teach people how to create their own browsers, subvert DRM, or avoid advertisements.

explainer assumes the existence of a base64 encoder

(It's definitely too early to worry about this sort of thing, but I'm opening this issue anyway so I don't forget.)

This section has a code snippet with

console.log(attestation.encode());
"<base-64 encoding of the attestation payload and signature approx 500 bytes; see below for details>"

where attestation is "an ArrayBuffer serialized with CBOR (RFC 8949) and signed using COSE (RFC 9052)".

There's no such .encode method currently in JavaScript, nor any other convenient method to base64 encode an ArrayBuffer.
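Today authors would have to hand-roll it, for example:

// Manual base64 encoding of an ArrayBuffer via btoa.
function base64FromArrayBuffer(buf) {
  let binary = "";
  for (const byte of new Uint8Array(buf)) {
    binary += String.fromCharCode(byte);
  }
  return btoa(binary);
}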

(It so happens that I am championing an early-stage proposal to add base64 support to JavaScript, though the current design of that proposal puts it on Uint8Array, not on ArrayBuffer. So this issue may end up obsoleted before it would be relevant. But I figured I'd mention it anyway.)

The advertising use case for this creates risks for users

This proposal would facilitate the placement of ads by Google (and possibly other intermediaries) onto sites with untrustworthy user metrics.

Tracking known human users onto deceptive sites is harmful because it rewards attackers for building such sites and for driving traffic to them in deceptive ways.

no don't pls thanks :3

this is literally just gonna cause monopolies on the internet
accessibility? captchas already suck. now we will have users wanting to use Accessible Browser with specific functionality and getting blocked because it's not certified
also this is the most capitalist dumb web standard idea i've seen in a while, have fun with the antitrust legal action :)

HTTP client hints

I think it would be appropriate for attestable environments to advertise this property via a low-entropy HTTP client hint. This will make it easy for web applications to deliver different experiences to visitors based on whether they will be able to take certain sensitive actions or not. It will avoid unnecessary redirects, refreshes, or replacement of content based on a runtime probe of the attestation API.
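To sketch what that could look like (the hint name is invented here, not proposed spec text):

# 1) A server response opts into receiving the hint:
Accept-CH: Sec-CH-Attestable

# 2) Subsequent requests from attestable environments then carry:
Sec-CH-Attestable: ?1

A single structured-header boolean like this keeps the hint low-entropy.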

/cc @bakkot

Compare privacy claims vs anti-fingerprinting techniques

It's a stated goal of this proposal to provide a carrot for would-be privacy abusers, leading them toward a solution with different (and possibly better) privacy properties.

Unless I've missed something, it's taken for granted that the fingerprinting techniques will exist and these additional APIs will somehow provide a net privacy benefit. Could we get a comparison with the best available anti-fingerprinting, though? If the intention is really not to restrict users' privacy, the baseline for comparison should be a browser configuration that focuses on privacy.

Could you guys please stop trying to create a 1984 dystopia every 5 minutes?

We need a safe and trusted client-side "verification" so that Google can decide whether or not all content on the internet is valid, stop people from living free of an extremely abusive and intrusive surveillance ad system, and make it impossible to access web sites and apps normally outside of big tech's hands.
The next step involves implementing a social ID system and a high-level social score to regulate internet access.
Could you guys please stop trying to create a 1984 dystopia every 5 minutes?

Surveillance by Attester concerns

This proposal seems to assume that the Attester has very broad & deeply rooted control & visibility over the end user's computer, otherwise they could not make a valid Attestation. What kind of information would be gathered? How much telemetry needs to be sent to the mothership to make an Attestation claim? How are users aware of what data they are giving up to participate in Attestation? How can users protect or limit the amount of data they are sending up to their Attester? Are there any settings, a sliding scale that users can set? Or do we have to assume the Attester has complete superuser power beyond those of the computer's physical owner?

No real justification why challenges aren’t enough

Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they're human, sometimes through tasks like challenges or logins.

There is absolutely no substance as to why challenges or logins aren’t enough.

Also, to put this as respectfully as I can: as a user, I don't care in the slightest about what advertisers can or can't finance.

Attestation holdback

Objectives

Discourage websites from discriminating against non-attested clients

In a world where most devices support attestations, some websites might refuse service to clients that lack attestation. This may include browsers that have not integrated with platform-specific attestation APIs and browsers running on platforms that don’t have attestation available.

Maximize attestation utility for security and anti-fraud use cases

We aim to offer high anti-abuse value with the Web Environment Integrity API.

Holding back attestation from a large fraction of users would reduce the usefulness of attestation, especially for use cases like data theft/security that require real-time damage prevention (as opposed to post-facto cleanup). Some stakeholders believe that even a small hold-back would diminish the security reliability of attestation, and that they would have to resort to old methods like browser fingerprinting. We aspire to wean the anti-abuse ecosystem off of privacy-invasive detection methods like browser fingerprinting, but a hold-back would compel continued investment in these methods. Consequently, we'd like to minimize attestation hold-back to maximize attestation usefulness.

Risks

Managing tensions between web openness, privacy, and safety

The biggest risk is carefully navigating tradeoffs to maintain or improve goals we all want: we want to keep the web open and free to everyone regardless of income or wealth (and the type of device they can afford); we believe the web should be private by default; and these things are not possible if the web is not safe and usable. Web Environment Integrity seeks to improve all three, but the devil is in the detail.

Without a hold-back, can we maintain access to the web for users with older devices incapable of environment or device integrity attestations? Will sites slowly begin to exclude “outdated” devices or custom browsers?

With a hold-back, are we able to improve privacy and reduce the need to fingerprint, if organizations still need to build, train, and maintain bot detection models for all classes of devices? Does it make a difference if the hold-back is a small proportion of traffic?

Rather than attempt to solve for each or view them as a one-for-one tradeoff, we want to acknowledge the tensions exist, discuss them in the open, and importantly, adjust if we start to see adverse impacts to openness, privacy or safety.

Risks of doing hold-back

  • Websites cannot rely on client attestation, because it may be missing.
  • Websites might continue collecting rich browser data, aka fingerprints, to detect abuse. This reduces the privacy benefit of attestation. Devices with attestation will likely still get fingerprinted to define a baseline for abuse ML classifiers.
  • Abusive traffic can blend in with traffic that has attestation held-back, potentially increasing friction to benign users in the holdback set.
  • Some use cases won’t benefit from attestation with hold-back as much as they would have from reliable attestation: irreversible user actions need a high level of security.

Risks of not doing hold-back

  • Websites can block or downgrade access for clients that have no attestation capability: some browsers, some platforms, rooted/jailbroken devices, etc. This may disproportionately impact certain users, for instance, lower wealth users that cannot afford to purchase newer devices.
  • Websites might gradually increase the requirement to have attestation, phasing out access to “outdated” clients.

Possible parts of a solution

Attestation hold-back aims to create enough users without web environment integrity attestation that websites would frustrate and lose a significant fraction of their users if attestation is required.

Inclusive verdicts

We must acknowledge that device trust is not a boolean but a gradient. Some users have more secure devices than others. Moreover, device security is not always an indicator of goodness -- there are secure devices used by bad actors in an automated manner, and there are rooted devices used by well-intentioned humans.

We aim to create abstractions that allow relying parties to tackle this complexity. For example, the attestation should cover both device integrity (allowing a few integrity levels) and behavioral integrity (e.g. a hyperactivity counter, or a historical trust score). We will share more details and seek feedback in a later stage.

Opt-outs

While we hope that Web Environment Integrity will ultimately result in more user privacy by reducing the need to fingerprint, some users may understandably want to opt out of providing detailed information about their devices. During Chrome's experimentation, if a user has unchecked the “Auto-verify setting” (utilized as controls for anti-abuse signals for Chrome 114+), we’ll refuse attestation.

Incognito mode may or may not be opted out by default. It’s an open design decision at this point. A user might get better anonymity by not providing information about their browser and the platform it’s running on, but without WEI, fingerprinting may be more likely to occur.

Fixed probability

Attestation can be held back for a fixed percentage of users, randomly selected and persistent per user per top level site for a period of time.
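One way to get "random but persistent per user per top-level site" is to derive the decision from a per-user seed, as in this sketch (the 5% rate is an arbitrary placeholder):

// Deterministic holdback: hash a per-user seed with the top-level site and
// hold back when the result falls below the threshold. Rotating the seed
// re-draws the cohort after the persistence period.
const HOLDBACK_RATE = 0.05; // placeholder; the real rate is undecided

async function isHeldBack(userSeed, topLevelSite) {
  const data = new TextEncoder().encode(userSeed + "|" + topLevelSite);
  const digest = new Uint8Array(await crypto.subtle.digest("SHA-256", data));
  const value = ((digest[0] << 8) | digest[1]) / 0xffff; // roughly uniform in [0, 1]
  return value < HOLDBACK_RATE;
}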

Site-specific explicit permission

A website may request user permission to request attestation if it is deemed important for its security stance. This permission dialog is more transparent for the user and should(?) override the browser opt-in and hold-back flag. It may also sound intrusive enough that many users will decline the permission, and stop using the website if it absolutely insists on it. There is precedent for this type of user prompt: for example, microphone or camera access is necessary for legitimate uses like web conferencing, but the request attracts enough user attention that sites are incentivized to ask only when needed.

However, the problem with this approach is that it may set a precedent of only allowing attestable devices to use certain features on websites (see risks section).

We do not propose a permission model now, but leave it here for discussion.

Making attestation ubiquitously available

If it gets easy enough to add attesters for new platforms and browsers, most platforms and browsers would have an attester available. In that case, hold-back may become unnecessary as a protection measure for users of devices that don’t support attestation.

Keep the World Wide Web an open platform

The very idea of a user agent's "integrity" goes against the spirit of the open platform that the World Wide Web was designed to be. Implementation details do not matter.

Compatibility with GDPR article 22

Something that is important to keep in mind with any proposal that comes out of these discussions is compatibility with existing regulatory frameworks. I cannot speak for the challenges globally, but I know that at least here in the EU, it would be very difficult for any "integrity check" to not facially fall afoul of GDPR Art22 Paragraph 1:
"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

It's probably also worth noting that current jurisprudence has held a relatively skeptical view of using EULAs as a way out of compliance with data processing standards (as Meta found out in January https://iapp.org/news/a/what-the-dpc-meta-decision-tells-us-about-the-gdprs-dispute-resolution-mechanism/).

tokens bound to the device

The tokens are generated by the attestor based on the verdict on the device, but there is no guarantee that a malicious browser installed on a trusted device does not transfer them to bad devices. If not taken care of, this may make browsers an even bigger target for attackers. Device-bound tokens seem to be one viable option to address the issue.
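A sketch of one possible shape for this, assuming the attester embeds a device public key in the signed token and the site then challenges the client to prove possession (Web Crypto shown for illustration; real device binding would use hardware-backed keys):

// A non-extractable key pair standing in for a device-bound key.
const keyPair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  false, // non-extractable: the private key cannot leave the environment
  ["sign", "verify"]
);

// The client signs a server-issued challenge; the server verifies the
// signature against the public key carried inside the attested token.
const signature = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" },
  keyPair.privateKey,
  challengeBytes // hypothetical server-provided challenge
);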

What extra fingerprinting does this allow?

https://github.com/RupertBenWiser/Web-Environment-Integrity/blob/main/explainer.md#fingerprinting discusses how to prevent attesters from including lots of entropy in their responses, but what's the minimum data that this API gives away?

https://github.com/RupertBenWiser/Web-Environment-Integrity/blob/main/explainer.md#what-information-is-in-the-signed-attestation discusses what information might be included:

  • The attester's identity, for example, "Google Play".
  • A verdict saying whether the attester considers the device trustworthy. (possibly containing more than 1 bit)
  • The platform identity of the application that requested the attestation, like com.chrome.beta, org.mozilla.firefox, or com.apple.mobilesafari.
  • Some indicator enabling rate limiting against a physical device

The attester's identity is likely to be 1:1 with the operating system, which I believe is exposed by network stack behavior, even if the browser spoofs its UA string, so that's not likely to be extra fingerprinting bits.

The verdict has at most the number of bits included in the verdict, but I think the goal is for all human users to be grouped in a single bucket, which would also remove the fingerprinting benefit?

The app identity does add fingerprinting bits, although if the particular browser has unique behavior for any web APIs, that provides the same bits that this would, meaning this wouldn't help fingerprinters beyond that baseline.

I'm not sure about the implications of the rate limiting indicator. Would it be stable across origins and time? If not, it doesn't help with fingerprinting either.

Are there any other sources that I've missed?

Don't.

Sometimes you have to ask the question whether something should be done at all, and trusted computing is certainly one of those cases where the answer is obviously a big fat NO.

So please reconsider what you believe in, leave this demon to history where it forever belongs.

Cease Immediately

This is the worst idea in the history of web browsing. Tim Berners-Lee could look at this and shed a tear. All of the folks responsible for the development of the interconnected web are staring at you, right now, and crying. They're crying. It's all your fault. What is this bunkum? This drivel will simply drive everyone away from using Google's godawful web browser. If Mozilla doesn't add this, this rot is going to drive usage numbers up for Firefox.

tl;dr: cease

This is an awful idea

The web was built on open standards with equal access for everyone. This is nothing more than a land grab and an attempt to create a walled garden that only users approved by the corporations can use. You should be ashamed.

Does not explain what the costs of restricting fraud are

This proposal assumes that fraud is something you should prevent!

However, if a website is malicious, defrauding it is of no moral consequence, and indeed, failing to lie to it is possibly more negative of an experience. Meanwhile, a malicious website has no incentive to e.g. use the attestation but not also still try to do its own kind of fingerprinting on the side. This proposal seems to be all carrot and no stick: If I am malicious, why should I not just eat the carrot, and then go on to drink your wine and be merry anyways?

One can assume users would simply try to avoid using a malicious website, but it's very possible for a well-heeled attacker to compromise a website via perfectly legal means and retain a useful website. For instance, one could buy out the website's owner and retain some functionality that users perceive as valuable on that website, but increasingly attack their privacy (and perhaps sanity...) as a means of recouping the investment, dragging out the erosion of the good over time and maximizing extracted profit. Similar issues are a fairly common problem with installed apps or even extensions, too, since those often automatically deploy (fairly trusted!) updates, and the permissions often aren't so fine-grained that what was reasonable to grant today never becomes a problem tomorrow.

Does not address fake actors at all

Users want to know they are interacting with real people on social websites but bad actors often want to promote posts with fake engagement (for example, to promote products, or make a news story seem more important). Websites can only show users what content is popular with real people if websites are able to know the difference between a trusted and untrusted environment.

Wrong. Wrong wrong wrong.

If I am running an artificial engagement campaign, there is absolutely nothing stopping someone from acquiring ten unique devices with authentic environments and operating them manually. It does not necessarily have to be fully automated.

More specifically this point:

if websites are able to know the difference between a trusted and untrusted environment.

This SHOULD read: if websites are able to know the difference between a trusted and untrusted USER. That is, authentication of a person, which should not be substituted for authentication of client software. Huge difference.

The funny thing is that this proposal is perceived very much as a sly way to identify, at the very least, whether a client is making requests on behalf of real people, for the purpose of increasing the value of advertisements.

What is more, websites wanting to show "what is popular" on their site is not necessarily good for society. It creates the very incentive for artificially boosting content on social media. Look how well that worked out for the world regarding Twitter.

I feel badly for you all.
