
trusted-types's Introduction


Trusted Types

First time here? This is a repository hosting the Trusted Types specification draft and the polyfill code. You might want to check out other resources about Trusted Types.

Polyfill

This repository contains a polyfill implementation that lets you use the API in all web browsers. The compiled versions are stored in the dist directory.

Browsers

The ES5 / ES6 builds can be loaded directly in browsers. There are two variants of the browser polyfill: api_only (light) and full. The api_only variant defines the API, so you can create policies and types. The full variant additionally enables type enforcement in the DOM, based on the CSP policy it infers from the current document (see src/polyfill/full.js).

<!-- API only -->
<script src="https://w3c.github.io/webappsec-trusted-types/dist/es5/trustedtypes.api_only.build.js"></script>
<script>
     const p = trustedTypes.createPolicy('foo', ...)
     document.body.innerHTML = p.createHTML('foo'); // works
     document.body.innerHTML = 'foo'; // but this one works too (no enforcement).
</script>
<!-- Full -->
<script src="https://w3c.github.io/webappsec-trusted-types/dist/es5/trustedtypes.build.js" data-csp="trusted-types foo bar; require-trusted-types-for 'script'"></script>
<script>
    trustedTypes.createPolicy('foo', ...);
    trustedTypes.createPolicy('unknown', ...); // throws
    document.body.innerHTML = 'foo'; // throws
</script>

NodeJS

The polyfill is published as an npm package, trusted-types:

$ npm install trusted-types

The polyfill supports both CommonJS and ES Modules.

const tt = require('trusted-types'); // or import { trustedTypes } from 'trusted-types'
tt.createPolicy(...);

Tinyfill

Due to the way the API is designed, it's possible to polyfill the most important part of the API surface (the trustedTypes.createPolicy function) with the following snippet:

if (typeof trustedTypes === 'undefined') trustedTypes = {createPolicy: (name, rules) => rules};

It does not enable enforcement, but it allows creating policies that return plain string values instead of Trusted Types in non-supporting browsers. Since the injection sinks in those browsers accept strings, the values will be accepted unless the policy throws an error. This tinyfill allows most applications to work in both Trusted Types-enforcing and legacy environments.
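For illustration, here is how application code written against the policy API behaves on top of the tinyfill. The escaping rule below is a made-up example, not something the library provides:

```javascript
// Tinyfill: in environments with native Trusted Types (or with the full
// polyfill loaded), trustedTypes already exists and this is a no-op.
if (typeof trustedTypes === 'undefined') {
  globalThis.trustedTypes = { createPolicy: (name, rules) => rules };
}

// Application code is written once against the policy API. The escaping
// below is an illustrative rule, not part of the library.
const policy = trustedTypes.createPolicy('escape', {
  createHTML: (input) => input.replace(/</g, '&lt;').replace(/>/g, '&gt;'),
});

// In a supporting browser this returns a TrustedHTML object; with the
// tinyfill it returns a plain string. Injection sinks accept both.
const out = policy.createHTML('<img onerror=alert(1)>');
console.log(String(out)); // &lt;img onerror=alert(1)&gt;
```

The same application code therefore runs unchanged in enforcing and legacy environments, which is the point of the tinyfill.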

Building

To build the polyfill yourself (Java required):

$ git clone https://github.com/w3c/webappsec-trusted-types/
$ cd webappsec-trusted-types
$ npm install
$ npm run build

Demo

To see the polyfill in action, visit the demo page.

Testing

The polyfill can be tested by running:

$ npm test

The polyfill can also be run against the web platform test suite, but that requires small patches to the suite - see tests/platform-tests/platform-tests-runner.sh.

Cross-browser testing provided by BrowserStack.


Contributing

See CONTRIBUTING.

Questions?

Our wiki or the specification may already contain an answer to your question. If not, please contact us!

trusted-types's People

Contributors

0xedward, antosart, dependabot-preview[bot], dependabot[bot], dontcallmedom, eps1lon, foolip, jonathankingston, jugglinmike, jyasskin, koba04, koto, lukewarlow, malvoz, marcoscaceres, mbrodesser-igalia, mikesamuel, mikewest, mrdisconnect, nitinsurana, otherdaniel, paulirish, rictic, saschanaz, siegrift, slekies, vrana, zemnmez


trusted-types's Issues

"Trusted types" and "Literals in script"

In response to @mikewest
https://mobile.twitter.com/mikewest/status/931439227715321857

And how this "Trusted types" proposal might work with the "literals in script" proposal - which may be related to Issue 4...


If the JS engine could keep track of the variables that were created from String Literals (or String Constants), this could be very useful.

I rarely use HTML strings (e.g. for templating), but when I do, I'd only use a String Literal; any user supplied/tainted values will be applied to the DOM later, via "safe" methods like el.textContent or el.setAttribute(), while keeping in mind that certain attributes (e.g. <a href="xxx">) aren't exactly safe.

So I'd like there to be a way to instruct the browser to only accept String Literals via unsafe methods like el.innerHTML... as in, anything tainted would be rejected.

This is kind of related to Issue 11, which suggests overriding these methods.

I think the extra level of needing to create a TrustedHTML object is far too messy/complex for most developers; even I'm probably not going to update all of the JS on my websites to use these objects, or use new methods like safelySetInnerHTML().

Being completely selfish, if el.innerHTML = "literal"; was allowed, and anything else was blocked, I probably wouldn't need to change anything (other than instructing the browser to do this extra level of checking, and re-running the tests to make sure I hadn't missed something).


On a slightly related note, I'd be happy to use the CSP header to require-trusted-types, as this does relate to the security policy for the page's content :-) ... it's similar to how you can opt in to XSS Protection (issue 1).

I'm just wondering if a similar approach to "use strict"; could be used, on the basis that a website will typically pull in several JS files, and it will be easier to apply this logic as each script is updated (i.e. piecemeal migrations).


For example, I think this should work (well, ignoring pointless use of JS):

<div id="members" data-name="Mike"></div>

<script>

    // This would typically be in an external/async script.

    ;(function(document, window, undefined) {

        'use strict';
        'use trusted-types';

        if (!document.addEventListener) {
            return;
        }

        function init() {

            var members_ref = document.getElementById('members'),
                members_html = '<p>Hi <strong></strong></p>';

            if (members_ref) {
                members_ref.innerHTML = members_html;
                members_ref.getElementsByTagName('strong')[0].textContent = members_ref.getAttribute('data-name');
                // Let's assume I've checked these are all defined.
            }

        }

        if (document.readyState !== 'loading') {
            window.setTimeout(init); // Handle asynchronously
        } else {
            document.addEventListener('DOMContentLoaded', init);
        }

    })(document, window);

</script>

But if I changed members_html to '<p>Hi <strong>' + tainted_name + '</strong></p>', then the browser would reject it for members_ref.innerHTML.

Bypass via HTMLAnchorElement properties

Found by @sirdarckcat:

It's still possible to execute JS, bypassing the policy, by directly manipulating HTMLAnchorElement properties like protocol, pathname, etc.

a.href = TrustedTypes.createPolicy('foo', (p) => {
  p.createURL = (d) => d // actually sanitize here, but that's not relevant for the bypass.
}).createURL('http://notevil.com');

a.pathname='\nalert(1)';
a.protocol='javascript:';
a.click();

Full setup:

data:text/html,<meta http-equiv="Content-Security-Policy" value="trusted-types *"> <script src="https://wicg.github.io/trusted-types/dist/es6/trustedtypes.build.js"></script><a id=a>clickme</a><script>a.href = TrustedTypes.createPolicy(Math.random(), (p) => {p.createURL = (d) => d}).createURL('http://notevil.com');a.pathname='\nalert(1)';a.protocol='javascript:';a.click()</script>

Given that this can be abused via multiple properties, and each property supplies only a part of the URL, it's not clear what the fix might look like, short of disabling these APIs, or just disallowing changes to a.protocol (to javascript:, or entirely).

https://developer.mozilla.org/en-US/docs/Web/API/HTMLAnchorElement mentions that these properties are experimental.

bool exposed - argument to createPolicy or member of TrustedTypesInnerPolicy?

In the JS code:

/**
 * Creates a TT policy.
 *
 * Returns a frozen object representing a policy - a collection of functions
 * that may create TT objects based on the user-provided rules specified
 * in the policy object.
 * @param {string} name A unique name of the policy.
 * @param {TrustedTypesInnerPolicy} policy Policy rules object.
 * @param {boolean=} expose Iff true, the policy will be exposed (available
 *     globally).
 * @return {TrustedTypesPolicy} The policy that may create TT objects
 *     according to the policy rules.
 * @todo Figure out if the return value (and the policy) can be typed.
 */
function createPolicy(name, policy, expose = false) {
  const pName = '' + name; // Assert it's a string

  if (enforceNameWhitelist && allowedNames.indexOf(pName) === -1) {
    throw new Error('Policy ' + pName + ' disallowed.');
  }

  if (policyNames.indexOf(pName) !== -1) {
    throw new Error('Policy ' + pName + ' exists.');
  }
  // Register the name early so that if policy getters unwisely call
  // across protection domains to code that reenters this function,
  // the policy author still has rights to the name.
  policyNames.push(pName);

  // Only copy own properties of names present in createTypeMapping.
  const innerPolicy = create(null);
  for (const key of getOwnPropertyNames(policy)) {
    if (createFunctionAllowed.call(createTypeMapping, key)) {
      innerPolicy[key] = policy[key];
    }
  }

We have @param expose and @param TrustedTypesInnerPolicy, but we could make expose a member of TrustedTypesInnerPolicy and not need a separate param for it. Any thoughts?

Mechanism to constrain usages of unsafelyCreate

When applying safe-value types in large scale software projects at Google, we've found it to be essential (see "Background" below) that there is a reasonably strong and automated mechanism to constrain wide-spread usage of unchecked conversions from string into a safe-value type (i.e., calls to unsafelyCreate).

Requirements

  • It must be feasible for projects to surface reminders of coding and review guidelines to the project's developers if a commit / pull request introduces a new unsafelyCreate call site.

  • It must be feasible for projects to centrally maintain an enforced whitelist of all call sites to unsafelyCreate (such that it is impossible to commit code that introduces non-whitelisted call sites). This allows the project's security expert to review all new usages for correctness, and to recommend alternatives to call-sites that aren't strictly necessary (and therefore undesirable, since they dilute the reviewability of the codebase)

Third-party library concerns

Many libraries, such as jQuery, expose APIs whose inputs flow down to DOM injection sinks (e.g., jQuery's html(...) function flows down to .innerHTML). That is, these APIs effectively are injection sinks in their own right.

In the presence of TrustedTypes, these libraries should be refactored such that they themselves accept TrustedHtml etc and forward those values to the underlying DOM sink. However, if existing APIs perform any sort of transformations or construction of HTML markup on their own (this seems to be the case for jquery for instance), it's likely that the library's authors will resort to unsafelyCreate.

An application's developers and security reviewers then effectively have to trust the library's author to have used unsafelyCreate in a correct (and ideally, reviewable) way.

A particular concern is that third-party library authors might simply wrap all uses of DOM sinks with unsafelyCreate in order to preserve existing functionality with minimal effort. Unfortunately, this will also preserve all vulnerabilities in the library, and the risk of vulnerabilities due to use of XSS sinks exposed by the library's API.

Possible mechanisms to constrain usage

  • For JS source code that is subject to compilation a mechanism built into the compiler can be used, such as the Closure Compiler JS Conformance Framework.

  • In smaller projects a grep-based pre-commit hook might be sufficient; for this the only requirement is that unchecked conversion functions have reasonably unique names (to avoid false positives). unsafelyCreate seems reasonable, but it might be helpful to include the produced type name in the function name, i.e., unsafelyCreateTrustedResourceUrl. Since these functions should be rarely used, an unwieldy name is not a concern and is in fact desirable.

  • The above mechanisms will only constrain use of unsafelyCreate within the source of the application itself, but not in its third-party dependencies (unless they're checked into the same repository). In particular, they can't constrain use of unsafelyCreate in libraries that are script-src'd directly from a separate server.

    It might be desirable to have a browser-level mechanism to permit use of unsafelyCreate only in specific, whitelisted sources. Perhaps something along the lines of a CSP header trusted-types-src 'nonce-xyz123', such that unsafelyCreate calls are only allowed from within scripts blessed by that nonce. This would allow an application developer/deployer to ensure that unsafelyCreate calls only occur in their own, security-reviewed code, and not in third-party libraries.

    Whitelisting based on CSP source is rather coarse-grained. Ideas for more fine-grained whitelisting:

    • Some mechanism to whitelist based on call stacks (or call stack suffixes). Not clear where those would be specified.
    • Perhaps each call to unsafelyCreate requires an additional nonce argument: The CSP header specifies a nonce, trusted-types-unsafely-create-nonce abc123. Each call to unsafelyCreate must be provided with the correct nonce, TrustedResourceURL.unsafelyCreate('abc123', 'https://static.example.com/my-lib.js'). (of course this is only effective if the nonce cannot be directly obtained from JS). This makes calling unsafelyCreate rather cumbersome (but again, that's desirable). In particular, source that calls unsafelyCreate must be rewritten before serving to inject the nonce. This should also create an incentive for library authors to not call unsafelyCreate, but rather refactor code to accept TrustedTypes in their API.

Background

The key benefits of a types-based approach to preventing injection vulnerabilities are twofold:

  1. It replaces APIs that can potentially result in security vulnerabilities (i.e., injection sinks such as, .href or .innerHTML assignment) with APIs that are inherently safe (for instance, Closure's goog.dom.safe.set{Location,Anchor}Href and setInnerHtml, or, in this proposal the native href and innerHTML setters that only accept TrustedTypes). If an application only uses the DOM API via such mediated, inherently-safe APIs, application developers can no longer make mistakes in their use of the DOM API that result in XSS vulnerabilities.

  2. It makes it feasible for a security reviewer to arrive at a high-confidence assessment that an app is indeed free of (DOM) XSS vulnerabilities: She no longer has to audit every use of a DOM XSS sink throughout the app, and data flows therein; instead it's sufficient for her to audit only the code that constructs instances of the safe-value types (i.e., call sites of unsafelyCreate, and their fan-in).

IOW, the types-based approach replaces many XSS sinks (DOM APIs and properties), and typically many call sites throughout an application, with fewer sinks (unsafelyCreate) and typically few call sites.

To achieve its benefits both in terms of reducing actual XSS risk and facilitating efficient high-confidence security reviews, it is therefore crucial that call sites of unchecked conversions (i.e., unsafelyCreate) are,

  1. rare, and ideally only occur in stable, security-reviewed library code, and

  2. effectively reviewable (i.e. are not used in functions that in turn re-export an injection sink).

(corresponding guidelines on usage of unchecked conversions for SafeHtml types here, https://github.com/google/safe-html-types/blob/master/doc/safehtml-unchecked.md).

We've found that in practice, individual developers sometimes (often?) do not consider the impact of an unchecked conversion they're introducing on the emergent whole-system properties above (i.e., residual XSS risk and the feasibility of high-confidence security assessments). We therefore found it necessary to introduce static checks that disallow uses of unchecked conversions without security review (in our case, we rely on the package visibility mechanism in the bazel build system, and in some cases compile-time checks implemented in error-prone).

Anecdote:

The concept of safe-value types was first introduced in Google Web Toolkit. The GWT SafeHtml type offers an unchecked conversion fromTrustedString. Its documentation indicates that its use is strongly discouraged.

Nevertheless, a few years after introduction of this API, we found O(1000) call-sites across GWT-based application source. Almost all of these were unnecessary (in the sense that they could have been rewritten using inherently-safe APIs such as SafeHtmlTemplates), and several of them did indeed introduce XSS vulnerabilities.

Put safeguards around attribute nodes

There are two cases where moving a node from one parent to another might be problematic.

const div = document.createElement('div');
div.appendChild(document.createTextNode('alert(1)'));
const script = document.createElement('script');
while (div.firstChild) {
  script.appendChild(div.firstChild);
}

We need to be suspicious of append to <script> elements regardless, but there's also a problem with attributes.

const div = document.createElement('div');
const a = document.createElement('a');

div.setAttribute('href', 'javascript:alert(1)');
const attr = div.getAttributeNode('href');
div.removeAttributeNode(attr);

a.setAttributeNode(attr);

But what about when a node comes from one context to a similar context?

const a0 = document.createElement('a');
const a1 = document.createElement('a');

a0.setAttribute('href', policy.createURL('http://example.com'));
const attr = a0.getAttributeNode('href');
a0.removeAttributeNode(attr);

a1.setAttributeNode(attr);

Should we support this kind of transparent DOM restructuring?

Per-type enforcement

Right now all the types can be enforced as a group. This happens for good reasons:

  • comprehensive DOM XSS containment requires guarding all relevant sinks (but see #65, as containment does not always require having configurable policies)
  • it's simple for authors. You enforce policies to stop DOM XSS (via trusted-types CSP directive), or not.

However, it has one downside: the type list needs to be complete from the get-go, as adding new types would break existing TT sites. We can already see interesting candidates for the next batch of types - TrustedTemplate, or TrustedStylesheet, TrustedStylesheetURL - and if our approach turns out successful, I'm sure there will be more.

This issue is to explore how we can change the declarative enforcement syntax to accommodate potential new types.

Option 1: Enforce each type separately via keywords

trusted-types 'enforce-html' 'enforce-url' 'enforce-script-url'

Adding new types then becomes just adding new policy create* functions and a new keyword. The downside is that the secure setting requires you to know about all possible types, although we can simplify this and just assume that existing DOM-XSS related types are enabled by default if the trusted-types directive is present, leaving the enforce-* keywords for new, optional types (e.g. TrustedURL might be optional - see #65).

Option 2: Tie type enforcement into existing CSP directives

script-src 'require-trusted' <other scripty restrictions>; 
navigation-to 'require-trusted' <additional limits>; 
trusted-types my-policy dompurify-policy

Under this definition, script sinks would have to be typed, on top of their existing restrictions. Stylesheets, when introduced, would just add style-src 'require-trusted'. It's a bit clunky, as there's no 1:1 mapping between directives and types (e.g. there's no setting for TrustedHTML, and both TrustedScript and TrustedScriptURL are under script-src), and it ties the whole of TT quite deeply into the existing CSP syntax with all its problems.

Applications could live on the edge by specifying default-src 'require-trusted' to opt into all possible types.

Clarify in spec - JS 'this' in policy.createXXX()

Found and pointed out by yukishiino

In this part of the JS code:
function getHTML(s) { return this.foo + " " + s; }
let policy = window.trustedTypes.createPolicy('SomeName', { createHTML: s => getHTML(s) })

What would we like this to point to? Two possibilities:

  1. global object, i.e. window --> this.foo will refer to global var foo
  2. policy itself --> this.foo == policy.foo. This would also enable e.g. getting the policy name or other properties, like this:
    function myCreateHTML(input) { console.log(this.name); }
    let policy = window.trustedTypes.createPolicy('SomeName', { createHTML: s => myCreateHTML(s) })
    policy.createHTML('something')

This choice of behaviour should be explained in the specification as well.
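A sketch of what option 2 would mean, simulated in plain JS. makePolicy is a made-up stand-in, not the polyfill's implementation, and note that arrow-function rules would still capture their lexical this either way:

```javascript
// Option 2 simulated: invoke the user's rule with the policy object as
// the receiver, so `this.name` works inside an ordinary function.
function makePolicy(name, rules) {
  const policy = { name };
  policy.createHTML = (input) => rules.createHTML.call(policy, input);
  return Object.freeze(policy);
}

const policy = makePolicy('SomeName', {
  createHTML: function(s) { return this.name + ': ' + s; },
});
console.log(policy.createHTML('something')); // SomeName: something
```

Under option 1, the same ordinary function would instead see the global object (or undefined in strict mode) as this.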

Bypasses via attributes

In the Chrome version, when require-trusted-types is enabled

a = document.createElement('script')
a.src = 'http://foo.bar' // throws
a.setAttribute('src', 'http://foo.bar') // does not throw, and...
a.outerHTML == '<script src="http://foo.bar"></script>'

In the polyfill this is fixed, but this works:

a = document.createElement('script');
a.setAttribute('src', TrustedScriptURL.unsafelyCreate('http://foo.bar'));
a.attributes.src.value = 'http://evil.com'; // does not throw, and...
a.outerHTML == '<script src="http://evil.com"></script>'

`npm spec` eats translation errors

gulp spec and gulp spec.watch use npmjs.com/package/bikeshed-js which is nice since it falls back to the bikeshed webservice if you haven't installed bikeshed locally, but which eats errors.

Right now, gulp spec

  • dumps error output to the HTML target file instead of the console
  • silently eats non-zero response codes
  • makes it non-obvious how to specify strict flags

I ran

$ bikeshed --die-on link-error --print console spec spec/index.bs dist/spec/index.html

during local development.

Maybe rewrite the gulp file to just call out to bikeshed when it's available on PATH.

The bikeshed executable has a watch command so setting up the watcher to be equivalent shouldn't be super hard.

Add additional context to the default policy invocations

Hi. I just finished reading the description of the API and it looks very interesting.

I think the main disadvantage I find with it is that if you use a framework, you don't really have control over how these sinks are being set.

I had a similar issue with CSP, where jQuery and other libraries I was loading were injecting some code that broke the CSP policy, which basically meant it lost most of its value.

But reading this suggestion made me think: is it possible to just set a callback for all the sinks? So instead of having some fixed lock function that checks if a value is an instance of a Trusted Type, have a user-defined function.
I assume the header will have to be something similar to

Content-Security-Policy: trusted-set-method sinkPreSet

If the global object has a sinkPreSet method defined, it will be called before any set happens. This method will receive the sink, element, and value as arguments, and its return value will be the actual value that is set.

Having this control, the app developer can mix whitelists, blacklists, and trusted types depending on the context.

Now I can, for example, whitelist inline code my libraries inject, but require that all other values (coming from my own code) be instances of trusted types.

I can also decide to log / throw when a specific context fails to validate, so I can find the actual injection vector or badly set value instead of just silently rejecting it.
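A sketch of the proposed hook's semantics. The name sinkPreSet and the (sink, element, value) signature come from this proposal, not from any shipped API, and the script-tag check is just an example rule:

```javascript
// Called before any sink assignment; the return value is what actually
// gets set, so the hook can rewrite or reject values per context.
function sinkPreSet(sink, element, value) {
  if (sink === 'innerHTML' && /<script/i.test(String(value))) {
    // Example policy: reject markup containing script tags. A real hook
    // could also log the element here to find the injection vector.
    return '';
  }
  return value;
}

console.log(sinkPreSet('innerHTML', null, '<b>hi</b>'));          // <b>hi</b>
console.log(sinkPreSet('innerHTML', null, '<script>x</script>')); // (empty)
```

This mirrors the mix-and-match idea above: the hook sees the sink and element, so it can whitelist some contexts and require Trusted Types in others.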

Consider metadata API; building blocks for HTML sanitizers

Browser-side HTML sanitizers are typically implemented by

  • parsing HTML into an inert Document(Fragment)
  • transforming the resulting "untrusted" tree into a safe tree, by pruning or copying based on attribute/element white/black-lists.
  • re-serializing the tree into HTML (not necessary if the "clean" tree can be directly attached to the DOM).

Examples: https://google.github.io/closure-library/api/goog.html.sanitizer.HtmlSanitizer.html, https://github.com/cure53/DOMPurify.

The necessary whitelists are a fairly significant part of the sanitizer's code size (blacklists would be smaller, but blacklists are generally brittle); https://github.com/cure53/DOMPurify/blob/master/src/attrs.js, https://github.com/cure53/DOMPurify/blob/master/src/tags.js, https://github.com/google/closure-library/blob/master/closure/goog/html/sanitizer/attributewhitelist.js, https://github.com/google/closure-library/blob/master/closure/goog/html/sanitizer/tagwhitelist.js

TT introduces implicit knowledge of the security semantics of DOM attributes into the DOM API. It seems worth considering whether we can expose this information somehow, so that it can be used as the basis of an HTML sanitizer's policy, which could then avoid the code-size overhead of its own metadata tables.

Caveat: This will likely not obviate the need for sanitizer-specific whitelists/blacklists altogether: There are elements whose attributes don't require sanitization in a typical application's threat model (where the presence of the element in the DOM is application controlled, and only the attribute's value is potentially malicious); while the entire element is undesirable to keep when sanitizing entirely-untrusted HTML markup. E.g., <form> comes to mind.

Browser extensions vs Trusted Types

There are a couple of interesting questions regarding browser extensions and TT.

  1. Should browser extensions be able to create TT policies in their content scripts? If so, should they share the policy namespace with the website (i.e. can the extension content script race the website to create a policy with an allowed name), or have a separate one?

  2. If they are able to create policies, should they be affected by the policy names restrictions, or can they create one at will?

  3. Do extensions need to create TT policies, and then TT objects, to interact with the DOM XSS sinks, or should they be exempt from TT enforcement? To respect prior behavior, they should be exempt.

Facilitate creating trusted types from string literals

If an initial goal of this proposal was to restrict usages of trusted types to literal strings, unless an explicit escape hatch were used, I believe this would be possible using tagged template literals.

The syntax at the usage site would be something like,

TrustedURL`https://foo.bar`

Using tc39/ecma262#1350, the implementation of TrustedURL would check whether the template object passed into it was a "real" template object present in the program or not. Coupled with CSP, this would prove whether the string came from a tagged template in the author's program (but, it could be that a different tag was originally used).

Now that this proposal has been developed further, is there still interest in checking for literal strings? The new sanitizer policy direction seems great to me, but it seems like proving literal-ness would be a complementary benefit.
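The literal-ness check can be approximated in today's JS; a sketch follows. TrustedURLTag is a made-up tag name, and a determined caller can still forge a template-shaped object, which is exactly the gap tc39/ecma262#1350 would close:

```javascript
// Accept only arguments shaped like a real (frozen) template object and
// forbid interpolation, so only code like TrustedURLTag`...` passes.
function TrustedURLTag(strings, ...substitutions) {
  const looksLikeTemplate = Array.isArray(strings) &&
      Object.isFrozen(strings) && Array.isArray(strings.raw);
  if (!looksLikeTemplate || substitutions.length > 0) {
    throw new TypeError('TrustedURL requires a string literal');
  }
  return strings[0];
}

const url = TrustedURLTag`https://foo.bar`; // 'https://foo.bar'
// TrustedURLTag('https://foo.bar') throws: not a template object.
```

Coupled with CSP, an engine-level "is this a real template object" check would prove the value came from a literal in the author's program.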

open for extension in createPolicy last arguments

Hi

TrustedTypePolicy createPolicy(DOMString policyName, TrustedTypeInnerPolicy policy, optional boolean expose = false);

Rather than an optional boolean, it seems better to use an object, for future extension.

For example, addEventListener had the same problem when adding new options.

// before
document.addEventListener('touchstart', handler, true);

// after
document.addEventListener('touchstart', handler, {capture: true});

If you need feature detection, you have to do something like this:

var supportsPassive = false;
try {
  // define a getter for opts.passive
  var opts = Object.defineProperty({}, 'passive', {
    get: function() {
      // if called, the options object for the 3rd argument is supported
      supportsPassive = true;
    }
  });
  // check whether the options object is supported
  window.addEventListener("test", null, opts);
} catch (e) {}

To avoid repeating this tragic history, it's better to make the last argument an object, even if you currently don't plan any option other than expose.

Like this:

TrustedTypePolicy createPolicy(DOMString policyName, TrustedTypeInnerPolicy policy, optional TrustedTypeOption option);

interface TrustedTypeOption {
  boolean expose;
}
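Under the proposed object form, support for a new option could be feature-detected with the same getter trick as addEventListener. A sketch against the proposed (not the current) signature, with a stub standing in for createPolicy:

```javascript
// Stub that reads option.expose, standing in for a createPolicy with
// the proposed options-object signature (not the shipped API).
function createPolicy(policyName, policy, option) {
  const expose = option ? Boolean(option.expose) : false;
  return Object.freeze({ name: String(policyName), exposed: expose });
}

// Getter-based detection: if the implementation reads option.expose,
// the getter fires and we know the option is supported.
let supportsExpose = false;
const probe = Object.defineProperty({}, 'expose', {
  get: function() {
    supportsExpose = true;
    return false;
  },
});
try {
  createPolicy('detect', {}, probe);
} catch (e) {
  // Older implementations might reject the extra argument shape.
}
console.log(supportsExpose); // true with this stub
```

This is the payoff of the options-object shape: new options stay detectable without changing the function's arity again.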

Support application-specific sanitizers / type builders

Extracted off #7, by @xtofian:

[...] There isn't a universal definition of "safe" attached to the types as per the spec, but there certainly is a notion of "safe" that developers/reviewers would attach to these types in the context of their specific code base. The definition of "safe" is whatever makes sense in the context of their app, and is embodied in the implementation of builder/producer APIs that create values of TrustedTypes through use of unsafelyCreate.

[...]

One implication that falls out of this is that any factory functions that are defined in the spec (e.g. TrustedURL.sanitize) must be designed such that the values they produce are within any conceivable notion of "safe" for their corresponding sinks, across applications. This is not necessarily trivial. For instance, should TrustedURL.sanitize accept tel: URLs? On one hand, they're certainly benign with respect to XSS. However, there are contexts where they are problematic (or at least used to be -- e.g., within webviews in iOS, see discussion at golang/go#20586 (comment)). And, it's not even clear that the standard should permit arbitrary http/https URLs. For instance, an application might want to ensure that it only renders hyperlinks that refer within the scope of this application itself, and that links to external sites go through a redirector that performs real-time phishing detection or some such.

This indicates that perhaps the spec shouldn't provide for factory methods for these types at all. This might be a quite reasonable direction: I'd expect these types in the currently spec'd form (with very basic factory functions) to not be all that useful on their own.

To be practically useful, these types will need factories/builders for many common scenarios, e.g.:

  • composability of SafeHtml/TrustedHTML (i.e. the property that for all s, t : SafeHtml, s + t is also in SafeHtml), and a corresponding function that concatenates TrustedHTML (see e.g. SafeHtml.create).
  • factory functions that create HTML tags with tricky semantics, accounting for browser-specific quirks (example); of course this won't be an issue in spec-compliant browsers that implement TrustedTypes, but will have to be accounted for in libraries that support other browsers via polyfill of those types.
  • special case functions for safe-URL types, e.g. to create a blob: URL for a Blob, but only if it's of a MIME type that is considered to not result in content-sniffing issues (e.g., SafeUrl.fromBlob).
  • factory functions for TrustedScriptURL that choose a particular tradeoff between strictness and expressiveness (e.g. TrustedResourceUrl.format).
  • etc.

Of course, all of these factory functions can be replaced with ad-hoc code that relies on unsafelyCreate. However, that's very undesirable since it'll result in a relatively large number of such calls throughout the application source code, which will significantly erode the reviewability benefits.

I.e., I'd expect these types to be used as a core building block of frameworks and libraries that endow them with richer semantics (and corresponding builders and factories), such as the Closure SafeHtml types and their builders, or the SafeHtml types in Angular.

(aside: the one factory function in the current draft spec that's reasonably uncontroversial is TrustedHTML.escape -- escaped text should be within the notion of "safe for HTML" in the context of any conceivable application; however it's also not particularly useful in practice -- applications should just assign to textContent instead of using innerHTML with TrustedHTML.escape).
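The escaping such a factory performs is small enough to sketch (a minimal illustration; `escapeHtml` is an invented name, not a spec API):

```javascript
// Replace the five HTML metacharacters so the result always parses as
// text, never as markup -- the contract TrustedHTML.escape would need.
function escapeHtml(s) {
  return String(s)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#39;');
}

escapeHtml('<img src=x onerror=alert(1)>');
// -> '&lt;img src=x onerror=alert(1)&gt;'
```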

How to handle <template>?

Client-side templating systems (such as Polymer, Angular, Knockout, etc.) generally make the assumption that the elements and attributes that make up a template are trustworthy. This is primarily because the contents and attributes of elements underneath templates may contain template expressions, which are evaluated as code (see e.g., https://angular.io/guide/security#angulars-cross-site-scripting-security-model -- Angular happens to not use <template> but is the one template system that actually documents this assumption).

Thus, while el.textContent = untrusted is generally harmless, it is a potential vulnerability if el is a child/grandchild of HTMLTemplateElement.

It's not clear what set of types make sense for element contents/attributes of children of a <template>; I suspect this depends on the semantics of the template system / framework that interprets them. Furthermore, it seems generally undesirable for frameworks/application to rely on runtime-manipulation of templates (it's indicative that many such frameworks suggest use of a compilation step for production deployments, e.g., https://www.polymer-project.org/2.0/toolbox/build-for-production, https://angular.io/guide/security#offline-template-compiler).

With that in mind, it might make sense to simply make children/grandchildren of HTMLTemplateElement completely unmodifiable?
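A toy sketch of why template subtrees are sensitive (stampTemplate and its {{...}} dialect are invented for illustration; real template systems differ in syntax, but the shape of the hazard is the same):

```javascript
// Toy template-expression evaluator: {{...}} is evaluated as code
// against a scope object. This is what makes text inside a template
// fundamentally different from plain text content.
function stampTemplate(templateText, scope) {
  return templateText.replace(/\{\{(.+?)\}\}/g, (_, expr) =>
      Function('scope', 'with (scope) { return (' + expr + '); }')(scope));
}

stampTemplate('Hello {{name}}', {name: 'world'}); // -> 'Hello world'

// el.textContent = untrusted is harmless on a normal element, but if
// the same string later ends up inside a stamped template, the
// injected expression executes:
stampTemplate('{{constructor.constructor("return 6*7")()}}', {}); // -> '42'
```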

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper integration’s white list on Github. You'll find this list on your repo or organization’s settings page, under Installed GitHub Apps.

Figure out what to do with cross-document interactions

(This is somewhat related to #47 and #49)

There are certain ways to obtain a reference to a different Document object in the platform, for example document.implementation.createHTMLDocument or an XMLHttpRequest with a document response type.

Coincidentally, some of these methods are used by HTML sanitizers (e.g. https://github.com/cure53/DOMPurify), as they're the most convenient way of sanitizing on the client side.

If documents created by the above methods are not restricted by the policies of the current document, it's possible to use the string-based DOM APIs there (potentially injecting untrusted data) and later attach the produced DOM nodes to the main document, effectively bypassing the restrictions of the policies.

It's possible to address that in several ways:

  • acknowledge, but don't prevent it. Policies are per-document, document creation functions are relatively rare and should be subject to a security review by other means than TT enforcement.
  • make the policies propagate to newly created synthetic documents. CSP propagation rules state, roughly, that CSP propagates to <iframe srcdoc> and about:blank documents, and it makes sense to tie this behavior to those rules. Propagating the policies to other types of documents (e.g. a same-origin <iframe src=/>) is obviously not a good idea.
  • guard document creation APIs: make methods like createHTMLDocument throw on document creation if TT is enabled. Expose the original implementation of the document producers within the policy (such that, e.g., a sanitizing policy could pass the document-creating function, or even a new document instance, to a sanitizer library). This might be hard to polyfill for all cases, especially XHR, and has the potential to break applications in additional ways, but making a sanitizer policy call CustomSanitizer(dirtyString, dirtyDocument) does sound like an elegant API to have, and would simplify existing sanitizers' code.

Semantics and naming of types

Naming

"Trusted" doesn't quite seem the right term to use. In many cases, the value of those types will be entirely or partially derived from untrusted data, however the values will be known to be safe to use in the destination (sink) context due to appropriate validation/sanitization/escaping that has been applied to the original value. For instance, in

var u = TrustedURL.sanitize(untrustedInput)

the string value of u will equal the string value of untrustedInput (i.e. consist entirely of a value from an untrusted source), if untrustedInput passes validation (e.g., is a well-formed http URL).

Of course, in some cases a value can be established to be safe due to its trustworthy provenance (e.g. a string consisting of a javascript:... URL can be treated as a SafeUrl if it is known to come from a trusted source), but that's usually just one way of establishing the type contract.

Semantics and type contracts

The type contracts used in Closure have a number of unresolved issues, which stem from the inherent complexity of the web platform.

In the setting of a particular application domain, we have largely been able to ignore or gloss over such issues; however, for the purposes of a standard they should presumably be kept in mind.

For example:

  • Types are implicitly indexed by the origin their values will be interpreted in: A value being "trusted" or "safe" to use as, say, HTML markup, is relative to the origin it will be evaluated in. In practice, this usually doesn't matter much since we typically deal with the code base of a single application that will be served in a single origin. However, some applications involve documents or frames that are served in separate origins and, for instance, communicate via postMessage. At this point, the trust relationship between components becomes relevant.

  • The TrustedResourceUrl (TrustedScriptURL) contract is somewhat subtle: Nominally, this type refers to the set of URLs that refer to resources that are trusted to be evaluated / executed in the application's origin. However, it is in practice difficult to firmly establish this property, since it is difficult for a runtime-library or a build-time toolchain to reason about the trust relationship between the application in question and the resource served by a given URL.

    In Closure's TrustedResourceUrl, we essentially use the property that the URL was constructed under "reasonably rigorous" application control (e.g., constructed from a compile-time-constant prefix with optional properly escaped path fragments and query parameters; see TrustedResourceUrl.format) as a proxy for the property that the URL refers to a trustworthy resource. This reasonably prevents bugs due to code that permits unintended injection into path fragments that might cause the URL to point to an unintended resource (e.g. via a redirector). But the TrustedResourceUrl type's constructors do not truly ensure that its values are in fact URLs pointing to trustworthy resources.

    Whether or not this approach is reasonable depends on the application domain and organization maintaining the code; other implementations might be preferable in certain settings (e.g. a centrally maintained registry of URLs that are indeed known to point to trustworthy resources).

    Similarly, in some settings it may be reasonable to assume that all URLs within the application's origin are serving trustworthy resources (i.e. any URL in the same origin, including any path-absolute URL, can be considered a TrustedScriptURL). This is convenient, since this property can be checked by a simple runtime predicate. However, this assumption is unsound if there's an open redirector anywhere in the origin.

  • It is unclear what properties should be included in the SafeHtml/TrustedHTML contract: Clearly, this contract should imply that the value does not result in execution of untrustworthy script when evaluated as HTML (e.g. by assigning to the innerHTML property). It's less clear if the contract should also require the rendered HTML will remain visually contained (i.e. does not make use of absolute-positioning styles). This property is necessary to prevent social-engineering attacks if sanitized, but originally untrustworthy HTML is rendered embedded within a web UI (for instance, a HTML email message rendered in an email UI must not be allowed to overlay/alter the UI of the application itself). However, it is not necessary if such attacks are not relevant or mitigated through other means in a particular application.
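The "compile-time-constant prefix plus properly escaped parts" discipline described above for TrustedResourceUrl can be sketched as follows (function and parameter names are illustrative, not the Closure API):

```javascript
// Build a resource URL from an application-controlled constant prefix
// and escaped query parameters, so untrusted input can only influence
// the (escaped) query string -- never the origin or path.
function resourceUrlFromParts(constPrefix, params) {
  const query = Object.entries(params)
      .map(([k, v]) => encodeURIComponent(k) + '=' + encodeURIComponent(v))
      .join('&');
  return constPrefix + (query ? '?' + query : '');
}

resourceUrlFromParts('https://static.example.com/app.js',
                     {v: '1.2', path: '../../evil'});
// -> 'https://static.example.com/app.js?v=1.2&path=..%2F..%2Fevil'
```

Note that, as the text above points out, this guards the shape of the URL, not the trustworthiness of the resource it ultimately serves.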

Also worth considering is the granularity of the types: In Closure, safe-content types are relatively coarse grained. For instance, the same type (TrustedResourceUrl) is used to represent URLs that are interpreted as script (<script src>), style (<link rel=stylesheet href>), and embedded content (<iframe src>). This choice was made to reduce the number of types to a manageable vocabulary, and has generally worked out reasonably well in practice. However, it is not clear if this tradeoff between complexity of type vocabulary and expressiveness is appropriate in the general case.

In particular, using an untrusted resource as the src of a (non-sandboxed) <iframe> has obviously much less damage potential (largely limited to social engineering / phishing attacks) than using it as a script source.

Proposal: "value neutral" type contracts

Thus, it seems rather difficult if not impossible to come up with precise specifications of type contracts that prescribe what values are "safe" or "trusted" for a given sink context, such that these specifications generalize well and don't make unwarranted assumptions about the particular application domain or development frameworks.

With that in mind, it may make sense to essentially dodge all these issues and define types and their contracts not in terms of safety/trustedness, but rather simply to characterize the semantics of the specific sink that will interpret the value, and do so at a fairly granular level. (Granularity seems to already be the intent; the proposal states "Introduce a number of types that correspond to the XSS sinks we wish to protect.") IOW, "value neutral" type contracts that focus on how the value will be used, rather than whether or not it is safe.

I.e., types such as

  • HyperlinkURL -- URL that is interpreted as a hyperlink (<a href>, as well as possibly <link rel=author href>)
  • MediaURL -- URL that refers to media (<img src>, etc)
  • ScriptSrcURL -- (<script src>)
  • StyleSrcURL -- (<link rel=stylesheet href>)
  • HTMLImportSrcURL -- (<link rel=import href>)
  • SameOriginHTML -- (HTML sink that will interpret and render the markup in the current origin, i.e. .innerHTML assignment, except if the element is a <template> element)
  • ...

One downside of this "value neutral" type naming is that the security implications of creating values of these types are less prominent. However, instances of the types can only be created via the .unsafelyCreate factory function, which conveys the security aspect (and in larger projects will typically be usage-restricted to ensure security review).

Explore granular enforcement of types

So far, TT are enforced per document - this is controlled by CSP. This issue is to explore whether more granular control is possible. This might serve two goals:

  1. Facilitate adoption - e.g. some 3rd party widgets (in separate scripts) may be outside of the application's control, but introduce a low enough risk of actually introducing XSS, as estimated by the application owner.
  2. Control unsafe type creation (see #31). For certain applications, it's desirable to enable the type enforcement globally (per document), and introduce even stricter policies for most of the application code - i.e. only allow unsafelyCreate in certain code locations (e.g. the ones defining the safe wrappers over unsafelyCreate).

So far, a couple of mechanisms of doing so have been proposed:

  1. CSP source expressions - some examples:
  • require-trusted-types 'nonce-xyz123' could enable the enforcement for scripts with that nonce. require-trusted-types * would enable globally. This seems awkward, as the wider the whitelist, the more secure the system is, which is opposite to how other CSP directives work.
  • require-trusted-types; unsafe-disable-trusted-types: http://twitter.com/script.js seems more fitting.
  2. "use trusted-types" - just like strict mode. This one is probably quite invasive in the language, but it could enable very precise script-level (or even function-level) enforcement.

Override `.innerHTML`, or define `safelySetInnerHTML(...)`?

It might be difficult to audit sink assignments (e.g. el.innerHTML = whatever;), as bad usage looks exactly like good usage. The auditor needs to have knowledge of the context ("Is the restrictive flag set?") in order to know whether or not a given usage is safe.

Perhaps introducing a safe variant of the sink would be better: el.safelySetInnerHTML(whatever) or something similar.

Figure out if TrustedXYZ objects should be Transferable

The Transferable interface allows an object to be transferred between different execution contexts. Allowing TT to be Transferable would make it possible to e.g. create objects in a worker and consume them in a main document. However, this would also make it possible to send them over postMessage channels. Obviously, this is undesirable, unless we track which policies created those objects (policies are per realm).

Initially, I think TT should NOT be Transferable, but leaving this open in case some use cases arise.

Handling of dependent types

There are attributes where the required security type contract of the assigned value depends on the value of another attribute. Notably, the <link> element's href attribute requires a type that represents a URL that references a trustworthy resource (TrustedResourceUrl in Closure; similar to TrustedScriptURL in this proposal), if the element's rel attribute has a value of import or stylesheet. For other values of rel (e.g., author) the href attribute is interpreted as a plain hyperlink, and hence a weaker contract suffices ("following the link / navigating to the URL does not result in any undesirable side effects such as script execution").

This is a rather unfortunate design but we're presumably stuck with it.

In particular, we need to account for the possibility that the attribute that the type depends on (rel) is changed from a value that requires the weaker contract to one that requires the stronger contract after the dependently-typed attribute (href) itself has been set.

In Closure, this is accounted for by a typed wrapper (goog.dom.safe.setLinkHrefAndRel) that sets both attributes at the same time, and dynamically enforces the appropriate type contract, combined with disallowing direct assignment to either attribute (rel or href) in application source.

In a native implementation of typed setters such a combined setter is likely undesirable. However, the issue could be addressed by changing the behavior of the rel attribute setter to clear the href attribute's value on assignment to the rel attribute (or perhaps preferably, to throw an exception if rel is assigned when href already has a non-empty value). The setter for the href attribute can then dynamically enforce the appropriate type contract depending on the rel attribute's actual value.
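The dynamic contract check described above can be sketched like this (modeled on goog.dom.safe.setLinkHrefAndRel; the Stub class and the plain-object "link" are stand-ins for the real DOM types):

```javascript
// Stand-in for a trusted script/stylesheet URL wrapper type.
class TrustedScriptURLStub {
  constructor(url) { this.url = url; }
  toString() { return this.url; }
}

// rel values whose href is fetched and evaluated in the document's
// origin, and therefore requires the stronger contract.
const STRICT_RELS = ['import', 'stylesheet'];

function setLinkHrefAndRel(link, url, rel) {
  // Setting both attributes together lets us pick the required type
  // contract based on the rel value actually being set.
  if (STRICT_RELS.includes(rel) && !(url instanceof TrustedScriptURLStub)) {
    throw new TypeError('rel="' + rel + '" requires a TrustedScriptURL');
  }
  link.rel = rel;
  link.href = String(url);
}

const link = {rel: '', href: ''}; // stand-in for an HTMLLinkElement
setLinkHrefAndRel(link, new TrustedScriptURLStub('/app.css'), 'stylesheet');
setLinkHrefAndRel(link, 'https://example.com/', 'author'); // weaker contract, OK
```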

Consider implicit node / subtree adoption

8.2 DOM Core says

These are the changes made to the features described in DOM Level 3 Core.

...

Nodes are implicitly adopted across document boundaries.

IIUC, before, taking a DOM subtree from a <template>'s or a same-origin <iframe>'s content document and moving it into the DOM required an explicit document.importNode step.

That now happens auto-magically.

This means that appendChild and related methods can now take nodes from different Realms' documents.

Do we need to guard against DOM subtrees that were created using different policy sets?

May depend on how we address #42

Is it possible to lock things down to string constants?

It would be lovely if we could have a CreateFromConstant method that would allow

Whatever.createFromConstant("https://example.com/")

but deny

var notAConstant = "https://example.com/";
Whatever.createFromConstant(notAConstant);
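One way to approximate "constants only" in JavaScript is a tagged template: the literal parts arrive as a frozen strings array that only a template literal in source code produces (a sketch; a determined caller can still forge such an array, so this is a review-time guard, not a hard security boundary):

```javascript
// Accept only calls of the form fromConstant`...` with no ${...}
// substitutions; reject ordinary function calls with runtime strings.
function fromConstant(parts) {
  if (!Array.isArray(parts) || !Object.isFrozen(parts) ||
      parts.length !== 1 || !Array.isArray(parts.raw)) {
    throw new TypeError('fromConstant must be called as a template tag');
  }
  return parts[0];
}

fromConstant`https://example.com/`;   // -> 'https://example.com/'

const notAConstant = 'https://example.com/';
// fromConstant(notAConstant);        // throws TypeError
```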

Investigate what to do with <link>.

<link>s mean different things depending on the rel attribute value. Some may directly cause script execution (import), some are interesting security-wise (stylesheet), and some not so much (dns-prefetch).

It might be we can't come up with a single type to accommodate for that.

Polyfill: modularize the code

Use ES6 modules or, if not possible, goog.module in the polyfill. Explore if it's possible to enable ADVANCED_OPTIMIZATIONS for the polyfill to make the binary smaller. So far the problem was in the property renaming.

Consider dropping TrustedURL

We started integrating the application with the polyfill. So far the biggest obstacle is the TrustedURL enforcement. It turns out linking to other content is common on the web (who knew?).

Guarding how an application produces URLs is important for XSS prevention because of the presence of scriptable schemes like javascript: (same-origin) or data: (cross-origin). Historically, there were also other URL schemes, like jar:, that could result in an XSS.

It's also desirable for non-XSS related reasons. Some examples:

  • navigating to custom URL handlers might initiate actions harmful to users (e.g. tel: (https://tools.ietf.org/html/rfc3966#section-11), facetime:, Android intents, chrome-extension://)
  • some applications may want to prevent directing their users to external sites
  • defacement / spoofing risks when loading subresources from 3rd party sites
  • the user's IP address, together with other fingerprintable data, may be disclosed when an http: resource is fetched from an https: document
  • creating iframes from malicious URLs may enable exploitation of other bugs (e.g. when a postMessage channel is established without verifying peer origins)
  • loading subresources from different origins into the same renderer process might enable data exfiltration via Spectre-like bugs
  • attacker-controlled stylesheets may exfiltrate data from the document
  • pointing a form to an external URL via action or formaction attributes may exfiltrate data (usually credentials)
  • a <base> URL might be used to hijack all relative links.

However, in practice, for DOM XSS prevention alone it's enough if following the URL will not execute a script in a same-origin document. The check for this is simple: after parsing and absolutizing the URL, make sure its scheme is not javascript:. In other words, no user-controlled sanitization is required for a.href and other standard URL sinks. It's enough if the document behaves as it would under script-src * without the unsafe-inline keyword.

In that spirit, we might simply drop the TrustedURL type and mandate that, under TT enforcement, the host environment disables javascript: URLs (as if there were a script-src without unsafe-inline). Such behavior is polyfillable (e.g. URL may be used to correctly parse a URL and extract the protocol). This offers a very simple DOM XSS containment setting (only policies may introduce DOM XSS):

Content-Security-Policy: trusted-types [policy-list]

Authors could still introduce additional restrictions to URLs via *-src directives or navigate-to to address risks other than DOM XSS via URLs.
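The javascript:-scheme check described above can be polyfilled with the standard URL parser in a few lines (a sketch; the base URL here is a stand-in for the document's actual URL):

```javascript
// Parse the candidate against a base (absolutizing relative URLs the
// way a sink would) and reject the javascript: scheme.
function isSafeUrl(value, base = 'https://app.example/') {
  let url;
  try {
    url = new URL(value, base);
  } catch (e) {
    return false; // unparseable input is rejected
  }
  return url.protocol !== 'javascript:';
}

isSafeUrl('/relative/path');        // -> true
isSafeUrl('javascript:alert(1)');   // -> false
isSafeUrl('  JavaScript:alert(1)'); // -> false (parser normalizes the scheme)
```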

Polyfilling HostEnsureCanCompileStrings

Collecting issues for polyfilling https://wicg.github.io/trusted-types/dist/spec/#string-compilation

Modify HostEnsureCanCompileStrings algorithm, adding the following steps before step 1:

IIUC, this is meant to require TrustedScript inputs to new Function and eval.

We would need to change the behavior of new Function(x) without changing basic identities like (function () {}) instanceof Function.

https://gist.github.com/mikesamuel/4f3696975ec47d09de50dc3f4328dfe9 does that by creating a proxy over Function that traps [[Construct]] and [[Apply]].


The polyfill should also preserve the eval function / eval operator distinction from:

  • window.eval does PerformEval with direct=false
  • eval uses direct=true
    and direct affects whether the lexical environment (step 9) is the top of the stack or the global environment.

This difference can be seen in

const x = 'global';
function f(direct) {
  const x = 'local';
  return direct ? eval('x') : window.eval('x');
}
f(true) // -> 'local'
f(false) // -> 'global'

This is trickier because a bare eval(...) is only the eval operator when step 4.a of the bare function call finds that eval in scope is the same as the Realm's original eval value:

If SameValue(func, %eval%) is true

We could fake that by reassigning global.eval while being careful to not open a window for recursive calls to eval in the script body by prepending something to the script body, but that would still not get the wrapper function off the top of the environment stack.

Understanding the difference between standalone CSP and Trusted Types

Hi there, I've been browsing through the spec and source code (and implementing a policy on a local project) for an entire day. The proposal is very interesting, but there are a couple of things that aren't clear to me:

  1. Why exactly do we need TrustedTypes when we have CSP?
  2. How are TrustedTypes different from CSP, what can you accomplish with them that you can't otherwise?

Let's say, based on the list of policies for script-src, I could protect my domain against execution of any kind of script in my DOM by providing a 'nonce-' for my domain inside my CSP header. Same goes for origins of images, iframes, etc.

I'm struggling to understand if Trusted Types are an alternative to CSP or a complementary approach.

Could you please clarify it for me?

Many thanks in advance.

Figure out if TrustedURL needs to be absolute.

If we absolutize TrustedURLs on creation, this might be backwards-incompatible, as String(TrustedURL.unsafelyCreate(foo)) !== foo. On the other hand, relative URLs may actually be well-formed absolute URLs (http://foo.bar is a valid URL path IIRC, and so is //foo.bar/), which may cause obvious problems when assigned to a sink.
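Both hazards can be seen with the standard URL parser (the base URL below is an illustrative stand-in for a document URL):

```javascript
const base = 'https://app.example/dir/';

// A scheme-relative URL looks relative but jumps to another host:
new URL('//attacker.example/x', base).href;
// -> 'https://attacker.example/x'

// And absolutization is not a no-op, so a TrustedURL created from
// 'a.html' would not round-trip through String():
new URL('a.html', base).href;
// -> 'https://app.example/dir/a.html'
```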

Expose information on status of TrustedTypes enforcement

Something like window.trustedtypes.isEnabled(): bool.

This would be useful in a number of scenarios:

  • Frameworks that support strict contextual escaping and have their own notion of types (e.g. Angular, or Polymer with polymer-resin) may want to turn off their own mediation of assignments to DOM properties (e.g. Angular's DomSanitizer) and rely on the platform-provided mediation.

  • Existing libraries might have to suppress (legacy) functionality that's incompatible with TrustedTypes. E.g. jquery seems to be doing some HTML markup manipulation, which is potentially prone to security bugs and presumably should be disabled in a TrustedTypes-enforced app. The library could check at run-time whether or not TrustedTypes are enforced and alter behavior accordingly.

  • In scenarios where JS code is compiled and optimized, it's desirable to elide framework code that implements features provided by the platform. For instance, polymer-resin implements mediation (i.e., context-aware sanitization) of values flowing into DOM XSS sinks via Polymer templates. It does so by hooking into Polymer to interpose a context-aware value sanitizer into template data bindings. The sanitizer consists of a fair bit of code and metadata to determine how a value should be sanitized based on the destination context. With TrustedTypes enforced by the browser, most of this code should be unnecessary, and it'd be desirable to not ship it to browsers.

    In a compiled setting, this can be accomplished via some form of preprocessing; e.g. using Closure compiler's @define mechanism combined with dead-code elimination. I.e., we might define:

    /** @define {boolean} */
    platform.HAS_TRUSTEDTYPES = false;
    

    and then condition framework code on this flag. E.g. polymer-resin would essentially have two implementations distinguished by this flag, a full one (for !HAS_TRUSTEDTYPES) and a light-weight one that assumes TrustedTypes. When compiled with --define='platform.HAS_TRUSTEDTYPES=true', the bulk of the polymer-resin implementation would be compiled away.

    This however raises a configuration risk: If the web server were to ship the HAS_TRUSTEDTYPES JS code into a context where TrustedTypes are not actually enabled (e.g. b/c the header was not sent), the client is in an insecure configuration.

    To guard against this, we can include a safety check somewhere in the initialization code, along the lines of,

    if (platform.HAS_TRUSTEDTYPES && !window.trustedtypes.isEnabled()) {
      // fail noisily and decisively; prevent further initialization of the app;
    }
    

Allow guarding (dynamic) module imports - a type for module specifiers

Script modules are here, i.e. there is a way of loading additional code from a URL into your application (previously you'd just append a new script, or eval). These come in a few flavors:

Flavor          | Syntax                      | Status
standard import | import {foo} from 'url'     | ECMA standard
dynamic import  | import(foo).then()          | TC39 stage 3
HTML modules    | import {foo} from 'a.html'  | proposal

These new code loading methods introduce a few interesting challenges:

  1. There's no built in mechanism to add metadata to module names (~URLs). For example, there's no way to pass a nonce or a hash explicitly.
  2. Apart from the dynamic import, there's no way to change what's being imported at runtime. Imports are parsed, loaded and linked before the execution of the module happens.

On the CSP side of things, this realistically means that only the script-src whitelist can somehow guard which modules are allowed to run (example problem). I'm not sure if and how Trusted Types can offer something here as well, given that they are a runtime feature.

Mentioning a related issue as well, as it seems these concerns (code isolation, module limitations) are raised by other folks as well.

cc @arturjanc who raised the issue.

TrustedHTML.unwrap vs TrustedHTML.prototype.toString

It's unsound to accept a TrustedType as an input to a builder API without performing a runtime check.

E.g.,

class TrustedHTMLs {
  /**
   * @param {TrustedHTML} h1
   * @param {TrustedHTML} h2
   */
  static concat(h1, h2) {
    return TrustedHTML.unsafelyCreate('' + h1 + h2);
  }
}

is incorrect because there's no guarantee that the run-time types of h1 and h2 are indeed TrustedHTML at a given call site (this is the case even if the code is compiled and statically type-checked, since the Closure and TypeScript type systems are unsound).

To address this concern, we made two design choices in the Closure SafeHtml types:

  • The type's toString method returns a debug string that includes a type name. This helps prevent code that needs the string-value of an instance of the type from relying on toString().
  • There's a static method SafeHtml.unwrap that performs a run-time type check of its argument.

With that, the correct way to write a concat method is,

  static concat(h1, h2) {
    return TrustedHTML.unsafelyCreate(
        TrustedHTML.unwrap(h1) + TrustedHTML.unwrap(h2));
  }

If we were to follow the capability/factory-based approach to controlling access to unsafelyCreate (proposed in #33 (comment)), it may make sense to provide the unwrap functions as part of the factory as well, to allow trusted-types builder code to be more confident that it's calling the genuine unwrap method.

spec: maybe use `<wpt>` to link spec sections to test suite

https://tabatkins.github.io/bikeshed/#wpt-element says

When writing tests, you can sometimes link to the section of the spec that you’re testing, to make it easier to review. But when you’re actually reading (or updating!) the spec, you can’t tell what sections are tested or untested, or what tests might need to be updated due to a change you’re making. The <wpt> element lets you easily link to your WPT testcases inline, right next to the text they’re testing, and helps make sure that the testsuite and the spec are kept in-sync.

The <wpt> element is a block-level element for parsing purposes; place it after or between paragraphs/etc. The contents of the element are line-based: each line contains a single test path (the path under the WPT repo, so not including the domain or the /web-platforms-tests/wpt/) pointing to the test covering some nearby text.

and later

If you want to produce a test-annotated version of the output, specify the WPT Display metadata with the value "inline"; all of the <wpt> elements will become usefully-formatted lists of their contained tests, with links to wpt.fyi, the live test on w3c-test.org, and the source code on GitHub.

Fallback policy support

A fallback policy is a single, exposed TT policy in the realm that gets called implicitly when a DOM sink is used with a string. Example:

TrustedTypes.createPolicy('fallback', (p) => {
  p.createHTML = (s) => reallyConservativeSanitizer(s);
  ...
  p.expose = true;
});

// legacy code, e.g. in a widget
domElement.innerHTML = 'a string'; // will call 'fallback' policy underneath.

Fallback policies fit well into the generic TT design, but it's unclear yet whether we should support them.

Pro:

  • Substantially facilitates integration with existing codebase, especially when non-controlled code is used in a website (widgets, 3rd party libraries). For example, Google Analytics inserts a tracking image, and it's hard to expect Analytics to change.
  • Trivial to polyfill (it was introduced in #46, now disabled by default)

Con:

  • Strong incentive to make the fallback policy liberal. It has to support all possible, functionally valid DOM interactions of all the runtime dependencies. In practice reallyConservativeSanitizer might break most applications, and the easiest solution is to make the fallback policy a no-op (s) => s. This might become the equivalent of unsafe-inline - easy to adopt, but not improving the security posture.
  • As with all exposed policies, it's harder to reason about its security, as its usage is spread throughout the whole application code. A liberal policy might not introduce vulnerabilities, but one can't tell that by looking at the policy alone.
  • No mechanism to control the usage of this policy (it has to be exposed)
  • Difficult to implement in the browsers (strings are blocked at the IDL level); @mikewest suggests this can be a userland implementation.
