braid-spec's Introduction

Braid: Adding Synchronization to HTTP

This is the working area for the Braid extensions to HTTP in the IETF HTTP Working Group. These extensions add Synchronization to HTTP. They are authored in several documents, described below.

Braid adds to HTTP:

  1. Versioning of resource history
  2. Updates sent as patches
  3. Subscriptions to updates over time
  4. Merge-Types that specify OT or CRDT behavior

Range Patch: a uniform approach for expressing changes to state over HTTP. It generalizes Range Requests to other HTTP methods and defines how a range is replaced with a new value.
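
For intuition, a range patch lets a request target just one slice of a resource. A minimal sketch, using the header style that appears in the examples further down this page (the exact syntax is what the draft defines):

PUT /chat HTTP/1.1
Content-Range: json .messages[-0:-0]
Content-Length: 27

[{"text": "Hi, everyone!"}]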

Merge Types specify how to consistently merge a set of simultaneous conflicting edits to a resource. If multiple computers implement the same Merge Type, they can guarantee eventual consistency after arbitrary multi-writer edits.

Linked JSON is an extension of JSON that adds a Link datatype, so that URIs can be distinguished from ordinary strings. This allows JSON documents to be nested inside other JSON documents.
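
For example, a link in Linked JSON might be written roughly like this (a sketch based on the {type: "link", value: ...} form used in the subscription example later on this page):

{
  "text": "Hi, everyone!",
  "author": {"type": "link", "value": "/user/tommy"}
}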

Contributing

You are welcome to edit these documents. To get GitHub access, send your login to Michael. Discuss edits on the Braid mailing list. After editing, add your name to the authors list at the top and bottom of the document.

Discussion of the spec should occur on the IETF HTTPWG mailing list. Anyone can contribute; you don't have to join the HTTP Working Group, because there is no "membership" — anyone who participates in the work is part of the HTTP Working Group. See also Contributing to the HTTP Working Group.

All material in this repository is considered Contributions to the (IETF) Standards Process, as defined in the intellectual property policies of IETF currently designated as BCP 78, BCP 79 and the IETF Trust Legal Provisions (TLP) Relating to IETF Documents. Any edit, commit, pull request, issue, comment or other change made to this repository constitutes Contributions to the IETF Standards Process (https://www.ietf.org/). You agree to comply with all applicable IETF policies and procedures, including, BCP 78, 79, the TLP, and the TLP rules regarding code components (e.g. being subject to a Simplified BSD License) in Contributions.

braid-spec's People

Contributors

brynbellomy, calebkm, canadaduane, josephg, mgsloan, michielbdejong, mitar, pkulchenko, toomim

braid-spec's Issues

Security considerations for range patch

I don't think this should be part of the synchronization type. I think the synchronization type only needs to be there in order for two synchronizers to consistently resolve to the same state — that just means that they need to use the same function to merge multiple parent versions into a single resulting state.

So a synchronization type doesn't need to be more than a function merge(parent_versions) => state_of_child_version. It doesn't need to care whether the version DAG is a Merkle DAG, whether version identifiers are hashes, or how anything is validated. The version identifiers can be determined by something separate, and validation can be a separate process and a separate specification.

Overall, I think this makes the actual Synchronization-Type a lot more focused, which also reduces the potential security problems; as a result, I think a lot of the security considerations don't need to be in there.

Originally posted by @toomim in #20

Braid-Patch: why is it needed?

I do not get why we need Braid-Patch. I mean, I understand that it is elegant, but is it not just yet-another-patch format with additional metadata about which synchronization algorithm to use? I do not get why those things have to be coupled together. Why can we not simply say: my patch is in plain-text format, my patch is in JSON format, or my patch is in Braid-Patch-Simple format, which would be just:

bytes: [33:3889] = <binary-data>
json: .foo.bar[3].baz = {"1": {"two": "tree"}}
xml: /foo/bar/*[3]/baz = <one two="tree"/>

This is really all the information you have there, decoupled from the synchronization algorithm. So why couple them together? Once represented this way, we can also observe some other issues with the spec:

  • Where are the standard values for selector types (e.g., bytes) defined, and how do they map to selectors (e.g., [33:3889])? It looks like there is some hidden assumption that bytes should use [33:3889] and json should use .foo.bar[3].baz. If I am oblivious, what are those? Is that XPath? JSON Pointer? I have no idea.
  • What are the valid combinations of selector types and content types? How does one know which ones to implement in a Braid-compatible implementation? If I want to put a label on my system saying it is Braid-compatible, which combinations should I support? This brings back the point that it is pretty unclear how that JSON - XML conversion can be done (#4), so it is also important to know which combinations are supported, especially when they are unexpected.

(BTW, I am not sure why you are talking about content types in this spec; you probably want to talk about media types. Content-Type is a header in HTTP.)

So, to go back to the main issue here: I think this should be split into two. One is a new patch format. The other is some standard way to specify/register/index different synchronization algorithms and their parameters. In the HTTP world, we might want the latter to be something like Synchronization-Type: sync9(array). But I do not see why I could not have Synchronization-Type: sync9(array) with Content-Type: application/json-patch+json, and then I would not have to use Braid-Patch. I see the benefit of the simplicity of Braid-Patch, but I also see the benefit of simply reusing existing patch formats. I see those formats as a strict subset of Braid-Patch, so they should be compatible. But why require Braid-Patch just to be able to also specify the intended synchronization type?
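
For concreteness, the decoupled version of this proposal would look roughly like this (an illustrative sketch using the header names from this issue, not anything the spec currently defines):

PATCH /document/1 HTTP/1.1
Synchronization-Type: sync9(array)
Content-Type: application/json-patch+json
Content-Length: 54

[{ "op": "replace", "path": "/foo/bar", "value": 42 }]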

sync9(json) vs. sync9(partially-ordered-tree)

Duane points out that the "json" merge type should maybe be called something more abstract, like "partially-ordered-tree":

I'm reading the current draft of Braid-Patch (which is awesome, nice job!) and have a thought:

XML files have an isomorphic partial ordering with JSON files.  You
could conceivably re-use a JSON synchronizer on an XML file by
converting the XML to JSON, merging the JSON, and then converting the
result back when necessary.  However, you can do the same thing by
connecting an XML parser to a JSON merger, since their partial orders
are compatible

I feel like there is a "spec smell" here (akin to a "code smell", haha). It seems like if a JSON synchronizer can be re-used in an XML Content-Type then there is perhaps an underlying concept that needs expression. Perhaps the "sync9(json)" merge type should be renamed to something that both XML and JSON would need to merge? For example: braid(xml, sync9(partially-ordered-tree)) ? Is there a reason one would choose a JSON merge type over an XML merge type for an XML Content-Type?

I don't think I have the right alternative name ("partially-ordered-tree") but I'm trying to suggest/point to something more generic. Curious about your thoughts.

I agree. I haven't come up with a better name yet, but partially-ordered-tree sounds OK to me too. Since this is just an example, we can name it whatever communicates the idea best in this case.

Unify JSON Range Units

To resolve #29 and #23, @toomim and I discussed that he should make an alternative proposal for a range unit (a fork) which takes a slightly different approach:

  • It uses . instead of / to represent the path.
  • It allows negative indexing into arrays and strings.

I prefer to keep the json range unit compatible with JSON Pointer (and JSON Patch). So let's have both for now and see where we go from there.

Also, BTW, if we are defining a new path format, why not just make it JSON: instead of .foo.bar or /foo/bar, simply write ["foo", "bar"]. Easy to read, easy to write, and all escaping rules are already defined.
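
For instance, the same path in the three candidate spellings (illustrative only):

Content-Range: json /foo/bar/3/baz
Content-Range: json .foo.bar[3].baz
Content-Range: json ["foo", "bar", 3, "baz"]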

What are "hints"?

That section is highly unclear. It should explain: What is the intended use? How does one use it? What is this magical Patch: prefix now? Where does it come from? Where is the grammar for it? And Hint:?

Braid-Patch: unclear what it is

I think the document is unclear about what "Braid-Patch" is. Is it a patch format, e.g.:

<selector> = <data>

Or is it a patch format:

braid(bytes, sync9(array))

Yes, I am using "patch format" twice on purpose, because it really is confusing.

I think the spec should define what is the data of the patch and what is metadata. I am assuming braid(bytes, sync9(array)) is metadata associated with the patch, while [33:3889] = <binary-data> is the data?
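
A minimal sketch of the split I am assuming, reusing the Patch-Type header that appears elsewhere in these drafts (the framing is hypothetical):

Patch-Type: braid(bytes, sync9(array))

[33:3889] = <binary-data>

Here the header line is the metadata and the body is the data.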

Requirement to define a function for a sync type

@toomim In this change you added:

A Synchronization Type MUST specify the equivalent of a function that takes a set of parent versions as input and returns the state of the resulting merged version.

What do you mean by that? That when registering you should link to the code of such a function? Or that you have to describe the function in the registration form for the content type?

I think this is unnecessarily restrictive to specify in any case. This spec is just a namespace registration, linking some spec/standard/definition with a name. We do not have to define how it should behave. If a particular sync type does not work for your or our system, you do not use it. So not all sync types have to work with Braid. Many probably will not. This should not be a requirement for a sync type.

So I would suggest removing this paragraph.

Allow multiple patches per version

The current braid-http spec only allows a single patch per version:

  A client SHOULD send a new version in a PATCH, POST, or PUT request.
  A server MAY send a version in the response or sub-response to a GET
  request.  A version MAY contain any combination of the following
  headers, with one restriction:

      Patch-Type: <patch-type>
      Cache-Control: no-cache, patch
      Version: <versionid>
      Parents: <versionid>, <versionid>, ...

But in general, a version might include multiple patches. We need to support this.

Allow servers to advertise that they support Subscribe

It'd be nice for a server to advertise that it lets you Subscribe to a resource.

I'm thinking it could send in its response something like Accept: or Allow:, except that those are for media types and method names, respectively. Do we know of a header name that would be appropriate to say that a server supports subscriptions? Maybe just a special header like Allow-Subscriptions?

@mitar @brynbellomy I'm wondering if you guys have seen anything like this before. This could also be supported in an OPTIONS request like Bryn wrote for the range patch spec.
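
For instance, a response advertising support might carry something like this (Allow-Subscriptions is just the strawman name from this issue, and the Allow values are made up):

HTTP/1.1 200 OK
Allow: GET, PUT, PATCH
Allow-Subscriptions: true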

Allow asking not just for the latest patch, but also for a cumulative patch

So I think maybe doing a GET request with a Version header plus a Parents header would give you a cumulative patch that takes you from the state at the Parents version(s) to the state at the Version version, without intermediate patches.

This would not be as useful for resolving conflicts, but in a client-server scenario where there are no offline changes pending on the client, it would allow a quick update to the latest state after reconnecting. And if at a later stage it turns out that you do have to resolve conflicts, you then go and fetch the individual patches.

As for fetching a bunch of individual patches, I would leave that out of scope of this spec, because things like HTTP/2 and pipelining in general can make it not too painful.
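
A sketch of the kind of request I have in mind, reusing the version identifiers from the subscription example elsewhere on this page (the semantics are this proposal, not the current spec):

GET /chat HTTP/1.1
Version: "g09ur8z74r"
Parents: "ej4lhb9z78"

The response would then be a single cumulative patch from "ej4lhb9z78" to "g09ur8z74r".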

Determine whether proxies can drop range requests of non-GET requests.

https://github.com/braid-work/braid-spec/blob/86435a05d5bcad1e446905d4e7e22e672391c0f3/draft-toomim-httpbis-range-patch-00.txt#L160

    When server supports Range header with non-GET requests, server MUST
   NOT ignore the Range header when used with a non-GET request.  When
   server does not support Range header with non-GET requests, a server
   SHOULD generate a 416 (Range Not Satisfiable) or a 400 (Bad Request)
   response when a non-GET request with a Range header is made.  Proxies
   SHOULD NOT drop Range header for non-GET requests.  To assure correct
   handling of non-GET requests with the Range header, requester can
   check server's support for it as described in Section 5.

We need to reconcile this with the text from RFC7233:

   A server
   MUST ignore a Range header field received with a request method other
   than GET.

   An origin server MUST ignore a Range header field that contains a
   range unit it does not understand.  A proxy MAY discard a Range
   header field that contains a range unit it does not understand.

This seems to prohibit proxies from dropping range headers on requests other than GET. I'm not sure we want to reduce "MUST ignore" to "SHOULD NOT drop".

Per-hop behaviour of Subscribe requests

I don't believe the currently described subscription routine will work properly when an unaware proxy lies between the origin and the client. The client can send a request to the proxy, which will forward the Subscribe header unaware of its meaning. The origin will then keep the connection open to the proxy indefinitely, which the proxy is definitely not prepared for. Most likely, the proxy will time out the connection and the client will get an error.

I don't see a way to do the Subscribe header that isn't per-hop. As a per-hop header, it would need to be included in the Connection header for HTTP/1 and be handled as a new SETTING in HTTP/2 and /3.
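
In HTTP/1.1 terms, making it per-hop would mean declaring the header hop-by-hop via the Connection header, roughly like this (illustrative sketch):

GET /chat HTTP/1.1
Subscribe: keep-alive
Connection: Subscribe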

Maybe add Accept-Range-Patch header

For the server to announce that it accepts range patches, or more precisely, which range units it accepts for non-GET requests.

  • How do we communicate which non-GET requests accept range units?
  • Should we have this header, or something else which says that the resource is Braid-enabled more generally? Or both?
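
A rough sketch of what such an announcement could look like (Accept-Range-Patch is only a proposed name, and the values are made up):

HTTP/1.1 204 No Content
Allow: GET, PUT, PATCH
Accept-Range-Patch: bytes, json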

Why not make Linked JSON an extension to JSON Reference?

I think the difference between $ref and link is just cosmetic. I do not see why another backwards-incompatible way is needed. I would propose the following:

JSON reference is extended to:

  • Allow additional metadata alongside $ref. ($ref forbids other fields, so this is backwards compatible in the sense that any conflicts with existing fields there are not "standard"; we can minimize conflicts further by requiring that all metadata live under a special key, like $meta, while other fields are left as-is.)
  • Add a mechanism to escape a literal $ref field (which would probably need to be used much less often than escaping link), maybe something as simple as $$ref and so on.
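
A hypothetical sketch of what such an extended reference could look like, assuming the $meta key proposed above:

{
  "author": {
    "$ref": "/user/tommy",
    "$meta": {"title": "Author of this message"}
  }
}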

Arbitrary linking to non-JSON is a bit strange to me. I mean, every such link will need some semantics/schema/explanation of what the link means/is. So I am not sure if just defining a link is enough? And if it is not enough, then JSON-LD already adds what is enough.

JSON Range format doesn't support [n:n], [-n], or [-n:-n]

As discussed here: 5729979#r35759079

Zero-length ranges

Range starts and ends should be allowed to be equal. That is how you insert new text at a zero-length location.

Negative numbers

It is also nice to be able to use negative numbers, to specify a distance from the end. That's useful if you want to write a synchronizer by hand, one that just implements an append-only log.

For instance, let's say that you're writing a chat application. It's nice to make it easy to add a message to the end of the array. Your code to do that is even simpler if it doesn't have to keep track of how long the array is, and can just say "Insert this at -0".

Or let's say that you have a server in your corporate LAN that maintains logs of all your systems. Then whenever you need to add a new line to the end of the log, you just issue a patch that does a PUT http://log_server/log '[-0:-0] = "new event: alert on system Q\n"'.

Obviously for these cases it's hard to use - as the character to separate the two numbers. It's better to use :.

Rename Sync Type to Merge Type

After discussing it with @toomim in person, we decided that the best path forward is to rename Sync Type to Merge Type and make it really just about naming functions which take two versions and produce one version.

So we should:

  • Rename that.
  • Go over the whole spec and update it so that it is consistent with this new definition (instead of current broader "approach to conflict resolution").
  • Simplify security section.

Standardize a few parser types

I think we should list and standardize a few parser types:

  • bytes
  • json
  • xml

And then for each of them, describe how they encode selectors, how they parse the selector out of the payload, and how they describe portals.

Missing grammars in patch format

I think you should formally specify the grammars for what you are defining here, both for the patch specification and for the patch itself. For example:

braid(bytes, sync9(array))

What of this is structure, and what are values? Where are spaces allowed? Are there comments? How many elements can there be in the parameters? Can sync9 take additional arguments? What are the possible values for the first parameter? bytes? Is that a reserved word? Is your spec introducing a new registry of those words? Who will maintain that registry, IANA?

Similarly, you have example:

[3:5] = "hello"

Why are the quotes there? Where do they come from? What are the quoting rules?

Cut Stand-Alone Range Patch?

@mitar I'm wondering what the motivation is for section 2.2, Stand-Alone Range Patch.

Is there a compelling use-case that you have in mind?

The text of this section says:

"When range patches are transmitted outside of HTTP session, a stand-alone range patch format can be used."

I am a little bit concerned that the HTTP Working Group might consider standards for transmission that occurs outside of an HTTP session to be out of their purview. But I imagine that if there's a compelling use case, they might allow it. I just don't yet understand the motivation myself.

Allow custom metadata on each version

This can serve to store information about the user who made the version, add signatures and so on.

I think this is also a reason why merge versions (i.e., versions without a payload) would still have to be present and not just "virtual".

JSON - XML conversion

In draft-xx-httpbis-braid-patch-00.txt you state:

XML files have an isomorphic partial ordering with JSON files. You could conceivably re-use a JSON synchronizer on an XML file by converting the XML to JSON, merging the JSON, and then converting the result back when necessary.

That is not true. XML files without additional schema do not have enough information to do this conversion in general. For example:

<div>
  <p>Foo</p>
</div>

How should this be converted to JSON? There is no standard way. Moreover, it is unclear whether p should be a list of p elements or just one. You could always convert to lists, but then your patch has to be aware of that, too. So if you convert to JSON and try to apply a path specification to it, it might not work.
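
For instance, both of these JSON encodings are plausible for the snippet above, and a patch written against one will not apply to the other (illustrative only):

{"div": {"p": "Foo"}}

{"div": {"p": ["Foo"]}}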

I am not sure why this speculation/suggestion even has to be in the spec, though. It does not add much, and it would be more suitable for some library that uses this trick (if it really works) in its implementation; it does not really have to be specified in the spec.

I do not think Ted would approve one-directional links

I love the NelSON name, but I am not sure we can claim that Ted would approve of it if we are using one-directional links. I think that unless the target we link to also knows which JSON links to it, this is not how Ted would design it.

Simplified Subscription + Versions + Range-Patches

I took a pass at redesigning the response to a GET+Subscription. It needs to return a sequence of versions, where each version can have a sequence of patches.

What do you think of this? @mitar @brynbellomy

Request:

GET /chat
Subscribe: keep-alive

Response:

HTTP/1.1 209 Subscription
Subscribe: keep-alive

Version: "ej4lhb9z78"
Parents: "oakwn5b8qh", "uc9zwhw7mf"
Content-Type: application/json
Merge-Type: sync9
Patches: 2

Content-Length: 2
Content-Range: json .messages

[]

Content-Length: 70
Content-Range: json .messages[-0:-0]

{text: "Hi, everyone!",
 author: {type: "link", value: "/user/tommy"}}

Version: "g09ur8z74r"
Parents: "ej4lhb9z78"
Content-Type: application/json
Merge-Type: sync9
Content-Length: 158

{"messages": [
  {text: "Hi, everyone!",
   author: {type: "link", value: "/user/tommy"}},
  {text: "Yo!",
   author: {type: "link", value: "/user/yobot"}}
]}

Version: "2bcbi84nsp"
Parents: "g09ur8z74r"
Content-Type: application/json
Merge-Type: sync9
Patches: 1

Content-Length: 65
Content-Range: json .messages[-0:-0]

{text: "Hi, Tommy!",
 author: {type: "link", value: "/user/sal"}}

Version: "up12vyc5ib"
Parents: "2bcbi84nsp"
Content-Type: application/json
Merge-Type: sync9
Patches: 1

Content-Length: 326
Content-Type: application/json-patch+json

[
  { "op": "test", "path": "/a/b/c", "value": "foo" },
  { "op": "remove", "path": "/a/b/c" },
  { "op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ] },
  { "op": "replace", "path": "/a/b/c", "value": 42 },
  { "op": "move", "from": "/a/b/c", "path": "/a/b/d" },
  { "op": "copy", "from": "/a/b/d", "path": "/a/b/e" }
]

If it's not clear how these are intended to be nested, I made a version showing the nesting at https://wiki.invisible.college/braid/subscriptions. Check it out.

This makes a few changes:

  • Delimits nested regions in a more unified way. Doesn't need multipart/* unique strings anymore.
  • Creates a new special response code 209 Subscription to ensure legacy caches don't cache this shit
  • Puts Content-Type on the Version, not the Patch. This means if a resource theoretically wanted to change its Content-Type from one version to another, it could do that. But not per-patch.
  • A Version reports how many Patches are within it, helping you know when the next Version will start
  • You can specify the full content of a Version instead of a patch by specifying a Content-Length: header instead of a Patches: header on the version. (See version "g09ur8z74r" above.)

Allow the use of PUSH for updates.

This would be a pretty big update to the document, perhaps most appropriate as another document entirely. But I think it would be useful for servers to be able to provide object updates via PUSH on an open H/2+ connection. H/1 doesn't have such a concept, so I see why a new protocol is needed there, but H/2+ has all the primitives we need to handle the update propagation.

Where to specify a data structure's merge-type schema?

Duane writes:

Something that I don't currently understand about Braid is where one would be able to choose a way of merging data on a more fine-grained level that corresponds to intentions.

For example, suppose I have two situations in my application that each involve merging strings:

  1. A collaborative textarea where customer service reps take notes.
  2. An "address" field where customer service reps enter the customer's address.

These two fields' merge intentions are quite different. In the first case, it makes sense to keep all edits, inserting divergent text so that the result includes all text. In the second case, it would be confusing if the address field resulted in a sort of "hybrid" address with a street name from one rep's edit and a house number from another rep's edit. It would be better to have a "last write wins" merge here.

My understanding of sync9, or any other merge algorithm, is that it chooses one or the other merge intention for us--there is no way to specify that one field should behave in the first way, and another field should behave in the second way. Is that correct?

If so, where would this intention-mapping behavior belong?

Thank you, this is a pretty glaring omission in the current spec! In the last spec, we suggested that a programmer could specify a schema of different merge types, but we haven't gotten to defining the precise syntax for this yet.

I think this is important and will be working on it. It's possible that we'll want to encode the merge-type of a JSON value inside the Linked-JSON spec, so that you could do something like {merge_type: "lww", val: "P. Sherman, 42 Wallaby Way, Sydney"}. We'd then have to add "merge_type" as a special keyword that needs to be escaped, like "link".
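
To make Duane's two-field scenario concrete, the encoding could look roughly like this (a hypothetical sketch; the merge_type keyword is only being considered, not specified):

{
  "notes":   {"merge_type": "sync9", "val": "Customer called about their order..."},
  "address": {"merge_type": "lww",   "val": "P. Sherman, 42 Wallaby Way, Sydney"}
}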

Also, a more precise example for using last-write-wins might be a UUID or a hash address, rather than a physical address. It's harder to imagine two people wanting to edit a hash address and merge their edits than a physical mailing address.

Link to version with query param, rather than header

I am not sure if we should be using the Version header to specify which version of the state you want to get. Or at least, we should standardize both the HTTP header and a query string parameter, so that one can link to a particular version by doing something like https://example.com/document/1?version=42.

I think this is similar to how OAuth tokens can be provided either as a header or as a query string.

Use ETag in all examples in range patch

I think it is a bit strange that we show it only in OPTIONS but not also in other examples. Also, if we believe that this is the best practice (and we do), then we should show that everywhere.

I would suggest that we start with a GET request returning the initial document, providing an ETag there. Then all other requests should use If-Match. That will also demonstrate clearly which version of the example document we are patching.

We do not have to explain those headers before Section 6, but just leave them around. We can mention in Section 6 that examples in the document include those headers.
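
Something along these lines (an illustrative sketch; the entity-tag values and the document are made up):

GET /document/1 HTTP/1.1

HTTP/1.1 200 OK
ETag: "v1"
Content-Type: application/json
Content-Length: 14

{"foo": "bar"}

PUT /document/1 HTTP/1.1
If-Match: "v1"
Content-Range: json .foo
Content-Length: 5

"baz"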

Review of Synchronization Types

  • This is very cool. All we are doing is generalizing them from GET to
    other methods, and then — blammo — you have a general patch
    language!
  • Intro is good.
    • First sentence of second paragraph needs revision
    • What is "o Description" for?
  • I can take care of the page breaks. When we submit, we need to
    manually insert ^L characters, and label the page numbers in Table
    of Contents. I did this last time manually, and am happy doing it
    again.
  • I think we can clarify the difference between a Synchronization Type
    and its implementation.
    • The Type only needs to specify enough so that two implementations
      can synchronize together and obtain consistent output.
      • It does not, for instance, need to prescribe which algorithms or
        data structures the implementation uses, or care about the
        performance of the synchronizer.
    • Therefore, a type only needs to specify how to merge parallel edits
      together, aka how to resolve conflicts.
    • Here are some edits to clarify this (and here's the commit):
    2.  Definitions

       For the purpose of this document we define "synchronization" as the
       resolution of conflicts amongst patches.  There are multiple
       approaches to resolving conflicts, and for two synchronizers to be
       compatible, they must resolve them in the same way.  A
       "Synchronization Type" is an identifier that defines a method of
       resolving patches, along with a set of (optional) parameters.

       A Synchronization Type MUST specify the equivalent of a function that
       takes a set of parent versions as input and returns the state of the
       resulting merged version.
  • I think this means that much of the security considerations are not
    necessary. They mostly seem to be about particular algorithms or data
    structures, or validation or versioning, rather the behavior of the
    conflict-resolution function.

Originally posted by @toomim in #20 (comment)

Add a `lines` range unit

I was thinking that it would be useful to have that, so that unified patches can be converted directly to range patches.
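
For example, a unified diff hunk such as @@ -10,2 +10,2 @@ could map roughly onto something like this (a hypothetical sketch; the lines range unit does not exist yet):

PUT /file.txt HTTP/1.1
Content-Range: lines 10-11

replacement for line 10
replacement for line 11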

JSON indexing with `/` vs. `.` vs. `[""]`

Programming languages typically index into data structures with .foo or ["foo"], and file directory structures typically index with /.

However, the Range-Patch spec currently indexes into a JSON data structure with /, like:

Content-Range: json /foo/bar/3/baz

I understand that JSON-Patch also uses /, but I think that's weird and breaks convention. When I see slashes, I expect a directory structure, not a data structure.

I would rather support one of these options:

Content-Range: json .foo.bar[3].baz
Content-Range: json .foo.bar.3.baz
Content-Range: json .foo["."].3.baz        # To encode a literal "."
Content-Range: json .foo.\..bar.3.baz      # To encode a literal "."

Another way to distinguish / from . is with the rule:

  • / distinguishes resources
  • . distinguishes components of a resource

New response status code vs. Cache-Control: no-cache, patch

In order to prevent existing caches from trying to cache a patch, the current spec requires a patch to specify Cache-Control: no-cache, patch.

However, I just discovered that the same problem was solved by HTTP Range Requests by inventing a new status code:

   Partial responses are indicated by a distinct status code to not be
   mistaken for full responses by caches that might not implement the
   feature.

Should we switch to using a new status code for patch responses as well? This seems nicer than saying Cache-Control: no-cache, patch, which implies, in a rather circuitous fashion, "Don't cache this. Oh, but you can cache it as a patch."

JSON Reference is incorrectly mentioned in Section 3

   If the URI contained in the JSON Reference value is a relative URI,
   then the base URI resolution MUST be calculated according to
   [RFC3986], Section 5.2.  Resolution is performed relative to the
   referring document.

Is this by mistake?

Portals to Past Versions does not look complete

It does not read like a normative spec. I suggest the language be improved. For example:

We could explicitly write the copies with portals:

You mean, we CAN?

Also, old is not really defined, except in this example.

Portals allow Braid-Patch to directly support:

I think you should write out, for each of them, how it is supported, in a standard way.

Use of "patches" term

@toomim In this change you introduced "patches" into sync types. I purposely didn't use "patches" anywhere in that spec, because I want to make the spec more general. It is about synchronization of states in distributed systems. How that is done might have nothing to do with patches. For example, proof of work does that without patches. :-)

So I suggest we remove that.

Define how multiple JSON pointers and line ranges can be combined together

So the bytes range unit allows specifying multiple ranges, like bytes=500-600,601-999. We mention in the "URI Fragment Identifiers" section:

/api/document/1#json=/foo/bar/0,/foo/bar/1

But in fact we have not specified anywhere how multiple JSON pointers (or line ranges) are combined. Using a comma might conflict with the contents of individual JSON pointers.

We should clarify and define this for both range units.
