
The Community Cryptography Specification Project

The Community Cryptography Specification Project (C2SP) is a project that facilitates the maintenance of cryptography specifications using software development methodologies. In other words, C2SP applies the successful processes of open source software development and maintenance to specification documents.

  • C2SP decisions are not based on consensus. Instead, each spec is developed by its maintainers, who are responsible for reviewing and accepting changes, just like open source projects. This enables rapid, focused, and opinionated development. Since C2SP produces specifications, not standards, technical disagreements can ultimately be resolved by forking.
  • C2SP specs are updateable, and follow semantic versioning. Most specifications are expected to start at v0.x.x while in “draft” stage, then stay at v1.x.x for as long as they maintain backwards compatibility, ideally forever. Drafts are expected to bump the minor version on breaking changes.
  • C2SP documents are developed as Markdown files on GitHub, and can include ancillary files such as test vectors and non-production reference implementations.

A small team of stewards maintains the overall project: they enforce the C2SP Code of Conduct, assign new specifications to proposed maintainers, and may intervene in case of maintainer conflict or to replace lapsed maintainers. Otherwise, they are not involved (in their steward capacity) in the development of individual specs.

Versions are tracked as git tags of the form <spec-name>/vX.Y.Z like age/v1.2.3.

Specifications should be linked using their c2sp.org short-links. https://c2sp.org/<spec-name> and https://c2sp.org/<spec-name>@<version> are supported. (The former currently redirects to the specification in the main branch, this may change in the future to the latest tagged version of the spec.) GitHub URLs should not be considered stable.
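A sketch of parsing tags of this form (the pattern and helper are illustrative, not part of any C2SP tooling):

```python
import re

# Tags look like "<spec-name>/vX.Y.Z", e.g. "age/v1.2.3", optionally
# with a pre-release suffix such as "https-bastion/v1.0.0-rc.1".
TAG_RE = re.compile(
    r"^(?P<spec>[A-Za-z0-9][A-Za-z0-9-]*)"
    r"/v(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"
    r"(?:-(?P<pre>[0-9A-Za-z.-]+))?$"
)

def parse_tag(tag):
    """Split a C2SP-style version tag into (spec name, version tuple, pre-release)."""
    m = TAG_RE.match(tag)
    if m is None:
        raise ValueError(f"not a spec version tag: {tag!r}")
    return m["spec"], (int(m["major"]), int(m["minor"]), int(m["patch"])), m["pre"]
```

For example, `parse_tag("age/v1.2.3")` yields `("age", (1, 2, 3), None)`.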

All C2SP specifications are licensed under CC BY 4.0. All code and data in this repository is licensed under the BSD 1-Clause License (LICENSE-BSD-1-CLAUSE).

Specifications

Name                       Description
c2sp.org/age               File encryption format
c2sp.org/age-plugin        The age plugin stdio protocol
c2sp.org/BLAKE3            A fast cryptographic hash function (and PRF, MAC, KDF, and XOF)
c2sp.org/chacha8rand       Fast cryptographic random number generator
c2sp.org/https-bastion     Bastion (reverse proxy) protocol for exposing HTTPS services
c2sp.org/jq255             Prime order groups, key exchange, and signatures
c2sp.org/signed-note       Cleartext signed messages
c2sp.org/static-ct-api     Static asset-based Certificate Transparency logs
c2sp.org/tlog-checkpoint   Interoperable transparency log signed tree heads
c2sp.org/tlog-cosignature  Witness cosignatures for transparency log checkpoints
c2sp.org/tlog-tiles        Static asset-based transparency log
c2sp.org/tlog-witness      HTTP protocol to obtain transparency log witness cosignatures
c2sp.org/vrf-r255          Simplified ristretto255-based ECVRF ciphersuite
c2sp.org/XAES-256-GCM      Extended-nonce AEAD from NIST-approved components

Associated projects

The C2SP organization hosts three other testing-focused projects:

  • Wycheproof, a large library of tests for cryptographic libraries against known attacks.

  • CCTV, the Community Cryptography Test Vectors, a repository of reusable test vectors.

  • x509-limbo, a suite of tests for X.509 certificate path validation.

c2sp's People

Contributors

alcutter, bdd, filosottile, mhutchinson, msparks, pornin, quite, samuel-lucas6, str4d, vcsjones


c2sp's Issues

New specs: note, checkpoint, witness, bastion

This is a set of transparency log related specs.

note is for the signed note format, and has a draft at https://github.com/C2SP/C2SP/blob/filippo/tlogs/note.md. Would be maintained by me and @rsc (if he wishes).

checkpoint is for the signed tree head artifacts, and has a draft at https://github.com/C2SP/C2SP/blob/filippo/tlogs/checkpoint.md. It would be maintained by me.

witness is for a protocol to communicate with cosigning witnesses. It has a draft at https://git.glasklar.is/sigsum/project/documentation/-/blob/ed872c8b04161aabf41ae6bbf2797e23dd7c332f/witness.md and https://git.glasklar.is/sigsum/project/documentation/-/blob/filippo/witness-apis/proposals/2023-08-witness-apis.md. It would be maintained by me, @rgdd, and Niels Möller.

bastion is for a reverse proxy that allows exposing allow-listed services to the Internet. It has a draft at https://git.glasklar.is/sigsum/project/documentation/-/blob/ed872c8b04161aabf41ae6bbf2797e23dd7c332f/bastion.md. It would be maintained by me, @rgdd, and Niels Möller.

/cc @dconnolly @str4d for approval

meta: make it easy to add lints

For example, it should be easy to add a rule somewhere that will raise objections on PRs if "co-signature" is used instead of "cosignature".

STREAM construction

The STREAM construction for nOAE was originally described in Figure 10 of the paper Online Authenticated-Encryption and its Nonce-Reuse Misuse-Resistance. Since then, several concrete instantiations have arisen (notably age and Tink) that make slightly different design choices.

It would be great to specify these instantiations, both to collate the rationale for their choices, and to help guide people towards an existing instantiation in order to limit the proliferation of additional variants (which will make it easier for people to write STREAM implementations).
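As one example of those diverging design choices, age's instantiation composes each per-chunk nonce from an 11-byte big-endian chunk counter and a final-chunk flag byte. A sketch (the constants are from age's variant; other instantiations differ, which is exactly the proliferation described above):

```python
def stream_nonce(counter: int, last: bool) -> bytes:
    """96-bit per-chunk STREAM nonce as in age's instantiation:
    an 11-byte big-endian chunk counter plus a final-chunk flag byte.
    Other STREAM variants (e.g. Tink's) lay this out differently."""
    if counter >= 1 << 88:
        raise OverflowError("chunk counter exhausted")
    return counter.to_bytes(11, "big") + (b"\x01" if last else b"\x00")
```

The flag byte is what lets a decryptor detect truncation: the last chunk authenticates under a different nonce than any intermediate chunk.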

age-plugin: Decide whether or not to allow secret keys to be "annotated" as such

This implies that, unlike native keys (AGE-SECRET-KEY-1[...]), plugin keys don't have the nice reminder that you should keep them secret (assuming a plugin that stores secrets in its identities). Maybe let the plugin opt in by stripping a SECRET-KEY- suffix when extracting the name? Then plugins with secrets in their identities can generate them with an AGE-PLUGIN-NAME-SECRET-KEY- HRP.

Originally posted by @HW42 in #5 (comment)

tlog-tiles: forbid logs from returning a 200 status for non-existent tiles

A client may have to fall back to downloading a full tile if a partial tile has been deleted. The protocol doesn't currently provide a way for the client to figure this out. Adding text such as the following would fix this:

Logs MUST NOT return an HTTP status code of 200 if a tile does not exist

I really hope that there are not backends out there that return 200 for nonexistent objects. IMO those should not be considered suitable backends for CT logs.

"Bug" in CODEOWNERS

GitHub is not happy with the recent changes in CODEOWNERS, see the yellow banner here: 4896804.

It reads: "make sure @gtank exists and has write access to the repository"

age: ChaCha20Poly1305 clarification

Hi,

The age spec says

body = ChaCha20-Poly1305(key = wrap key, plaintext = file key)

Am I correct that this is the concatenation of the ciphertext and the tag? Is this unambiguous?

The reason I ask is that CryptoKit out of the box uses a representation (combined) that is nonce || ciphertext || tag. I was looking at the RFC to see if the serialization was specified there, but I can't seem to find anything: it just says that it outputs a ciphertext and a tag, but not that the output is always a concatenation of (only) those 2 in that order (and without the nonce).

So, I was wondering if this needs to be explicitly listed in the age spec?
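For implementers hitting the same question: age's body fields are the ciphertext followed by the 16-byte Poly1305 tag, with the nonce fixed or derived rather than serialized, so a CryptoKit-style combined representation needs its nonce stripped first. A sketch (constants and names are mine, not from the spec):

```python
NONCE_LEN = 12  # ChaCha20-Poly1305 nonce
TAG_LEN = 16    # Poly1305 tag

def combined_to_age_body(combined: bytes) -> tuple[bytes, bytes]:
    """Convert a combined representation (nonce || ciphertext || tag),
    as produced by e.g. CryptoKit, into the nonce and an age-style
    body (ciphertext || tag). Illustrative helper only."""
    if len(combined) < NONCE_LEN + TAG_LEN:
        raise ValueError("too short for nonce plus tag")
    return combined[:NONCE_LEN], combined[NONCE_LEN:]
```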

https-bastion, signed-note, tlog-checkpoint, tlog-cosignature, tlog-witness: tag initial version

Unless any other maintainer objects, I will create the following tags at the end of today:

  • https-bastion/v1.0.0-rc.1
  • signed-note/v1.0.0-rc.1
  • tlog-checkpoint/v1.0.0-rc.1
  • tlog-cosignature/v1.0.0-rc.1
  • tlog-witness/v0.1.0

The first four are very stable and are used in production, so it makes sense to move to v1. tlog-witness still has a TODO, so we can give it a bit more time.

/cc @C2SP/https-bastion
/cc @C2SP/signed-note
/cc @C2SP/tlog-checkpoint
/cc @C2SP/tlog-cosignature
/cc @C2SP/tlog-witness

New spec: XAES-256-GCM

XAES-256-GCM is a spec for an extended-nonce AEAD based on the composition of a standard NIST SP 800-108r1 KDF and the standard NIST AES-256-GCM AEAD.

The XAES-256-GCM inputs are a 256-bit key, a 192-bit nonce, a plaintext of up to approximately 64GiB, and additional data of up to 2 EiB.

Unlike AES-256-GCM, the XAES-256-GCM nonce can be randomly generated for a virtually unlimited number of messages. Only a 256-bit key version is specified, which provides a comfortable multi-user security margin.

XAES-256-GCM derives a subkey for use with AES-256-GCM from the input key and half the input nonce using a NIST SP 800-108r1 KDF. The derived key and the second half (last 96 bits) of the input nonce are used to encrypt the message with AES-256-GCM.

The KDF can be easily described as a sequence of three AES-256 invocations, but making it a profile of SP 800-108r1 allows a FIPS compliance argument.

There is a draft at #36.

It would be maintained by me.

/cc @dconnolly @str4d for approval

Hash algorithm agility for proofs and nodes

Given we don't have a spec yet for tiles, I wanted to ask if there's interest in removing any assumptions that SHA-256 is used for root, node, or leaf hashing or inclusion or consistency proofs. This could apply for sunlight or any log deployments, though I'm thinking more about deployments other than sunlight/CT.

age: Integration with OpenPGP Card without a Plugin

From YubiKey firmware 5.2.3 (https://developers.yubico.com/PGP/YubiKey_5.2.3_Enhancements_to_OpenPGP_3.4.html), X25519 is supported.

When we use X25519 keys in OpenPGP, we can write the private keys to an OpenPGP Card (e.g. a YubiKey). If we do so, the private keys are protected by hardware.

We can use the same key pair in age, which gives us hardware protection without a plugin.

For the OpenPGP Card, send the below command to the card via PC/SC:

CLA: 0x00, INS: 0x2A, P1: 0x80, P2: 0x86, DATA: A6 7F49 86 EPK

Then we will get the shared secret, and the age file can be decrypted.

Although this works, when there are multiple recipients I need to try all the X25519 recipient stanzas with the OpenPGP Card, and OpenPGP operations are very heavy: each requires a PIN and a touch (if that policy is turned on).

So if we want to support OpenPGP Card protected age encrypted files, we need a quick, lightweight way to identify the matching X25519 recipient stanza. My suggestion is to add an additional argument to the X25519 recipient stanza.

Current X25519 recipient stanza is like this:

-> X25519 1R1xhye2ff90kBDpmIlhKAd9R/uyMJPn2U1y5YfjBl4
jerzVNLKbmFn56WxRBlGZ3otYMUwR29Pcml+WzU36Is

Then change to:

-> X25519 1R1xhye2ff90kBDpmIlhKAd9R/uyMJPn2U1y5YfjBl4 RECIPIENT
jerzVNLKbmFn56WxRBlGZ3otYMUwR29Pcml+WzU36Is

RECIPIENT can be HMAC-SHA256(recipient, ephemeral share); with this additional argument we can quickly confirm which recipient stanza is the right one.
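A sketch of the proposed hint, purely illustrative of this issue's suggestion and not part of the age spec (the unpadded-base64 encoding mirrors other stanza arguments; the function name and byte-level framing of the inputs are assumptions):

```python
import base64
import hashlib
import hmac

def recipient_hint(recipient: bytes, ephemeral_share: bytes) -> str:
    """HMAC-SHA256(recipient, ephemeral share), encoded like other
    stanza arguments (unpadded standard base64). A decryptor could
    recompute this per candidate identity and skip non-matching
    stanzas without touching the card."""
    tag = hmac.new(recipient, ephemeral_share, hashlib.sha256).digest()
    return base64.b64encode(tag).decode().rstrip("=")
```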

age: is RFC 7539 reference wrong?

From age.md:

ChaCha20-Poly1305 is the AEAD encryption function from RFC 7539.

But there is a new RFC 8439 which says

This document represents the consensus of the Crypto Forum Research Group (CFRG). It replaces [RFC7539].

Main difference IMO is in AEAD construction. This is from RFC 7539 section 2.8.1.

      chacha20_aead_encrypt(aad, key, iv, constant, plaintext):
         nonce = constant | iv
         otk = poly1305_key_gen(key, nonce)
         ciphertext = chacha20_encrypt(key, 1, nonce, plaintext)
         mac_data = aad | pad16(aad)
         mac_data |= ciphertext | pad16(ciphertext)
         mac_data |= num_to_4_le_bytes(aad.length)
         mac_data |= num_to_4_le_bytes(ciphertext.length)
         tag = poly1305_mac(mac_data, otk)
         return (ciphertext, tag)

. . . and this is from RFC 8439 section 2.8.1.

      chacha20_aead_encrypt(aad, key, iv, constant, plaintext):
         nonce = constant | iv
         otk = poly1305_key_gen(key, nonce)
         ciphertext = chacha20_encrypt(key, 1, nonce, plaintext)
         mac_data = aad | pad16(aad)
         mac_data |= ciphertext | pad16(ciphertext)
         mac_data |= num_to_8_le_bytes(aad.length)
         mac_data |= num_to_8_le_bytes(ciphertext.length)
         tag = poly1305_mac(mac_data, otk)
         return (ciphertext, tag)

. . . so mac_data differs in using num_to_4_le_bytes vs num_to_8_le_bytes which is significant.

I'm positive rage uses the RFC 8439 version of ChaCha20-Poly1305, via the chacha20 crate, which says

ChaCha20 ChaCha20 stream cipher (RFC 8439 version with 96-bit nonce)
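The difference is just the width of the two length fields appended to mac_data. A minimal sketch of the two trailers (the 4-byte form appears only in RFC 7539's pseudocode; deployed implementations, including the chacha20 crate mentioned above, produce the 8-byte form that RFC 8439's pseudocode also specifies):

```python
import struct

def trailer_rfc7539_pseudocode(aad_len: int, ct_len: int) -> bytes:
    # As printed in RFC 7539's pseudocode: two 4-byte little-endian lengths.
    return struct.pack("<II", aad_len, ct_len)

def trailer_rfc8439(aad_len: int, ct_len: int) -> bytes:
    # As in RFC 8439's pseudocode: two 8-byte little-endian lengths.
    # This is what interoperable implementations feed to Poly1305.
    return struct.pack("<QQ", aad_len, ct_len)
```

Since the trailers have different lengths and contents, the two readings would produce incompatible tags, which is why pinning the reference matters.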

sunlight: specifying synchronous merging

Early drafts of the Sunlight spec noted that the inclusion of the leaf index in the SCT "limit[ed] Sunlight logs to a null Merge Delay" but that language was softened after it was observed that it was possible to identify a future leaf index without actually yet including the certificate in the tree.

Synchronous merging (i.e. a null merge delay) is a highly-desirable property from Chrome's perspective, and we would like to see this property added to the spec more explicitly.

Experience with RFC6962 logs in the existing CT ecosystem have shown that one of the greatest risks to individual logs is the issuance of SCTs whose corresponding certificates are never included in the log's merkle tree. Dropping certificates for which an SCT has been issued results in an unrecoverable loss of integrity, leading to the log's removal from the list of usable logs by CT-enforcing user agents.

Avoiding this risk is worth a lot to us. Logs commonly experience downtime, but as long as logs have durably included all certificates for which SCTs were issued, and resume correctly serving the required submission and read endpoints, these failures are typically fully recoverable. Downtime when RFC 6962 logs have not yet fully incorporated all pending certificates has led to several log failures due to either omitting entries entirely or rebuilding the tree in a way that resulted in a split view.

Logs that break their integrity guarantees not only pose risks to the directly-involved certificates, but also cause extended periods of reduced availability of CT logging for the entire web ecosystem. Replacement of a log is far from instantaneous -- it takes months to ensure that a new log is usable in all enforcing user agents. During that time, the WebPKI must rely on fewer remaining CT logs.

One wrinkle is that the current specification identifies an API, but largely does not dictate other log behavior. I'll provide a PR soon, but broadly, we'd like to propose the introduction of a "Log Behavior" section (mirroring a similar section in RFC6962) that specifies that:

  • A log MUST incorporate the certificate into the Merkle Tree before returning the SCT.
  • To facilitate the usability of this log, add-chain and add-pre-chain APIs SHOULD return SCTs within a specified SLO.

Let maintainers push tags directly

Currently, maintainers need to ask stewards to make new version tags. This is technically enforced by a GitHub Ruleset that applies to all tags and has @C2SP/stewards in the bypass list. https://github.com/C2SP/C2SP/settings/rules/1440328

I propose that instead of building some complex version tagging bot, we create a ruleset for each short-name/v* tag namespace and put the relevant maintainers (and the stewards) in the bypass list.

It will be a bit of toil to create a new ruleset for each spec, but probably easier than maintaining a bot at least until we have dozens of specs.

I'm afraid we'll have to remove the global ruleset though because rulesets can't override each other, so non-namespaced tags will be pushable by all maintainers. That doesn't feel like the end of the world since non-namespaced tags are meaningless, and we'll still prevent deleting, so if anyone creates one we'll notice and be able to have a conversation. (We can always make a GitHub Action to auto-delete them, too.)

/cc @C2SP/stewards

Test vectors for age

At the moment the age specification doesn't contain any end-to-end examples of a valid (or invalid) age file. While it's relatively straightforward to generate some using the reference implementation, there are some components of an age file (e.g. the file key) that cannot easily be inferred from the inputs and outputs of the CLI tool, and the tool presumably doesn't generate any invalid files.

I wanted to ask whether it'd be worthwhile to add some test vectors to the age specification in the form of some basic valid and invalid age files with the inputs (original file and any secrets used) and intermediate values (file key and perhaps payload key) specified for anybody trying to implement a reader for the format. I'd be happy to make a PR to add a few different examples, although I'm not immediately sure whether the examples should be embedded in the format spec or provided separately.

chacha8rand: provide C reference implementation

/cc @C2SP/chacha8rand

There are literally hundreds of projects that ship an ad-hoc "fallback" (CS)PRNG when the OS does not have arc4random, getrandom, or getentropy; most of them are insecure.

I think it would be of great benefit to have a liberally licensed version written in C that could help replace these ad-hoc constructions, and maybe even replace most uses of the rand(), random(), and drand* interfaces.

I am not a cryptographer and I am not comfortable writing one myself.
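For anyone sketching such an implementation, the inner primitive is small. A sketch of the ChaCha quarter-round (only the inner primitive, not the full chacha8rand generator or its interleaved output layout), checked against the test vector in RFC 8439 section 2.1.1:

```python
def rotl32(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarter_round(a: int, b: int, c: int, d: int):
    """The ChaCha quarter-round. ChaCha8 applies 8 rounds (4 double
    rounds) of these over a 16-word state; chacha8rand additionally
    changes how the final additions and output words are handled."""
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d
```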

age: streaming header MAC calculations

TL;DR: if the header MAC were calculated slightly differently, you could write an age implementation with a guaranteed upper-bound memory usage.

The Problem

When first parsing the headers of age file, you don't yet know the file key. Because you don't know the file key, you can't calculate the HMAC key.

This is unfortunate, because you can't meaningfully use a "streaming" API (init/update/final) to perform your HMAC calculations at the same time as you read the headers, because you need the key before you can "init".

As an implementer, you have two options:

  • Once you have the HMAC key, rewind and re-read the header from the input file to feed it into the HMAC. This could introduce TOCTOU issues (although I'm not sure that's really within age's threat model?), and more importantly is not possible at all if the input is a pipe etc.

  • Buffer the whole header in memory as you read it, and feed that buffer into the HMAC later, once you know the HMAC key.

I believe most (all?) implementations opt for the latter, and in practice that works fine. Reasonable headers are always going to fit in memory.

But what if a file has hundreds of recipients, and you need to parse it on a system under high memory pressure? Or perhaps more realistically, what if you wanted to guarantee that your program never uses more than a certain amount of memory even under adversarial inputs?

The Solution?

A simple solution could be to hash-then-HMAC. That is, rather than HMAC(key, header), do HMAC(key, hash(header)). It seems like SHA256 would be the most sensible choice of hash.

This way, you can incrementally feed the hash while you parse the headers (via a streaming init/update/final API), and do the HMAC at the end once you have the key.

This is not without downsides, but IMHO it's a reasonable trade-off, since it allows for an implementation of age to be fully streamed, without ever having to keep potentially-large buffers in memory. Any performance overhead would be utterly negligible in comparison to the cost of doing X25519/scrypt etc., and it's probably faster in practice anyway due to not having to allocate as much memory.

This would of course be a breaking change, but maybe something to consider if there's ever an "age v2"?
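The proposed hash-then-HMAC construction is only a few lines; this sketch uses SHA-256 as the inner hash per the issue's suggestion (it is not part of the current age spec):

```python
import hashlib
import hmac

def header_mac_streaming(hmac_key: bytes, header_chunks) -> bytes:
    """Incrementally hash the header while parsing (before the HMAC key
    is known), then HMAC the fixed-size digest once the file key is
    available. Memory use is bounded regardless of header size."""
    h = hashlib.sha256()
    for chunk in header_chunks:  # fed during parsing, no buffering
        h.update(chunk)
    return hmac.new(hmac_key, h.digest(), hashlib.sha256).digest()
```

The streaming result equals HMAC(key, SHA-256(header)) computed in one shot, so the header never needs to be held in memory or re-read.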

New spec: tlog-tiles

tlog-tiles is a spec describing how to expose generic transparency logs as static assets.

It specifies a set of assets exposed as GET requests: the checkpoint (per c2sp.org/tlog-checkpoint) for the tree head, tiles for the Merkle Tree, and entry bundles for the contents of the log.

It's extracted and made generic from the Go Checksum Database and the Sunlight spec. It's mostly compatible with those, and we'll adapt them to follow this spec.

There is a draft at #73.

It would be maintained by @AlCutter, @mhutchinson, and myself.

/cc @dconnolly @str4d for approval

How to expose a prime order group API for curves

People often want to implement protocols that inherently assume a group of prime order. Then these protocols are adapted to elliptic curves, some of which are not prime order (but have a prime-order subgroup), and most of which have edge cases that need handling (e.g. non-canonical encodings).

We should have a specification that defines an API for prime-order groups (e.g. similar to Ristretto's external API), and guides implementors on how to wrap it safely (and ideally efficiently) around various types of curves, documenting all the edge cases that have been encountered over the years. Then we can use this API in other protocol specifications.

Wycheproof

I noticed that project Wycheproof has been added to this project. Since I left Google I have been continuing to work on the project. Hence I'm wondering if it would be possible to have a discussion about plans to avoid duplication of work as much as possible.

chacha8rand: Specify counter in state compression encoding

This, combined with the 33-byte length specified above, indicates that this is a 1-byte counter. But nowhere is it specified what it is counting.

My guess is that this is meant to be the ChaCha8 counter (which is guaranteed to fit into 1 byte), rather than a count of how many of the 992 bytes per rekey have been output (which would require a 2-byte counter). But the unspecified implication is that when randomness is sampled that only consumes part of a block, the remainder of the block is discarded. That in turn means that the sample output is only going to match for continuous reads (or reads of a multiple of 992 bytes).

Originally posted by @str4d in #41 (comment)

chacha8rand: Fix typo

**Why the subtractions?** ChaCha8 needs to add the key back to the output, or the block function would be invertible. However, adding back the constants and the counter is unnecessary (as they are public), and is done maybe to allow adding the state as a whole when non-interlaced. (The security importance of adding back the key is mentioned in passing in the [XSalsa20 paper](https://cr.yp.to/snuffle/xsalsa-20110204.pdf), at page 5, when defining HSalsa20.) This is only slowing down SIMD implementations, so it is skipped.

Originally posted by @str4d in #41 (comment)

https-bastion: Spec feedback

Hi,

at last I've had a closer look at the bastion spec, with implementor eyes. Some questions and comments:

Backend to bastion: Probably obvious, since it terminates TLS, but this means that the bastion needs to be exposed directly to the internet, in contrast to, e.g., operating behind a reverse proxy. Makes perfect sense for its purpose, but makes it less attractive to integrate the bastion function in a service that otherwise provides a plain web API that can sit behind a reverse proxy. My use case would be the sigsum log server.

Backend auth: "The bastion checks the backend public key against an allowlist or verifies the client certificate chain." I don't get chain validation as an alternative to the public key allowlist. Are you thinking about an allowlist of X.509 names (rather than key hashes), with the names certified by any trusted web CA, or the bastion operator running its own CA signing the client certs of all allowed backends?

On bastion configuration, I wonder if there's some way to avoid maintaining an explicit allowlist. Is there any reasonable way to either be rather liberal in accepting backends, or have clients tell the bastion which backends they would like the bastion to serve?

Client to bastion requests: Would it make sense to add a verb to the request url, like

https://<bastion-host>/<verb>/<key hash>/<path>

For the main operation, verb would be "connect". The other operation I'm thinking about is "add", meaning please add this keyhash to the allowlist. It may also be useful for troubleshooting to have verbs exposing some of the bastion status. The "add" verb may require some authentication of the client, which adds complexity, but perhaps it's useful operational flexibility to be able to trade off maintaining a backend allowlist against maintaining a client allowlist?

Client to bastion request errors: Would it make sense to return different status codes for a backend that is unknown to the bastion (i.e., key hash not on the allowlist), and a backend that is configured but not currently connected? 503 Service unavailable might be reasonable for bastion errors, even though I don't see an obvious choice of when to return 502 and when to return 503.

New spec: BLAKE3

/cc @C2SP/stewards

We would like to submit BLAKE3, by adapting our IETF draft to the C2SP format.

Sales pitch:

  • BLAKE3 is the fastest cryptographic hash function used in practice (faster than SHA-3 and BLAKE2).
  • It's not only a general-purpose hash but also a PRF, MAC, KDF, and XOF.
  • Lots of projects use it: it has 5k+ GitHub stars, and major projects using BLAKE3 in production include LLVM, Bazel, OpenZFS, IPFS, and apparently Tekken 8. We keep an incomplete list on GitHub.

One of our motivations with this submission is to get BLAKE3 added to OpenSSL (which seems to require some formal validation).

Proposed short name: BLAKE3

Proposed maintainers: @veorq @oconnor663 @sneves @zooko

tlog-checkpoint: Add test vectors

It would be useful to have test vectors for checkpoints, with both log signatures and witness cosignatures.

In my implementation, the below appears to be a valid checkpoint for a log with public Ed25519 key (hex) 66e0b858e462a609e66fe71370c816d8846ff103d5499a22a7fec37fdbc424a7 (and private key hex 110000...00). Signed only by the log itself, no witnesses.

sigsum.org/v1/tree/e796172b92befd62d9dc67e41c2f5bc9d3100a3023b20b1ca40288dd1c679e69
10
HA5HAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=

— sigsum.org/v1/tree/e796172b92befd62d9dc67e41c2f5bc9d3100a3023b20b1ca40288dd1c679e69 pOZUn0n8F8olcerUF1FFdv235A/5as/coWrpLrtE7ovMeP5whgwouYExowG/lTznxu6OUGjjt5yJQ6bXtTcf718MqAQ=

Cross validation with other implementations would be useful.
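As an illustration of how such vectors might be consumed, a minimal parser for the note-format checkpoint shape shown above (this is only a sketch with field names of my choosing; the signed-note and tlog-checkpoint specs define the authoritative grammar and validation rules):

```python
import base64

def parse_checkpoint(text: str):
    """Split a signed note into checkpoint fields and signature lines.
    Real parsers must enforce the full note grammar (line endings,
    signature name rules, key hash checks, etc.)."""
    body, _, trailer = text.partition("\n\n")
    origin, size, root_hash, *extension = body.split("\n")
    sigs = []
    for line in trailer.splitlines():
        if line.startswith("\u2014 "):  # "— <name> <base64 signature>"
            _, name, sig = line.split(" ", 2)
            sigs.append((name, base64.b64decode(sig)))
    return origin, int(size), base64.b64decode(root_hash), sigs
```

Running it over the example above should yield tree size 10 and a 32-byte root hash, which is the kind of cross-check the requested vectors would enable.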

/cc @C2SP/tlog-checkpoint

New spec: chacha8rand

chacha8rand is a spec for the new high-performance CSPRNG being built into Go 1.22.

There is a draft at #41.

It would be maintained by me and @rsc (if he wishes).

/cc @dconnolly @str4d for approval

clarify why precertificate is there twice in static-ct-api data entries

@haydentherapper and I were a bit confused about why the precertificate seems to be in a data entry twice (once in TileLeaf, once in TimestampedEntry); could we add a note explaining this?

Our collective understanding is that TileLeaf.pre_certificate is the one that was submitted to the log, while the other is not actually a precertificate but a PreCert with some fields possibly modified (e.g. the Issuer field). The fingerprints are the chain for TileLeaf.pre_certificate, not for the TimestampedEntry's PreCert.

Maybe we could also clarify that the fingerprints and TileLeaf.pre_certificate are the same shape as the extra_data returned by get-entries in RFC 6962, as the motivation for these top-level fields?

I am not sure how to submit suggestions to the spec (can I just make a PR?), and perhaps this is all fine and I'm just new to the ecosystem.

New spec assignment: sunlight

Sunlight is a design for a new Certificate Transparency log server. The design includes a zero-merge-delay implementation (out of scope for this specification), and a new read-path API (what this specification is about).

Sunlight logs expose the regular RFC 6962 write APIs (add-chain, add-pre-chain, and get-roots); however, all the read operations (get-sth, get-sth-consistency, get-proof-by-hash, get-entries, and get-entry-and-proof) are replaced with static assets served over HTTPS.

This specification is intended to document the precise names and formats of the assets made available by a Sunlight instance:

  • the signed tree head in c2sp.org/tlog-checkpoint format
  • the Merkle tree in tiles (research.swtch.com/tlog) format
  • the tree leaves in compressed batches
  • the list of roots and intermediates for chain building in PEM format

A complete design document (covering both implementation and new API) is available at https://filippo.io/a-different-CT-log.

signed-note: Support RSA as a signature type

Sigstore will be using the tlog-cosignature and related specifications. Currently, our key hash

Private deployments have the option to use either ed25519, ecdsa or RSA for signature algorithms to maximize cryptographic agility. While the public deployment uses ECDSA, we don't provide any recommendations to private deployments.

The request is to include identifiers for RSA-PKCS#1-v1.5 and RSA-PSS. Within Sigstore deployments, the former is far more prevalent when RSA is used.

For key size and hash requirements, is it sufficient to include suggestions as SHOULDs? Ideally we don't need a signature type per combination of scheme, key size and hash. Tangential, but for ECDSA, should a preferred curve be included as a SHOULD?

Ref: #54 (comment)

MPC specs for threshold signatures

The majority of open source libraries for MPC threshold signatures (let's start with two-party ECDSA) either do not include the supporting stack, such as serialisation/deserialisation and, most importantly, a networking stack for a deployed protocol, or they adopt completely different conventions for networking and ser/de. That renders comparison, adoption, and deployment of new protocols in real-world scenarios cumbersome.

Having a unifying standard for all MPC threshold signatures, agnostic to the number of involved parties and to protocol guarantees, would very much boost adoption, comparison, and progress, since newly constructed protocols could focus entirely on the cryptographic part while relying on the side stacks needed to deploy them.

That is more of a need for an MPC standard proposal than an issue for an existing standard.

age: Why is the header text-based?

From my perspective, the decision to use a text-based header format is a slightly strange one.

From the perspective of an implementer, extra care must be taken regarding whitespace handling, and ensuring that base64 encodings are canonical, etc.

In general, text-based parsing is more fiddly and has more edge-cases to consider than binary format parsing (IMHO).

From the perspective of an end-user, there is little benefit to being able to read the headers visually, since they're still very opaque. Information about the number and types of recipients could easily be reported by a command-line tool option, if desired.

Is there some benefit or rationale that I'm missing?
