theupdateframework / go-tuf
Go implementation of The Update Framework (TUF)
Home Page: https://theupdateframework.com
License: Apache License 2.0
The client tests need improving to cover the attack scenarios discussed in section 1.5.2 of the TUF spec (there are some good examples in the Python test suite).
There should be no scenario where we save more than one signature for a given key ID.
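As a sketch of that invariant, deduplication could keep only the latest signature seen for each key ID (the Signature type and helper below are illustrative, not go-tuf's actual API):

```go
package main

import "fmt"

// Signature mirrors the shape of a TUF metadata signature
// (hypothetical local type, not go-tuf's data.Signature).
type Signature struct {
	KeyID string
	Sig   string
}

// dedupeByKeyID keeps only the last signature seen for each key ID,
// preserving the first-seen order of the key IDs.
func dedupeByKeyID(sigs []Signature) []Signature {
	index := make(map[string]int)
	out := make([]Signature, 0, len(sigs))
	for _, s := range sigs {
		if i, ok := index[s.KeyID]; ok {
			out[i] = s // replace the earlier signature for this key
			continue
		}
		index[s.KeyID] = len(out)
		out = append(out, s)
	}
	return out
}

func main() {
	sigs := []Signature{
		{KeyID: "abc", Sig: "old"},
		{KeyID: "def", Sig: "x"},
		{KeyID: "abc", Sig: "new"},
	}
	fmt.Println(dedupeByKeyID(sigs)) // one signature per key ID
}
```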
Note: description edited to clarify that this is not a go-tuf metadata generation issue.
While looking at the metadata generated for the sigstore root of trust, I noticed that the expires entries in the metadata encode a non-UTC timezone:
"expires": "2021-12-18T13:28:12.99008-06:00"
(from 1.root.json)
Whereas the specification suggests time should always be in UTC:
Metadata date-time follows the ISO 8601 standard. The expected format of the combined date and time string is "YYYY-MM-DDTHH:MM:SSZ". Time is always in UTC, and the "Z" time zone designator is attached to indicate a zero UTC offset. An example date-time string is "1985-10-21T01:21:00Z".
@asraa pointed out below that the expires field is not set by go-tuf, so the issue here (if any) is that the client does not verify that the date-time in the expires field is in UTC.
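A client-side check along these lines could verify both that the string parses and that it carries a zero UTC offset; a minimal sketch using only the standard library (the helper name is an assumption, not go-tuf's actual validation):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// expiresIsUTC reports whether a metadata expires string parses as
// RFC 3339 and carries a zero UTC offset with an explicit "Z"
// designator, per the spec's "YYYY-MM-DDTHH:MM:SSZ" format.
func expiresIsUTC(expires string) bool {
	t, err := time.Parse(time.RFC3339, expires)
	if err != nil {
		return false
	}
	_, offset := t.Zone()
	return offset == 0 && strings.HasSuffix(expires, "Z")
}

func main() {
	fmt.Println(expiresIsUTC("1985-10-21T01:21:00Z"))            // spec example
	fmt.Println(expiresIsUTC("2021-12-18T13:28:12.99008-06:00")) // the offending value
}
```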
If an update determines a newer version of a file exists, it should retain that information locally even if there is a subsequent error in the update process.
If a new root.json is downloaded during an update (e.g. because the local one is expired), and the new root.json has completely different keys, then other local metadata can potentially no longer be verified in Client.getLocalMeta, leaving the client unable to update.
Instead of returning an error when local data does not verify, it should be invalidated and re-downloaded, possibly logging the failed verification somewhere.
The tuf CLI supports receiving passphrases via environment variables of the form TUF_{{ROLE}}_PASSPHRASE. This should be documented.
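Assuming the variable name is built from the upper-cased role name (the helper below is a sketch, not the CLI's actual lookup code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// passphraseEnvVar builds the documented variable name for a role,
// e.g. "root" -> "TUF_ROOT_PASSPHRASE".
func passphraseEnvVar(role string) string {
	return "TUF_" + strings.ToUpper(role) + "_PASSPHRASE"
}

func main() {
	os.Setenv(passphraseEnvVar("root"), "correct horse battery staple")
	fmt.Println(os.Getenv("TUF_ROOT_PASSPHRASE"))
}
```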
Hey guys,
I got this working on Linux.
Does this support updating macOS apps?
I should be able to change role thresholds using the CLI, for example:
$ tuf set-threshold root 2
Sorry for the redacted stack trace.
I got a panic in what is essentially this line: https://github.com/flynn/go-tuf/blob/890a6cb82044de20e094222d137721d287f46b71/local_store.go#L265
I get that this is probably a case of some wrong input of sorts (e.g. name being empty or nil), but the program should still not crash here.
panic: runtime error: index out of range
goroutine 1 [running]:
github.com/flynn/go-tuf.(*fileSystemStore).Commit.func3(0xc0003763e3, 0xc, 0xc000376301)
	/<redacted>/src/github.com/flynn/go-tuf/local_store.go:274 +0x1a2
github.com/flynn/go-tuf.(*fileSystemStore).Commit.func4(0xc000376380, 0x6f, 0x86d0e0, 0xc00037fd40, 0x0, 0x0, 0x4b732a, 0xc00037fd40)
For example, ErrNotFound should contain information on what exactly was not found.
Here is a proposed workflow for managing a tuf repository from the command line:
https://gist.github.com/lmars/13e272b5bb4195ae24c8
@titanous @heavenlyhash please comment / update if you feel anything is missing / incorrect, I plan to update #3 to support this workflow.
Staging the snapshot and timestamp manifests should optionally compress their dependent manifests, for example:
$ tree staged
staged
└── targets.json
$ tuf snapshot --compression=gzip
$ tree staged
staged
├── snapshot.json
├── targets.json
└── targets.json.gz
$ tuf timestamp --compression=gzip
$ tree staged
staged
├── snapshot.json
├── snapshot.json.gz
├── targets.json
├── targets.json.gz
└── timestamp.json
I should be able to list the IDs of the current signing keys and whether they are present locally, for example:
$ tuf list-keys
ROLE KEYID LOCAL
root 8fb16df0010dfeeb5737245e527e598fdf38cf92aea18a205e19fbf97c5766fd false
snapshot 5a83cb8e2db2dceada45b14c6ab36c44618b26518b7fc9800f737cc54de9ffe2 true
targets 6ee7468ea4027e8c813ed774f2959336e6fb0c7015a0df73fbe3b24aa76a34a2 false
targets a418bc9ec8699df5903779fe2c80058ac0ba2843b956e535aa377a6c99c56499 true
timestamp 6a14c18be9ea8e3b53509e0644e79097f933303186bd649ef199f854c5f0a86d true
If I have all the necessary keys locally, I should be able to do all of the following with one command:
This could be supported with a commit flag, e.g. tuf commit --add
The tuf command should default to saving keys in encrypted form using a passphrase. The passphrase should be passed interactively by default, with an option to pass it via an environment variable (as a very weak attempt to prevent it from leaking to observers of ps). An option to disable encryption (like --insecure-plaintext-key or similar) should be provided as well.
The encryption steps are as follows:
- generate a random salt using crypto/rand
- derive a symmetric key from the passphrase using scrypt
- generate a random nonce using crypto/rand
- encrypt the key data using nacl/secretbox
The N parameter was chosen to be ~100ms of work using the default implementation on the 2.3GHz Core i7 Haswell processor in my late-2013 Apple Retina Macbook Pro (it takes 113ms).
The JSON should probably look something like this:
{
"_type": "encrypted-key",
"kdf": {
"name": "scrypt",
"params": {
"N": 32768,
"r": 8,
"p": 1
},
"salt": "<base64 encoded bytes>"
},
"algorithm": {
"name": "nacl/secretbox",
"nonce": "<base64 encoded bytes>"
},
"ciphertext": "<base64 encoded bytes>"
}
I was running through the setup tutorial, and after running "tuf commit" each file's hash was prepended to the file name, which doesn't match the tutorial. If that is the correct behavior, it would be helpful to update the tutorial so that people don't dive into the code thinking there might be a bug to fix.
I would like to be able to make historic versions of the same file available for download to people who want it.
As a concrete example, users can download the latest version (say v3) of my-binary by running client.Download("my-binary"), but I would also like them to be able to download v2, perhaps like client.DownloadVersion("my-binary", "v2").
I could handle this by adding version strings to files (e.g. have my-binary (the latest), my-binary.v2, my-binary.v3, etc.) but this will lead to an ever-growing targets manifest.
I could also point the client at a historic snapshot which contains a targets manifest with the correct versions in it, but this would be considered a downgrade attack and be rejected.
Instead I would like to maintain a collection of target manifests (one per version of my software), and requesting a specific version of a file would look in the relevant targets manifest (identified using custom metadata).
This isn't explicitly discussed in the tuf spec, but what is discussed is the ability for targets to delegate trust for a subset of target files to other targets. I plan to use this feature to achieve my goal, but the delegated roles would likely have the same keys as the top-level target, and would also have overlapping target responsibility (the end user would be responsible for determining which delegated manifest to download a given file from based on custom metadata).
On the repo side, the following changes would be needed:
- a tuf role command which will manage delegated roles (i.e. their keys and custom metadata)
- a --role xxx flag to tuf add to support adding files to specific delegated roles
- tuf sign to support signing delegated roles
On the client side:
- a way to find roles (e.g. role := client.FindRole(func(Role) bool) to find a single role, or roles := client.FindRoles(func(Role) bool, n int) to find multiple roles)
- role.Download(name string) to download a file from a given delegated role
We need to decide whether our unintended use of delegated roles is a good idea, and whether it will be compatible with adding things like hashed bins in the future.
/cc @titanous
If I run tuf clean before the first tuf commit, then root.json will be deleted and there is no way to re-create it.
The CLI should output informative messages, for example:
$ tuf gen-key root
Created root key with ID 8f782b389ce30f9296d5b850cfd21a2eb55f134ba6af8397dbed7b0fba259228
$ tuf sign root.json
Added 1 signature to root.json
The CLI should default to using consistent snapshots.
At the time of submitting #143, which updates root, there was another PR describing the fast-forward recovery approach.
The current implementation of the root update is based on the state of that PR at the time (https://github.com/mnm678/specification/tree/e50151d9df632299ddea364c4f44fe8ca9c10184).
Specifically, the part of updateRoots that removes some metadata files based on top-level key rotation might need to be updated upon any change in the fast-forward recovery PR.
I came across this confusion when trying to issue an update on the targets key, and I'm not sure if it's intended. The following test fails in repo_test.go. What it does is:
- revoke the targets key in root.json; indeed, signatures in root.json are updated
- re-sign targets.json (with the new key), and snapshot and timestamp
However, what we find after committing is that the old signature was NOT removed from targets.json. The old signature still exists, because it didn't get updated (because it's not in the signing key database).
My question is: (1) should RevokeKey make sure to clear signatures associated with the key in the role file, or (2) should this be handled elsewhere? I am leaning to (1) after initial thought; dealing with this as early as possible makes sense to me.
func (rs *RepoSuite) TestRevokeTargets(c *C) {
	files := map[string][]byte{"foo.txt": []byte("foo")}
	local := MemoryStore(make(map[string]json.RawMessage), files)
	r, err := NewRepo(local)
	c.Assert(err, IsNil)

	// don't use consistent snapshots to make the checks simpler
	c.Assert(r.Init(false), IsNil)

	genKey(c, r, "root")
	targetIds := genKey(c, r, "targets")
	genKey(c, r, "snapshot")
	genKey(c, r, "timestamp")

	c.Assert(r.AddTarget("foo.txt", nil), IsNil)
	c.Assert(r.Snapshot(CompressionTypeNone), IsNil)
	c.Assert(r.Timestamp(), IsNil)
	c.Assert(r.Commit(), IsNil)

	// Update the targets key
	c.Assert(r.RevokeKey("targets", targetIds[0]), IsNil)
	newTargetIds := genKey(c, r, "targets")

	// Re-sign, snapshot and timestamp
	c.Assert(r.Sign("targets.json"), IsNil)
	c.Assert(r.Snapshot(CompressionTypeNone), IsNil)
	c.Assert(r.Timestamp(), IsNil)
	c.Assert(r.Commit(), IsNil)

	// Signatures in targets.json should only be from the new key.
	checkSigIDs := func(role string, keyIDs ...string) {
		s, err := r.SignedMeta(role)
		c.Assert(err, IsNil)
		c.Assert(s.Signatures, HasLen, len(keyIDs))
		for i, id := range keyIDs {
			c.Assert(s.Signatures[i].KeyID, Equals, id)
		}
	}
	// THIS LINE FAILS
	checkSigIDs("targets.json", newTargetIds...)
}
The ecdsa-sha2-nistp256 verifier, which is implemented here: https://github.com/theupdateframework/go-tuf/blob/master/verify/verifiers.go#L49, assumes the uncompressed form of public key specified in section 4.3.6 of ANSI X9.62 (because it uses the elliptic.Unmarshal function).
The specification, however, says that this scheme should use a format where "PUBLIC is in PEM format and a string." (theupdateframework/python-tuf#498)
So it should use x509.MarshalPKIXPublicKey + pem.EncodeToMemory.
Would it be fine to fix it now? Is such a fix constrained anyhow?
We should have support for the mirrors metadata.
Hi,
this is how I check for new versions of Flynn:
Initially:
tuf-client init https://dl.flynn.io/tuf <<< '[{"keytype":"ed25519","keyval":{"public":"6cfda23aa48f530aebd5b9c01030d06d02f25876b5508d681675270027af4731"}}]'
and then for the checks for nightly and stable:
tuf-client get https://dl.flynn.io/tuf /channels/nightly
tuf-client get https://dl.flynn.io/tuf /channels/stable
Every time I execute the checks, the local tuf.db grows considerably.
Call nr:
etc..
I should be able to run tuf regenerate to regenerate targets.json based on the target files in the committed targets directory.
go-tuf/client [master●] » go test -v
=== RUN Test
----------------------------------------------------------------------
FAIL: <autogenerated>:1: InteropSuite.TestGoClientPythonGenerated
interop_test.go:54:
c.Assert(client.Init([]*data.Key{key}, 1), IsNil)
... value client.ErrDecodeFailed = client.ErrDecodeFailed{File:"root.json", Err:(*errors.errorString)(0xc4200964d0)} ("tuf: failed to decode root.json: tuf: valid signatures did not meet threshold")
OOPS: 31 passed, 1 FAILED
--- FAIL: Test (6.58s)
FAIL
exit status 1
FAIL github.com/flynn/go-tuf/client 6.589s
» go version
go version go1.10 linux/amd64
» uname -a
Linux primary.aagat.com 4.15.3-2-ARCH #1 SMP PREEMPT Thu Feb 15 00:13:49 UTC 2018 x86_64 GNU/Linux
I had to make a few changes in order to generate the repo (breaking changes upstream?).
client/testdata/generate/Dockerfile
FROM ubuntu:trusty
RUN apt-get update
RUN apt-get install -y python python-dev python-pip libffi-dev tree libssl-dev
# Use the develop branch of tuf for the following fix:
# https://github.com/theupdateframework/tuf/commit/38005fe
RUN apt-get install -y git
RUN pip install --upgrade pip
RUN pip install --upgrade setuptools
RUN pip install --no-use-wheel git+https://github.com/theupdateframework/tuf.git@develop && pip install tuf[tools]
ADD generate.py generate.sh /
CMD /generate.sh
Modified file: client/testdata/generate/generate.py
#
# A script to generate TUF repository files.
#
# A modification of generate.py from the Python implementation:
# https://github.com/theupdateframework/tuf/blob/v0.9.9/tests/repository_data/generate.py

import shutil
import datetime
import optparse
import stat
from tuf.repository_tool import *
import os

parser = optparse.OptionParser()
parser.add_option("-c", "--consistent-snapshot", action='store_true',
                  dest="consistent_snapshot",
                  help="Generate consistent snapshot", default=False)
(options, args) = parser.parse_args()

repository = create_new_repository('repository')

root_key_file = 'keystore/root_key'
targets_key_file = 'keystore/targets_key'
snapshot_key_file = 'keystore/snapshot_key'
timestamp_key_file = 'keystore/timestamp_key'

generate_and_write_ed25519_keypair(root_key_file, password='password')
generate_and_write_ed25519_keypair(targets_key_file, password='password')
generate_and_write_ed25519_keypair(snapshot_key_file, password='password')
generate_and_write_ed25519_keypair(timestamp_key_file, password='password')

root_public = import_ed25519_publickey_from_file(root_key_file+'.pub')
targets_public = import_ed25519_publickey_from_file(targets_key_file+'.pub')
snapshot_public = import_ed25519_publickey_from_file(snapshot_key_file+'.pub')
timestamp_public = import_ed25519_publickey_from_file(timestamp_key_file+'.pub')

root_private = import_ed25519_privatekey_from_file(root_key_file, 'password')
targets_private = import_ed25519_privatekey_from_file(targets_key_file, 'password')
snapshot_private = import_ed25519_privatekey_from_file(snapshot_key_file, 'password')
timestamp_private = import_ed25519_privatekey_from_file(timestamp_key_file, 'password')

repository.root.add_verification_key(root_public)
repository.targets.add_verification_key(targets_public)
repository.snapshot.add_verification_key(snapshot_public)
repository.timestamp.add_verification_key(timestamp_public)

repository.root.load_signing_key(root_private)
repository.targets.load_signing_key(targets_private)
repository.snapshot.load_signing_key(snapshot_private)
repository.timestamp.load_signing_key(timestamp_private)

target1_filepath = 'repository/targets/file1.txt'
if not os.path.exists('repository/targets/'):
    os.makedirs('repository/targets/')
target2_filepath = 'repository/targets/dir/file2.txt'
if not os.path.exists('repository/targets/dir/'):
    os.makedirs('repository/targets/dir/')

with open(target1_filepath, 'wt') as file_object:
    file_object.write('file1.txt')
with open(target2_filepath, 'wt') as file_object:
    file_object.write('file2.txt')

octal_file_permissions = oct(os.stat(target1_filepath).st_mode)[4:]
file_permissions = {'file_permissions': octal_file_permissions}
repository.targets.add_target(target1_filepath, file_permissions)
repository.targets.add_target(target2_filepath)

repository.root.expiration = datetime.datetime(2030, 1, 1, 0, 0)
repository.targets.expiration = datetime.datetime(2030, 1, 1, 0, 0)
repository.snapshot.expiration = datetime.datetime(2030, 1, 1, 0, 0)
repository.timestamp.expiration = datetime.datetime(2030, 1, 1, 0, 0)

repository.targets.compressions = ['gz']

if options.consistent_snapshot:
    repository.writeall(consistent_snapshot=True)
else:
    repository.writeall()

shutil.move('repository/metadata.staged', 'repository/metadata')
Run make to generate the repo:
client/testdata [master●] » make
docker build -t tuf-gen ./generate
Sending build context to Docker daemon 7.68kB
Step 1/9 : FROM ubuntu:trusty
---> dc4491992653
Step 2/9 : RUN apt-get update
---> Using cache
---> 4448229afdc9
Step 3/9 : RUN apt-get install -y python python-dev python-pip libffi-dev tree libssl-dev
---> Using cache
---> e76d647ae1d1
Step 4/9 : RUN apt-get install -y git
---> Using cache
---> 388e3c4d12f6
Step 5/9 : RUN pip install --upgrade pip
---> Using cache
---> bbc9ef4a7f4e
Step 6/9 : RUN pip install --upgrade setuptools
---> Using cache
---> 9b60f68e0734
Step 7/9 : RUN pip install --no-use-wheel git+https://github.com/theupdateframework/tuf.git@develop && pip install tuf[tools]
---> Using cache
---> 9ab38c82fee8
Step 8/9 : ADD generate.py generate.sh /
---> Using cache
---> 037b9501c3fd
Step 9/9 : CMD /generate.sh
---> Using cache
---> 0341e646ab74
Successfully built 0341e646ab74
Successfully tagged tuf-gen:latest
docker run tuf-gen | tar x
Creating '/tmp/tmp.CmokAtVEyB/with-consistent-snapshot/repository'
Creating u'/tmp/tmp.CmokAtVEyB/with-consistent-snapshot/repository/metadata.staged'
Creating u'/tmp/tmp.CmokAtVEyB/with-consistent-snapshot/repository/targets'
Creating '/tmp/tmp.CmokAtVEyB/without-consistent-snapshot/repository'
Creating u'/tmp/tmp.CmokAtVEyB/without-consistent-snapshot/repository/metadata.staged'
Creating u'/tmp/tmp.CmokAtVEyB/without-consistent-snapshot/repository/targets'
Files generated:
.
|-- with-consistent-snapshot
| |-- keystore
| | |-- root_key
| | |-- root_key.pub
| | |-- snapshot_key
| | |-- snapshot_key.pub
| | |-- targets_key
| | |-- targets_key.pub
| | |-- timestamp_key
| | `-- timestamp_key.pub
| |-- repository
| | |-- metadata
| | | |-- 1.root.json
| | | |-- 1.snapshot.json
| | | |-- 1.targets.json
| | | |-- 1.timestamp.json
| | | |-- root.json
| | | |-- snapshot.json
| | | |-- targets.json
| | | `-- timestamp.json
| | `-- targets
| | |-- 055dc805570eecebad4270774054ee4375ef9a7248d981cfa8155dc884817df31e8497684dd26addd018a30565c3ccf87eeb70445f2e76587af84ed6ce1e0302.file1.txt
| | |-- 55ae75d991c770d8f3ef07cbfde124ffce9c420da5db6203afab700b27e10cf9.file1.txt
| | |-- dir
| | | |-- 04e2f59431a9d219321baf7d21b8cc797d7615dc3e9515c782c49d2075658701.file2.txt
| | | |-- 2b85daf030ebc94d302822da4fd50216dc56f90c9bb60a95b272aa5b11fe81cd9b192b1a860896d6a8241d1a42cc97b6015d42100c9b46432a32db4b13a11c58.file2.txt
| | | `-- file2.txt
| | `-- file1.txt
| `-- tuf.log
`-- without-consistent-snapshot
|-- keystore
| |-- root_key
| |-- root_key.pub
| |-- snapshot_key
| |-- snapshot_key.pub
| |-- targets_key
| |-- targets_key.pub
| |-- timestamp_key
| `-- timestamp_key.pub
|-- repository
| |-- metadata
| | |-- 1.root.json
| | |-- root.json
| | |-- snapshot.json
| | |-- targets.json
| | `-- timestamp.json
| `-- targets
| |-- dir
| | `-- file2.txt
| `-- file1.txt
`-- tuf.log
12 directories, 39 files
Is there a plan to release this? Also, what is the planned roadmap?
All JSON files should be formatted with MarshalIndent and have a trailing newline at the end.
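A minimal sketch of such a formatter (the helper name is an assumption, not go-tuf's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// writeJSON formats v with MarshalIndent and appends the trailing
// newline the issue asks for.
func writeJSON(v interface{}) ([]byte, error) {
	b, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		return nil, err
	}
	return append(b, '\n'), nil
}

func main() {
	b, _ := writeJSON(map[string]int{"version": 1})
	fmt.Printf("%q\n", b) // indented JSON ending in "\n"
}
```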
I should not be able to commit changes to a repo which downgrades the version of any metadata.
root.json metadata is currently populated with the keys available in the keys/<role>.json files. For example, if one wishes to add a root key to root.json, the tuf gen-key root command is issued. The public key of the newly generated key is specified in root.json by gen-key. However, there isn't a command to remove a key from a specific role. I suppose one could generate a new root.json key file with only the keys desired; however, this likely requires manually editing files.
In addition, the tools should also support the ability to revoke keys for specific roles (i.e., not list their public key(s) in metadata), yet still sign metadata with the revoked keys to allow clients to successfully update. The specification goes into more detail about this aspect of key revocation and management:
"To replace a compromised root key or any other top-level role key, the root role signs a new root.json file that lists the updated trusted keys for the role. When replacing root keys, an application will sign the new root.json file with both the new and old root keys until all clients are known to have obtained the new root.json file (a safe assumption is that this will be a very long time or never). There is no risk posed by continuing to sign the root.json file with revoked keys as once clients have updated they no longer trust the revoked key. This is only to ensure outdated clients remain able to update."
I should be able to change the passphrase of a keys file, for example:
$ tuf change-passphrase root
Enter current root keys passphrase:
Enter new root keys passphrase:
Repeat new root keys passphrase:
Slow retrieval attacks should be prevented by requiring that remote data is fetched in a timely manner.
The Python implementation guards against this attack by reading remote data in small chunks, and signalling an error if reading any given chunk takes longer than a specified time (see here).
Clients should be able to use the same database file with multiple repositories (e.g Flynn users pulling images from arbitrary repositories).
This can be achieved by updating FileLocalStore to take a namespace argument which namespaces the boltdb lookups; the remote repository URL can be used as the namespace.
I should be able to see the status of the repository by running tuf status, e.g.:
$ tuf status
MANIFEST STATUS
root.json committed
targets.json staged
snapshot.json missing
timestamp.json missing
Hi, I wanted to implement a custom LocalStore for a store on an OCI registry, but I ran into a problem because the parameter type is private (line 39 in aee6270).
I could potentially use the in-memory store and do some before/after conversion, but I just wanted to see if it would be acceptable to make this public.
How does this project compare with https://github.com/theupdateframework/notary? This could be useful to add to the readme.
It seems to me that this project essentially implements the "signer" portions of Notary, without any HTTP services. Is that correct?
When might a user decide to run go-tuf instead of Notary?
We should have support for target role delegation.
We should have test fixtures generated by the Python implementation to ensure compatibility with our client, as well as a test harness that uses the Python client to test interoperability with repos generated by our code.
Downloaded metadata should be rejected if it is expired. Metadata from local storage, however, should not be rejected due to expiry (only its signatures are checked for consistency).
There should be a flag when generating manifests that overrides the default expires value.
Hi guys,
I am wondering how you generate a keyID for a given key (curiosity)?
I found this but I did not get it: code
role.KeyIDs = append(role.KeyIDs, pk.ID())
Thanks :)
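For illustration: a TUF key ID is the hex SHA-256 digest of the canonical JSON serialization of the public key object. The sketch below approximates this with encoding/json, which sorts map keys and emits no extra whitespace; go-tuf's exact canonicalization and the full set of key fields may differ, so treat this as an approximation of pk.ID(), not its implementation.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// keyID approximates a TUF key ID: hex(SHA-256(canonical JSON of the
// public key object)). Field set and canonicalization are assumptions.
func keyID(keytype, public string) string {
	canonical, _ := json.Marshal(map[string]interface{}{
		"keytype": keytype,
		"keyval":  map[string]string{"public": public},
	})
	digest := sha256.Sum256(canonical)
	return hex.EncodeToString(digest[:])
}

func main() {
	fmt.Println(keyID("ed25519", "6cfda23aa48f530aebd5b9c01030d06d02f25876b5508d681675270027af4731"))
}
```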
The "create signed root manifest" example (which is the first thing repository maintainers need to do) needs to be clearer on why files need copying from a "root" box to a "repo" box, and that using multiple machines is optional (at the expense of reduced security).
From section 6.1 of the TUF spec:
To replace a compromised root key or any other top-level role key, the root
role signs a new root.json file that lists the updated trusted keys for the
role. When replacing root keys, an application will sign the new root.json
file with both the new and old root keys until all clients are known to have
obtained the new root.json file (a safe assumption is that this will be a
very long time or never). There is no risk posed by continuing to sign the
root.json file with revoked keys as once clients have updated they no longer
trust the revoked key. This is only to ensure outdated clients remain able
to update.
When adding a file that came from another system I should be able to provide a flag that verifies the hash of the file against the hash that was calculated. This saves me from performing an additional manual checksum verification before adding the file. It might also make sense to allow moving files from elsewhere on the filesystem as part of the same command?
The client should support resuming unfinished downloads.
We should have support for hashed bins.
Some parts of the codebase are based on the assumption that the Remote Store interface implementation for S3 returns the correct error code (i.e., ErrMissingRemoteMetadata). This is of importance because S3 uses 403 as a response code instead of the commonly used 404.
This issue is to check whether that is the case, and to address it if it is not.