cacache's Introduction

cacache

cacache is a Node.js library for managing local key and content address caches. It's really fast, really good at concurrency, and it will never give you corrupted data, even if cache files get corrupted or manipulated.

On systems that support user and group settings on files, cacache will match the uid and gid values to the folder where the cache lives, even when running as root.

It was written to be used as npm's local cache, but can just as easily be used on its own.

Install

$ npm install --save cacache

Example

const cacache = require('cacache')
const fs = require('fs')

const cachePath = '/tmp/my-toy-cache'
const key = 'my-unique-key-1234'

// Cache it! Use `cachePath` as the root of the content cache
cacache.put(cachePath, key, '10293801983029384').then(integrity => {
  console.log(`Saved content to ${cachePath}.`)
})

const destination = '/tmp/mytar.tgz'

// Copy the contents out of the cache and into their destination!
// But this time, use stream instead!
cacache.get.stream(
  cachePath, key
).pipe(
  fs.createWriteStream(destination)
).on('finish', () => {
  console.log('done extracting!')
})

// The same thing, but skip the key index.
// (`integrityHash` stands for the integrity value returned by `cacache.put` above.)
cacache.get.byDigest(cachePath, integrityHash).then(data => {
  fs.writeFile(destination, data, err => {
    console.log('tarball data fetched based on its sha512sum and written out!')
  })
})

Features

  • Extraction by key or by content address (shasum, etc)
  • Subresource Integrity web standard support
  • Multi-hash support - safely host sha1, sha512, etc, in a single cache
  • Automatic content deduplication
  • Fault tolerance (immune to corruption, partial writes, process races, etc)
  • Consistency guarantees on read and write (full data verification)
  • Lockless, high-concurrency cache access
  • Streaming support
  • Promise support
  • Fast -- sub-millisecond reads and writes including verification
  • Arbitrary metadata storage
  • Garbage collection and additional offline verification
  • Thorough test coverage
  • There's probably a bloom filter in there somewhere. Those are cool, right? 🤔

Contributing

The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.

All participants and maintainers in this project are expected to follow the Code of Conduct, and just generally be excellent to each other.

Please refer to the Changelog for project history details, too.

Happy hacking!

API

> cacache.ls(cache) -> Promise<Object>

Lists info for all entries currently in the cache as a single large object. Each entry in the object will be keyed by the unique index key, with corresponding get.info objects as the values.

Example
cacache.ls(cachePath).then(console.log)
// Output
{
  'my-thing': {
    key: 'my-thing',
    integrity: 'sha512-BaSe64/EnCoDED+HAsh==',
    path: '.testcache/content/deadbeef', // joined with `cachePath`
    time: 12345698490,
    size: 4023948,
    metadata: {
      name: 'blah',
      version: '1.2.3',
      description: 'this was once a package but now it is my-thing'
    }
  },
  'other-thing': {
    key: 'other-thing',
    integrity: 'sha1-ANothER+hasH=',
    path: '.testcache/content/bada55',
    time: 11992309289,
    size: 111112
  }
}

> cacache.ls.stream(cache) -> Readable

Lists info for all entries currently in the cache, streamed out one entry at a time.

This works just like ls, except get.info entries are returned as 'data' events on the returned stream.

Example
cacache.ls.stream(cachePath).on('data', console.log)
// Output
{
  key: 'my-thing',
  integrity: 'sha512-BaSe64HaSh',
  path: '.testcache/content/deadbeef', // joined with `cachePath`
  time: 12345698490,
  size: 13423,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}

{
  key: 'other-thing',
  integrity: 'whirlpool-WoWSoMuchSupport',
  path: '.testcache/content/bada55',
  time: 11992309289,
  size: 498023984029
}

{
  ...
}

> cacache.get(cache, key, [opts]) -> Promise({data, metadata, integrity})

Returns an object with the cached data, digest, and metadata identified by key. The data property of this object will be a Buffer instance that presumably holds some data that means something to you. I'm sure you know what to do with it! cacache just won't care.

integrity is a Subresource Integrity string. That is, a string that can be used to verify data, which looks like <hash-algorithm>-<base64-integrity-hash>.

If there is no content identified by key, or if the locally-stored data does not pass the validity checksum, the promise will be rejected.

A sub-function, get.byDigest may be used for identical behavior, except lookup will happen by integrity hash, bypassing the index entirely. This version of the function only returns data itself, without any wrapper.

See: options

Note

This function loads the entire cache entry into memory before returning it. If you're dealing with Very Large data, consider using get.stream instead.

Example
// Look up by key
cacache.get(cachePath, 'my-thing').then(console.log)
// Output:
{
  metadata: {
    thingName: 'my'
  },
  integrity: 'sha512-BaSe64HaSh',
  data: Buffer#<deadbeef>,
  size: 9320
}

// Look up by digest
cacache.get.byDigest(cachePath, 'sha512-BaSe64HaSh').then(console.log)
// Output:
Buffer#<deadbeef>

> cacache.get.stream(cache, key, [opts]) -> Readable

Returns a Readable Stream of the cached data identified by key.

If there is no content identified by key, or if the locally-stored data does not pass the validity checksum, an error will be emitted.

metadata and integrity events will be emitted before the stream closes, if you need to collect that extra data about the cached entry.

A sub-function, get.stream.byDigest may be used for identical behavior, except lookup will happen by integrity hash, bypassing the index entirely. This version does not emit the metadata and integrity events at all.

See: options

Example
// Look up by key
cacache.get.stream(
  cachePath, 'my-thing'
).on('metadata', metadata => {
  console.log('metadata:', metadata)
}).on('integrity', integrity => {
  console.log('integrity:', integrity)
}).pipe(
  fs.createWriteStream('./x.tgz')
)
// Outputs:
metadata: { ... }
integrity: 'sha512-SoMeDIGest+64=='

// Look up by digest
cacache.get.stream.byDigest(
  cachePath, 'sha512-SoMeDIGest+64=='
).pipe(
  fs.createWriteStream('./x.tgz')
)

> cacache.get.info(cache, key) -> Promise

Looks up key in the cache index, returning information about the entry if one exists.

Fields
  • key - Key the entry was looked up under. Matches the key argument.
  • integrity - Subresource Integrity hash for the content this entry refers to.
  • path - Filesystem path where content is stored, joined with cache argument.
  • time - Timestamp the entry was first added on.
  • size - Size of the stored content, in bytes.
  • metadata - User-assigned metadata associated with the entry/content.
Example
cacache.get.info(cachePath, 'my-thing').then(console.log)

// Output
{
  key: 'my-thing',
  integrity: 'sha256-MUSTVERIFY+ALL/THINGS==',
  path: '.testcache/content/deadbeef',
  time: 12345698490,
  size: 849234,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}

> cacache.get.hasContent(cache, integrity) -> Promise

Looks up a Subresource Integrity hash in the cache. If content exists for this integrity, it will return an object with the specific single integrity hash that was found (under the sri key) and the size of the found content (under size). If no content exists for this integrity, it will return false.

Example
cacache.get.hasContent(cachePath, 'sha256-MUSTVERIFY+ALL/THINGS==').then(console.log)

// Output
{
  sri: {
    source: 'sha256-MUSTVERIFY+ALL/THINGS==',
    algorithm: 'sha256',
    digest: 'MUSTVERIFY+ALL/THINGS==',
    options: []
  },
  size: 9001
}

cacache.get.hasContent(cachePath, 'sha512-NOT+IN/CACHE==').then(console.log)

// Output
false
Options
opts.integrity

If present, the pre-calculated digest for the inserted content. If this option is provided and does not match the post-insertion digest, insertion will fail with an EINTEGRITY error.

opts.memoize

Default: null

If explicitly truthy, cacache will read from memory and memoize data on bulk read. If false, cacache will read from disk. Reader functions read from the in-memory cache by default.

opts.size

If provided, the data stream will be verified to check that enough data was passed through. If there's more or less data than expected, insertion will fail with an EBADSIZE error.

> cacache.put(cache, key, data, [opts]) -> Promise

Inserts data passed to it into the cache. The returned Promise resolves with a digest (generated according to opts.algorithms) after the cache entry has been successfully written.

See: options

Example
fetch(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(res => res.arrayBuffer()).then(data => {
  // cacache.put expects a string or Buffer, so convert the response body first
  return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', Buffer.from(data))
}).then(integrity => {
  console.log('integrity hash is', integrity)
})

> cacache.put.stream(cache, key, [opts]) -> Writable

Returns a Writable Stream that inserts data written to it into the cache. Emits an integrity event with the digest of written contents when it succeeds.

See: options

Example
request.get(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).pipe(
  cacache.put.stream(
    cachePath, 'registry.npmjs.org|cacache@1.0.0'
  ).on('integrity', d => console.log(`integrity digest is ${d}`))
)
Options
opts.metadata

Arbitrary metadata to be attached to the inserted key.

opts.size

If provided, the data stream will be verified to check that enough data was passed through. If there's more or less data than expected, insertion will fail with an EBADSIZE error.

opts.integrity

If present, the pre-calculated digest for the inserted content. If this option is provided and does not match the post-insertion digest, insertion will fail with an EINTEGRITY error.

algorithms has no effect if this option is present.

opts.integrityEmitter

Streaming only. If present, uses the provided event emitter as a source of truth for both integrity and size. This allows use cases where integrity is already being calculated outside of cacache to reuse that data instead of calculating it a second time.

The emitter must emit both the 'integrity' and 'size' events.

NOTE: If this option is provided, you must verify that you receive the correct integrity value yourself and emit an 'error' event if there is a mismatch. ssri Integrity Streams do this for you when given an expected integrity.
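
For instance, a minimal sketch of wiring an ssri integrity stream in as the emitter might look like this (cachePath, key, source, and expectedIntegrity are placeholder names, not part of cacache's API):

const ssri = require('ssri')

// ssri verifies the data against `expectedIntegrity`, errors on mismatch, and
// emits 'integrity' and 'size', so cacache can reuse them instead of re-hashing.
const hasher = ssri.integrityStream({ integrity: expectedIntegrity })

source.pipe(hasher).pipe(cacache.put.stream(cachePath, key, {
  integrityEmitter: hasher
}))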

opts.algorithms

Default: ['sha512']

Hashing algorithms to use when calculating the subresource integrity digest for inserted data. Can use any algorithm listed in crypto.getHashes() or 'omakase'/'お任せします' to pick a random hash algorithm on each insertion. You may also use any anagram of 'modnar' to use this feature.

Currently only supports one algorithm at a time (i.e., an array length of exactly 1). Has no effect if opts.integrity is present.

opts.memoize

Default: null

If provided, cacache will memoize the given cache insertion in memory, bypassing any filesystem checks for that key or digest in future cache fetches. Nothing will be written to the in-memory cache unless this option is explicitly truthy.

If opts.memoize is an object or a Map-like (that is, an object with get and set methods), it will be written to instead of the global memoization cache.

Reading from disk can be forced by explicitly passing memoize: false to the reader functions, but their default will be to read from memory.
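
For example, a minimal sketch of using your own Map as the memoization target instead of the global in-memory cache (cachePath, key, and data are placeholders):

const myMemo = new Map()

cacache.put(cachePath, key, data, { memoize: myMemo }).then(() => {
  // later reads that pass the same Map can be served from it
  return cacache.get(cachePath, key, { memoize: myMemo })
}).then(({ data }) => console.log('got', data.length, 'bytes'))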

opts.tmpPrefix

Default: null

Prefix to append on the temporary directory name inside the cache's tmp dir.

> cacache.rm.all(cache) -> Promise

Clears the entire cache. Mainly by blowing away the cache directory itself.

Example
cacache.rm.all(cachePath).then(() => {
  console.log('THE APOCALYPSE IS UPON US 😱')
})

> cacache.rm.entry(cache, key, [opts]) -> Promise

Alias: cacache.rm

Removes the index entry for key. Content will still be accessible if requested directly by content address (get.stream.byDigest).

By default, this appends a new entry to the index with an integrity of null. If opts.removeFully is set to true then the index file itself will be physically deleted rather than appending a null.

To remove the content itself (which might still be used by other entries), use rm.content. Or, to safely vacuum any unused content, use verify.

Example
cacache.rm.entry(cachePath, 'my-thing').then(() => {
  console.log('I did not like it anyway')
})
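
If you want the index bucket physically deleted rather than tombstoned, a short sketch using the removeFully option described above:

cacache.rm.entry(cachePath, 'my-thing', { removeFully: true }).then(() => {
  console.log('index entry deleted outright, no null tombstone left behind')
})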

> cacache.rm.content(cache, integrity) -> Promise

Removes the content identified by integrity. Any index entries referring to it will not be usable again until the content is re-added to the cache with an identical digest.

Example
cacache.rm.content(cachePath, 'sha512-SoMeDIGest/IN+BaSE64==').then(() => {
  console.log('data for my-thing is gone!')
})

> cacache.index.compact(cache, key, matchFn, [opts]) -> Promise

Uses matchFn, which must be a synchronous function that accepts two entries and returns a boolean indicating whether or not the two entries match, to deduplicate all entries in the cache for the given key.

If opts.validateEntry is provided, it will be called as a function with the only parameter being a single index entry. The function must return a Boolean: if it returns true, the entry is considered valid and will be kept in the index; if it returns false, the entry will be removed from the index.

If opts.validateEntry is not provided, however, every entry in the index will be deduplicated and kept until the first null integrity is reached, removing all entries that were written before the null.

The deduplicated list of entries is both written to the index, replacing the existing content, and returned in the Promise.
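
A brief sketch of what a call might look like, treating entries with the same integrity as duplicates and keeping only entries that still have metadata (both predicates are purely illustrative):

cacache.index.compact(cachePath, 'my-thing', (a, b) => a.integrity === b.integrity, {
  validateEntry: entry => entry.metadata != null
}).then(entries => {
  console.log(`'my-thing' now has ${entries.length} deduplicated entries`)
})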

> cacache.index.insert(cache, key, integrity, opts) -> Promise

Writes an index entry to the cache for the given key without writing content.

It is assumed that if you are using this method, you have already stored the content some other way and you only wish to add a new index entry pointing at that content. The metadata and size properties are read from opts and used as part of the index entry.

Returns a Promise resolving to the newly added entry.
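
A hedged sketch, assuming the content behind the given integrity is already present in the cache:

cacache.index.insert(cachePath, 'my-alias-key', 'sha512-BaSe64HaSh', {
  metadata: { alias: true },
  size: 4023948
}).then(entry => {
  console.log('added index entry for', entry.key)
})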

> cacache.clearMemoized()

Completely resets the in-memory entry cache.

> tmp.mkdir(cache, opts) -> Promise<Path>

Returns a unique temporary directory inside the cache's tmp dir. This directory will use the same safe user assignment that everything else in the cache uses.

Once the directory is made, it's the user's responsibility to ensure that all files within are given the appropriate gid/uid ownership settings to match the rest of the cache. Alternatively, you can ask cacache to do it for you by calling tmp.fix(), which will fix all tmp directory permissions.

If you want automatic cleanup of this directory, use tmp.withTmp().

See: options

Example
cacache.tmp.mkdir(cache).then(dir => {
  fs.writeFile(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
})

> tmp.fix(cache) -> Promise

Sets the uid and gid properties on all files and folders within the tmp folder to match the rest of the cache.

Use this after manually writing files into directories created with tmp.mkdir or tmp.withTmp.

Example
cacache.tmp.mkdir(cache).then(dir => {
  writeFile(path.join(dir, 'file'), someData).then(() => {
    // make sure we didn't just put a root-owned file in the cache
    cacache.tmp.fix(cache).then(() => {
      // all uids and gids match now
    })
  })
})

> tmp.withTmp(cache, opts, cb) -> Promise

Creates a temporary directory with tmp.mkdir() and calls cb with it. The created temporary directory is automatically deleted once the promise returned by cb() resolves.

The same caveats apply when it comes to managing permissions for the tmp dir's contents.

See: options

Example
cacache.tmp.withTmp(cache, dir => {
  return fs.writeFile(path.join(dir, 'blablabla'), 'blabla contents', { encoding: 'utf8' })
}).then(() => {
  // `dir` no longer exists
})
Options
opts.tmpPrefix

Default: null

Prefix to append on the temporary directory name inside the cache's tmp dir.

Subresource Integrity Digests

For content verification and addressing, cacache uses strings following the Subresource Integrity spec. That is, any time cacache expects an integrity argument or option, it should be in the format <hashAlgorithm>-<base64-hash>.

One deviation from the current spec is that cacache will support any hash algorithms supported by the underlying Node.js process. You can use crypto.getHashes() to see which ones you can use.

Generating Digests Yourself

If you have an existing content shasum, they are generally formatted as a hexadecimal string (that is, a sha1 would look like: 5f5513f8822fdbe5145af33b64d8d970dcf95c6e). In order to be compatible with cacache, you'll need to convert this to an equivalent subresource integrity string. For this example, the corresponding hash would be: sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4=.

If you want to generate an integrity string yourself for existing data, you can use something like this:

const crypto = require('crypto')
const hashAlgorithm = 'sha512'
const data = 'foobarbaz'

const integrity = (
  hashAlgorithm +
  '-' +
  crypto.createHash(hashAlgorithm).update(data).digest('base64')
)

You can also use ssri to have a richer set of functionality around SRI strings, including generation, parsing, and translating from existing hex-formatted strings.
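
A short sketch of that workflow with ssri (see ssri's own documentation for the full API):

const ssri = require('ssri')

// generate an SRI string from data
const integrity = ssri.fromData('foobarbaz', { algorithms: ['sha512'] }).toString()

// parse an existing SRI string into its algorithm/digest parts
const parsed = ssri.parse(integrity)

// translate an existing hex-formatted sha1 shasum into an SRI string
const fromHex = ssri.fromHex('5f5513f8822fdbe5145af33b64d8d970dcf95c6e', 'sha1').toString()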

> cacache.verify(cache, opts) -> Promise

Checks out and fixes up your cache:

  • Cleans up corrupted or invalid index entries.
  • Custom entry filtering options.
  • Garbage collects any content entries not referenced by the index.
  • Checks integrity for all content entries and removes invalid content.
  • Fixes cache ownership.
  • Removes the tmp directory in the cache and all its contents.

When it's done, it'll return an object with various stats about the verification process, including amount of storage reclaimed, number of valid entries, number of entries removed, etc.

Options
opts.concurrency

Default: 20

Number of files to read concurrently from the filesystem while cleaning up.

opts.filter

Receives a formatted entry. Return false to remove it. Note: might be called more than once on the same entry.
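
For example, a sketch that keeps only entries under a hypothetical 'registry.npmjs.org|' key prefix and drops everything else:

cacache.verify(cachePath, {
  filter: entry => entry.key.startsWith('registry.npmjs.org|')
}).then(stats => console.log('verify stats:', stats))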

opts.log

Custom logger function:

  log: { silly () {} }
  log.silly('verify', 'verifying cache at', cache)
Example
echo somegarbage >> $CACHEPATH/content/deadbeef
cacache.verify(cachePath).then(stats => {
  // deadbeef collected, because of invalid checksum.
  console.log('cache is much nicer now! stats:', stats)
})

> cacache.verify.lastRun(cache) -> Promise

Returns a Date representing the last time cacache.verify was run on cache.

Example
cacache.verify(cachePath).then(() => {
  cacache.verify.lastRun(cachePath).then(lastTime => {
    console.log('cacache.verify was last called on ' + lastTime)
  })
})

cacache's Issues

[BUG] cacache doesn't work on Android

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

~ $ npm install yarn
npm ERR! code EACCES
npm ERR! syscall link
npm ERR! path /data/data/com.termux/files/home/.npm/_cacache/tmp/14de33dd
npm ERR! dest /data/data/com.termux/files/home/.npm/_cacache/content-v2/sha512/d1/d2/d481d770644c0c5e31275a2b952a18da6097da58f146549fb26a5f5d8ac389ffcd10db5d924df1176590499cd2d92b5c21f948efab003774723c809d2d6c
npm ERR! errno EACCES
npm ERR!
npm ERR! Your cache folder contains root-owned files, due to a bug in
npm ERR! previous versions of npm which has since been addressed.
npm ERR!
npm ERR! To permanently fix this problem, please run:
npm ERR!   sudo chown -R 10427:10427 "/data/data/com.termux/files/home/.npm"

npm ERR! A complete log of this run can be found in:
npm ERR!     /data/data/com.termux/files/home/.npm/_logs/2022-11-21T01_31_59_253Z-debug-0.log

Expected Behavior

NPM install package successfully.

Steps To Reproduce

In Termux, run

apt update && apt upgrade 
apt install nodejs-lts
npm install -g [email protected]
npm install yarn

Then error message will occur.

fs.link will try to use a hard link, which is disallowed by seccomp on Android. The maintainers of the Termux packages have applied a patch to solve this, but when users update cacache or npm, the patch no longer applies.

Related issues: termux/termux-packages#11293, termux/termux-packages#13293
Possible patch: https://github.com/termux/termux-packages/blob/master/packages/nodejs/deps-npm-node_modules-cacache-lib-util-move-file.js.patch
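
A rough sketch of the kind of fallback that patch applies (the helper name moveIntoCache is hypothetical; the real patch modifies cacache's internal move-file logic):

const fs = require('fs/promises')

async function moveIntoCache (src, dest) {
  try {
    await fs.link(src, dest)
  } catch (err) {
    // seccomp on Android rejects link(2); fall back to a plain copy
    if (err.code === 'EACCES' || err.code === 'EPERM' || err.code === 'EXDEV') {
      await fs.copyFile(src, dest)
    } else {
      throw err
    }
  }
  await fs.unlink(src)
}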

Environment

  • npm: 9.1.2
  • Node: 16.18.1
  • OS: Android
  • platform: Termux 0.118

[FEATURE] Replacing move-concurrently

What / Why

Hi, I would like to replace the move-concurrently package with a new one written from scratch. The move-concurrently package and its dependencies are outdated, with a lot of legacy code.

(The dead emoji in my screenshot means no update for a year or more, with dependencies that are also out of date.)

I guess the package will only need rimraf to support Node.js 10 (after its EOL we can move to the new recursive fs.rmdir).

Do you think this is a good idea? Or would you prefer to keep updating the existing packages?

Best Regards,
Thomas

[BUG] put.stream can crash the process with unhandled exception, even when error handler is attached

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Process terminates with error:

node:internal/process/promises:289
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[Error: EACCES: permission denied, open 'tmp/3164ed41'] {
  errno: -13,
  code: 'EACCES',
  syscall: 'open',
  path: 'tmp/3164ed41'
}

Expected Behavior

Process does not terminate and the stream error handler is called

Steps To Reproduce

Run the following script. It will change the tmp folder permission so that an error is triggered. The same thing can happen in other circumstances too (out of disk space, etc.).
In certain cases this results in a lingering rejected promise handleContentP within lib/content/write.js.
In the example, such a case is when the readable stream has not ended.

const { Readable } = require('stream');
const { chmod } = require('fs/promises');
const { existsSync } = require('fs');
const cacache = require('cacache');

const read = (...params) => new Readable({
  read (size) {
    if (params.length) this.push(params.shift());
  }
});

// keep the process alive so the crash is observable
require('http').createServer().listen(() => console.log('listening'));

(async () => {
  if (existsSync('./tmp')) await chmod('./tmp', 0o777);
  await cacache.put('.', 'key1', '1');
  await chmod('./tmp', 0o555);
  return read('2').pipe(cacache.put.stream('.', 'key2')).on('error', console.error);
})().catch(console.error);

Environment

  • Node: v20.9.0
  • OS: Linux

EPERM lchown error with the put method on v12 and +

After updating to v12 and later (the current version I use is 12.0.2), I get the following error on put operations:

{ [Error: EPERM: operation not permitted, lchown '/tmp/content-v2/sha512/ec/9e']
cause:
{ Error: EPERM: operation not permitted, lchown '/tmp/content-v2/sha512/ec/9e'
errno: -1,
code: 'EPERM',
syscall: 'lchown',
path: '/tmp/content-v2/sha512/ec/9e' },
isOperational: true,
errno: -1,
code: 'EPERM',
syscall: 'lchown',
path: '/tmp/content-v2/sha512/ec/9e' }

And I actually see the files and folders created.
With both versions (11+ and 12+) the owner of the folders is my user.

And here is the code portion I use:

import { put as putCache } from "cacache/en";

const CACHE_PATH = "/tmp";

export const put = async <T>(key: string, payload: T, ttl: number = 10_000) => {
  try {
    await putCache(
      CACHE_PATH,
      key,
      JSON.stringify({
        payload,
        timestamp: Date.now(),
        ttl,
      }),
    );
    return payload;
  } catch (error) {
    console.log("Put Cache", error);
    return null;
  }
};

OS: macOS 10.14 and AWS Lambda
Node Version: 10.16.0 and 12.6.0

[BUG] cacache package depends on vulnerable version of tar

What / Why

The dependency on "tar":"^6.0.2" leaves the cacache package on a high severity vulnerability list. Please update so that the vulnerability is patched cacache.

How to fix:

Upgrade tar to version 6.1.9, 5.0.10, 4.4.18 or higher.

Links:

[QUESTION] Get info by integrity?

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I was wondering if it's possible to get info for a cache entry by digest? Something like cacache.get.info.byDigest(cache, integrity)

cacache.get.info(cache, key) only takes a key and get.info.byDigest doesn't exist. I see cacache.get.hasContent(cache, integrity) which finds my cache entry but only tells me its size and some stats about the file. If it also returned the key then I could use get.info.

The only alternative I can really see to get the info (including metadata which I'm really interested in) is using ls and iterating over its output but that seems kinda crazy!

Also, I see that cache.get.byDigest does not return metadata by design so there is probably a technical reason this is not possible?
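
For reference, a sketch of that ls-based workaround (the helper name infoByDigest is made up, and it scans the whole index, so it is slow on large caches):

const cacache = require('cacache')

function infoByDigest (cache, integrity) {
  return cacache.ls(cache).then(entries =>
    Object.values(entries).find(entry => entry.integrity === integrity) || null
  )
}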

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

[FEATURE] support multiple integrity algorithms

What / Why

Right now, if you provide multiple algorithms to content.write(), it'll error out with:

opts.algorithms only supports a single algorithm for now

It has said that for a long time. Let's support multiple algorithms!

This causes some suboptimal caching in make-fetch-happen, because we may have an integrity value that is a sha512, but it always caches as sha1, so we can never have a cache hit.

How

In lib/content/write.js, we always place the content in a single location based on the integrity and algorithm.

  • Create content locations for all the hashes provided.
  • In lib/content/write.js, move the written content into place, and then hard-link to the other locations.
  • Even if one of the entries is deleted, the others will still be available.
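
A rough sketch of that linking step, where primaryPath and otherPaths stand in for the content locations cacache would compute for each hash (not actual cacache internals):

const fs = require('fs/promises')
const path = require('path')

async function linkExtraLocations (primaryPath, otherPaths) {
  for (const target of otherPaths) {
    await fs.mkdir(path.dirname(target), { recursive: true })
    // hard-link so deleting one location leaves the others usable
    await fs.link(primaryPath, target)
  }
}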

Current Behavior

  • Throws an error if multiple algorithms are provided
  • Content can only ever be cached at a single integrity location

[BUG] put.stream() with integrity check might not work

What / Why

Apparently put.stream() with integrity checks leaves the cache in an inconsistent state, and all keys inserted with this procedure are purged by the next verify().

I tried to find some use cases of put.stream(), but I could not find anything in npm, nor any tests here, so, although I don't rule out an improper use case in my code, I guess we are facing a small bug that slipped due to incomplete testing.

When

Always.

How

Current Behavior

I'm using put.stream() to add large archives to the cache, and I have the expected sha256 digest for each archive.

If I set opts.algorithms = ['sha256'] and I check the resulting integrity[hashAlgorithm][0].source I get the expected digest, and everything is fine.

If I set opts.integrity to the expected digest, the call seems ok too, it does not throw any error, the archive is saved in the cache and decompressing it is fine.

However, if I run a verify(), the associated entries are removed, as if there were something wrong with them.

Steps to Reproduce

  • run a put.stream() with opts.algorithms = ['sha256']
  • note the resulting integrity digest
  • run a verify()
  • run a ls(); the recently added key is there
  • run a put.stream() with opts.integrity set to the digest
  • note that the call seems ok, it returns the integrity result, which matches the input
  • run a verify()
  • run a ls(); the key is no longer in the cache

Note: using sha256 might not be relevant, but I mentioned it to match my test.

Expected Behavior

Running verify() after put.stream() with integrity checks should not remove the entries from the cache.

Who

  • n/a

References

  • n/a

[BUG] `rm.all` doesn't delete anything on Windows

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

  • version 17.0.4
  • node 19.6
  • this is a Windows-only bug; it works as expected in node:19.6.0-alpine
import cacache from "cacache"

const cachePath = "./cache"
const key = "test-key"

await cacache.put(cachePath, key, "10293801983029384")

const cached = await cacache.get(cachePath, key)
console.log(`Data: ${cached.data}`)

await cacache.rm.all(cachePath)

const cached2 = await cacache.get(cachePath, key)
console.log(`Data after rm.all: ${cached2.data}`)

Output

Data: 10293801983029384
Data after rm.all: 10293801983029384

After rm.all the cache keys still exist and nothing is deleted from the filesystem.

Expected Behavior

Exception: NotFoundError: No cache entry for test-key found in ./cache

Works as expected in version 16.1.3

Steps To Reproduce

No response

Environment

  • Node: 19.6
  • OS: Windows 10

[FEATURE] Add CLI support to list & remove the cache content

What / Why

I am using pacote to download and cache, in addition to reasonably sized source libraries, large binaries (like toolchain distributions, hundreds of MB), and I would appreciate a CLI solution to enumerate the contents of the cache, so that I can later selectively remove some cached files. For now the only method I found was to completely remove the cache, which is far from optimal.

Thank you,

Liviu

[BUG] EMFILE error in environment with low file descriptors limit

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

EMFILE error may be thrown during npm install, depending on the allowed file descriptors and the state of the cache before the install. An example error:

Error: EMFILE: too many open files, open '/Users/<user>/.npm/_cacache/index-v5/64/ee/136420e5adf6592619d25b411c7849220f30364ed8ba96dea19887a5d1f2'

Expected Behavior

npm install should succeed.

Steps To Reproduce

  1. Create a project directory and add package.json with the following:
{
  "name": "nuxt-app",
  "devDependencies": {
    "nuxt": "^3.7.0"
  }
}

(The error happens reliably with nuxt, but it's not related to nuxt, you can use other packages and get the same result.)

  2. Set a low file descriptor limit using ulimit -Hn 128 (It's possible to get the error with a higher ulimit, but using a low value helps to reliably reproduce the error)
  3. Delete the global cacache, eg: rm -rf /<home-dir>/.npm/_cacache/
  4. Delete the project's node_modules folder: rm -rf node_modules
  5. Run npm install
  6. EMFILE error is thrown

(Reset ulimit using ulimit -Hn unlimited)

Environment

  • npm: 10.4.0
  • Node: 20.11.0
  • OS: MacOS 14.1.2
  • platform: Macbook Pro 2019 (Intel)

Could you help remove the vulnerability in your package?

Hi @nlf, @isaacs, I'd like to report a vulnerability issue in cacache:

Issue Description

A vulnerability CVE-2021-27290 (high severity) detected in package ssri (>=5.2.2 <6.0.2,>=7.0.0 <8.0.1) is directly referenced by cacache 10.0.4. We noticed that such a vulnerability has been removed since cacache 11.0.1.

However, cacache's popular previous version cacache@10.0.4 (1,015,201 downloads per week) is still transitively referenced by a large number of the latest versions of active and popular downstream projects (about 5,086 downstream projects, e.g., @toptal/davinci-engine 3.3.0, @toptal/davinci-syntax 6.4.0, @toptal/davinci 4.2.2, @toptal/davinci-cli-shared 1.3.4, @toptal/davinci-bootstrap 2.1.74, @3liv/[email protected], @9188/[email protected], @akala-modules/[email protected], @aquestsrl/[email protected], etc.).
As such, issue CVE-2021-27290 can be propagated into these downstream projects and expose security threats to them.

These projects cannot easily upgrade cacache from version 10.0.4 to 11.*.*. For instance, cacache@10.0.4 is introduced into the above projects via the following package dependency paths:
(1) @3liv/[email protected] → @3liv/[email protected] → @3liv/[email protected] → [email protected] → [email protected] → [email protected]
(2) @9188/[email protected] → [email protected] → [email protected] → [email protected] → [email protected]
(3) @akala-modules/[email protected] → @akala/[email protected] → [email protected] → [email protected] → [email protected] → [email protected]
(4) @aquestsrl/[email protected] → @aquestsrl/[email protected] → [email protected] → [email protected] → [email protected] → [email protected]
......

Projects such as @3liv/fero-resource-subscriber, w-webpack, @aquestsrl/create-app-cli-utils and server-static etc., which introduced cacache@10.0.4, are not maintained anymore. These unmaintained packages can neither upgrade cacache nor be easily migrated by the large number of affected downstream projects.
On behalf of the downstream users, could you help us remove the vulnerability from package cacache@10.0.4?

Sorry for the inconvenience caused.

Suggested Solution

Since these inactive projects set a version constraint of ~10.0.* for cacache on the above vulnerable dependency paths, if cacache removes the vulnerability from 10.0.4 and releases a new patched 10.0.x version,
such a vulnerability patch can be automatically propagated into the 5,086 affected downstream projects.

In cacache@10.0.4, you can kindly try to perform the following upgrade:
ssri ^5.2.4 → 5.2.1;
Note:
ssri@5.2.1 (<5.2.2, >=6.0.2 <7.0.0, >=8.0.1) doesn't have the vulnerability CVE-2021-27290

Thanks again for your contributions.

Best regards,
Paimon

[BUG] calling verify() should not modify the time property

What / Why

Documentation says about property time: Timestamp the entry was first added on.
Running cacache.verify() internally uses the insert() method and sets the time property to Date.now().
This is not what the user expects, and it makes the time property unusable.

How

const cachePath = '/tmp/cacache'
const key = 'key'

await cacache.put(cachePath, key, 'hello')
const item1 = await cacache.get.info(cachePath, key)
await cacache.verify(cachePath)
const item2 = await cacache.get.info(cachePath, key)

console.log(item1.time, item2.time, item1.time === item2.time ? 'ok' : 'WRONG')

[Feature] Async iterator over ls.stream

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Calling ls.stream over a large store results in a large memory hit for all entries

Expected Behavior

An iterator which returns entries as needed
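
A sketch of that shape, assuming the stream returned by ls.stream can be consumed as a standard async-iterable Readable:

async function * lsEntries (cache) {
  for await (const entry of cacache.ls.stream(cache)) {
    yield entry
  }
}

// for await (const entry of lsEntries(cachePath)) { ... }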

[BUG] TypeError: buckets.map is not a function => app crash

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

NpmCache::getCacheDbItems failed due to TypeError: buckets.map is not a function at readdirOrEmpty.then.buckets (main.js:238:3754) at at Function.reReject (main.js:4453:1407) at cacache_1.default.ls.then.catch.e (main.js:5513:7616) at code: undefined }

Looks like there is an error while reading the dir.

function readdirOrEmpty (dir) {
  return fs.readdir(dir).catch((err) => {
    if (err.code === 'ENOENT' || err.code === 'ENOTDIR') {
      return []
    }

    throw err
  })
}

But it's not properly handled here.

  Promise.resolve().then(async () => {
    const buckets = await readdirOrEmpty(indexDir)
    await Promise.all(buckets.map(async (bucket) => {
    ...

Expected Behavior

Promise rejected instead of crash.

Steps To Reproduce

Issue is sporadic.

Environment

  • npm: 6.14.16
  • Node: 8.16.2
  • OS: Linux

[BUG] Put failed due to TypeError: Data must be a string or a buffer

What / Why

Running the example fails:

fetch(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(data => {
  return cacache.put(cachePath, 'registry.npmjs.org|[email protected]', data)
}).then(integrity => {
  console.log('integrity hash is', integrity)
})

(node:443) UnhandledPromiseRejectionWarning: TypeError: Data must be a string or a buffer
at Hash.update (crypto.js:99:16)
at algorithms.reduce (/repo/apps/allen/node-test/node_modules/ssri/index.js:322:44)
at Array.reduce ()
at Object.fromData (/repo/apps/allen/node-test/node_modules/ssri/index.js:317:21)
at write (/repo/apps/allen/node-test/node_modules/cacache/lib/content/write.js:31:20)
at Object.putData [as put] (/repo/apps/allen/node-test/node_modules/cacache/put.js:20:10)
at fetch.then.data (/repo/apps/allen/node-test/tests/cacache.js:39:20)
at
at process._tickCallback (internal/process/next_tick.js:189:7)

[BUG] Dependency move-file only supports node >= 10.17

What / Why

I can't upgrade a package that depends on this package, since I was using the same version of node (10.16.1) that the server uses, but it just throws an error that is hard to trace because the incompatibility is in a subdependency.

When

When I run yarn install in server

Where

Where node is 10.16, supposedly compatible with this package

How

Current Behavior

yarn throws an error that move-file is not compatible

Steps to Reproduce

just run

nvm use 10.16
yarn install

Expected Behavior

If move-file is really needed, then update node engine constraint to >= 10.17

References

Related to raineorshine/npm-check-updates#651
Related to npm/pacote#41
Broken by sindresorhus/move-file#8

[FEATURE] implement reference counting

What / Why

In order to safely prune old content after storing fresh content in the cache, we need some means of knowing if the old content has now become orphaned. The fastest approach to this is reference counting.

Implementation will require

  • actually implementing reference counting
  • automating migration of old cache stores to deduplicate references and count them

Custom Error classes

I think it'd be good for cacache to raise custom errors in known cases, so that it could be more clear that an error is coming from cacache for a known scenario.

This is necessary in Pacote, where it retries on ENOENT errors, but should only do that for ENOENT errors that indicate a cache miss, vs ENOENT trying to read a tarball or directory.

We already duplicate the sizeError function in a few places, so why not just have a dedicated SizeError class?

All of the errors created should set this.cacache = true, and a meaningful error code. Following the pattern in node-tar and node-fetch/minipass-fetch, it'd also be super handy when debugging issues that bubble up to the CLI, if we could narrow in on the subsystem involved, and could even provide more user-friendly reporting.
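
A rough sketch of the shape being proposed (names are illustrative, not cacache's actual API):

class CacacheError extends Error {
  constructor (message, code) {
    super(message)
    this.code = code
    this.cacache = true // lets callers cheaply tell cacache errors apart
  }
}

class SizeError extends CacacheError {
  constructor (expected, found) {
    super(`Bad data size: expected ${expected} bytes, found ${found}`, 'EBADSIZE')
    this.expected = expected
    this.found = found
  }
}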

Configure the size of the cache stored on file system

Hi @isaacs
I'm looking for a library to manage a local-file-system cache to be used in a node.js server.
Maybe I'm way off here, but I'm trying to understand if cacache can fit. The problem is that I need TTL and (more importantly) max-size features (so I can control the size the cache library takes on the filesystem and it will not explode). Is it possible to do this with cacache? Looking at the code, I could not see a way to do this.

I've also been playing with the idea of using your lru-cache to "mirror" the file-system cache. Basically, store the files on the fs and then add them to the lru-cache. Then, when I get the dispose event, I'll delete the file (so this basically means I will have an LRU cache on the file system).

Since I run on Docker I don't have the risk of the in-RAM lru-cache getting out of sync with the fs (because if the server crashes or stops, the container will be killed and recreated with a new clean fs).

Does this make any sense?

_cacache installs in root

_cacache installs in root folder of project

When I install react via npm there is a _cacache folder in the project root;
under that there is also a _lock folder.

This didn't happen before.
It gives git a hard time, with 5000 files waiting.
The node_modules folder already has a _cacache.

[BUG] cacache ignores npm cache config

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

my npmrc file contains

prefix=${XDG_DATA_HOME}/node
cache=${XDG_CACHE_HOME}/npm
init-module=${XDG_CONFIG_HOME}/npm/config/npm-init.js

cacache frequently ignores this, causing a ~/.npm directory to reappear.

Expected Behavior

cacache should always use the configured path (which in this case is ~/.cache/npm/) when used through npm.

Steps To Reproduce

  1. Edit npmrc as shown above
  2. perform an operation that uses caching (presumably a global package install? That and npm publish are all I use npm for)
  3. the ~/.npm directory reappears with the new cache.

Environment

[BUG] CVE-2021-27290 due to using old version of `ssri`

Can you please make new releases when this issue is fixed?

Versions 15.0.6 and 12.0.5 (a 12.x release would be nice because many projects depend on cacache 12.x).

What / Why

CVE-2021-27290

ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option.

The fix is to bump ssri to 8.0.1.

When

  • n/a

Where

  • n/a

How

Current Behavior

  • n/a

Steps to Reproduce

  • n/a

Expected Behavior

  • n/a

Who

  • n/a

References

  • n/a

[ISSUE] Explain to me why this is a stupid use of cacache (storing simple key/value pairs with a simplified cacache interface)

I've been looking for solid, stable and well maintained Node.js file caching libraries. (Un)Fortunately, cacache was the only one that I could find.

Now, despite having general experience with remote caching, looking at the cacache documentation for the first time was overwhelming. Tarball why? Integrity what? ...

After experimenting a bit, I believe I figured out the basics and wrote a simplified API around it.

// cache.js
import path from "path";
import cacache from "cacache";

const cachePath = path.join(__dirname, ".cache");

/**
 * Simplified Cache API written on top of cacache
 * This interface makes using cacache bit more straightforward without
 * deeper understanding of caching itself.
 * @see https://github.com/npm/cacache
 */

/**
 * Set or override key/value in the cache
 */
const put = async (key, value) => {
    const writeAction = await cacache.put(
        cachePath,
        key,
        JSON.stringify(value)
    );

    return writeAction;
};

/**
 * Retrieve a value from the cache.
 */
const get = async (key) => {
    const readAction = await cacache.get(cachePath, key);
    /**
     * Returned "data" key contains data as a Buffer object
     * Convert Buffer binary contents to string
     * @see https://nodejs.org/en/knowledge/advanced/buffers/how-to-use-buffers/#reading-from-buffers
     */
    const dataJSON = readAction.data.toString();
    const data = JSON.parse(dataJSON);
    return data;
};

const remove = async (key) => {
    const removeAction = await cacache.rm.entry(cachePath, key, {
        removeFully: true,
    });
    return removeAction;
};

/**
 * Clears the entire cache
 */
const destroy = async () => {
    const destroyAction = await cacache.rm.all(cachePath);
    return destroyAction;
};

export const cache = {
    put,
    get,
    remove,
    destroy,
};

Before putting this in a production environment, convince me why the above simplified interface is a bad idea for storing key/value pairs, where values may be strings, numbers, or objects with 500 properties.

[BUG] @npmcli/move-file is deprecated

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

@npmcli/[email protected]: This functionality has been moved to @npmcli/fs

Expected Behavior

npm WARN deprecated @npmcli/[email protected]: This functionality has been moved to @npmcli/fs

Steps To Reproduce

  1. In this environment...
    nodeJS 18.16.0
  2. With this config...
  3. Run '...'
    npm i cacache
  4. See error...
    npm WARN deprecated @npmcli/[email protected]: This functionality has been moved to @npmcli/fs

Environment

  • npm: 9.5.1
  • Node: 18.16.0
  • OS: Windows 11, Ubuntu 20 & 22
  • platform:

idea: Add a way to write arbitrary files into the cache

This might be a little out of cacache's lane, not sure.

npm writes a few random files into the cache folder.

  • filesystem mutex locks
  • search metadata
  • debug logs

When it does this, it has to reproduce the logic that cacache (now) has built-in to preserve ownership and keep track of these files.

Idea: add a misc folder in cacache that could be used for these sorts of things. Files in this folder would not be indexed based on their content or name.

Also, the npm cli has some logic to store at most 10 debug log files, deleting old ones as new ones are added. It'd be great if cacache had a way to define a folder that can contain at most n files, and take care of the pruning.

The argument against doing this is that it's not really a "content addressable cache" at that point, so doesn't exactly make sense to have cacache manage it. But, it does already manage tmp files, which are also not content addressable, so maybe it's not so much of a stretch?

Failing this (or in addition to it), at least, lib/util/infer-owner.js should probably be split out into a separate module, which would at least make it easier to get the ownership right when other code writes into the cache directly.

Don't split contents over multiple directories

The current cache layout uses (afaict) the first 4 hex digits to make 2 levels of directories and then stores the file. So abcdefgh => /ab/cd/efgh

I believe this to be inefficient, because modern filesystems can handle millions of files in a single directory just fine.

With the current layout, the fs needs to do 3 tree lookups to find the inode of a filename: 2 in shallow trees and 1 in a deep tree. By putting all files in 1 directory, there's only 1 lookup in a slightly deeper tree.

Furthermore, unless you have 10s of thousands of files, you end up creating a lot of directories with just one or two entries in them. So you're actually increasing the required storage.
