
cache's Issues

How to use env vars in path on windows

Hey,

I need some help understanding what I'm doing wrong.

Expectation: path: ${{ env.LOCALAPPDATA }} points to C:\Users\runneradmin\AppData\Local.

    runs-on: windows-latest
    steps:
    - name: Test print stuff
      run: |
        echo $env:LOCALAPPDATA
        ls $env:LOCALAPPDATA
 Test print stuff (7s)

Run echo $env:LOCALAPPDATA
C:\Users\runneradmin\AppData\Local


    Directory: C:\Users\runneradmin\AppData\Local

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----        11/11/2019  2:16 PM                Google
d-----        11/17/2019 11:28 PM                Microsoft
d-----        11/17/2019  9:44 PM                Packages
d-----        11/17/2019 11:29 PM                Temp
    - name: Cache electron-builder dependencies
      uses: actions/cache@v1
      with:
        path: ${{ env.LOCALAPPDATA }}electron-builder
Post Cache electron-builder dependencies (3s)
##[warning]The process 'C:\Program Files\Git\usr\bin\tar.exe' failed with exit code 2
Post job cleanup.
"C:\Program Files\Git\usr\bin\tar.exe" -cz --force-local -f d:/a/_temp/0b413849-9e8e-4665-894e-02bf8c693b6e/cache.tgz -C d:/a/my-project/my-project/electron-builder .
/usr/bin/tar: d\:/a/my-project/my-project/electron-builder: Cannot open: No such file or directory
/usr/bin/tar: Error is not recoverable: exiting now
##[warning]The process 'C:\Program Files\Git\usr\bin\tar.exe' failed with exit code 2

Meanwhile, path: C:\Users\runneradmin\AppData\Local\electron-builder works as expected.
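As the tar command in the log suggests, `${{ env.LOCALAPPDATA }}` appears to expand to nothing inside `with:`, leaving a relative path. One workaround sketch (assuming the then-current `::set-env` workflow command; the variable and step names here are illustrative, not from the original report) is to copy the process environment variable into the workflow `env` context first:

```yaml
# Workaround sketch: export LOCALAPPDATA into the workflow env context
# so it can be referenced from `with:`. Names are assumptions.
- name: Export LOCALAPPDATA
  shell: pwsh
  run: echo "::set-env name=LOCAL_APP_DATA::$env:LOCALAPPDATA"
- name: Cache electron-builder dependencies
  uses: actions/cache@v1
  with:
    path: ${{ env.LOCAL_APP_DATA }}\electron-builder
    key: ${{ runner.os }}-electron-builder
```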

Caching build of Docker actions

Describe the enhancement

In my opinion, for Docker actions it is often worthwhile to cache the action build. At least for actions published in the Marketplace, it would be possible for GitHub Actions to pre-build them and store the result as a cache.

Currently, Docker actions are built multiple times as the first step of all builds, which is extremely inefficient for complex Docker actions.

Code Snippet
This is due to the way the GitHub Actions worker works.

Additional information

The issue actions/toolkit#47 has been closed with a reference to @actions/cache, which, however, is limited to caching user data only. In this case we are not dealing with user data but with public actions, and the process is not controlled by the user at this stage (yet).

Failed to extract the cache, but taken as a successful cache hit anyway

First the cache was successfully restored here: https://github.com/stevencpp/cpp_modules/commit/bce1c922e0525b936336c67d6ae50e9aed114963/checks?check_suite_id=310043429#step:8:1
Then in the very next commit it failed to restore the cache from the same key to the same path: https://github.com/stevencpp/cpp_modules/commit/23a1cb1b99b080800c3d8f122085a2d5a2a5116a/checks?check_suite_id=311040306#step:8:1.
There were no changes between the two commits that could explain this, so maybe the action itself changed at that time?
A few commits later it still failed to restore the cache, but now all the checks were successful, and yet it refused to save the cache, thinking that it was a cache hit:
https://github.com/stevencpp/cpp_modules/commit/664390b2b1665f8a413fe3546db0fda3ea909390/checks?check_suite_id=311386473#step:27:1
So, it would be helpful to figure out what caused the extraction to fail in the first place, but in any case such failures should either fail the cache action or at least allow rebuilding and saving the cache afterwards.

Use case: (PHP) composer cache

In a PHP project using composer for managing project dependencies, we need the ability to cache the $COMPOSER_HOME/cache/files directory, and overwrite its contents by reusing the same cache key.

In other words, we need a way to save and update a globally shared cache, regardless of the contents of composer.json (composer.lock is not committed for a library), because otherwise we get a stale cache that never gets updated after the first time it's written (not great for performance).

See travis-ci/travis-ci#4579 (comment) for why caching the vendor directory is dangerous and not recommended.
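One way to approximate a shared, always-updated cache with the current action (a sketch; the key names are illustrative) is a unique primary key plus a shared restore-key prefix, so every run restores the newest previous cache and saves a fresh one:

```yaml
# Sketch: a per-commit primary key forces a save on every run, while the
# shared restore-key prefix restores the most recent previous cache.
- uses: actions/cache@v1
  with:
    path: ~/.composer/cache/files
    key: composer-files-${{ github.sha }}
    restore-keys: |
      composer-files-
```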

Docker caching

Should we try to use this cache action to cache docker layers, doing trickery with docker save and docker load, or are you working on a different path for Docker caching?

Display cache size warning in human readable format (MB instead of bytes)

When I hit the cache size limit, I get the following warning, which prints the cache size in bytes rather than MB. It would be nice if the warning also printed the size in MB.

##[warning]Cache size of 712762190 bytes is over the 400MB limit, not saving cache.
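For reference, the conversion the warning could perform (a sketch; 1 MB is taken as 1024 × 1024 bytes, matching the limit arithmetic in the action's source):

```shell
# Convert the reported byte count to MB (1 MB = 1024 * 1024 bytes).
awk 'BEGIN { printf "%.1f MB\n", 712762190 / (1024 * 1024) }'
# prints: 679.7 MB
```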

Cache container deleted when using in workflow with container jobs

When run within a container job the action fails to run - it looks like the cache action container is deleted in the stop containers phase, before the cache post action is able to run.

Presumably, this only runs within the context of the VM, not the container context?

Warning when cache item exists for cache key

When assembling cache keys that have been used before, a warning is issued:

(Screenshot from 2019-11-07 showing the warning)

The question is: why?

The point of a cache item is to be reusable, so when we use the same cache key/item multiple times, why would that be worth a warning?

In this particular case, the cache key is assembled like this:

- name: "Cache dependencies installed with composer"
  uses: actions/[email protected]
  with:
    path: ~/.composer/cache
    key: php7.2-composer-locked-${{ hashFiles('**/composer.lock') }}
    restore-keys: |
      php7.2-composer-locked-

However, when the content of composer.lock has not changed, the same cache key will be generated, and apparently the action then issues a warning as a cache item with a corresponding cache key already exists.

Issuing a warning probably makes sense when someone uses a cache key that will always be the same (something that does not use expressions), but here it is intended.

I have fallen back to using the github.sha now, but I'm not sure whether this is the best approach. This will create a lot more cache items.

What do you think?

 - name: "Cache dependencies installed with composer"
   uses: actions/[email protected]
   with:
     path: ~/.composer/cache
-    key: php7.2-composer-locked-${{ hashFiles('**/composer.lock') }}
+    key: php7.2-composer-locked-${{ github.sha }}
     restore-keys: |
       php7.2-composer-locked-

For reference, see .github/workflows/continuous-integration.yml.

Is it possible to have time-based cache expiration?

For example, the latest stable Dart SDK is at:

https://storage.googleapis.com/dart-archive/channels/stable/release/latest/sdk/dartsdk-{os}-x64-release.zip

The URL does not change when a new stable release is made, but there's no easy way to invalidate the cache because it's not easy to tell whether the zip contains a new version. It would be convenient to be able to cache a folder for a fixed period (e.g. 24 hours) and have it automatically removed afterwards, regardless of when it was last accessed.

This would ensure you're generally on the latest version (to within 24hrs, or whatever) but you don't need to download it every time.

There are many parts of my builds that are versioned similarly (Dart stable/dev SDK, Flutter master/dev branch, VS Code stable/insiders builds), and coding caching strategies for them all would be complicated, so it's easiest to just download them every time; a short time-based cache would save on both resources and runtime.
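Until time-based expiration exists, one approximation (a sketch using the then-current `::set-output` command; the step id, path, and key are illustrative) is to bake the current date into the key so the cache rolls over daily:

```yaml
# Sketch: include today's date in the key so the cache expires daily.
- name: Get date
  id: date
  run: echo "::set-output name=today::$(date -u +%Y-%m-%d)"
- uses: actions/cache@v1
  with:
    path: ~/dart-sdk
    key: dart-sdk-${{ steps.date.outputs.today }}
```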

Cache multiple paths

It can be useful to cache multiple directories, for example from different tools like pip and pre-commit.

With Travis CI (ignoring their pip: true shortcut):

cache:
  directories:
    - $HOME/.cache/pip
    - $HOME/.cache/pre-commit

The Actions equivalent requires adding a duplicated step for each directory, something like:

      - name: pip cache
        uses: actions/cache@preview
        with:
          path: ~/.cache/pip
          key: ${{ matrix.python-version }}-pip

      - name: pre-commit cache
        uses: actions/cache@preview
        with:
          path: ~/.cache/pre-commit
          key: ${{ matrix.python-version }}-pre-commit

Perhaps also allow multiple directories in a single step, something like this?

      - name: pip cache
        uses: actions/cache@preview
        with:
          path:
            - ~/.cache/pip
            - ~/.cache/pre-commit
          key: ${{ matrix.python-version }}

Too much configuration required

This is the suggested configuration to cache npm:

    - name: Cache node modules
      uses: actions/cache@v1
      with:
        path: node_modules
        key: ${{ runner.OS }}-build-${{ hashFiles('**/package-lock.json') }}
        restore-keys: |
          ${{ runner.OS }}-build-${{ env.cache-name }}-
          ${{ runner.OS }}-build-
          ${{ runner.OS }}-

This is the suggested configuration to cache npm on Travis:

Not a mistake. It's empty. It's the default.

Can better defaults be provided? GitHub Actions workflows are 5x to 10x longer than Travis’ for the most common configurations.

Allow using the cache for other than push and pull_request events

When I trigger my workflow with on: ['deployment'], it throws a warning about having no read permissions.

It doesn't happen with on: push.

##[debug]Resolved Keys:
##[debug]["Linux-yarn-...hash...","Linux-yarn-"]
##[debug]Cache Url: https://artifactcache.actions.githubusercontent.com/...hash.../
##[warning]No scopes with read permission were found on the request.
::set-output name=cache-hit,::false

Yarn restores cache, but does not install from it

Hey team!

First, awesome job on this feature, it will immensely help our CI speed for our JavaScript projects, kudos!

I've been running on the "over the limit" error for a yarn project with workspaces enabled:

Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/3c08f6f0-f11f-4d8f-bed5-d491e7d8d443/cache.tgz -C /home/runner/.cache/yarn .
##[warning]Cache size of 231440535 bytes is over the 200MB limit, not saving cache.

But when I run the same tar command locally, I get a 100.3 MB bundle. Is there anything I'm missing here?

Here's my workflow:

name: Test
on:
  push:
    branches:
      - '**'
    tags:
      - '!**'
jobs:
  test:
    name: Test, lint, typecheck and build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Dump GitHub context
        env:
          GITHUB_CONTEXT: ${{ toJson(github) }}
        run: echo "$GITHUB_CONTEXT"
      - name: Use Node.js 10.16.0
        uses: actions/setup-node@v1
        with:
          node-version: 10.16.0
      - name: Cache yarn node_modules
        uses: actions/cache@v1
        with:
          path: ~/.cache/yarn
          key: ${{ runner.OS }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Install
        run: yarn install --frozen-lockfile
        # ...

Thanks a lot!

Create the examples directory

Currently, PRs have been submitted to add language-specific examples (#4, #5, #8, #10), and more will probably follow if you approve those. So, if you plan to approve them, you should create an examples directory to keep the README clean. What do you think?

Post action failed with Error: No such container

When trying to save a new cache I get an error, which is a bit odd given that there was no cache hit at the start of the job.

Post job cleanup.
/usr/bin/docker exec  b41de5d1c382827f57c505ea59d0741bb205fddc841d2aae747e845b338f36ea sh -c "cat /etc/*release | grep ^ID"
Running JavaScript Action with default external tool: node12
Error: No such container: b41de5d1c382827f57c505ea59d0741bb205fddc841d2aae747e845b338f36ea
##[error]Node run failed with exit code 1

I created a public reproduction here: https://github.com/samhamilton/phoenix_hello/commit/eb71e941b763caa4df121f28b7085c14fb17f585/checks?check_suite_id=303196383

Thanks
Sam

Clear cache

Thanks for the action. It would be great if it was possible to clear the cache as well either using the action or via the web interface.

Skip post actions/cache if cache exists

Is it possible to skip re-saving the cached files if the cache already exists? I used the #37 example to save Docker images for one of our GitHub Actions. The execution time of the run action step improves from 1 minute to 8 seconds, but re-saving the cache takes 30-50 seconds, which slows the workflow down more than it speeds it up :-(

Example GitHub Action execution times:

  • GA without caching: ±1:12m
    Set up job 1s
    Run actions/checkout@v1 3s
    Docker login 5s
    Actions checkout 1s
    Run action 1m 2s
    Complete job
  • GA with caching: ±1:23m (current steps and total time)
    Run actions/checkout@v1 1s
    Docker login 2s
    Run actions/cache@v1 9s
    Load cached Docker layers 23s
    Actions checkout 1s
    Run action 8s
    Build image 0s  # Skipped, because cache already exists
    Post actions/cache@v1 39s
    Complete job


Same cache key not shared between different pull requests

Hi, I'm not sure if this is a feature or a bug, but when we use the same cache key for different PRs (sequentially, without a race condition), the second PR doesn't find the cache even though the keys are the same and the first PR completed its workflow successfully (including the cache upload).

I looked through the actions/cache code and I think it may be related to the scope attribute of the ArtifactCacheEntry object. I wasn't able to hack around the code to ignore this attribute, so I'm not 100% sure. The attribute does contain values suggesting the caches are scoped to different PRs, though (refs/pull/1660/merge and refs/pull/1661/merge on the two test pull requests I tried).

This is the setup we use:

      - uses: actions/cache@v1
        id: cache
        with:
          path: .github/vendor/bundle
          key: github-gems-${{ hashFiles('**/.github/Gemfile.lock') }}
          restore-keys: |
            github-gems-

The bundle is installing gems to the correct folder, since the cache is successfully fetched starting from the second commit on each pull request.

Let me know if I can provide more info. Thanks!

404 when including github.ref in key

With config like this:

    - name: Ubuntu cache
      uses: actions/cache@preview
      if: startsWith(matrix.os, 'ubuntu')
      with:
        path: ~/.cache/pip
        key: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.os }}-${{ matrix.python-version }}

    - name: macOS cache
      uses: actions/cache@preview
      if: startsWith(matrix.os, 'macOS')
      with:
        path: ~/Library/Caches/pip
        key: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.os }}-${{ matrix.python-version }}

It creates a key like "Test-refs/heads/gha-cache-ubuntu-18.04-3.5".

The upload step fails with a 404:

Post Ubuntu cache (1s)
##[warning]Cache service responded with 404
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/802beeb6-cc68-45cb-a68d-488cc4008df5/cache.tgz -C /home/runner/.cache/pip .
##[warning]Cache service responded with 404

eg. https://github.com/hugovk/Pillow/runs/286812478#step:22:1

That's because github.ref includes slashes (eg. refs/heads/gha-cache):

The branch or tag ref that triggered the workflow. For example, refs/heads/feature-branch-1. If neither a branch or tag is available for the event type, the variable will not exist.

https://help.github.com/en/github/automating-your-workflow-with-github-actions/virtual-environments-for-github-actions#environment-variables

Should I escape or replace illegal characters (and how?), or should the action take care of them?

Or is there another way to include the branch name in the key?
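A possible user-side workaround (a sketch; replacing slashes with dashes is an assumption about what the cache service accepts, not documented behavior) is to sanitize the ref before putting it in the key:

```shell
# Sketch: strip the refs/heads/ prefix and replace any remaining
# slashes so the branch name is safe inside a cache key.
REF="refs/heads/gha-cache"
SAFE_REF="$(echo "${REF#refs/heads/}" | tr '/' '-')"
echo "$SAFE_REF"
# prints: gha-cache
```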

Calculate a default key

Care must be taken to figure out a key: name, so it doesn't collide with another one and, for example, fetch an incompatible cache for another OS or Python version.

A key is not something I've had to worry about with Travis CI:

There is one cache per branch and language version / compiler version / JDK version / Gemfile location / etc.

If the key is omitted, how about calculating one from the branch, workflow name and matrix values (and anything else relevant)?

For example, given this workflow:

name: Test

on: [push, pull_request]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.6", "3.7"]
        os: [ubuntu-latest, ubuntu-16.04, macOS-latest]

Perhaps it would generate something like Test__master__ubuntu-latest__3.6.

As a user, I don't really care, I'd just like the same cache to be fetched when rebuilding the same sort of thing.

hashFiles() does not work for valid patterns

The expected behavior is that all three of the following steps pass on all platforms:

- name: Cache **/README.md
  uses: actions/cache@preview
  with:
    path: .
    key: test-${{ runner.os }}-${{ hashFiles('**/README.md') }}
  continue-on-error: true

- name: Cache README.md
  uses: actions/cache@preview
  with:
    path: .
    key: test-${{ runner.os }}-${{ hashFiles('README.md') }}
  continue-on-error: true

- name: Cache *README.md
  uses: actions/cache@preview
  with:
    path: .
    key: test-${{ runner.os }}-${{ hashFiles('*README.md') }}
  continue-on-error: true

But right now hashFiles('README.md') and hashFiles('*README.md') fail on macOS:

2019-11-02T18:08:19.8057240Z ##[error]The template is not valid. 'hashFiles(README.md)' failed. Search pattern 'README.md' doesn't match any file under '/Users/runner/runners/2.160.0/work/github-actions-hashfiles-test/github-actions-hashfiles-test'
2019-11-02T18:08:19.8346580Z ##[error]The template is not valid. 'hashFiles(*README.md)' failed. Search pattern '*README.md' doesn't match any file under '/Users/runner/runners/2.160.0/work/github-actions-hashfiles-test/github-actions-hashfiles-test'

And hashFiles('README.md') and hashFiles('**/README.md') fail on Windows:

2019-11-02T18:09:01.2005576Z ##[error]The template is not valid. 'hashFiles(**/README.md)' failed. Search pattern '**/README.md' doesn't match any file under 'd:\a\github-actions-hashfiles-test\github-actions-hashfiles-test'
2019-11-02T18:09:01.2212553Z ##[error]The template is not valid. 'hashFiles(README.md)' failed. Search pattern 'README.md' doesn't match any file under 'd:\a\github-actions-hashfiles-test\github-actions-hashfiles-test'

See example repo: https://github.com/poiru/github-actions-hashfiles-test
And run: https://github.com/poiru/github-actions-hashfiles-test/commit/23dcdc7705d1660ba845291f744dd9b4a9157458/checks?check_suite_id=293065410

Cache always misses for hidden directories

I've noticed that in one of my private repos, the cache is never hit even though the cache is small and at the end of each run it says it successfully saved.

Cache not found for input keys: Linux-eslint-e2e650427648991b0a4acaf9d0a8af37ce90a05d, Linux-eslint-.
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/random-hash/cache.tgz -C /home/runner/work/myproject/myproject/.cache/eslint .
Cache saved successfully

Here's the workflow step:

- name: cache eslint
  uses: actions/cache@v1
  with:
    path: ./.cache/eslint
    key: ${{ runner.os }}-eslint-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-eslint-

Is there any reason for this? I'm assuming caching isn't supported in free-tier private repos yet, but I haven't seen that stated anywhere in the documentation.

Unsupported event types should be blocked as verification errors

Currently, only push and pull_request events work with the cache service, other events will fail server authorization with an error like "No scopes with read permission were found on the request."

We should check the event (from env variable GITHUB_EVENT_NAME) and provide a more helpful error message when the event is not supported.

Yarn workspaces

Hi, we need to work with yarn workspaces, but there is no option to select which folders we need to cache...

When we used CircleCI, we had something like this:

- save_cache:
    key: yarn-{{ checksum "yarn.lock" }}
    paths:
      - node_modules
      - workspace-a/node_modules
      - workspace-b/node_modules

Can we do that with the GitHub Actions cache?

Thanks.
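With the action as it stands, the closest equivalent seems to be one cache step per directory (a sketch; the key names are illustrative):

```yaml
# Sketch: one cache step per workspace directory, all keyed off the
# same lockfile hash.
- uses: actions/cache@v1
  with:
    path: node_modules
    key: yarn-root-${{ hashFiles('**/yarn.lock') }}
- uses: actions/cache@v1
  with:
    path: workspace-a/node_modules
    key: yarn-a-${{ hashFiles('**/yarn.lock') }}
- uses: actions/cache@v1
  with:
    path: workspace-b/node_modules
    key: yarn-b-${{ hashFiles('**/yarn.lock') }}
```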

Missing a way to update package cache when upstream versions change

One common use case for caching appears to be caching build dependencies, like:

- name: Dependency cache
  uses: actions/cache@v1
  with:
    key: {os}-{language tooling version}-${{ hashFiles('dependencies') }}
- run: pkgtool update
- run: pkgtool build-dependencies

Unless dependency versions are precisely pinned in a lock file, the key won't change over time as new upstream versions are released. The call to pkgtool update, on the other hand, will see those upstream updates, so the dependencies will be rebuilt; but since the cache key doesn't change, actions/cache won't update the cache.

I see a couple of approaches to address this, but none seem to be available right now:

  • allow specifying whether to update the cache even on a cache hit, using something like with: update: true
  • provide the current date or timestamp, which might then be incorporated in the cache key
  • have actions/cache take pre- and post-checksums of the cached path, and update the cache if the checksum changed
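Pending one of those options, a workaround in the spirit of the second bullet (a sketch; the path and key names are illustrative, and the per-commit-key pattern is borrowed from other issues in this tracker) is a key that changes on every commit plus a shared restore prefix, so each build restores the newest cache and saves an updated one:

```yaml
# Sketch: per-commit primary key forces a save on every build; the
# restore-key prefix still restores the most recent previous cache.
- name: Dependency cache
  uses: actions/cache@v1
  with:
    path: ~/deps
    key: deps-${{ github.sha }}
    restore-keys: |
      deps-
```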

Post cache hook runs after containers shut down

Hi,

I use a container for running the job (see here).

Because the container shuts down before the post cache action runs, the data for the cache is unavailable; it just gives an error Error: No such container: CONTAINER_ID.

It seems the (undocumented?) post parameter for action runs should be configurable to run either before or after container shutdown.

Fails on self-hosted Windows runner with "tar --force-local is not supported"

While the hosted virtual environments work just fine, running on a self-hosted Windows runner fails today if GNU tar is not already on the path (e.g. via a MinGW install or similar).

On newer versions of Windows, BSD tar is included, so you get an error like:

C:\Windows\system32\tar.exe -cz --force-local -f C:/Users/zarenner/actions-runner/_work/_temp/8b5e8d5e-a823-42dd-a103-7b47503f2fdb/cache.tgz -C C:/Users/zarenner/actions-runner/_work/zarenner-testperf/zarenner-testperf .
tar.exe: Option --force-local is not supported
Usage:
  List:    tar.exe -tf <archive-filename>
  Extract: tar.exe -xf <archive-filename>
  Create:  tar.exe -cf <archive-filename> [filenames...]
  Help:    tar.exe --help
##[warning]The process 'C:\Windows\system32\tar.exe' failed with exit code 1

Note that this isn't especially high priority since:

  1. People don't usually wipe self-hosted runners each run, making hosted caching less important, and
  2. Unless the self-hosted runner happens to be co-located with the caching service storage, significant performance gains are unlikely anyway

Recommend caching ~/.npm rather than node_modules

The readme currently provides an example workflow that suggests caching a project's node_modules folder.

This is generally not recommended: see here, here, here, etc. It also doesn't integrate well with npm's suggested CI workflow -- which is to cache ~/.npm and use npm ci -- because npm ci always removes node_modules if it exists, so caching it strictly slows down the build.

For reference, Azure and Travis both cache ~/.npm rather than node_modules by default.

Was this a conscious design decision? If not, would you consider revising the readme to suggest using the shared cache?
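The suggested readme change might look like this sketch (based on the npm ci workflow referenced above; the key name is illustrative):

```yaml
# Sketch: cache npm's shared cache directory instead of node_modules,
# and let `npm ci` rebuild node_modules from it.
- uses: actions/cache@v1
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
- run: npm ci
```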

Cache file is not compressed

This action is what I wanted for GitHub Actions. Thank you for creating this.

As far as I read the sources,

  • The upload function (used as cacheHttpClient.saveCache below):

    export async function saveCache(stream: NodeJS.ReadableStream, key: string) {
      const cacheUrl = getCacheUrl();
      const token = process.env["ACTIONS_RUNTIME_TOKEN"] || "";
      const bearerCredentialHandler = new BearerCredentialHandler(token);
      const resource = `_apis/artifactcache/cache/${encodeURIComponent(key)}`;
      const postUrl = cacheUrl + resource;
      const restClient = new RestClient("actions/cache", undefined, [
        bearerCredentialHandler
      ]);
      const requestOptions = getRequestOptions();
      requestOptions.additionalHeaders = {
        "Content-Type": "application/octet-stream"
      };
      const response = await restClient.uploadStream<void>(
        "POST",
        postUrl,
        stream,
        requestOptions
      );
      if (response.statusCode !== 200) {
        throw new Error(`Cache service responded with ${response.statusCode}`);
      }
      core.info("Cache saved successfully");
    }

  • cache/src/save.ts, lines 30 to 68 at b2cac08:

    let cachePath = utils.resolvePath(
      core.getInput(Inputs.Path, { required: true })
    );
    core.debug(`Cache Path: ${cachePath}`);
    let archivePath = path.join(
      await utils.createTempDirectory(),
      "cache.tgz"
    );
    core.debug(`Archive Path: ${archivePath}`);
    // http://man7.org/linux/man-pages/man1/tar.1.html
    // tar [-options] <name of the tar archive> [files or directories which to add into archive]
    const args = ["-cz"];
    const IS_WINDOWS = process.platform === "win32";
    if (IS_WINDOWS) {
      args.push("--force-local");
      archivePath = archivePath.replace(/\\/g, "/");
      cachePath = cachePath.replace(/\\/g, "/");
    }
    args.push(...["-f", archivePath, "-C", cachePath, "."]);
    const tarPath = await io.which("tar", true);
    core.debug(`Tar Path: ${tarPath}`);
    await exec(`"${tarPath}"`, args);
    const fileSizeLimit = 200 * 1024 * 1024; // 200MB
    const archiveFileSize = fs.statSync(archivePath).size;
    core.debug(`File Size: ${archiveFileSize}`);
    if (archiveFileSize > fileSizeLimit) {
      core.warning(
        `Cache size of ${archiveFileSize} bytes is over the 200MB limit, not saving cache.`
      );
      return;
    }
    const stream = fs.createReadStream(archivePath);
    await cacheHttpClient.saveCache(stream, primaryKey);

the specified directory is archived with tar, but it is not compressed. The size of build dependencies is often quite big, and the cache size limit is 200MB per the implementation in src/save.ts, so I think compressing the cache file would be better.

Post actions failed

I am writing an action right now for a Rails app with the setup below. However, it fails in the post action. It seems hashFiles is looking for Gemfile.lock in the wrong place.

- uses: actions/cache@v1
  id: bundle-cache
  with:
    path: vendor/bundle
    key: ${{ runner.os }}-gem-${{ hashFiles('**/Gemfile.lock') }}
    restore-keys: |
      ${{ runner.os }}-gem-
Post actions/cache@v1 (0s)

##[error]The template is not valid. Could not find file '/home/runner/work/my-app/my-app/Gemfile.lock'.

Very long execution of post action/cache@v1

Problem

The average step time is about 4-6 hours 🤕. Unfortunately, it's not clear why the step runs for so long; the logs do not open.

(Screenshot: 2019-11-14 at 14:44)

Workflow:

...
steps:
  - uses: actions/checkout@v1
  - uses: actions/setup-node@v1
    with:
      node-version: '12.x'
  - uses: actions/cache@v1
    with:
      path: node_modules
      key: node-${{ hashFiles('**/package-lock.json') }}
  - name: Setup npm
    run: echo "//registry.npmjs.org/:_authToken=${{ secrets.NPM_AUTH_TOKEN }}" > ~/.npmrc
  - name: Bootstrap
    run: npm i
...

Any idea what might have gone wrong?

Cache single file

I'm trying to cache the result of docker save, which creates a tar file.
When I try to cache a single file, an error is raised.

- name: Cache docker layers
  uses: actions/cache@preview
  id: cache
  with:
    path: docker_cache.tar
    key: ${{ runner.os }}-docker
/bin/tar -cz -f /home/runner/work/_temp/9d43813f-9c7b-4799-ac7d-f8d0749e7d8c/cache.tgz -C /home/runner/work/<repo>/<repo>/docker_cache.tar .
/bin/tar: /home/runner/work/<repo>/<repo>/docker_cache.tar: Cannot open: Not a directory
/bin/tar: Error is not recoverable: exiting now
##[warning]The process '/bin/tar' failed with exit code 2

I could create a directory just to hold the tar file, but it would be nice if this action supported caching single files.
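The directory workaround mentioned above might look like this sketch (the image name and directory are illustrative):

```yaml
# Sketch: save the image into a directory so tar has a directory to archive.
- name: Save docker image
  run: |
    mkdir -p docker_cache
    docker save my-image -o docker_cache/image.tar
- name: Cache docker layers
  uses: actions/cache@preview
  with:
    path: docker_cache
    key: ${{ runner.os }}-docker
```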

Can this module be used from other JavaScript actions?

I am trying to write a JavaScript action that runs npm ci and caches the ~/.npm and ~/.cache/Cypress folders. I can use this action from a YAML file like this:

# https://github.com/actions/cache
- name: Cache node modules
  uses: actions/cache@v1
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-

- name: Cache Cypress binary
  uses: actions/cache@v1
  with:
    path: ~/.cache/Cypress
    key: cypress-${{ runner.os }}-node-${{ hashFiles('**/package.json') }}
    restore-keys: |
      cypress-${{ runner.os }}-node-

- name: install dependencies
  env:
    # make sure every Cypress install prints minimal information
    CI: 1
  run: npm ci

But I would like to have our own Cypress action that does the caching and NPM install for the user, something like

- name: Install NPM and Cypress
  uses: @cypress/github-action

This means that, from the Cypress GH action's JS code, I need to call this action's restore and save logic. I have tried, but without success; for example:

// after executing `npm ci` successfully
const homeDirectory = os.homedir()
const npmCacheDirectory = path.join(homeDirectory, '.npm')

core.saveState('path', npmCacheDirectory)
core.saveState('key', 'abc123')
core.saveState('restore-keys', 'abc123')

console.log('loading cache dist restore')
require('cache/dist/restore')

which gives me

done installing NPM modules
loading cache dist restore
##[error]Input required and not supplied: path
##[error]Node run failed with exit code 1

I think this particular error is due to a different core singleton between the "regular" npm module and the ncc-bundled actions/cache one. But in general, do you by any chance have a plan to document how to use this module from JavaScript? I could write TypeScript and bundle it as a top-level action, which I think would bundle the actions/cache code.

Scope of cache being stored

If a PR restores cache that was scoped to master and saves it at the end of the action, does the scope of the cache change to that of the PR's or does it remain in master's scope?

For example -
Let's say I have folder tempFolder cached on the master branch.
I create a PR that restores the cache from master and downloads tempFolder. The action then makes some changes to tempFolder, and at the end the cache is saved.

Does the scope of tempFolder change from master to that of the PR?

Permission denied during cache store

I am trying to cache the security database of my docker container scanner.

Log:

 Post cached scan db
##[warning]The process '/bin/tar' failed with exit code 2
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/28e557d6-3aeb-4359-9a00-d29f2722deda/cache.tgz -C /home/runner/work/iron-alpine/iron-alpine/vulndb .
/bin/tar: ./db: Cannot open: Permission denied
/bin/tar: ./vuln-list: Cannot open: Permission denied
/bin/tar: Exiting with failure status due to previous errors
##[warning]The process '/bin/tar' failed with exit code 2

Config:

  dockerscan:
    name: image security scan
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - name: docker build
      run: docker build . --file Dockerfile --tag image
    - name: cached scan db
      uses: actions/cache@preview
      with:
        path: vulndb/
        key: ${{ runner.os }}-vulndb
    - name: run security scan
      run: |
        docker run --rm \
          -v /var/run/docker.sock:/var/run/docker.sock \
          -v "$(pwd)/vulndb/":/root/.cache/ \
          aquasec/trivy --severity HIGH,CRITICAL,MEDIUM --no-progress --auto-refresh --ignore-unfixed --exit-code 1 --cache-dir /root/.cache/ image
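A user-side workaround sketch (assuming the Permission denied errors mean the files were created as root by the container, which the log suggests but does not prove) is to reclaim ownership before the post step runs:

```yaml
# Sketch: make the root-owned scan db readable by the runner user so the
# post-cache tar step can archive it (step name is illustrative).
- name: fix vulndb permissions
  run: sudo chown -R "$USER" vulndb/
```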


Split this action into save-cache and restore-cache actions

Proposal: Split this action into two, similar to actions/upload-artifact / actions/download-artifact (and to what CircleCI does):

  • actions/save-cache
  • actions/restore-cache

Rationale: This will give the user a lot more control (e.g. I might want to save the cache even before running the test steps).
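A hypothetical sketch of what the proposed split might look like in a workflow; the action names `actions/restore-cache` and `actions/save-cache` are assumptions from this proposal, not existing actions:

```yaml
steps:
  - uses: actions/restore-cache@v1   # hypothetical action name
    with:
      path: ~/.npm
      key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
  - run: npm ci
  # Save before the test step, so a flaky test run does not discard the cache.
  - uses: actions/save-cache@v1      # hypothetical action name
    with:
      path: ~/.npm
      key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
  - run: npm test
```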

Display used cache size

Hello,

First of all, thank you very much for providing this much needed functionality!

I just have a question: How do I know how much space my cache is using?

Could it be displayed somewhere? Maybe logging it in the build console is enough?

Cache size

Are there any plans to increase the cache size?
For instance, 200 MB for Java dependencies is too small.

Configurable save cache on failure

Currently this cache action only saves caches if all steps succeed. In many cases this is desirable behavior. However, I have some projects with long build times and flaky test suites. It would be very helpful if I could configure the cache to be saved regardless of whether the test suite succeeds or fails.

I have created a fork of this action that sets the post-if to always().

Is it possible to make the save policy configurable, or to pass post-if as an argument from the cache configuration?
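For reference, a sketch of what the fork's change might look like in the action's metadata, using the `runs.post-if` key from the action metadata syntax (the file paths here are assumptions about this repo's layout):

```yaml
# action.yml (forked) -- run the post (save) step even when the job failed
runs:
  using: 'node12'
  main: 'dist/restore/index.js'
  post: 'dist/save/index.js'
  post-if: always()
```

Exposing this as an input would presumably require runner support, since `post-if` is evaluated from the metadata file, not from `with:` inputs.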

Where is `hashFiles` documented/defined?

When I search this repository for "hashFiles", I only get results in Markdown and YAML files; I don't see any code that defines this function. I also don't see any mention of it in
https://help.github.com/en/actions/automating-your-workflow-with-github-actions/software-installed-on-github-hosted-runners

I am trying to figure out how this function actually works.

Reason:

I would like to define a cache per branch using the GITHUB_REF environment variable (https://help.github.com/en/actions/automating-your-workflow-with-github-actions/using-environment-variables). But since that variable can contain path separators, I think it would be a good idea to hash the string and use the hash value, e.g.:

- uses: actions/cache@v1
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}-${{ getStrHash(github.ref) }}
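(`getStrHash` above is a hypothetical function.) One hedged workaround that avoids hashing entirely, assuming cache keys accept the raw ref string including its slashes, is to use the `github.ref` context value directly:

```yaml
- uses: actions/cache@v1
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ github.ref }}-${{ hashFiles('**/package-lock.json') }}
```

If slashes turn out to be rejected in keys, a shell step could compute a digest of `$GITHUB_REF` and export it for later steps instead.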

The cache limit is too small

I understand that this issue is not related to the code in this repository, but I would like to discuss it with many people, so I am opening it here. (I know the community forums exist, but many people probably don't know about them yet.)

First, I really appreciate the GitHub team adding the cache feature to GitHub Actions. That's great for us! But in recent years node_modules has become too large, and 200 MB can't cover it. It's the same in other languages: for example, installing opam packages with esy can easily exceed 800 MB. Is there a way to increase the cache limit? Even just removing the per-cache limit would leave a relatively realistic overall limit. I know that if the files are too large, save/restore may slow down badly, but that shouldn't be enforced on the cache action's side.
