actions/cache
Cache dependencies and build outputs in GitHub Actions
License: MIT License
For debugging cache integrity, code was added to calculate the checksum of the cache during restore and save (see https://github.com/actions/cache/blob/master/src/restore.ts#L100)
Since we can't conditionally run this in debug mode only, it's run on every use of the action and is a waste of time. We should remove that block of code.
Hey,
I'd need some help understanding what I'm doing wrong.
Expectation: path: ${{ env.LOCALAPPDATA }} points to C:\Users\runneradmin\AppData\Local.
runs-on: windows-latest
- name: Test print stuff
run: |
echo $env:LOCALAPPDATA
ls $env:LOCALAPPDATA
Test print stuff 7s
Run echo $env:LOCALAPPDATA
C:\Users\runneradmin\AppData\Local
Directory: C:\Users\runneradmin\AppData\Local
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 11/11/2019 2:16 PM Google
d----- 11/17/2019 11:28 PM Microsoft
d----- 11/17/2019 9:44 PM Packages
d----- 11/17/2019 11:29 PM Temp
- name: Cache electron-builder dependencies
uses: actions/cache@v1
with:
path: ${{ env.LOCALAPPDATA }}electron-builder
Post Cache electron-builder dependencies 3s
##[warning]The process 'C:\Program Files\Git\usr\bin\tar.exe' failed with exit code 2
Post job cleanup.
"C:\Program Files\Git\usr\bin\tar.exe" -cz --force-local -f d:/a/_temp/0b413849-9e8e-4665-894e-02bf8c693b6e/cache.tgz -C d:/a/my-project/my-project/electron-builder .
/usr/bin/tar: d\:/a/my-project/my-project/electron-builder: Cannot open: No such file or directory
/usr/bin/tar: Error is not recoverable: exiting now
##[warning]The process 'C:\Program Files\Git\usr\bin\tar.exe' failed with exit code 2
Meanwhile, path: C:\Users\runneradmin\AppData\Local\electron-builder works as expected.
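The failure above is consistent with a missing path separator: env.LOCALAPPDATA expands without a trailing backslash, so the variable and the folder name are concatenated into an invalid path. A hedged sketch of a corrected step (the key shown is illustrative, not from the original report):

```yaml
- name: Cache electron-builder dependencies
  uses: actions/cache@v1
  with:
    # Note the explicit backslash: env.LOCALAPPDATA expands to
    # C:\Users\runneradmin\AppData\Local with no trailing separator.
    path: ${{ env.LOCALAPPDATA }}\electron-builder
    key: ${{ runner.os }}-electron-builder
```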
Describe the enhancement
In my opinion, it is often worthwhile to cache the builds of Docker actions. At least for actions published in the marketplace, GitHub Actions could pre-build them and store the result as a cache.
Currently, Docker actions are built multiple times as the first step of all builds, which is extremely inefficient for complex Docker actions.
Code Snippet
This is due to the way the GitHub Actions worker works.
Additional information
The issue actions/toolkit#47 has been closed with a reference to @actions/cache, which, however, covers user data caching only. In this case we are dealing not with user data but with public actions, and the process is not controlled by the user at this stage (yet).
Could you add an example for a Python workspace?
First the cache was successfully restored here: https://github.com/stevencpp/cpp_modules/commit/bce1c922e0525b936336c67d6ae50e9aed114963/checks?check_suite_id=310043429#step:8:1
Then in the very next commit it failed to restore the cache from the same key to the same path: https://github.com/stevencpp/cpp_modules/commit/23a1cb1b99b080800c3d8f122085a2d5a2a5116a/checks?check_suite_id=311040306#step:8:1 .
There were no changes between the two commits that could explain the difference, so maybe there were some changes to the action at that time?
A few commits later it still failed to restore the cache but now all the checks were successful and yet it refused to save the cache, thinking that it was a cache hit
https://github.com/stevencpp/cpp_modules/commit/664390b2b1665f8a413fe3546db0fda3ea909390/checks?check_suite_id=311386473#step:27:1
So, it would be helpful to figure out what caused the extraction to fail in the first place, but in any case such failures should either fail the cache action or at least allow rebuilding and saving the cache afterwards.
In a PHP project using composer for managing project dependencies, we need the ability to cache the $COMPOSER_HOME/cache/files directory and overwrite its contents by reusing the same cache key.
In other words, we need a way to save and update a globally shared cache regardless of the contents of composer.json (composer.lock is not committed for a library), because otherwise the result would be a stale cache that never gets updated after the first time it is written (not great for performance).
See travis-ci/travis-ci#4579 (comment) for why caching the vendor directory is dangerous and not recommended.
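One way to approximate an always-updating shared cache with the current action is to put a unique value (such as the commit SHA) in the key, so every run saves a fresh cache, and rely on restore-keys for the shared prefix. This is only a sketch of a possible workaround, not a documented feature; the key names are assumptions:

```yaml
- name: Cache composer file cache
  uses: actions/cache@v1
  with:
    path: ~/.composer/cache/files
    # github.sha is unique per commit, so a new cache is saved on every run;
    # the restore-keys prefix then matches the most recent previous save.
    key: composer-files-${{ github.sha }}
    restore-keys: |
      composer-files-
```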
Should we try to use this cache action to cache Docker layers, doing trickery with docker save and docker load, or are you working on a different path for Docker caching?
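A rough sketch of the docker save / docker load trickery mentioned above might look like the following; the image name, directory, and cache key are illustrative assumptions:

```yaml
- name: Cache docker image archive
  uses: actions/cache@v1
  id: docker-cache
  with:
    path: docker-cache            # a directory, since the action archives a directory
    key: docker-${{ hashFiles('Dockerfile') }}
- name: Load cached image
  if: steps.docker-cache.outputs.cache-hit == 'true'
  run: docker load -i docker-cache/image.tar
- name: Build and save image
  if: steps.docker-cache.outputs.cache-hit != 'true'
  run: |
    docker build -t myimage .
    mkdir -p docker-cache
    docker save -o docker-cache/image.tar myimage
```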
When I hit the cache size limit, I get the following warning message, which prints the actual cache size in bytes rather than MB. It would be nice if the warning also expressed the size in MB.
##[warning]Cache size of 712762190 bytes is over the 400MB limit, not saving cache.
When run within a container job the action fails to run - it looks like the cache action container is deleted in the stop containers phase, before the cache post action is able to run.
Presumably, this only runs within the context of the VM, not the container context?
When assembling cache keys that have been used before, a warning is issued:
The question is: why?
The point of a cache item is to be reusable, so when we use the same cache key/item multiple times, why would that be worth a warning?
In this particular case, the cache key is assembled like this:
- name: "Cache dependencies installed with composer"
uses: actions/[email protected]
with:
path: ~/.composer/cache
key: php7.2-composer-locked-${{ hashFiles('**/composer.lock') }}
restore-keys: |
php7.2-composer-locked-
However, when the content of composer.lock has not changed, the same cache key will be generated, and apparently the action then issues a warning because a cache item with that key already exists.
Issuing a warning probably makes sense when someone uses a cache key that will always be the same (something that does not use expressions), but here it is intended.
I have fallen back to using github.sha now, but I'm not sure whether this is the best approach. It will create a lot more cache items.
What do you think?
- name: "Cache dependencies installed with composer"
uses: actions/[email protected]
with:
path: ~/.composer/cache
- key: php7.2-composer-locked-${{ hashFiles('**/composer.lock') }}
+ key: php7.2-composer-locked-${{ github.sha }}
restore-keys: |
php7.2-composer-locked-
For reference, see .github/workflows/continuous-integration.yml.
For example, the latest stable Dart SDK is at:
The URL does not change when a new stable release is made, but there's no easy way to invalidate the cache because it's hard to tell whether the zip contains a new version. It would be convenient to be able to cache a folder for a period (e.g. 24 hours) and have it automatically removed after that, regardless of when it was last accessed.
This would ensure you're generally on the latest version (to within 24hrs, or whatever) but you don't need to download it every time.
There are many parts of my builds that are versioned similarly to this (Dart stable/dev SDK, Flutter master/dev branch, VS Code stable/insiders builds) and trying to code caching strategies for them all would be complicated so it's easiest to just download them every time, but a short time-based cache would save on both resources and runtime.
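With the current action, one hedged approximation of a time-based cache is to fold the current date into the key so the cache effectively expires daily; the step id, path, and key names below are assumptions:

```yaml
- name: Get current date
  id: date
  run: echo "::set-output name=today::$(date -u +%Y-%m-%d)"
- name: Cache Dart SDK download
  uses: actions/cache@v1
  with:
    path: ~/dart-sdk
    # A new key every day means the SDK is re-downloaded at most once per 24 hours.
    key: dart-sdk-${{ steps.date.outputs.today }}
```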
It can be useful to cache multiple directories, for example from different tools like pip and pre-commit.
With Travis CI (ignoring their pip: true shortcut):
cache:
directories:
- $HOME/.cache/pip
- $HOME/.cache/pre-commit
The Actions equivalent requires adding a duplicated step for each directory, something like:
- name: pip cache
uses: actions/cache@preview
with:
path: ~/.cache/pip
key: ${{ matrix.python-version }}-pip
- name: pre-commit cache
uses: actions/cache@preview
with:
path: ~/.cache/pre-commit
key: ${{ matrix.python-version }}-pre-commit
Perhaps also allow multiple directories in a single step, something like this?
- name: pip cache
uses: actions/cache@preview
with:
path:
- ~/.cache/pip
- ~/.cache/pre-commit
key: ${{ matrix.python-version }}
This is the suggested configuration to cache npm:
- name: Cache node modules
uses: actions/cache@v1
with:
path: node_modules
key: ${{ runner.OS }}-build-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.OS }}-build-${{ env.cache-name }}-
${{ runner.OS }}-build-
${{ runner.OS }}-
This is the suggested configuration to cache npm on Travis:
Not a mistake. It's empty. It's the default.
Can better defaults be provided? GitHub Actions workflows are 5x to 10x longer than Travis’ for the most common configurations.
When triggering my action with on: ['deployment'], it throws a warning about having no read permissions. It doesn't happen with on: push.
##[debug]Resolved Keys:
##[debug]["Linux-yarn-...hash...","Linux-yarn-"]
##[debug]Cache Url: https://artifactcache.actions.githubusercontent.com/...hash.../
##[warning]No scopes with read permission were found on the request.
::set-output name=cache-hit::false
Hey team!
First, awesome job on this feature, it will immensely help our CI speed for our JavaScript projects, kudos!
I've been running into the "over the limit" error for a yarn project with workspaces enabled:
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/3c08f6f0-f11f-4d8f-bed5-d491e7d8d443/cache.tgz -C /home/runner/.cache/yarn .
##[warning]Cache size of 231440535 bytes is over the 200MB limit, not saving cache.
But when I run the same tar command locally, I get a 100.3 MB bundle. Is there anything I'm missing here?
Here's my workflow:
name: Test
on:
push:
branches:
- '**'
tags:
- '!**'
jobs:
test:
name: Test, lint, typecheck and build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Dump GitHub context
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- name: Use Node.js 10.16.0
uses: actions/setup-node@v1
with:
node-version: 10.16.0
- name: Cache yarn node_modules
uses: actions/cache@v1
with:
path: ~/.cache/yarn
key: ${{ runner.OS }}-yarn-${{ hashFiles('**/yarn.lock') }}
restore-keys: |
${{ runner.os }}-yarn-
- name: Install
run: yarn install --frozen-lockfile
# ...
Thanks a lot!
When trying to save a new cache I get an error, but seeing as there was no cache hit at the start of the job it is a bit weird.
Post job cleanup.
/usr/bin/docker exec b41de5d1c382827f57c505ea59d0741bb205fddc841d2aae747e845b338f36ea sh -c "cat /etc/*release | grep ^ID"
Running JavaScript Action with default external tool: node12
Error: No such container: b41de5d1c382827f57c505ea59d0741bb205fddc841d2aae747e845b338f36ea
##[error]Node run failed with exit code 1
I created an open version here https://github.com/samhamilton/phoenix_hello/commit/eb71e941b763caa4df121f28b7085c14fb17f585/checks?check_suite_id=303196383
Thanks
Sam
Thanks for the action. It would be great if it was possible to clear the cache as well either using the action or via the web interface.
Is it possible to skip resaving the cached files if the cache already exists? I used the #37 example to save Docker images for one of our GitHub Actions. The execution time of the run action step improves from 1 minute to 8 seconds, but resaving the cache takes 30-50 seconds, which slows the workflow down more than it speeds it up :-(
Example GitHub action execution times:

Set up job 1s
Run actions/checkout@v1 3s
Docker login 5s
Actions checkout 1s
Run action 1m 2s
Complete job

Run actions/checkout@v1 1s
Docker login 2s
Run actions/cache@v1 9s
Load cached Docker layers 23s
Actions checkout 1s
Run action 8s
Build image 0s # Skipped, because cache already exists
Post actions/cache@v1 39s
Complete job
Hi, I'm not sure if this is a feature or a bug, but when we use the same cache key for different PRs (sequentially, without a race condition), the second PR doesn't find the cache even though the keys are the same and the first PR completed the workflow successfully (including the cache upload).
I looked through the actions/cache code and I think it may be related to the scope attribute of the ArtifactCacheEntry object. I wasn't able to hack around the code to ignore this attribute, so I'm not 100% sure. The attribute does contain values suggesting the caches are scoped to different PRs, though (refs/pull/1660/merge and refs/pull/1661/merge on the two test pull requests I tried).
This is the setup we use:
- uses: actions/cache@v1
id: cache
with:
path: .github/vendor/bundle
key: github-gems-${{ hashFiles('**/.github/Gemfile.lock') }}
restore-keys: |
github-gems-
The bundle is installing gems to the correct folder, since the cache is successfully fetched starting from the second commit on each pull request.
Let me know if I can provide more info. Thanks!
According to a comment by @joshmgross, actions/cache uses an undocumented API (37c4544#r35753492). I am creating this issue to track progress in that area.
With config like this:
- name: Ubuntu cache
uses: actions/cache@preview
if: startsWith(matrix.os, 'ubuntu')
with:
path: ~/.cache/pip
key: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.os }}-${{ matrix.python-version }}
- name: macOS cache
uses: actions/cache@preview
if: startsWith(matrix.os, 'macOS')
with:
path: ~/Library/Caches/pip
key: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.os }}-${{ matrix.python-version }}
It creates a key like "Test-refs/heads/gha-cache-ubuntu-18.04-3.5".
The upload step fails with a 404:
Post Ubuntu cache 1s
##[warning]Cache service responded with 404
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/802beeb6-cc68-45cb-a68d-488cc4008df5/cache.tgz -C /home/runner/.cache/pip .
##[warning]Cache service responded with 404
eg. https://github.com/hugovk/Pillow/runs/286812478#step:22:1
That's because github.ref includes slashes (e.g. refs/heads/gha-cache):
The branch or tag ref that triggered the workflow. For example,
refs/heads/feature-branch-1
. If neither a branch or tag is available for the event type, the variable will not exist.
Should I escape or replace illegal characters (and how?), or should the action take care of them? Or is there another way to include the branch name in the key?
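Until the action handles such characters itself, one workaround is to sanitize GITHUB_REF in a preceding step and use that step's output in the key. This is a sketch, not an official recommendation; the step id is an assumption:

```yaml
- name: Sanitize ref for cache key
  id: ref
  # Replace every "/" in e.g. refs/heads/gha-cache with "-" so the key is flat.
  run: echo "::set-output name=safe::$(echo "$GITHUB_REF" | tr '/' '-')"
- name: Ubuntu cache
  uses: actions/cache@preview
  with:
    path: ~/.cache/pip
    key: ${{ github.workflow }}-${{ steps.ref.outputs.safe }}-${{ matrix.os }}-${{ matrix.python-version }}
```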
Care must be taken to figure out a key name, so it doesn't collide with another one and, for example, fetch an incompatible cache for another OS or Python version.
A key is not something I've had to worry about with Travis CI: there is one cache per branch and language version/compiler version/JDK version/Gemfile location/etc.
If the key is omitted, how about calculating one from the branch, workflow name and matrix values (and anything else relevant)?
For example, given this workflow:
name: Test
on: [push, pull_request]
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: ["3.6", "3.7"]
os: [ubuntu-latest, ubuntu-16.04, macOS-latest]
Perhaps it would generate something like Test__master__ubuntu-latest__3.6.
As a user, I don't really care, I'd just like the same cache to be fetched when rebuilding the same sort of thing.
The expected behavior is that all three of the following steps pass on all platforms:
- name: Cache **/README.md
uses: actions/cache@preview
with:
path: .
key: test-${{ runner.os }}-${{ hashFiles('**/README.md') }}
continue-on-error: true
- name: Cache README.md
uses: actions/cache@preview
with:
path: .
key: test-${{ runner.os }}-${{ hashFiles('README.md') }}
continue-on-error: true
- name: Cache *README.md
uses: actions/cache@preview
with:
path: .
key: test-${{ runner.os }}-${{ hashFiles('*README.md') }}
continue-on-error: true
But right now hashFiles(README.md) and hashFiles(*README.md) fail on macOS:
2019-11-02T18:08:19.8057240Z ##[error]The template is not valid. 'hashFiles(README.md)' failed. Search pattern 'README.md' doesn't match any file under '/Users/runner/runners/2.160.0/work/github-actions-hashfiles-test/github-actions-hashfiles-test'
2019-11-02T18:08:19.8346580Z ##[error]The template is not valid. 'hashFiles(*README.md)' failed. Search pattern '*README.md' doesn't match any file under '/Users/runner/runners/2.160.0/work/github-actions-hashfiles-test/github-actions-hashfiles-test'
And hashFiles(README.md) and hashFiles(**/README.md) fail on Windows:
2019-11-02T18:09:01.2005576Z ##[error]The template is not valid. 'hashFiles(**/README.md)' failed. Search pattern '**/README.md' doesn't match any file under 'd:\a\github-actions-hashfiles-test\github-actions-hashfiles-test'
2019-11-02T18:09:01.2212553Z ##[error]The template is not valid. 'hashFiles(README.md)' failed. Search pattern 'README.md' doesn't match any file under 'd:\a\github-actions-hashfiles-test\github-actions-hashfiles-test'
See example repo: https://github.com/poiru/github-actions-hashfiles-test
And run: https://github.com/poiru/github-actions-hashfiles-test/commit/23dcdc7705d1660ba845291f744dd9b4a9157458/checks?check_suite_id=293065410
Hey,
Just wondering why restore-keys is defined as a multi-line string, while YAML obviously supports sequences? (Line 23 in 25e0c8f.)
Thanks!
I've noticed that in one of my private repos, the cache is never hit even though the cache is small and at the end of each run it says it successfully saved.
Cache not found for input keys: Linux-eslint-e2e650427648991b0a4acaf9d0a8af37ce90a05d, Linux-eslint-.
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/random-hash/cache.tgz -C /home/runner/work/myproject/myproject/.cache/eslint .
Cache saved successfully
Here's the workflow step:
- name: cache eslint
uses: actions/cache@v1
with:
path: ./.cache/eslint
key: ${{ runner.os }}-eslint-${{ github.sha }}
restore-keys: |
${{ runner.os }}-eslint-
Are there any reasons for this? I'm assuming caching isn't supported in free-tier private repos yet but I haven't seen that anywhere in the documentation.
Currently, only push and pull_request events work with the cache service; other events will fail server authorization with an error like "No scopes with read permission were found on the request."
We should check the event (from the env variable GITHUB_EVENT_NAME) and provide a more helpful error message when the event is not supported.
Hi, we need to work with yarn workspaces, but there is no option to select which folders we need to cache...
When we used CircleCI, we have something like this:
- save_cache:
key: yarn-{{ checksum "yarn.lock" }}
paths:
- node_modules
- workspace-a/node_modules
- workspace-b/node_modules
Can we do that with Github Actions Cache?
Thanks.
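Until multiple paths are supported in a single step, the closest equivalent today appears to be one cache step per directory; a sketch under that assumption (the keys are illustrative):

```yaml
- name: Cache root node_modules
  uses: actions/cache@v1
  with:
    path: node_modules
    key: yarn-root-${{ hashFiles('yarn.lock') }}
- name: Cache workspace-a node_modules
  uses: actions/cache@v1
  with:
    path: workspace-a/node_modules
    key: yarn-workspace-a-${{ hashFiles('yarn.lock') }}
- name: Cache workspace-b node_modules
  uses: actions/cache@v1
  with:
    path: workspace-b/node_modules
    key: yarn-workspace-b-${{ hashFiles('yarn.lock') }}
```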
One common use case for caching appears to be caching build dependencies, like:
- name: Dependency cache
uses: actions/cache@v1
with:
key: {os}-{language tooling version}-${{ hashFiles('dependencies') }}
- run: pkgtool update
- run: pkgtool build-dependencies
Unless dependency versions are precisely specified using some lock file, this means the key doesn't change with time or with newly released upstream dependency versions. On the other hand, the call to pkgtool update will be aware of those upstream updates, and hence they will be rebuilt. But since the cache key doesn't change, actions/cache won't update the cache.
I see a couple of approaches to address this, but none seem to be available right now:
with: update: true
Hi,
I use a container for running the job (see here).
Because the container shuts down before the post cache action runs, the data for the cache is unavailable; it just gives the error Error: No such container: CONTAINER_ID.
It seems the (undocumented?) post parameter for the action's runs section should be configurable to run either before or after container shutdown.
While the hosted virtual environments work just fine, trying to run on a self-hosted windows runner fails today if gnu tar is not already on the path (e.g. via a mingw install or similar).
On newer versions of Windows, BSD tar is included, so you get an error like:
C:\Windows\system32\tar.exe -cz --force-local -f C:/Users/zarenner/actions-runner/_work/_temp/8b5e8d5e-a823-42dd-a103-7b47503f2fdb/cache.tgz -C C:/Users/zarenner/actions-runner/_work/zarenner-testperf/zarenner-testperf .
tar.exe: Option --force-local is not supported
Usage:
List: tar.exe -tf <archive-filename>
Extract: tar.exe -xf <archive-filename>
Create: tar.exe -cf <archive-filename> [filenames...]
Help: tar.exe --help
##[warning]The process 'C:\Windows\system32\tar.exe' failed with exit code 1
Note that this isn't especially high priority since:
The readme currently provides an example workflow that suggests caching a project's node_modules folder.
This is generally not recommended: see here, here, here, etc. It also doesn't integrate well with npm's suggested CI workflow -- which is to cache ~/.npm and use npm ci -- because npm ci always removes node_modules if it exists, so caching it strictly slows down the build.
For reference, Azure and Travis both cache ~/.npm rather than node_modules by default.
Was this a conscious design decision? If not, would you consider revising the readme to suggest using the shared cache?
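For comparison, a ~/.npm-based configuration along the lines npm suggests might look like this sketch (key prefix is an assumption):

```yaml
- name: Cache npm download cache
  uses: actions/cache@v1
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
- name: Install dependencies
  # npm ci removes node_modules anyway, so only the shared download cache is reused.
  run: npm ci
```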
I think this has the same root cause as #43, except I am using my own custom string containing a slash (tmp release/benchmark) and receive a 404 when trying to save the cache.
Please provide an example for a Scala project built using sbt.
This action is what I wanted for GitHub Actions. Thank you for creating this.
As far as I can tell from the sources, the specified directory is archived with tar but not compressed. Build dependencies are often quite big, and the size limit for a cache is 200MB as per the implementation in src/save.ts. I think compressing the cache file would be better.
I am writing the actions right now for a Rails app with the setup below. However, it's failing on the post action. It seems hashFiles is looking for Gemfile.lock in the wrong place.
- uses: actions/cache@v1
id: bundle-cache
with:
path: vendor/bundle
key: ${{ runner.os }}-gem-${{ hashFiles('**/Gemfile.lock') }}
restore-keys: |
${{ runner.os }}-gem-
Post actions/cache@v1 0s
##[error]The template is not valid. Could not find file '/home/runner/work/my-app/my-app/Gemfile.lock'.
The average step time is about 4-6 hours 🤕. Unfortunately, the reason the step runs for so long is not clear, and the logs do not open.
Workflow:
...
steps:
- uses: actions/checkout@v1
- uses: actions/setup-node@v1
with:
node-version: '12.x'
- uses: actions/cache@v1
with:
path: node_modules
key: node-${{ hashFiles('**/package-lock.json') }}
- name: Setup npm
run: echo "//registry.npmjs.org/:_authToken=${{ secrets.NPM_AUTH_TOKEN }}" > ~/.npmrc
- name: Bootstrap
run: npm i
...
Any idea what might have gone wrong?
I'm trying to cache the result of docker save, which creates a tar file.
When I try to cache a single file, an error is raised.
- name: Cache docker layers
uses: actions/cache@preview
id: cache
with:
path: docker_cache.tar
key: ${{ runner.os }}-docker
/bin/tar -cz -f /home/runner/work/_temp/9d43813f-9c7b-4799-ac7d-f8d0749e7d8c/cache.tgz -C /home/runner/work/<repo>/<repo>/docker_cache.tar .
/bin/tar: /home/runner/work/<repo>/<repo>/docker_cache.tar: Cannot open: Not a directory
/bin/tar: Error is not recoverable: exiting now
##[warning]The process '/bin/tar' failed with exit code 2
I could create a directory just to hold the tar file, but it would be nice if this action supported caching single files.
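Until single-file caching is supported, a minimal workaround is the directory approach mentioned above: keep the tar inside a dedicated directory and cache that. A sketch (the directory and image names are assumptions):

```yaml
- name: Cache docker layers
  uses: actions/cache@preview
  id: cache
  with:
    # The action archives a directory, so the single tar file lives inside one.
    path: docker_cache
    key: ${{ runner.os }}-docker
- name: Save docker image into the cached directory
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    mkdir -p docker_cache
    docker save -o docker_cache/docker_cache.tar myimage
```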
I am trying to write a JavaScript action that runs npm ci and caches the ~/.npm and ~/.cache/Cypress folders. I can use this action from a YML file like this:
# https://github.com/actions/cache
- name: Cache node modules
uses: actions/cache@v1
with:
path: ~/.npm
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Cache Cypress binary
uses: actions/cache@v1
with:
path: ~/.cache/Cypress
key: cypress-${{ runner.os }}-node-${{ hashFiles('**/package.json') }}
restore-keys: |
cypress-${{ runner.os }}-node-
- name: install dependencies
env:
# make sure every Cypress install prints minimal information
CI: 1
run: npm ci
But I would like to have our own Cypress action that does the caching and NPM install for the user, something like:
- name: Install NPM and Cypress
  uses: @cypress/github-action
This means that from the Cypress GH action JS code I need to call the restore and save files in this action. I have tried, but without success. For example:
// after executing `npm ci` successfully
const homeDirectory = os.homedir()
const npmCacheDirectory = path.join(homeDirectory, '.npm')
core.saveState('path', npmCacheDirectory)
core.saveState('key', 'abc123')
core.saveState('restore-keys', 'abc123')
console.log('loading cache dist restore')
require('cache/dist/restore')
which gives me
done installing NPM modules
loading cache dist restore
##[error]Input required and not supplied: path
##[error]Node run failed with exit code 1
I think this particular error is due to a different core singleton between the "regular" npm module and the ncc-bundled actions/cache one. But in general, do you by any chance have a plan to document how to use this module from JavaScript? I could write TypeScript and bundle it as a top-level action, which I think would bundle the actions/cache code.
If a PR restores cache that was scoped to master and saves it at the end of the action, does the scope of the cache change to that of the PR's or does it remain in master's scope?
For example -
Let's say I have a folder tempFolder cached on the master branch.
I create a PR that restores the cache from master and downloads tempFolder. The action then makes some changes to tempFolder, and at the end, the cache is saved.
Does the scope of tempFolder change from master to that of the PR?
I am trying to cache the security database of my docker container scanner.
Log:
Post cached scan db
##[warning]The process '/bin/tar' failed with exit code 2
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/28e557d6-3aeb-4359-9a00-d29f2722deda/cache.tgz -C /home/runner/work/iron-alpine/iron-alpine/vulndb .
/bin/tar: ./db: Cannot open: Permission denied
/bin/tar: ./vuln-list: Cannot open: Permission denied
/bin/tar: Exiting with failure status due to previous errors
##[warning]The process '/bin/tar' failed with exit code 2
Config:
dockerscan:
name: image security scan
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: docker build
run: docker build . --file Dockerfile --tag image
- name: cached scan db
uses: actions/cache@preview
with:
path: vulndb/
key: ${{ runner.os }}-vulndb
- name: run security scan
run: |
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$(pwd)/vulndb/":/root/.cache/ \
aquasec/trivy --severity HIGH,CRITICAL,MEDIUM --no-progress --auto-refresh --ignore-unfixed --exit-code 1 --cache-dir /root/.cache/ image
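The Permission denied errors above suggest the files written by the container are root-owned, so the post-job tar run by the runner user cannot read them. One hedged workaround is to reset ownership after the scan step, before the post action runs; a sketch:

```yaml
- name: Fix vulndb permissions for caching
  # Files created inside the container are root-owned; make them readable
  # by the runner user so the post-job tar can archive them.
  run: sudo chown -R "$USER" vulndb/
```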
Proposal: Split this action into two, similar to actions/upload-artifact / actions/download-artifact (and similar to what CircleCI does).
Rationale: This will give the user a lot more control (e.g. I might want to save the cache even before running the test steps).
I'm trying to cache the ccache directory; it fails with the warning Cache service responded with 404 and the cache doesn't get saved. The weird thing is that it happens repeatedly on only one job. The size of the cache is around 25M, which is way less than 400M.
https://github.com/rashedmyt/turtlecoin/runs/303082196
The remaining jobs are getting cached perfectly..
Hello,
First of all, thank you very much for providing this much needed functionality!
I just have a question: How do I know how much space my cache is using?
Could it be displayed somewhere? Maybe logging it in the build console is enough?
Are there any plans to increase the cache size?
For instance, 200 MB for Java dependencies is too small.
Currently this cache action only saves caches if all tests succeed. In many cases this is desirable behavior. However, I have some projects with long build times and flaky test suites. It would be very helpful if I could configure the cache to be saved regardless of test suite success or failure.
I have created a fork of this action that sets the post-if to always().
Is it possible to make the cache policy configurable? Or to pass post-if as an argument from the cache configuration?
When I search this repository for "hashFiles" I only get results matching markdown and yaml files. I don't see any code that defines this function. I also don't see any mention of it in
https://help.github.com/en/actions/automating-your-workflow-with-github-actions/software-installed-on-github-hosted-runners
I am trying to figure out how this function actually works.
Reason:
I would like to define a cache per branch by using the GITHUB_REF environment variable (https://help.github.com/en/actions/automating-your-workflow-with-github-actions/using-environment-variables). But since that variable can contain path separators, I think it would be a good idea to hash the string and use the hash value, e.g.:
- uses: actions/cache@v1
with:
path: ~/.npm
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}-${{ getStrHash(${{GITHUB_REF}}) }}
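There is no built-in getStrHash function, but a similar effect can be achieved by hashing the ref in a shell step and interpolating that step's output; this is a sketch, and the step id and hash length are assumptions:

```yaml
- name: Hash the ref
  id: refhash
  # Hash GITHUB_REF (e.g. refs/heads/my-branch) and keep a short prefix,
  # so the key contains no path separators.
  run: echo "::set-output name=hash::$(echo -n "$GITHUB_REF" | sha256sum | cut -c1-16)"
- uses: actions/cache@v1
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}-${{ steps.refhash.outputs.hash }}
```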
I understand that this issue is not related to the code of this repository, but I would like to discuss it with many people, so I am opening the issue here. (I know community forums exist, but many people probably don't know about them yet.)
First, I really appreciate the GitHub team adding the cache feature to GitHub Actions. That's great for us! But in recent years node_modules has grown too large; 200MB can't cover it. It's the same in other languages. For example, installing opam packages with esy can easily exceed 800MB. Is there a way to increase the cache limit? Even if only the per-cache limits were removed, the overall limit would become relatively realistic. I know that if the file size is too large, save/restore may slow down dramatically, but that shouldn't be limited on the cache action's side.
As in the example at actions/toolkit#47 (comment), it is useful to derive the key from the contents of a file, such as package-lock.json for Node.js or mix.lock for Elixir.
key: v1-npm-dependency-cache-{{ checksum "package-lock.json" }}
That way the cache lasts for as long as the dependencies remain the same.
This was actually pretty easy to implement in Go.
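The Actions counterpart of CircleCI's checksum appears to be the hashFiles expression; a sketch of the same keying strategy (path and key prefix are illustrative):

```yaml
- name: Cache npm dependencies
  uses: actions/cache@v1
  with:
    path: ~/.npm
    # hashFiles plays the role of CircleCI's {{ checksum "package-lock.json" }}:
    # the key changes only when the lock file changes, so the cache lasts
    # as long as the dependencies stay the same.
    key: v1-npm-dependency-cache-${{ hashFiles('**/package-lock.json') }}
```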