
cache-buildkite-plugin's Issues

Simpler scopes?

Can we make the “most basic example” in the README even simpler, by starting with a restrictive scope? Or is there a way to SHA256 things and fully trust the manifest?

For yarn, bundler, and similar tools, about the only thing that would bust a SHA256-hashed manifest is an architecture difference between agent machines for native dependencies.
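For reference, a minimal sketch of the fully-trust-the-manifest idea, deriving the cache key purely from a SHA256 of the lockfile (the bucket name and key layout are illustrative, not the plugin's internals):

# Any edit to the lockfile yields a new key, so the cache is trusted
# exactly as far as the hash.
KEY="$(sha256sum yarn.lock | cut -d' ' -f1)"
aws s3 cp "s3://my-cache-bucket/${KEY}.tgz" - 2>/dev/null | tar xz || echo "cache miss for ${KEY}"

With a scheme like this, the only stale-cache risk left is exactly the native-dependency/architecture case mentioned above.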

Permission denied when restoring cache with S3

We're restoring files from an S3 cache with v0.3.2 and seeing this error when the cache has a hit and the restore is attempted:

Cache hit at file level, restoring /bundle...

[Errno 13] Permission denied: '/bundle'

Here's the relevant part of our setup:

steps:
  - label: ":docker: :bundler: Bundler"
    key: docker-gem-build
    env:
      BUILDKIT_PROGRESS: plain
      COMPOSE_DOCKER_CLI_BUILD: 1
      DOCKER_BUILDKIT: 1
    plugins:
      - ecr#v2.6.0:
          login: true
          account_ids: <REDACTED>
          region: <REDACTED>
      - cache#v0.3.2:
          backend: s3
          manifest: Gemfile.lock
          path: /bundle
          restore: file
          save: file
      - docker#v5.6.0:
          image: <REDACTED>
          command: ["bin/ci/bundle_install"]
          mount-checkout: true
          env-propagation-list: BUNDLE_ENTERPRISE__CONTRIBSYS__COM
          volumes:
            - "/bundle:/bundle"

Error with S3 backend: "Unknown options: --recursive"

We're running the Elastic CI stack v5.16.1 and trying to use the S3 storage option. At the end of our build, during the cache plugin's post-command hook, we're getting this error:

Running plugin cache post-command hook

$ /var/lib/buildkite-agent/plugins/github-com-buildkite-plugins-cache-buildkite-plugin-v0-3-0/hooks/post-command
Saving file-level cache of /bundle

Unknown options: --recursive
🚨 Error: The plugin cache post-command hook exited with status 255

It looks like the flag is being passed in here: https://github.com/buildkite-plugins/cache-buildkite-plugin/blob/master/backends/cache_s3#L22-L26

Are we doing something wrong?
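It may be worth confirming which AWS CLI the agent actually resolves and reproducing the plugin's copy by hand (a debugging suggestion, not a confirmed diagnosis; the bucket name is a placeholder):

# Check the CLI version, then try the same call the hook makes
aws --version
aws s3 cp /bundle "s3://my-cache-bucket/test-key" --recursive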

S3 backend fails with status 255 on Elastic CI Stack for AWS

Hi team,

I've tried using the s3 backend on agents running in the Elastic CI Stack for AWS using this configuration:

steps:
  - label: 'test caching'
    command: echo "test caching"
    plugins:
      - cache#v0.5.0:
          backend: s3
          manifest: .buildkite/pipeline.yml
          path: .buildkite
          restore: file
          save: file

and am getting this error: Error: The plugin cache post-command hook exited with status 255.

Permissions to interact with the S3 bucket are granted through the EC2 instance role. I think I've configured the BUILDKITE_PLUGIN_S3_CACHE_BUCKET environment variable correctly; I tested it with a step:

steps:
  - label: 'test s3'
    command: aws s3 sync log "s3://${BUILDKITE_PLUGIN_S3_CACHE_BUCKET}/log"

which completed successfully.
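Since a plain aws s3 sync works, one way to narrow this down (a debugging sketch; it relies on Buildkite's standard BUILDKITE_PLUGIN_<NAME>_<OPTION> env-var convention, and the hook path glob is illustrative) is to run the hook directly on the agent with tracing enabled:

# Simulate the plugin configuration, then trace the hook
export BUILDKITE_PLUGIN_CACHE_BACKEND=s3
export BUILDKITE_PLUGIN_CACHE_MANIFEST=.buildkite/pipeline.yml
export BUILDKITE_PLUGIN_CACHE_PATH=.buildkite
export BUILDKITE_PLUGIN_CACHE_RESTORE=file
export BUILDKITE_PLUGIN_CACHE_SAVE=file
bash -x /var/lib/buildkite-agent/plugins/*cache-buildkite-plugin*/hooks/post-command

bash -x prints each command as it runs, which should show exactly what exits with 255.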

Support for caching multiple directories

Currently, the plugin only accepts a single directory/path to cache. It would be great to support caching multiple directories, so you don't have to define several cache plugin entries; a hypothetical syntax is sketched below.
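For illustration only, a hypothetical paths option (not something the plugin currently supports):

- cache#v1.0.1:
    manifest: package-lock.json
    paths:
      - node_modules
      - .yarn/cache
    restore: file
    save: file

Today the equivalent requires one plugin entry per path.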

S3 backend fails to sync with `tgz` compression

The S3 backend is unable to sync when using tgz compression:

[screenshot of the failed sync omitted]

I think this is because the aws s3 sync command does not expect to receive a single file as its argument, so it tries to find a directory with the same name instead. Perhaps the plugin should create a temporary directory in /tmp and archive a .tgz file named after the cache key into that directory instead, i.e.

ACTUAL_PATH="$(mktemp -d)"
"${COMPRESS_COMMAND[@]}" "${ACTUAL_PATH}/${KEY}.tgz" "${CACHE_PATH}"

I was able to replicate this on the build agent host manually:

$ ACTUAL_PATH=$(mktemp)
$ echo $ACTUAL_PATH 
/tmp/tmp.WUc3AxyL2d
$ tar czf "$ACTUAL_PATH" .gradle
$ aws s3 sync --endpoint-url "https://data.mina-lang.org" "$ACTUAL_PATH" "s3://buildkite-cache/test-key"          
warning: Skipping file /tmp/tmp.WUc3AxyL2d/. File does not exist.
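An alternative I'd suggest (not something the plugin does today) is to use aws s3 cp for the single-archive case, since it accepts a file argument directly:

# Archive into a temp dir, then copy the single file rather than syncing it
TARBALL="$(mktemp -d)/cache.tgz"
tar czf "$TARBALL" .gradle
aws s3 cp --endpoint-url "https://data.mina-lang.org" "$TARBALL" "s3://buildkite-cache/test-key/cache.tgz"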

The pipeline YAML can be viewed here.

You can view a gist of a build log exhibiting this problem here.

One other thing that I noticed in this log is that the second cache post-command hook did not run after the first one failed. Is that expected?

Is `fs` backend supposed to work with EBS?

Using the fs backend keeps failing with the following error (the directory exists, and the s3 backend works just fine):

Waiting for folder lock
Waiting for folder lock
Waiting for folder lock
Waiting for folder lock
Waiting for folder lock

I'm guessing this might have something to do with using AWS EBS instead of built-in NVMe storage?
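For context, this message usually comes from a lock-acquisition loop along these lines (a sketch of the general pattern, not necessarily the plugin's exact code):

# mkdir is atomic, so it doubles as a lock; spin until it succeeds
while ! mkdir "${CACHE_DIR}.lock" 2>/dev/null; do
  echo "Waiting for folder lock"
  sleep 1
done
# ... write the cache ...
rmdir "${CACHE_DIR}.lock"

If an earlier run died without releasing the lock, every later run spins here regardless of the underlying storage, so a stale lock left on the volume may be a likelier culprit than EBS versus NVMe.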

The `s3` backend is not very suitable for common use cases

I've tried using the S3 backend with a configuration like the one from the README:

steps:
  - label: ':nodejs: Install dependencies'
    command: npm ci
    plugins:
      - cache#v0.4.0:
          backend: s3
          manifest: package-lock.json
          path: node_modules
          restore: file
          save: file

There are problems with using S3 to cache node_modules, especially via the AWS CLI:

  1. S3 does not preserve file permissions
  2. S3 does not preserve symlinks

These problems are pretty noticeable for TypeScript projects, since tsc is no longer executable once it is restored from S3:

[screenshot of the resulting permission error omitted]

If we manage to fix that problem (e.g. by using the MinIO client instead of the AWS CLI to restore file permissions), we run into the next one: restoring node_modules from S3 replaces every symlink in node_modules/.bin with a regular file containing the contents of the symlink target, which leads to obscure errors when running executables:

[screenshot of the resulting symlink error omitted]

I don't know if other build tools rely quite so heavily on symlinks, but it seems unwise to advertise so prominently in the README a workflow that is unlikely to work for a lot of folks.

Perhaps these limitations should be documented until something like #44 is merged?
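Both limitations are easy to demonstrate outside the plugin (the bucket name is a placeholder):

# Build a tiny tree with one executable and one symlink
mkdir -p demo/.bin
printf '#!/bin/sh\necho ok\n' > demo/tsc
chmod +x demo/tsc
ln -s ../tsc demo/.bin/tsc
aws s3 sync demo "s3://my-bucket/demo"   # the symlink is followed and uploaded as a copy
rm -rf demo
aws s3 sync "s3://my-bucket/demo" demo
ls -l demo/.bin/tsc                      # now a regular file, not a symlink
ls -l demo/tsc                           # executable bit is gone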

`BUILDKITE_PLUGIN_S3_CACHE_ONLY_SHOW_ERRORS` must be set

At the moment, the following error is produced when using the AWS CLI v1 with the s3 backend, unless BUILDKITE_PLUGIN_S3_CACHE_ONLY_SHOW_ERRORS is set:

[screenshot of the error omitted]

I think this is because of the double quotes around the "$(verbose)" argument in save_cache and restore_cache, which force the expansion to be passed as an empty option string when verbose output is disabled.
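The shell behaviour is easy to reproduce in isolation (the function below is illustrative):

verbose() { :; }                    # prints nothing when verbosity is off
aws s3 sync "$(verbose)" src dst    # still passes a literal "" argument
extra_args=()                       # an array expands to zero arguments when empty
aws s3 sync "${extra_args[@]}" src dst

Switching to an array makes the flag disappear entirely when unset.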

Support for backend: artifacts

Is there a reason buildkite artifacts are not a suitable backend for this?

I read

Buildkite recommends using Artifacts for build artifacts that are the result of a build and useful for humans, whereas we see cache as being an optional byproduct of builds that doesn't need to be content addressable.

but that doesn't explain why we shouldn't or can't use artifacts as the backend for this. To me that seems like a more suitable and native Buildkite integration than relying on S3 for remote caching.
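For what it's worth, the agent already ships the commands such a backend would need (a rough sketch; the key naming is made up):

buildkite-agent artifact upload "cache/${KEY}.tgz"      # save
buildkite-agent artifact download "cache/${KEY}.tgz" .  # restore

The main caveat is that artifacts are scoped to a single build, so cross-build restores would need the download command's --build flag.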

Getting a "Waiting for folder lock"

I'm trying to integrate this plugin into our company's build process with the following:

      - cache#v0.5.0:
          manifest: webapp/npm-shrinkwrap.json # this file exists; with anything else I get a `No such file or directory` message.
          path: webapp/node_modules
          restore: file
          save: file

Then when the build starts, I can see this among the initial log groups:

~~~ Running plugin cache post-checkout hook
$ /var/lib/buildkite-agent/plugins/github-com-buildkite-plugins-cache-buildkite-plugin-v0-5-0/hooks/post-checkout
Cache miss up to file-level, sorry

which I think is normal, because no cache has been created yet. But right after the last step, where the new image is pushed to AWS, this new final step makes the build fail:

~~~ Running plugin cache post-command hook
$ /var/lib/buildkite-agent/plugins/github-com-buildkite-plugins-cache-buildkite-plugin-v0-5-0/hooks/post-command
Saving file-level cache of webapp/node_modules

Waiting for folder lock
Waiting for folder lock
Waiting for folder lock
Waiting for folder lock
Waiting for folder lock

🚨 Error: The plugin cache post-command hook exited with status 1

Not sure if it's related, but we also use the docker-compose plugin: the web app is built inside a container that is eventually pushed to AWS, so maybe this plugin operates outside that context? Is there a way to make it work in such a setup?

Support saving multiple cache types in one plugin definition

Use case

I've got a single build step that caches a path based on a manifest file. I want to restore the cache based on that manifest, but fall back to a pipeline-level cache.

Unless I've misunderstood some part of the docs, currently the only way to achieve this is to define two separate instances of the plugin. For example:

steps:
  plugins:
    - cache#v1.0.1:
        manifest: some-file
        path: ./some/path
        restore: pipeline
        save: file
    - cache#v1.0.1:
        path: ./some/path
        save: pipeline

When using the compression option, the above config will compress the same file/folder twice, instead of just uploading differently-named copies of the same compressed file.

Potential implementation

IMO a nice way to represent this would be to allow specifying an array of values to save. For example:

steps:
  plugins:
    - cache#v1.0.1:
        manifest: some-file
        path: ./some/path
        restore: pipeline
        save:
          - file
          - pipeline

An alternative could be some kind of restore key, similar to what the cache GitHub Action provides, though I'm not sure whether that would align with this plugin's current configuration API.

Skip save step if manifest hash has not changed

Currently, if the save parameter is configured, a cache will always be saved in the post-command hook.

Saving cache can be a costly operation if the file/folder is large.

It would be good to have an option to skip saving the cache when one already exists with the same cache key.

This would mimic what some other CI platforms like CircleCI do:

[screenshot of the CircleCI behaviour omitted]
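A minimal sketch of the check, assuming an S3 backend (the bucket and key layout are placeholders, not the plugin's internals):

# If an object already exists for this manifest hash, skip the upload
key="$(sha256sum "$MANIFEST" | cut -d' ' -f1)"
if aws s3 ls "s3://${BUCKET}/${key}.tgz" >/dev/null 2>&1; then
  echo "Cache already exists for ${key}, skipping save"
  exit 0
fi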

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.


Detected dependencies

buildkite
.buildkite/pipeline.yml
  • docker-compose v3.9.0
  • shellcheck v1.1.2
  • plugin-linter v3.0.0
docker-compose
docker-compose.yml

