
drone-s3


Drone plugin to publish files and artifacts to Amazon S3 or Minio. For usage information and a listing of the available options, please take a look at the docs.

Run the following script to install git-leaks support in this repo:

chmod +x ./git-hooks/install.sh
./git-hooks/install.sh

Build

Build the binary with the following commands:

go build
go test

Docker

Build the Docker image with the following commands:

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -tags netgo -o release/linux/amd64/drone-s3
docker build --rm=true -t plugins/s3 .

Please note that failing to build the binary for linux/amd64 with CGO disabled, as shown above, will result in an error when running the Docker image:

docker: Error response from daemon: Container command
'/bin/drone-s3' not found or does not exist..

Usage

Execute from the working directory:

  • For upload:

docker run --rm \
  -e PLUGIN_SOURCE=<source> \
  -e PLUGIN_TARGET=<target> \
  -e PLUGIN_BUCKET=<bucket> \
  -e AWS_ACCESS_KEY_ID=<token> \
  -e AWS_SECRET_ACCESS_KEY=<secret> \
  -v $(pwd):$(pwd) \
  -w $(pwd) \
  plugins/s3 --dry-run

  • For download:

docker run --rm \
  -e PLUGIN_SOURCE=<source directory to be downloaded from bucket> \
  -e PLUGIN_BUCKET=<bucket> \
  -e AWS_ACCESS_KEY_ID=<token> \
  -e AWS_SECRET_ACCESS_KEY=<secret> \
  -e PLUGIN_REGION=<region where the bucket is deployed> \
  -e PLUGIN_DOWNLOAD="true" \
  -v $(pwd):$(pwd) \
  -w $(pwd) \
  plugins/s3 --dry-run


drone-s3's Issues

Using wildcards doesn't work.

Code comments indicate that using a * will work, but this isn't true. The underlying AWS command requires the use of the --include and --exclude flags to configure wildcard behavior just like the s3_sync plugin.
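
For illustration, a minimal sketch of how wildcard expansion could work on the plugin side, assuming the github.com/mattn/go-zglob dependency the repo already lists; the pattern and upload loop here are hypothetical:

package main

import (
    "fmt"

    "github.com/mattn/go-zglob"
)

func main() {
    // Expand a recursive wildcard pattern into concrete file paths
    // before uploading; zglob understands ** as well as *.
    matches, err := zglob.Glob("dist/**/*.png")
    if err != nil {
        panic(err)
    }
    for _, m := range matches {
        fmt.Println("would upload:", m)
    }
}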

Support setting ContentType and ContentEncoding

Hey,

Currently this plugin does not support setting ContentType and ContentEncoding. At QuintoAndar we use this plugin to upload files to S3 that are then served to different web pages, so this is necessary (and should be a pretty common use case), and is one of the reasons I keep a fork here: https://github.com/quintoandar/drone-s3.

Drone S3 Sync currently supports them.

I have a Pull Request on the way but would like to consider a few points first.

  1. drone-s3-sync uses glob matching, but in my original fork I went for a regex matcher. This allows us to set metadata like this:
    content_type:
      ".*(css|cgz)$": "text/css"
      ".*(ttf|woff|woff2)$": "application/octet-stream"

I'm fine either way but would like to know if there's more interest in using regex over glob (see the sketch after this list).

  2. Have you ever considered merging the drone-s3 and drone-s3-sync plugins? A lot of the code is pretty similar, so this would mean half the maintenance cost. The sync behaviour could be a parameter sync: true that defaults to false.
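
For illustration of point 1, a minimal sketch of a regex-based content_type matcher; the helper name and rule format are hypothetical, modeled on the YAML above:

package main

import (
    "fmt"
    "regexp"
)

// contentTypeFor returns the MIME type of the first rule whose regex
// matches the file name, or the fallback. Note that Go map iteration
// order is randomized, so a real implementation would keep the rules
// in an ordered slice.
func contentTypeFor(name string, rules map[string]string, fallback string) string {
    for pattern, mime := range rules {
        if regexp.MustCompile(pattern).MatchString(name) {
            return mime
        }
    }
    return fallback
}

func main() {
    rules := map[string]string{
        `.*(css|cgz)$`:        "text/css",
        `.*(ttf|woff|woff2)$`: "application/octet-stream",
    }
    fmt.Println(contentTypeFor("assets/site.css", rules, "binary/octet-stream"))
}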

Support IAM roles

Our drone EC2 instances have the ability to write to S3 through the IAM role they were given at startup. The normal way to use this is to not specify access_key or secret_key.

However, the plugin is currently set up so that it will not run if access_key/secret_key are not specified (though the docs say they are optional).

My preferred solution would be to make the access_key/secret_key actually optional, but the use case in the comment above the non-optional code is:

// skip if AWS key or SECRET are empty. A good example for this would
// be forks building a project. S3 might be configured in the source
// repo, but not in the fork

Which would break if they were optional. An alternative is to add a use_iam: true setting, though it would still be the case that a fork would fail if that were set.
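
For illustration, a minimal sketch of what optional credentials could look like with aws-sdk-go (which the repo depends on): wire static credentials only when both values are set, and otherwise let the SDK's default chain, including the EC2 instance role, resolve them. The constructor name is hypothetical:

package plugin

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

// newS3 uses static credentials only when both key and secret are
// provided; otherwise the SDK's default chain (environment, shared
// config, EC2 instance role) takes over.
func newS3(region, key, secret string) *s3.S3 {
    cfg := &aws.Config{Region: aws.String(region)}
    if key != "" && secret != "" {
        cfg.Credentials = credentials.NewStaticCredentials(key, secret, "")
    }
    return s3.New(session.Must(session.NewSession(cfg)))
}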

Can't resolve AWS DNS - no such host

I'm getting a strange error when running this plugin.

time="2017-03-08T21:04:38Z" level=info msg="Attempting to upload" bucket=my-bucket endpoint= region=eu-east-2 
time="2017-03-08T21:04:38Z" level=info msg="Uploading file" bucket=my-bucket content-type="image/png" name="dist/assets/images/background_2.png" target="/dist/assets/images/background_2.png" 
time="2017-03-08T21:04:38Z" level=error msg="Could not upload file" bucket=my-bucket error="RequestError: send request failed\ncaused by: Put https://my-bucket.s3-eu-east-2.amazonaws.com/dist/assets/images/background_2.png: dial tcp: lookup my-bucket.s3-eu-east-2.amazonaws.com on 10.0.0.2:53: no such host" name="dist/assets/images/background_2.png" target="/dist/assets/images/background_2.png" 
RequestError: send request failed
caused by: Put https://my-bucket.s3-eu-east-2.amazonaws.com/dist/assets/images/background_2.png: dial tcp: lookup my-bucket.s3-eu-east-2.amazonaws.com on 10.0.0.2:53: no such host

It seems that the DNS lookup is returning with an internal IP. It should be in the range of 52.92.88.0/22. Any ideas?

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: Found renovate config warnings

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

dockerfile
docker/Dockerfile.linux.amd64
  • alpine 3.17
docker/Dockerfile.linux.arm64
docker/Dockerfile.windows.1809
  • mcr.microsoft.com/windows/nanoserver 1809
docker/Dockerfile.windows.ltsc2022
droneci
.drone.yml
  • golang 1.22
  • golang 1.22
  • golang 1.22
  • golang 1.22
  • golang 1.22
  • golang 1.22
  • golang 1.22
  • golang 1.22
  • golang 1.22
  • golang 1.22
  • golang 1.22
gomod
go.mod
  • go 1.22
  • github.com/aws/aws-sdk-go v1.44.156
  • github.com/joho/godotenv v1.4.0
  • github.com/mattn/go-zglob v0.0.4
  • github.com/sirupsen/logrus v1.9.0
  • github.com/urfave/cli v1.22.10
  • github.com/pkg/errors v0.9.1
  • golang.org/x/sync v0.7.0

  • Check this box to trigger a request for Renovate to run again on this repository

Need support for secret-less usage

This covers the use case where an EC2 instance is configured with an IAM role and the aws CLI can figure out the credentials itself.

In my environment I can clearly upload files without credentials:

$ docker run --rm -it --entrypoint /usr/bin/aws plugins/drone-s3 s3 cp /usr/bin/aws s3://crossdev/tools/os/any/ --acl public-read --region eu-west-1
upload: usr/bin/aws to s3://crossdev/tools/os/any/aws

Possibly, the fix would simply be to remove the bailout shortcut.

windows paths contain backwards slashes

When I try to upload a file from a windows build of this plugin (for windows 2019) to minio, I see that the backwards slashes from the path become part of the filename.

time="2021-11-26T08:57:19+01:00" level=info msg="Attempting to upload" bucket=conda endpoint="https://minio.xxx.net" region=us-east-1
--
5 | time="2021-11-26T08:57:19+01:00" level=info msg="Uploading file" bucket=conda name=conda_build/win-64/cn_ws-4.2.0-0.tar.bz2 target="/\\win-64\\cn_ws-4.2.0-0.tar.bz2"

Note that I compiled the exe on Linux; if that could be the issue, I can try rebuilding it on Windows.

I saw the same behaviour when using the s3-sync plugin.
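
For illustration, a minimal sketch of the likely fix, assuming the object key is built from the local path: normalize the separators before joining the target prefix.

package main

import (
    "fmt"
    "path"
    "strings"
)

func main() {
    // S3 object keys always use forward slashes, while Windows paths use
    // backslashes. Replacing the separator before joining the target
    // prefix keeps literal backslashes out of the object name.
    // (filepath.ToSlash would also work, but only in a binary built with
    // GOOS=windows.)
    local := `conda_build\win-64\cn_ws-4.2.0-0.tar.bz2`
    key := path.Join("/", strings.ReplaceAll(local, `\`, "/"))
    fmt.Println(key) // /conda_build/win-64/cn_ws-4.2.0-0.tar.bz2
}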

Feature: Allow user to load environment variables from file.

This PR allows the user to load environment variables from an env-file. In particular, the environment variables loaded from the env-file will overwrite any existing environment variables, since one of the primary uses for this feature is to override the plugin variables (PLUGIN_*) that are injected by Drone.

Example: Dynamically set target to testReport_XXXX, where XXXX is the current date and time.

...

- name: create env
  image: alpine
  commands:
    - echo PLUGIN_TARGET=/testReport_$$(date +"%Y-%m-%dT%H.%M.%S")UTC > target-env

- name: upload
  image: plugins/s3
  settings:
    bucket: BUCKET_NAME
    region: us-east-1
    source: results/**/*
    env-file: ./target-env
  depends_on:
    - create env

#132
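
For illustration, a minimal sketch of how the override could be implemented with the github.com/joho/godotenv dependency the repo already lists; the PLUGIN_ENV_FILE name is an assumption based on how Drone maps settings to environment variables:

package main

import (
    "fmt"
    "os"

    "github.com/joho/godotenv"
)

func main() {
    // Overload (unlike Load) replaces variables that are already set,
    // which is what lets an env-file override the PLUGIN_* values
    // injected by Drone.
    if file := os.Getenv("PLUGIN_ENV_FILE"); file != "" { // assumed setting name
        if err := godotenv.Overload(file); err != nil {
            fmt.Fprintln(os.Stderr, "could not load env-file:", err)
            os.Exit(1)
        }
    }
    fmt.Println("PLUGIN_TARGET =", os.Getenv("PLUGIN_TARGET"))
}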

Support S3 V2 signing

Some S3 clones only support v2 signing and not v4 signing.

The plugin should support selecting the signature version, as the AWS CLI does:

[default]
s3 =
    signature_version = s3

Support compression / gzip

You can upload S3 objects with Content-Encoding: gzip, and the majority of HTTP clients will automatically decompress during the GET.

As an API, I'm thinking a simple compress: true / gzip: true yaml setting will suffice.

If we wanted to take it further, there could be a list of files to gzip, or a list of included file types, or maybe the smartest thing would be a list of excluded file types (with defaults: mp3, jpg, gz, zip), basically so we don't compress files that are already compressed.

I'm just looking into creating custom Drone plugins now so I can send a PR. Does this addition sound okay?
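
For illustration, a minimal sketch of the proposed behaviour with aws-sdk-go: compress the body and set Content-Encoding on the PutObject call. The bucket, key, and file names are placeholders:

package main

import (
    "bytes"
    "compress/gzip"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    raw, err := os.ReadFile("dist/app.css") // placeholder file
    if err != nil {
        panic(err)
    }

    // Compress in memory and advertise the encoding; most HTTP clients
    // transparently decompress on GET.
    var buf bytes.Buffer
    zw := gzip.NewWriter(&buf)
    if _, err := zw.Write(raw); err != nil {
        panic(err)
    }
    if err := zw.Close(); err != nil {
        panic(err)
    }

    svc := s3.New(session.Must(session.NewSession()))
    _, err = svc.PutObject(&s3.PutObjectInput{
        Bucket:          aws.String("my-bucket"), // placeholder
        Key:             aws.String("app.css"),
        Body:            bytes.NewReader(buf.Bytes()),
        ContentType:     aws.String("text/css"),
        ContentEncoding: aws.String("gzip"),
    })
    if err != nil {
        panic(err)
    }
}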

Setting bucket ACLs is not optional and requires a bucket ACL, which is not the preferred mechanism per AWS

code ref

Our upload pattern would prefer not to use ACLs at all for our S3 uploads, relying instead on IAM policies at the bucket level. However, in the plugin, the Access value which stores the ACL to use is a mandatory field (defaulting to private). Since the private ACL is a good "secure by default" setting, having a skip value or similar to not pass the Access value when performing the PutObject would be great.

This also lines up with AWS' advice on how to set permissions/ACLs https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#CannedACL and https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html
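
For illustration, a minimal sketch with aws-sdk-go of what a skip value could look like; the "none" sentinel is hypothetical:

package plugin

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/s3"
)

// putInput leaves ACL unset when no (or a "none") ACL is configured;
// omitting the field drops the x-amz-acl header entirely, so bucket
// policies and Object Ownership settings apply instead.
func putInput(bucket, key, acl string) *s3.PutObjectInput {
    in := &s3.PutObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    }
    if acl != "" && acl != "none" {
        in.ACL = aws.String(acl)
    }
    return in
}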

Missing documentation on secrets, unable to publish to Minio

I'm using the following configuration for drone-s3:

  publish:
    image: plugins/s3
    bucket: get
    secrets: [ plugin_access_key, plugin_secret_key ]
    source: gook
    target: /
    path_style: true
    endpoint: https://minio.mo-mar.de
    when:
      branch: master

In Drone, I set the two secrets plugin_access_key and plugin_secret_key, which gives me the following error message:

time="2018-06-11T13:32:59Z" level=info msg="Attempting to upload" bucket=get endpoint="https://minio.mo-mar.de" region=us-east-1 
time="2018-06-11T13:32:59Z" level=info msg="Uploading file" bucket=get content-type="application/octet-stream" name=gook target="/gook" 
time="2018-06-11T13:33:01Z" level=error msg="Could not upload file" bucket=get error="SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.\n\tstatus code: 403, request id: 15371E5C340F19FD, host id: " name=gook target="/gook" 
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
    status code: 403, request id: 15371E5C340F19FD, host id: 

Now, I actually got that secret name from the source, because it's not mentioned anywhere in the documentation. Looking at https://github.com/drone-plugins/drone-docker/blob/master/cmd/drone-docker/main.go (e.g. PLUGIN_REPO) and the documentation for it (which mentions docker_repo, which works), I also tried s3_access_key and s3_secret_key, but it results in the following different error message:

time="2018-06-11T16:12:20Z" level=info msg="Attempting to upload" bucket=get endpoint="https://minio.mo-mar.de" region=us-east-1 
time="2018-06-11T16:12:20Z" level=info msg="Uploading file" bucket=get content-type="application/octet-stream" name=gook target="/gook" 
time="2018-06-11T16:12:40Z" level=error msg="Could not upload file" bucket=get error="NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" name=gook target="/gook" 
NoCredentialProviders: no valid providers in chain. Deprecated.
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors

Now, the first one looks more like it makes sense, and I also triple-checked my secrets, but I just can't upload anything to my Minio instance. What am I doing wrong here, and why does the documentation actually recommend just storing the secrets in the .drone.yml?

Unable to use drone secret for aws key and token.

Using secrets returns the following error; using secrets for other plugins works fine.

InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records. status code: 403, request id: EFA7165BFF6E8129

AWS keys with special characters break the upload and create a misleading error

When using aws access keys that have non-alphanumeric characters, I get the following error from the aws cli tool:

A client error (SignatureDoesNotMatch) occurred when calling the CreateMultipartUpload operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.

This is a misleading error, but it goes away when I use access keys that do not contain special characters. I'm not sure whether the fix has to do with some kind of escaping here, or whether this is just a bug in the aws cli app.

Feature request s3 download mode

I'd like to be able to use s3 plugin in reverse, to download artifact.

The golden rule of continuous delivery states that code should only be built once, then deployed without rebuilding.
I was looking at the example here: http://docs.drone.io/promoting-builds/
And it seemed to me that it could be improved.

  • build & test & publish artifact w/o deployment event
  • get artifact & deploy w/ deployment event

Please let me know whether the idea is worth it.

Thanks a lot in advance!

Support multiple targets

Hi, a feature I would love is to support multiple targets for the same set of source files without having to create 2 pipeline steps in your drone yaml. Thanks for the plugin!

Target Pattern?

As far as I can tell looking through the source, there's no support for any sort of target patterning?

Would there be any interest in this if I were to fork the repo and develop it?

The need I have is to deploy my binaries to an S3 bucket containing the CI_TAG as a directory, à la /production/release/#{CI_TAG}/mybinary.

Use environment variables to customize target location

Sorry if this is obvious, but I couldn't find the answer anywhere.
Is it possible to use drone environment variables in the target location?
I'd like to be able to set something like:

publish:
  s3:
    target: /drone/$CI_REPO/$CI_BUILD_NUMBER/
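
For illustration, a minimal sketch of plugin-side expansion using os.Expand; whether these particular variables are present in the plugin container depends on the Drone version, so treat the names as assumptions:

package main

import (
    "fmt"
    "os"
)

func main() {
    // Substitute $VAR references in the target path from the build
    // environment that Drone injects into the plugin container.
    target := "/drone/$CI_REPO/$CI_BUILD_NUMBER/"
    fmt.Println(os.Expand(target, os.Getenv))
}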

Can't be used to upload build artifacts

I guess it's because this is a publish plugin, but it's not executed when a build fails, thus it can't be used to upload build artifacts for debugging...
Is there an easy way to make it publish to s3 even when the build fails?

Support custom endpoint urls

awscli supports the --endpoint-url option, which lets a user specify a custom endpoint. If we add the URL to this plugin, users could use it to deploy assets to any S3-compatible API, e.g. Riak, SwiftStack, Ceph, Skylable, Minio, or similar projects. As it would only be one more optional config option, I think the actual work should be limited.

I'd try it myself but I have never done anything in go and will probably break something.
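
For illustration, a minimal sketch of wiring a custom endpoint into aws-sdk-go; the endpoint URL is a placeholder, and path-style addressing is usually needed for Minio and friends, which don't serve bucket-name subdomains:

package main

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // Point the SDK at an S3-compatible server instead of AWS.
    cfg := &aws.Config{
        Region:           aws.String("us-east-1"),
        Endpoint:         aws.String("https://minio.example.com"),
        S3ForcePathStyle: aws.Bool(true),
    }
    _ = s3.New(session.Must(session.NewSession(cfg)))
}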

Support storage class

The plugin should support specifying a storage class. This allows users to upload artifacts to lower cost storage backends.

From the CLI docs:

--storage-class (string) The type of storage to use for the object.
  Valid choices are: STANDARD | REDUCED_REDUNDANCY | STANDARD_IA | ONEZONE_IA.
  Defaults to 'STANDARD'
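
For illustration, a minimal sketch with aws-sdk-go; setting the field only when one was requested lets S3 default to STANDARD, mirroring the CLI behaviour above. The helper name is hypothetical:

package plugin

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/s3"
)

// withStorageClass applies an optional storage class to an upload
// request, e.g. REDUCED_REDUNDANCY, STANDARD_IA or ONEZONE_IA.
func withStorageClass(in *s3.PutObjectInput, class string) *s3.PutObjectInput {
    if class != "" {
        in.StorageClass = aws.String(class)
    }
    return in
}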
