
s3-sync-action's Introduction

GitHub Action to Sync S3 Bucket 🔄

This simple action uses the vanilla AWS CLI to sync a directory (either from your repository or generated during your workflow) with a remote S3 bucket.

Usage

workflow.yml Example

Place the workflow in a .yml file (for example, workflow.yml) in your .github/workflows folder. Refer to GitHub's documentation on workflow YAML syntax for details.

As of v0.3.0, all aws s3 sync flags are optional to allow for maximum customizability (that's a word, I promise) and must be provided by you via args:.

The following example includes optimal defaults for a public static website:

  • --acl public-read makes your files publicly readable (make sure your bucket settings are also set to public).
  • --follow-symlinks won't hurt and fixes some weird symbolic link problems that may come up.
  • Most importantly, --delete permanently deletes files in the S3 bucket that are not present in the latest version of your repository/build.
  • Optional tip: If you're uploading the root of your repository, adding --exclude '.git/*' prevents your .git folder from syncing, which would expose your source code history if your project is closed-source. (To exclude more than one pattern, you must have one --exclude flag per exclusion; see the snippet after the workflow below. The single quotes are also important!)
name: Upload Website

on:
  push:
    branches:
    - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'us-west-1'   # optional: defaults to us-east-1
        SOURCE_DIR: 'public'      # optional: defaults to entire repository
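
For example, a hedged args: line that excludes both the .git folder and a hypothetical drafts directory, with one --exclude flag per pattern and each pattern single-quoted so the shell doesn't expand it:

args: --acl public-read --follow-symlinks --delete --exclude '.git/*' --exclude 'drafts/*'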

Configuration

The following settings must be passed as environment variables as shown in the example. Sensitive information, especially AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, should be set as encrypted secrets — otherwise, they'll be public to anyone browsing your repository's source code and CI logs.

| Key | Value | Suggested Type | Required | Default |
| --- | --- | --- | --- | --- |
| AWS_ACCESS_KEY_ID | Your AWS Access Key. | secret env | Yes | N/A |
| AWS_SECRET_ACCESS_KEY | Your AWS Secret Access Key. | secret env | Yes | N/A |
| AWS_S3_BUCKET | The name of the bucket you're syncing to. For example, jarv.is or my-app-releases. | secret env | Yes | N/A |
| AWS_REGION | The region where you created your bucket. | env | No | us-east-1 |
| AWS_S3_ENDPOINT | The endpoint URL of the bucket you're syncing to. Can be used for VPC scenarios or for non-AWS services using the S3 API, like DigitalOcean Spaces. | env | No | Automatic (s3.amazonaws.com or AWS's region-specific equivalent) |
| SOURCE_DIR | The local directory (or file) you wish to sync/upload to S3. For example, public. | env | No | ./ (root of cloned repository) |
| DEST_DIR | The directory inside the S3 bucket you wish to sync/upload to. For example, my_project/assets. | env | No | / (root of bucket) |
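
For S3-compatible storage outside AWS, AWS_S3_ENDPOINT points the CLI at a different host. A minimal sketch for a hypothetical DigitalOcean Spaces bucket (the nyc3 endpoint shown is illustrative; adjust to your region and secret names):

- uses: jakejarvis/s3-sync-action@master
  env:
    AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
    AWS_ACCESS_KEY_ID: ${{ secrets.SPACES_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.SPACES_SECRET_ACCESS_KEY }}
    AWS_S3_ENDPOINT: https://nyc3.digitaloceanspaces.com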

License

This project is distributed under the MIT license.

s3-sync-action's People

Contributors

danielsinclair · jakejarvis · rmcfadzean · timoschilling


s3-sync-action's Issues

Use environment variables instead of CLI profiles

In #1, it's discussed that the action should be using profiles to prevent stomping on other AWS actions which are used by the repo.

However, in the AWS CLI documentation, it's mentioned that the CLI supports certain environment variables natively. This includes authentication-related variables such as AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY/AWS_SESSION_TOKEN.

Any reason we're not relying on these instead?
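
For reference, a minimal sketch of that approach: skip profile setup entirely and let the CLI read credentials from the environment (shown here as a plain run: step rather than the container action; the bucket name is illustrative):

- name: Sync via plain CLI (env-var auth)
  run: aws s3 sync . s3://my-bucket --delete
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1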

Using aws cp instead of aws sync

I am currently using this fork https://github.com/tpaschalis/s3-cp-action which essentially boils down to being able to cp a single file up to S3 instead of syncing a whole dir's contents.

It requires fewer permissions on the S3 side, so you can use it in write-only mode, which is quite nice as it ensures the build system will not be able to list, read, or delete any other objects in the bucket.

Just thought it might be nice if this action optionally allowed cp too; then the fork wouldn't be needed and duplication would be reduced.
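
In the meantime, a single-file upload can be sketched with the plain CLI in a run: step (file and bucket names are illustrative):

- name: Upload one file
  run: aws s3 cp ./release.zip s3://my-bucket/release.zip
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1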

add destination path variable

If you do not want to sync to the root of a bucket, you also need a way to define the S3 bucket path to sync to.
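
This has since been addressed by the DEST_DIR variable documented in the configuration table above; a brief example of how it can be used (the prefix is illustrative):

- uses: jakejarvis/s3-sync-action@master
  env:
    AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    DEST_DIR: 'my_project/assets'   # sync into this prefix instead of the bucket root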

Using an Assumed Role breaks

If using an assumed role, the profile setup in the action breaks the assumed-role connection, as the session token is not included.
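
The AWS CLI natively reads AWS_SESSION_TOKEN from the environment, so assuming it is passed through (whether by the action or by a plain run: step as sketched earlier), temporary credentials would work. The relevant env block might look like this (secret names are illustrative):

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}   # required for assumed-role/temporary credentials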

sh: .git/FETCH_HEAD: Permission denied

I am trying to use this action, but am getting this error. Any ideas on what is causing it?

sh: .git/FETCH_HEAD: Permission denied
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: argument --exclude: expected 1 argument
##[error]Docker run failed with exit code 2

Ignore certain files and directories

Hello, great action by the way. I am a fan! Is there a way to ignore certain files or directories if I specify a source directory that includes some files that I don't want to sync with my S3 bucket?

Thank you!
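
One approach is passing exclude patterns through args:, per the README tip above; use one --exclude flag per pattern and single-quote each one (the patterns here are illustrative):

args: --acl public-read --follow-symlinks --delete --exclude '*.log' --exclude 'drafts/*'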

Upload failed : An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

I tried to use your action with this config:

- name: Run AWS Cli to sync content
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'eu-west-1'   # optional: defaults to us-east-1
          SOURCE_DIR: '_site'      # optional: defaults to entire repository
          DEST_DIR: 'deploy_test' # optional: default to bucket root

But got this error for every file:

upload failed: _site/2019/07/24/ai4eu-the-h2020-initiative-for-a-sovereign-ai-platform-in-europe.html to s3://***/deploy_test/2019/07/24/ai4eu-the-h2020-initiative-for-a-sovereign-ai-platform-in-europe.html An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Permission denied error

Hi - I’m trying to use this for a basic react app. When it runs, it uploads everything correctly, but then fails at the very end with the following error:

...
Completed 38.6 MiB/38.7 MiB (4.1 MiB/s) with 2 file(s) remaining
Completed 38.7 MiB/38.7 MiB (4.1 MiB/s) with 2 file(s) remaining
upload: build/static/media/socicon.a35b6574.svg to s3://***/static/media/socicon.a35b6574.svg
Completed 38.7 MiB/38.7 MiB (4.1 MiB/s) with 1 file(s) remaining
upload: build/static/js/2.4a354a39.chunk.js.map to s3://***/static/js/2.4a354a39.chunk.js.map
sh: /: Permission denied

The IAM user has full S3 access, so permissioning shouldn’t be an issue.

This is the code block in our GitHub Actions workflow:

    - name: Deploy to S3
      uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_PRODUCTION_BUCKET_NAME }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: ${{ secrets.AWS_REGION }}
        SOURCE_DIR: "build"

The user-provided path public does not exist

Hello,

I'm interested in this action to deploy my website auto-magically when I push to master. A fairly simple case, I think: following both the GitHub Actions guide and the README of this action, I managed to trigger the workflow and set up my secrets.

My issue: every time I push to the master branch, my action is triggered but crashes at the deploy step because The user-provided path public does not exist.
I searched for similar cases but could not find information on this, and I'm not quite sure whether it's a GitHub secrets / AWS rights issue.
I think it's coming from my GitHub secret AWS_S3_BUCKET, but the documentation says

The name of the bucket you're syncing to. For example, jarv.is or my-app-releases.

My bucket is something like awesomewebsite.fr, so I've put that in GitHub secrets as AWS_S3_BUCKET.

Any advice?

Doesn't seem to support macOS

This doesn't seem to support macOS; the runner returns:

Run jakejarvis/s3-sync-action@master
  with:
    args: --acl public-read --delete
  env:
    ImageOS: macos1015
    AWS_S3_BUCKET: 
    AWS_ACCESS_KEY_ID: ***
    AWS_SECRET_ACCESS_KEY: ***
    AWS_REGION: ***
    SOURCE_DIR: ./Home/Android/Releases/

Error: Container action is only supported on Linux

ENDPOINT URL NOT FOUND

The action is failing because it says it cannot connect to the endpoint. But I don't even want it to add the --endpoint-url flag, since I tried the sync from my PC without an endpoint, using the same credentials, and it works just fine.

EDIT: my bad - delete this issue. Typo

Add --acl public-read option

I'm using your action to upload a public website to an S3 bucket. It works fine for uploading all the files, but the files are not public once uploaded.
Is it possible to add a --acl public-read option?
Thanks

Upload only changed files

Hello, I don't really have an issue, it's more of a request: would it be possible to make it only upload the changed files? I have a big repository and it's re-uploading every file.

If it's not possible, that's okay too; this is already very helpful, good job!
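
Worth noting: aws s3 sync already skips files whose size and modification time match the remote copy. A fresh CI checkout gives every file a new timestamp, which is likely why everything re-uploads. One hedged workaround is passing --size-only through args:, so only size differences trigger an upload (same-size changes would then be missed):

args: --size-only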

Not working on windows-latest

I tried the same action that is working perfectly fine on ubuntu-latest with runs-on: windows-latest.

Got the following:

2020-03-26T18:13:25.6130539Z ##[group]Run jakejarvis/s3-sync-action@master
2020-03-26T18:13:25.6130727Z with:
2020-03-26T18:13:25.6130848Z   args: --follow-symlinks
2020-03-26T18:13:25.6130965Z env:
2020-03-26T18:13:25.6131674Z   AWS_S3_BUCKET: ***
2020-03-26T18:13:25.6131825Z   AWS_ACCESS_KEY_ID: ***
2020-03-26T18:13:25.6132356Z   AWS_SECRET_ACCESS_KEY: ***
2020-03-26T18:13:25.6132501Z   AWS_REGION: us-east-1
2020-03-26T18:13:25.6132816Z ##[endgroup]
2020-03-26T18:13:25.6285053Z ##[error]Container action is only supported on Linux

Maybe this is expected, I don't know. If so, it could be documented.

Can't upload single file using wildcard

Unable to upload a single file generated in the job using a wildcard. I put it into SOURCE_DIR and thought it would work, as the documentation for SOURCE_DIR states "The local directory (or file) you wish to sync/upload to S3."

I have to use a wildcard for part of the file name, as I can't always be sure what the version is, e.g.:

SOURCE_DIR: 'package-name-*.zip'

But it always fails with the error:

warning: Skipping file /github/workspace/package-name-1.0.0.zip/. File does not exist.

Obviously it sees the file, or else it wouldn't have been able to grab that 1.0.0 version number, so it is strange that it can't find it. I do see that it has /. at the end, so I wonder if it is treating it as a folder? Is there some way I can make it understand it is a single file and not a folder?
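
Since aws s3 sync operates on directories, one hedged workaround is to sync the containing directory and filter it down to the one file with include/exclude patterns (names are illustrative):

- uses: jakejarvis/s3-sync-action@master
  with:
    args: --exclude '*' --include 'package-name-*.zip'
  env:
    SOURCE_DIR: '.'   # sync the containing directory, filtered to the one file
    AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}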

mkdir: can't create directory '/github/home/.aws': File exists

I added your action to my CI with the following step:

      - name: Update website
        uses: jakejarvis/s3-sync-action@master
        env:
          SOURCE_DIR: './www/dist/www'
          AWS_REGION: ${{secrets.AWS_REGION}}
          AWS_S3_BUCKET: ${{secrets.AWS_WWW_S3_BUCKET_STAGING}}
          AWS_ACCESS_KEY_ID: ${{secrets.AWS_ACCESS_KEY_ID}}
          AWS_SECRET_ACCESS_KEY: ${{secrets.AWS_SECRET_ACCESS_KEY}}

I get the error: mkdir: can't create directory '/github/home/.aws': File exists

For your information, I run a CloudFormation deployment just before:

      - name: Deploy to AWS via CloudFormation
        uses: mgenteluci/[email protected]
        env:
          TEMPLATE: 'aws-configuration.yml'
          AWS_STACK_NAME: 'libraquery-staging'
          AWS_REGION: ${{secrets.AWS_REGION}}
          AWS_ACCESS_KEY_ID: ${{secrets.AWS_ACCESS_KEY_ID}}
          AWS_SECRET_ACCESS_KEY: ${{secrets.AWS_SECRET_ACCESS_KEY}}
          AWS_DEPLOY_BUCKET: ${{secrets.AWS_DEPLOY_BUCKET}}
          PARAMETER_OVERRIDES: ARN=${{secrets.AWS_ARN_STAGING}}

Full error:

Run jakejarvis/s3-sync-action@master
/usr/bin/docker run --name b6eba6c17c9e653ad54f62b23bce69b1e7e531_b00afa --label b6eba6 --workdir /github/workspace --rm -e JAVA_HOME -e JAVA_HOME_8.0.222_x64 -e SOURCE_DIR -e AWS_REGION -e AWS_S3_BUCKET -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e HOME -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/libraquery/libraquery":"/github/workspace" b6eba6:c17c9e653ad54f62b23bce69b1e7e531
mkdir: can't create directory '/github/home/.aws': File exists
##[error]Docker run failed with exit code 1

S3 Access Points

I am able to connect to my S3 Access Point using Panic's Transmit app, so I know the permissions are set up correctly, but when I try to use this action, it tells me "an error occurred (NoSuchKey) calling the ListObjectsV2 operation."

[screenshot]

But as you can see from GitHub's action console, I am setting everything:

[screenshot]

Issue with SOURCE_DIR variable

Hi, I am pushing a statically generated site to S3. My whole project lives on GitHub, but only one dir, ./output/ (the generated HTML, CSS, etc.), actually needs to be synced to S3. I have tried several different ways to pass the SOURCE_DIR var, but I keep getting GitHub Actions errors saying it cannot find the dir. Below is my main.yml file, with a few of the variable variations I have tried:

name: Push Website To S3
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete --exclude '.git/*'
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'us-west-2'
        SOURCE_DIR: './output'

I have also tried:

SOURCE_DIR: './this_project_repo/output'
SOURCE_DIR: './../../output'
SOURCE_DIR: 'output'
SOURCE_DIR: '~/this_project_repo/output'

I'm pretty sure I am doing something silly. I looked at entrypoint.sh and the ${SOURCE_DIR:-.} conditional looks fine. The documentation says that ./ is the git clone root, so I assumed ./output would have worked.

Thanks for the help!

"Your workflow file was invalid" error

I'm trying to use your action but I get the following error:

Your workflow file was invalid: .github/workflows/staging.yml (Line: 26, Col: 9): There's not enough info to determine what you meant. Add one of these properties: run, shell, uses, with, working-directory

I added your action after my current setup so that I can run npm install and npm build first. Did I do something wrong?

jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [12.x]

    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: npm install & build
        run: |
          npm install
          npm run build --if-present
      - name: S3 Sync
      - uses: jakejarvis/[email protected]
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          SOURCE_DIR: './build'
          AWS_REGION: 'ap-southeast-2'
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET_STAGING }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

Cache invalidation

This action doesn't invalidate the CloudFront cache after syncing.

Another action, forked from this one, does: https://github.com/kefranabg/s3-sync-action

I've looked at the commits and they look legit, but considering the huge usage of this action, I wonder if those small changes couldn't be added here, maybe as an option?
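
Until then, a hedged sketch of invalidating CloudFront yourself in a follow-up step (the distribution-ID secret name is illustrative):

- name: Invalidate CloudFront cache
  run: aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths '/*'
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}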

Warning invalid input `args`

Hello. I have a very strange issue. The deployment works and is green, but GitHub Actions reports a warning.

This is how it looks in the logs:

##[warning]Unexpected input 'args', valid inputs are ['']
Run jakejarvis/s3-sync-action@master
  with:
    args: --acl public-read --follow-symlinks --delete
  env:
    AWS_S3_BUCKET: bucketname
    AWS_ACCESS_KEY_ID: ***
    AWS_SECRET_ACCESS_KEY: ***
    AWS_REGION: eu-west-1
    SOURCE_DIR: dist
    DEST_DIR: app
/usr/bin/docker run --name cb188eef0244fd08856deb6be8de267_c2fe9f --label 765292 --workdir /github/workspace --rm -e AWS_S3_BUCKET -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_REGION -e SOURCE_DIR -e DEST_DIR -e INPUT_ARGS -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/opt/actions-runner/_work/_temp/_github_home":"/github/home" -v "/opt/actions-runner/_work/_temp/_github_workflow":"/github/workflow" -v "/opt/actions-runner/_work/tc-frontend/tc-frontend":"/github/workspace" 765292:7cb188eef0244fd08856deb6be8de267 --acl public-read --follow-symlinks --delete

The warning is at the top.
This is the configuration in YAML:

      - name: Deploy results to Staging
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: 'bucketname'
          AWS_ACCESS_KEY_ID: ${{ secrets.STAGING_AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.STAGING_AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'eu-west-1'
          SOURCE_DIR: 'dist'
          DEST_DIR: 'app'

I don't know if this is relevant to the action or to GitHub Actions itself.

The user-provided path dist does not exist.

I've verified the directory exists, but I'm still getting this error when attempting to deploy. To be clear, the goal is to have the contents of the dist directory copied to the root level of my specified bucket.

[screenshot]

Support OIDC connect

I would like to use this action without AWS credentials, using OIDC for auth.

An example usage could look something like the following:

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:

    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@master
      with:
        role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/${{ env.AWS_ROLE_NAME }}
        aws-region: ${{ env.AWS_REGION }}

    - name: Login to Amazon ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v1

    - uses: actions/checkout@master

    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        SOURCE_DIR: 'public'      # optional: defaults to entire repository

Self hosted runner

Hi, are self-hosted runners supported?
I have a self-hosted runner on AWS and don't need to use keys (IAM instance profile).

Escaping * in args

I just have a question.

How can I handle something like this:

args: --acl public-read --follow-symlinks --exclude * --include *.ttf --content-type application/x-font-ttf

For some reason, the * (asterisk) is being replaced by a string of folders and files; I'm assuming sh is expanding it.

How can I escape it?

Thanks!
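
Single-quoting the patterns in args: should keep the shell from expanding them, along the lines of:

args: --acl public-read --follow-symlinks --exclude '*' --include '*.ttf' --content-type application/x-font-ttf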

Will it create a new directory if the directory does not exist in the S3 bucket?

I have specified DEST_DIR in the environment variables.
DEST_DIR is something like this: 'test/abc/xyz'.
Assume the xyz directory does not exist in the S3 bucket; will it create a new directory with the same name, xyz? If not, how can I make it create the directory and then push to it?
I ran into an issue with the below error message:

Could not connect to the endpoint URL:
"https://***.s3.***.amazonaws.com/?list-type=2&prefix=test%abc%xyz%2F&encoding-type=url"

AWS_S3_BUCKET is not set. Quitting.

My AWS bucket is in fact set, but I keep getting this error message, so my action won't run to completion. Should I include the static website endpoint instead of, or in addition to, the bucket name? I know the bucket is correct, but it's not being recognized at all.

[screenshot]

"Unknown options: /" Error message

Hi.

I am getting this error message in my build, specifically at the point of using jakejarvis/s3-sync-action@master:

with:
  args: --acl public-read --follow-symlinks --delete
env:
  AWS_S3_BUCKET: ***
  AWS_ACCESS_KEY_ID: ***
  AWS_SECRET_ACCESS_KEY: ***
  AWS_REGION: us-east-1
  SOURCE_DIR: bin/main
/usr/bin/docker run --name c488213b204a42648374365940a656f3_2515d7 --label 442333 --workdir /github/workspace --rm -e AWS_S3_BUCKET -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_REGION -e SOURCE_DIR -e INPUT_ARGS -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/backend-api-non-container/backend-api-non-container":"/github/workspace" 442333:c488213b204a42648374365940a656f3 --acl public-read --follow-symlinks --delete

Unknown options: /

This is the script I am using for the "build" job:

build:
  name: Build
  runs-on: ubuntu-latest

  steps:
    - uses: actions/checkout@v2

    - name: Setup Go
      uses: actions/setup-go@v2
      with:
        go-version: '1.14.0'

    - name: Install dependencies
      run: |
        go version
        go get -u golang.org/x/lint/golint

    # checks out our code locally so we can work with the files
    - name: Run build
      run: go build -o bin/main

This is the script I am using for the "deploy" job:

deploy:
  name: Deploy
  runs-on: ubuntu-latest
  needs: [build]
  if: ${{ github.ref == 'refs/heads/main' && github.event_name == 'push' }}
  steps:
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete --exclude '.git/*'
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'us-east-1' # optional: defaults to us-east-1
        SOURCE_DIR: 'bin/main' # optional: defaults to entire repository

Thanks in advance!

Warning on Running pip as the 'root' user

When the container is being built, this warning is displayed:

WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
  WARNING: You are using pip version 22.0.4; however, version 22.2.2 is available.

We're using ubuntu-latest.

Having an issue finding my SOURCE_DIR

Hi there! I'm receiving an error:

The user-provided path dist does not exist.

As you can see in the screenshot below, the dist directory does in fact exist after running my build command.

[screenshot]

I believe this may be related to the fact that I have a working-directory set, as seen in my yml file:

name: Website

on:
  push:
    branches: [ master ]

defaults:
  run:
    shell: bash
    working-directory: merchant-app

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: 12
      - run: npm ci
      - run: npm run test:unit
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: 12
      - run: npm ci
      - run: npm run build:staging
      - run: ls -gal
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_WEBSITE_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1'
          SOURCE_DIR: 'dist'

So my dist folder would be at /home/runner/work/paybidet/paybidet/merchant-app/dist, not at /home/runner/work/paybidet/paybidet/dist.

Any help would be much appreciated!
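
For what it's worth, defaults.run.working-directory applies only to run: steps, not to container actions like this one, so SOURCE_DIR is still resolved from the repository root. A hedged fix is to include the subdirectory in the path:

SOURCE_DIR: 'merchant-app/dist'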

List the files that were really uploaded

Is it possible to get the list of objects that the action uploaded through its output?
The use case is to run CloudFront invalidation only for the appropriate resources after the upload.
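
The action doesn't appear to expose any outputs, so one hedged workaround is running the CLI directly and capturing its transfer log (bucket and directory names are illustrative):

- name: Sync and record uploaded files
  run: aws s3 sync public s3://my-bucket | tee sync.log   # each uploaded object appears as an "upload:" line
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}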
