
serverless-s3-local


v0.9.X is under development. This version will include BREAKING CHANGES.

serverless-s3-local is a Serverless plugin that runs an S3 clone locally. It aims to speed up development of AWS Lambda functions through local testing, and works well together with serverless-offline.

Installation

Use npm

npm install serverless-s3-local --save-dev

Use serverless plugin install

sls plugin install --name serverless-s3-local

Example

serverless.yaml

service: serverless-s3-local-example
provider:
  name: aws
  runtime: nodejs18.x
plugins:
  - serverless-s3-local
  - serverless-offline
custom:
# Uncomment only if you want to collaborate with serverless-plugin-additional-stacks
# additionalStacks:
#    permanent:
#      Resources:
#        S3BucketData:
#            Type: AWS::S3::Bucket
#            Properties:
#                BucketName: ${self:service}-data
  s3:
    host: localhost
    directory: /tmp
resources:
  Resources:
    NewResource:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: local-bucket
functions:
  webhook:
    handler: handler.webhook
    events:
      - http:
          method: GET
          path: /
  s3hook:
    handler: handler.s3hook
    events:
      - s3: local-bucket
        event: s3:*

handler.js (AWS SDK v2)

const AWS = require("aws-sdk");

module.exports.webhook = (event, context, callback) => {
  const S3 = new AWS.S3({
    s3ForcePathStyle: true,
    accessKeyId: "S3RVER", // This specific key is required when working offline
    secretAccessKey: "S3RVER",
    endpoint: new AWS.Endpoint("http://localhost:4569"),
  });
  S3.putObject({
    Bucket: "local-bucket",
    Key: "1234",
    Body: Buffer.from("abcd") // new Buffer() is deprecated
  }, () => callback(null, "ok"));
};

module.exports.s3hook = (event, context) => {
  console.log(JSON.stringify(event));
  console.log(JSON.stringify(context));
  console.log(JSON.stringify(process.env));
};

handler.js (AWS SDK v3)

const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

module.exports.webhook = (event, context, callback) => {
  const client = new S3Client({
    forcePathStyle: true,
    credentials: {
      accessKeyId: "S3RVER", // This specific key is required when working offline
      secretAccessKey: "S3RVER",
    },
    endpoint: "http://localhost:4569",
  });
  client
    .send(
      new PutObjectCommand({
        Bucket: "local-bucket",
        Key: "1234",
        Body: Buffer.from("abcd"),
      })
    )
    .then(() => callback(null, "ok"));
};

module.exports.s3hook = (event, context) => {
  console.log(JSON.stringify(event));
  console.log(JSON.stringify(context));
  console.log(JSON.stringify(process.env));
};

Configuration options

Configuration options can be defined in multiple ways. They will be parsed with the following priority:

  • custom.s3 in serverless.yml
  • custom.serverless-offline in serverless.yml
  • Default values (see table below)
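As a sketch, that priority order amounts to a shallow merge in which custom.s3 keys override custom.serverless-offline keys, which in turn override the defaults. The snippet below is illustrative JavaScript, not the plugin's actual code:

```javascript
// Sketch of the option-resolution order described above.
// Later spreads override earlier ones, so defaults lose to
// custom.serverless-offline, which loses to custom.s3.
const defaults = { address: "localhost", port: 4569, directory: "./buckets" };

function resolveOptions(customS3 = {}, customOffline = {}) {
  return { ...defaults, ...customOffline, ...customS3 };
}

// Example: port from custom.s3 wins; directory falls back to the default.
const resolved = resolveOptions({ port: 9000 }, { port: 8080, address: "0.0.0.0" });
console.log(resolved); // address "0.0.0.0", port 9000, directory "./buckets"
```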
  • address: the host/IP to bind the S3 server to (string; default 'localhost')
  • host: the host where internal S3 calls are made; should be the same as address (string)
  • port: the port that the S3 server will listen on (number; default 4569)
  • directory: the location where the S3 files will be created; the directory must exist, it won't be created (string; default './buckets')
  • accessKeyId: the access key id used to authenticate requests (string; default 'S3RVER')
  • secretAccessKey: the secret access key used to authenticate requests (string; default 'S3RVER')
  • cors: path to the S3 CORS configuration XML, relative to the project root; see the AWS docs (string | Buffer)
  • website: path to the S3 website configuration XML, relative to the project root; see the AWS docs (string | Buffer)
  • noStart: set to true if you already have an S3rver instance running (boolean; default false)
  • allowMismatchedSignatures: prevent SignatureDoesNotMatch errors for all well-formed signatures (boolean; default false)
  • silent: suppress S3rver log messages (boolean; default false)
  • serviceEndpoint: override the AWS service root for subdomain-style access (string; default amazonaws.com)
  • httpsProtocol: to enable HTTPS, the directory (relative to your cwd, typically your project dir) containing both cert.pem and key.pem files (string)
  • vhostBuckets: disable vhost-style access for all buckets (boolean; default true)
  • buckets: extra bucket names to create after S3 local starts (string)

Features

  • Start a local S3 server with the specified root directory and port.
  • Create buckets at launch.
  • Support serverless-plugin-additional-stacks
  • Support serverless-webpack
  • Support serverless-plugin-existing-s3
  • Support S3 events.

Working with IaC tools

If you want to work with IaC tools such as Terraform, you have to manage the bucket creation process yourself. In that case, follow the steps below.

  1. Comment out the S3 bucket configuration in the resources section of serverless.yml.
#resources:
#  Resources:
#    NewResource:
#      Type: AWS::S3::Bucket
#      Properties:
#        BucketName: local-bucket
  2. Create the bucket directory in the s3rver working directory.
$ mkdir /tmp/local-bucket

Triggering AWS Events offline

This plugin creates a temporary directory to store mock S3 info. You must use the AWS CLI to trigger events locally. First, set up a new profile with aws configure, e.g. aws configure --profile s3local. The default credentials are

aws_access_key_id = S3RVER
aws_secret_access_key = S3RVER

You can now use this profile to trigger events. For example, to trigger a put-object for a file at ~/tmp/userdata.csv in a local bucket, run: aws --endpoint http://localhost:4569 s3 cp ~/tmp/userdata.csv s3://local-bucket/userdata.csv --profile s3local

You should see the event trigger in the serverless-offline console: info: PUT /local-bucket/userdata.csv 200 16ms 0b, and a new object with metadata will appear in your local bucket.

Enabling CORS

Create an .xml file with CORS configuration. See AWS docs

E.g. to enable CORS for GET requests:

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>30000</MaxAgeSeconds>
    <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
  </CORSRule>
</CORSConfiguration>

Reference the file in your config:

custom:
  s3:
    host: localhost
    directory: /tmp
    cors: ./path/to/cors.xml

Running on Node 18 or higher

You may encounter an error like the following on Node 18 or higher.

Error: error:0308010C:digital envelope routines::unsupported
      at Cipheriv.createCipherBase (node:internal/crypto/cipher:122:19)
      at Cipheriv.createCipherWithIV (node:internal/crypto/cipher:141:3)
      at new Cipheriv (node:internal/crypto/cipher:249:3)

...

In this case, set the following environment variable.

export NODE_OPTIONS=--openssl-legacy-provider

License

This software is released under the MIT License, see LICENSE.


serverless-s3-local's Issues

Images cannot be seen in browser

I have uploaded an image to my local s3 bucket but whenever I want to see the image via web browser I get a black screen and a little square in the middle.

Is there a way to show it? PDF, xls, etc get downloaded successfully.

Deploy issue: "<bucketname> already exists in stack"

Main Issue: When running serverless-s3-local locally, the plugin scans the resources section of the serverless.yml file to find buckets to create. However, when "sls deploy" is executed, serverless scans not only the resources section but also the functions/events section, and creates an S3 bucket for each one it finds.

So the example found here will not deploy with serverless, as it will attempt to create "local-bucket" twice, and will fail doing so. Here are the lines in question:

resources:
  Resources:
    NewResource:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: local-bucket   <-- creates 'local-bucket'
functions:
  webhook:
    handler: handler.webhook
    events:
      - http:
          method: GET
  s3hook:
    handler: handler.s3hook
    events:
      - s3: local-bucket    <-- creates 'local-bucket' again.

Work-around: Currently I need to comment-out the resources section while deploying, and un-comment it when running locally.

Proposed Solution: serverless-s3-local should treat the events and resources the same way as serverless deploy will, and also create local buckets when they are found in the functions/events section.
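The proposed behavior could look roughly like this: gather bucket names from both the resources section and the function events, then dedupe before creating anything. This is a sketch against a simplified service object, not the plugin's actual code:

```javascript
// Collect bucket names the way `sls deploy` effectively does:
// from resources.Resources (AWS::S3::Bucket) and from s3 function events.
function collectBucketNames(service) {
  const names = new Set();
  const resources = (service.resources && service.resources.Resources) || {};
  for (const res of Object.values(resources)) {
    if (res.Type === "AWS::S3::Bucket" && res.Properties && res.Properties.BucketName) {
      names.add(res.Properties.BucketName);
    }
  }
  for (const fn of Object.values(service.functions || {})) {
    for (const event of fn.events || []) {
      if (typeof event.s3 === "string") names.add(event.s3); // shorthand form
      else if (event.s3 && event.s3.bucket) names.add(event.s3.bucket); // object form
    }
  }
  return [...names];
}

// The README example above yields a single bucket, not two.
const service = {
  resources: {
    Resources: {
      NewResource: { Type: "AWS::S3::Bucket", Properties: { BucketName: "local-bucket" } },
    },
  },
  functions: { s3hook: { events: [{ s3: "local-bucket" }] } },
};
console.log(collectBucketNames(service)); // [ "local-bucket" ]
```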

Dependencies in package.json:

  "devDependencies": {
    "@types/node": "^10.12.18",
    "concurrently": "^4.1.0",
    "parcel-bundler": "^1.11.0",
    "serverless": "^1.35.1",
    "serverless-offline": "^3.31.3",
    "serverless-s3-local": "^0.3.20",
    "typescript": "^3.2.2"
  },
  "dependencies": {
    "aws-sdk": "^2.382.0"
  }

Support for S3:Events Prefix and Suffix Rules

Serverless allows for the setting of prefix and suffix rules on S3 Events:

https://serverless.com/framework/docs/providers/aws/events/s3#setting-filter-rules

functions:
  users:
    handler: incoming.handler
    events:
      - s3:
          bucket: mybucket
          event: s3:ObjectCreated:*
          rules:
            - prefix: incoming/
            - suffix: .zip

I think serverless-s3-local is ignoring these rules?

My use case is an existing system which changes the key of a file after it's completed processing. For example, a file is uploaded to s3 with a bucket and key: s3://mybucket/incoming/file1.zip and then after processing it is moved to s3://mybucket/processed/file1.zip.

When running locally with serverless-s3-local the handler is triggered twice, once when it is initially created and again when the key is changed. When deployed to AWS this doesn't happen as the prefix and suffix rules are applied.
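For reference, S3's filter semantics require a key to satisfy every rule in the filter, so with both a prefix and a suffix rule the key must match both. A sketch of that matching logic (illustrative, not the plugin's code):

```javascript
// Apply S3-style filter rules: every rule must match the key.
// `rules` is a list like [{ prefix: "incoming/" }, { suffix: ".zip" }].
function matchesRules(key, rules = []) {
  return rules.every((rule) => {
    if (rule.prefix !== undefined) return key.startsWith(rule.prefix);
    if (rule.suffix !== undefined) return key.endsWith(rule.suffix);
    return true; // unknown rule shapes are ignored in this sketch
  });
}

const rules = [{ prefix: "incoming/" }, { suffix: ".zip" }];
console.log(matchesRules("incoming/file1.zip", rules)); // true
console.log(matchesRules("processed/file1.zip", rules)); // false: prefix fails
```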

conditional resource creation is not respected

I would like to only create buckets when running locally. For this I'm using a condition inside my serverless.yml as follows:

resources:
  Conditions:
    isLocal:
      Fn::Equals:
        - ${opt:stage}
        - local
  Resources:
    # only create this bucket locally
    LocalBucket:
      Type: AWS::S3::Bucket
      Condition: isLocal
      Properties:
        BucketName: ${file(./env.local.json):S3_BUCKET}

I can see that this resource is not being created remotely, which is great; however, when running locally with a stage that is not local, it still gets created.

$ serverless offline start --stage dev
starting handler
client s3event object
S3 local started ( port:4578 )
Serverless: creating bucket: bucket-local
info: [S3rver] Created new bucket "bucket-local" successfully
info: [S3rver] PUT /bucket-local 200 - - 3.691 ms

It would be great if the plugin would respect the conditional too. Happy to submit a PR if anyone can point me in the right direction.

TypeError: fromEvent is not a function

I am trying to use the plugin for local development and testing. However, while running the script I get the following type error.

As a result, the plugin reaches the step that creates the S3 buckets and halts indefinitely.

TypeError: fromEvent is not a function
      at ServerlessS3Local.subscribe (/Users/tsd073/Workspace/testProj/node_modules/serverless-s3-local/index.js:147:21)
      at Promise (/Users/tsd073/Workspace/testProj/node_modules/serverless-s3-local/index.js:236:12)
      at new Promise (<anonymous>)
      at ServerlessS3Local.startHandler (/Users/tsd073/Workspace/testProj/node_modules/serverless-s3-local/index.js:186:12)
      at BbPromise.reduce (/Users/tsd073/Workspace/testProj/node_modules/serverless/lib/classes/PluginManager.js:505:55)
      at tryCatcher (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/util.js:16:23)
      at Object.gotValue (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/reduce.js:168:18)
      at Object.gotAccum (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/reduce.js:155:25)
      at Object.tryCatcher (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/util.js:16:23)
      at Promise._settlePromiseFromHandler (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/promise.js:547:31)
      at Promise._settlePromise (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/promise.js:604:18)
      at Promise._settlePromise0 (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/promise.js:649:10)
      at Promise._settlePromises (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/promise.js:729:18)
      at _drainQueueStep (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/async.js:93:12)
      at _drainQueue (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/async.js:86:9)
      at Async._drainQueues (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/async.js:102:5)
      at Immediate.Async.drainQueues [as _onImmediate] (/Users/tsd073/Workspace/testProj/node_modules/serverless/node_modules/bluebird/js/release/async.js:15:14)
      at runCallback (timers.js:794:20)
      at tryOnImmediate (timers.js:752:5)
      at processImmediate [as _immediateCallback] (timers.js:729:5)

serverless.yml

custom:
  s3:
     port: 8001
     directory: /tmp

Resources:
  localS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: test-bucket-local
      AccessControl: PublicRead

version of plugin "serverless-s3-local": "0.5.3"

Entire serverless-offline crashed if s3 event handlers throw

If I throw inside any function that has s3 events, the entire local serverless instance exits.

For example, a simple line like this will crash the app:

throw new Error("Something went wrong");

and the response in the terminal looks something like this:

Error --------------------------------------------------
 
  Error: Something went wrong
    at ./src/functions/event-listeners/listener.ts.exports.ts (/Users/g/Sites/app/.webpack/service/src/functions/event-listeners/webpack:/src/functions/event-listeners/listener.ts:352:9)
    at Object.func (/Users/g/Sites/app/node_modules/serverless-s3-local/index.js:385:11)
    at /Users/g/Sites/app/node_modules/serverless-s3-local/index.js:172:43
    at SafeSubscriber.s3eventSubscription.s3Event.pipe.subscribe.handler [as _next] (/Users/g/Sites/app/node_modules/serverless-s3-local/index.js:177:9)
    at SafeSubscriber.__tryOrUnsub (/Users/g/Sites/app/node_modules/rxjs/src/internal/Subscriber.ts:265:10)
    at SafeSubscriber.next (/Users/g/Sites/app/node_modules/rxjs/src/internal/Subscriber.ts:207:14)
    at Subscriber._next (/Users/g/Sites/app/node_modules/rxjs/src/internal/Subscriber.ts:139:22)
    at Subscriber.next (/Users/g/Sites/app/node_modules/rxjs/src/internal/Subscriber.ts:99:12)
    at MergeMapSubscriber.notifyNext (/Users/g/Sites/app/node_modules/rxjs/src/internal/operators/mergeMap.ts:162:22)
    at InnerSubscriber._next (/Users/g/Sites/app/node_modules/rxjs/src/internal/InnerSubscriber.ts:17:17)
    at InnerSubscriber.Subscriber.next (/Users/g/Sites/app/node_modules/rxjs/src/internal/Subscriber.ts:99:12)
    at /Users/g/Sites/app/node_modules/rxjs/src/internal/util/subscribeToArray.ts:9:16
    at Object.subscribeToResult (/Users/g/Sites/app/node_modules/rxjs/src/internal/util/subscribeToResult.ts:28:29)
    at MergeMapSubscriber._innerSub (/Users/g/Sites/app/node_modules/rxjs/src/internal/operators/mergeMap.ts:148:5)
    at MergeMapSubscriber._tryNext (/Users/g/Sites/app/node_modules/rxjs/src/internal/operators/mergeMap.ts:141:10)
    at MergeMapSubscriber._next (/Users/g/Sites/app/node_modules/rxjs/src/internal/operators/mergeMap.ts:125:12)
    at MergeMapSubscriber.Subscriber.next (/Users/g/Sites/app/node_modules/rxjs/src/internal/Subscriber.ts:99:12)
    at MapSubscriber._next (/Users/g/Sites/app/node_modules/rxjs/src/internal/operators/map.ts:89:22)
    at MapSubscriber.Subscriber.next (/Users/g/Sites/app/node_modules/rxjs/src/internal/Subscriber.ts:99:12)
    at S3rver.handler (/Users/g/Sites/app/node_modules/rxjs/src/internal/observable/fromEvent.ts:201:20)
    at S3rver.emit (events.js:198:13)
    at S3rver.EventEmitter.emit (domain.js:448:20)
    at triggerS3Event (/Users/g/Sites/app/node_modules/s3rver/lib/controllers/object.js:35:11)
    at Object.putObject (/Users/g/Sites/app/node_modules/s3rver/lib/controllers/object.js:280:5)
 
     For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
 
  Get Support --------------------------------------------
     Docs:          docs.serverless.com
     Bugs:          github.com/serverless/serverless/issues
     Issues:        forum.serverless.com
 
  Your Environment Information ---------------------------
     Operating System:          darwin
     Node Version:              10.16.0
     Framework Version:         1.50.0
     Plugin Version:            1.3.8
     SDK Version:               2.1.0
 
Waiting for the debugger to disconnect...
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: `NODE_OPTIONS='--max-old-space-size=8192' node --inspect ./node_modules/.bin/serverless offline start -s local --location .webpack/service`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/g/.npm/_logs/2019-08-21T22_06_46_492Z-debug.log

Issue starting the application with serverless-s3-local plugin

Lately I have been facing this issue with the serverless-s3-local plugin. Any idea what I am doing wrong here?

  Type Error ---------------------------------------------
 
  this.client.s3Event.map is not a function
 
     For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
 
  Stack Trace --------------------------------------------
 
TypeError: this.client.s3Event.map is not a function
    at Promise (D:\workspace\node_modules\serverless-s3-local\index.js:139:27)
    at new Promise (<anonymous>)
    at ServerlessS3Local.startHandler (D:\workspace\node_modules\serverless-s3-local\index.js:103:12)
    at BbPromise.reduce (D:\workspace\node_modules\serverless\lib\classes\PluginManager.js:390:55)
From previous event:
    at Watching.compiler.watch [as handler] (D:\workspace\node_modules\serverless-webpack\lib\wpwatch.js:67:11)
    at compiler.hooks.done.callAsync (D:\workspace\node_modules\webpack\lib\Watching.js:100:9)
    at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\workspace\node_modules\tapable\lib\HookCodeFactory.js:24:12), <anonymous>:6:1)
    at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\workspace\node_modules\tapable\lib\Hook.js:35:21)
    at Watching._done (D:\workspace\node_modules\webpack\lib\Watching.js:99:28)
    at compiler.emitRecords.err (D:\workspace\node_modules\webpack\lib\Watching.js:73:19)
    at Compiler.emitRecords (D:\workspace\node_modules\webpack\lib\Compiler.js:372:39)
    at compiler.emitAssets.err (D:\workspace\node_modules\webpack\lib\Watching.js:54:20)
    at hooks.afterEmit.callAsync.err (D:\workspace\node_modules\webpack\lib\Compiler.js:358:14)
    at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\workspace\node_modules\tapable\lib\HookCodeFactory.js:24:12), <anonymous>:15:1)
    at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\workspace\node_modules\tapable\lib\Hook.js:35:21)
    at asyncLib.forEach.err (D:\workspace\node_modules\webpack\lib\Compiler.js:355:27)
    at done (D:\workspace\node_modules\neo-async\async.js:2854:11)
    at D:\workspace\node_modules\neo-async\async.js:2805:7
    at D:\workspace\node_modules\graceful-fs\graceful-fs.js:43:10
    at FSReqWrap.oncomplete (fs.js:149:20)
From previous event:
    at PluginManager.invoke (D:\workspace\node_modules\serverless\lib\classes\PluginManager.js:390:22)
    at PluginManager.run (D:\workspace\node_modules\serverless\lib\classes\PluginManager.js:421:17)
    at variables.populateService.then.then (D:\workspace\node_modules\serverless\lib\Serverless.js:157:33)
    at runCallback (timers.js:773:18)
    at tryOnImmediate (timers.js:734:5)
    at processImmediate [as _immediateCallback] (timers.js:711:5)
From previous event:
    at Serverless.run (D:\workspace\node_modules\serverless\lib\Serverless.js:144:8)
    at serverless.init.then (D:\workspace\node_modules\serverless\bin\serverless:43:50)
    at <anonymous>
 
  Get Support --------------------------------------------
     Docs:          docs.serverless.com
     Bugs:          github.com/serverless/serverless/issues
     Issues:        forum.serverless.com
 
  Your Environment Information -----------------------------
     OS:                     win32
     Node Version:           9.3.0
     Serverless Version:     1.29.2

Here is my serverless.yml config

service: my-api

# ---------------------------------------
#            Custom Configs
# ---------------------------------------

custom:
  defaultStage: qa # Possible Values: qa, beta and prod
  defaultRegion: eu-central-1
  defaultBucket: my.uploads
  # webpack config
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  s3:
    bucketName: ${self:custom.defaultBucket}
    host: 0.0.0.0
    port: 9000
    directory: ./s3-local
    cors: false
  serverless-offline:
    port: 8080

# ---------------------------------------
#               Providers
# ---------------------------------------

provider:
  name: aws
  runtime: nodejs6.10
  profile: default
  timeout: 120
  stage: ${opt:stage, self:custom.defaultStage}
  region: ${opt:region, self:custom.defaultRegion}
  environment:
    AWS_DEPLOYMENT_REGION: ${self:provider.region}
    AWS_DEPLOYMENT_STAGE: ${self:provider.stage}
    AWS_DEFAULT_BUCKET_NAME: ${self:custom.defaultBucket}
  # IAM Roles
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "dynamodb:*"
      Resource: "arn:aws:dynamodb:${self:provider.region}:*:*"
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "arn:aws:s3:::${self:custom.s3.bucketName}/*"

# ---------------------------------------
#             Lambda Functions
# ---------------------------------------

functions:
  status:
    handler: src/modules/status/handler.handle
    events:
      - http:
          path: /status
          method: any
          cors: true	  
# ---------------------------------------
#            Plugins
# ---------------------------------------

plugins:
  - serverless-webpack
  - serverless-express
  - serverless-s3-local
  - serverless-offline

# ---------------------------------------
#             Resources
# ---------------------------------------

resources:
  Resources:
    # ---------------------------------------
    #             S3 Buckets
    # ---------------------------------------
    S3CoursesBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.s3.bucketName}
        AccessControl: PublicRead
        CorsConfiguration:
          CorsRules:
          - AllowedMethods:
              - GET
              - PUT
              - POST
              - HEAD
          - AllowedOrigins:
              - "*"
          - AllowedHeaders:
              - "*"

No Events are triggered

I tried to use this plugin to trigger events locally but failed to do so. So I went along and tried the examples. Creating buckets and listing their contents seems to work fine, but neither simple_event nor resize_image triggered an event for me. No errors in the console either.
I simply ran npm install and started everything with sls offline start. Did I miss something?

Does it support signed url upload?

I was having an issue with signed URL uploads; it keeps throwing an error like
blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status.

And this is my cors setup:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>


I'm in port hell!

I have set s3 local on port 9000. But still serverless says it's in use:

Serverless: Offline listening on http://localhost:3000
2018-07-24 21:45:51.438:WARN:oejuc.AbstractLifeCycle:FAILED [email protected]:8000: java.net.BindException: Address already in use
java.net.BindException: Address already in use

I have DynamoDB on port 8000, Lambda local on port 4000, Reactjs on port 3000.

This is my custom section in serverless.yml:

custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or fallsback to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}
  # Set our DynamoDB throughput for prod and all other non-prod stages.
  tableThroughputs:
    prod: 5
    default: 1
  tableThroughput: ${self:custom.tableThroughputs.${self:custom.stage}, self:custom.tableThroughputs.default}
  # Load our webpack config
  webpackIncludeModules: true # enable auto including node_modules
  graphiql:
    babelOptions:
      presets: [es2015, es2016, stage-0]
      plugins: [transform-runtime]
  serverless-offline:
    babelOptions:
      presets: [es2015, es2016, stage-0]
      plugins: [transform-runtime]
  dynamodb:
   start:
      migrate: true
  s3:
    host: 0.0.0.0
    port: 9000
    directory: /tmp
    # Uncomment the first line only if you want to use CORS with a specified policy
    # Uncomment the second line only if you don't want to use CORS
    # Leave both lines commented if you want to use CORS with the default policy
    # cors: relative/path/to/your/cors/policy/xml
    # cors: false
    # Uncomment only if you already have a S3 server running locally
    # noStart: true

injection of endpoint when offline from config

Is there any way the endpoint could be injected when running offline? At the moment I have to change my code to test locally and then remember to take out the localhost endpoint when I commit my code.

Similarly, if s3ForcePathStyle: true could be injected too, then in my application I could just use

const s3Client = new S3() without any options, and it would behave the same both deployed and locally.
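A common workaround for this (a sketch, assuming serverless-offline's IS_OFFLINE environment variable is set in your local runs, which it has historically done) is to branch on that variable when building the client options, so deployed code passes no overrides at all:

```javascript
// Build S3 client options: local overrides when offline, empty otherwise.
// Pass the result to `new S3(...)` (SDK v2); for SDK v3, rename
// s3ForcePathStyle to forcePathStyle and nest the credentials.
function s3ClientOptions(env = process.env) {
  if (env.IS_OFFLINE) {
    return {
      s3ForcePathStyle: true,
      accessKeyId: "S3RVER",
      secretAccessKey: "S3RVER",
      endpoint: "http://localhost:4569",
    };
  }
  return {}; // deployed: use the Lambda execution role and real endpoints
}

console.log(s3ClientOptions({ IS_OFFLINE: "true" }).endpoint); // http://localhost:4569
```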

Issue at startup

Hey I just tried to load local-s3 and I get startup error:

client s3event object
S3 local started ( port:4001 )
WARN: No buckets found to create
error: [S3rver] uncaughtException: Cannot read property 'then' of null date=Tue Sep 11 2018 12:34:41 GMT+0200 (Cent
ral European Summer Time), pid=14708, uid=501, gid=20, cwd=/Users/klemenk/work/tt37.com/client-app, execPath=/Users
/klemenk/.nvm/versions/node/v10.9.0/bin/node, version=v10.9.0, argv=[/Users/klemenk/.nvm/versions/node/v10.9.0/bin/
node, /Users/klemenk/.nvm/versions/node/v10.9.0/bin/sls, offline, start, --dontPrintOutput], rss=112562176, heapTot
al=84844544, heapUsed=54042744, external=697365, loadavg=[1.44677734375, 1.97412109375, 2.05126953125], uptime=9130
6
TypeError: Cannot read property 'then' of null
    at client.S3rver.run (/Users/klemenk/work/tt37.com/client-app/node_modules/serverless-s3-local/index.js:196:29)
    at Server.server.listen.err (/Users/klemenk/work/tt37.com/client-app/node_modules/s3rver/lib/index.js:36:9)
    at Object.onceWrapper (events.js:273:13)
    at Server.emit (events.js:182:13)
    at Server.EventEmitter.emit (domain.js:442:20)
    at emitListeningNT (net.js:1368:10)
    at process._tickCallback (internal/process/next_tick.js:63:19)

My config looks like:

service: tt37com

plugins:
  - serverless-offline
  - serverless-s3-local

provider:
  name: aws
  runtime: nodejs8.10

functions:

  deployWebsite:
    handler: lambdas/deployWebsite.lambdaHandler
    memorySize: 512
    timeout: 60
    events:
      - http:
          path: deployWebsite
          method: post
    iamRoleStatements:
      - Effect: Allow
        Action:
          - lambda:*
        Resource: "*"

custom:
  serverless-offline:
    port: 4000
    resourceRoutes: true
  s3:
    host: 0.0.0.0
    port: 4001
    directory: /tmp

My environment:

  • OS: osx high sierra
  • Node version: 10.9

Objects Posted with PreSignedPOST urls do not trigger events

Relevant packages and versions:

    "serverless-offline": "^5.10.1",
    "serverless-s3-local": "^0.5.0",

I managed to get Presigned URLs working with serverless-s3-local by manually installing [email protected]. The problem is now my events no longer trigger off of simple requests.

Example configuration to reproduce:

Handler configuration

...
    events:
      - s3:
          bucket: my-bucket
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .mp4
          existing: true

Example code

const AWS = require('aws-sdk');
const fs = require('fs');
const request = require('request-promise-native');

const S3 = new AWS.S3({
  accessKeyId: 'S3RVER',
  endpoint: new AWS.Endpoint('http://localhost:9393'),
  s3ForcePathStyle: true,
  secretAccessKey: 'S3RVER',
});

function uploadFile() {
  console.log('Uploading file...');
  const genParams = {
    Bucket: 'my-bucket',
    Expires: 3600,
    Fields: {
      key: 'uploads/test/source.mp4',
      'Content-Type': 'video/mp4',
    },
  };
  S3.createPresignedPost(genParams, (err, result) => {
    if (err) {
      console.error('Failed to make, err=', err);
      return;
    }
    console.log('Pre-signed result:', result);
    // Upload the file
    const formData = {
      ...result.fields,
      file: fs.createReadStream('/path/to/video.mp4'),
    };

    const opts = {
      formData,
      uri: result.url,
      method: 'POST',
    };
    request(opts)
      .then((res) => console.log('Upload result:', res))
      .catch((err1) => console.error(err1));
  });
}

uploadFile();

As a sanity check, the following aws-cli command functions as expected, triggering my S3 event:

aws s3 cp --profile s3local --endpoint http://localhost:9393 /path/to/video.mp4 s3://my-bucket/uploads/test/source.mp4

Is there anything I can do to keep development fully local, or is this something that isn't easily fixed in this module?

Thanks, let me know if there's any information I'm missing or forgot to provide, or if this issue really belongs in s3rver's issues list.

ERR_INVALID_ARG_TYPE when trying to upload file

I have a setup with an additionalStacks S3 bucket:

custom:
  additionalStacks:
    permanent:
      Resources:
        S3BucketData:
            Type: AWS::S3::Bucket
            Properties:
                BucketName: my-local-bucket
  s3:
    host: 0.0.0.0
    port: 8000
    directory: ./buckets/
  webpack:
    keepOutputDirectory: true
    includeModules: false
    packager: 'yarn'

When I try to upload a file to my bucket I get an error.

 aws --endpoint http://localhost:8000 s3api put-object --bucket my-local-bucket --key test/test1.txt --body ./test/test1.txt

Error:

info: PUT /my-local-bucket/test/test1.txt 500 1ms -

  TypeError [ERR_INVALID_ARG_TYPE]: The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type undefined
      at Function.from (buffer.js:208:11)
      at authentication (~/IdeaProjects/my-project/lambda/node_modules/s3rver/lib/middleware/authentication.js:182:37)
      at dispatch (~/IdeaProjects/my-project/lambda/node_modules/koa/node_modules/koa-compose/index.js:42:32)
      at website (~/IdeaProjects/my-project/lambda/node_modules/s3rver/lib/middleware/website.js:47:14)

NoSuchBucket when BucketName doesn't have capital letter...?

Hi. I've come across a very weird error. It seems that I can use s3-local just fine with BucketName: dev-apacBucket, but I get 'NoSuchBucket' with dev-apac-bucket. I suspect the error might be with this plugin rather than s3rver because when I access it directly, it seems to correctly read both bucket variants.

Perhaps it's a version issue? Here's what I'm running:

  "dependencies": {
    "aws-sdk": "2.592.0",
    "serverless": "1.60.0",
    "serverless-offline": "5.12.1",
    "serverless-s3-local": "0.5.4"
  }

Any help would be greatly appreciated!

TypeError: Cannot read property 'timeout' of undefined when used with serverless-offline version 4.10.0 plugin

With [email protected] we are getting the following error:

TypeError: Cannot read property 'timeout' of undefined
    at createLambdaContext (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/serverless-offline/src/createLambdaContext.js:10:44)
    at Object.keys.forEach.key (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/serverless-s3-local/index.js:318:29)
    at Array.forEach (<anonymous>)
    at ServerlessS3Local.getEventHandlers (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/serverless-s3-local/index.js:315:41)
    at ServerlessS3Local.subscribe (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/serverless-s3-local/index.js:137:31)
    at Promise (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/serverless-s3-local/index.js:214:12)
    at new Promise (<anonymous>)
    at ServerlessS3Local.startHandler (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/serverless-s3-local/index.js:178:12)
    at BbPromise.reduce (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/serverless/lib/classes/PluginManager.js:408:55)
    at tryCatcher (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/util.js:16:23)
    at Object.gotValue (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/reduce.js:157:18)
    at Object.gotAccum (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/reduce.js:144:25)
    at Object.tryCatcher (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/promise.js:512:31)
    at Promise._settlePromise (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/promise.js:569:18)
    at Promise._settlePromise0 (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/promise.js:614:10)
    at Promise._settlePromises (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/promise.js:694:18)
    at _drainQueueStep (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/async.js:138:12)
    at _drainQueue (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/async.js:131:9)
    at Async._drainQueues (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/async.js:147:5)
    at Immediate.Async.drainQueues [as _onImmediate] (/Users/david.dacosta/repositories/private-repos/pipelines/node_modules/bluebird/js/release/async.js:17:14)
    at runCallback (timers.js:794:20)
    at tryOnImmediate (timers.js:752:5)
    at processImmediate [as _immediateCallback] (timers.js:729:5)

Line 138 of index.js should be changed to:

const lambdaContext = createLambdaContext(serviceFunction, this.service.provider);

This matches the signature of createLambdaContext provided by the serverless-offline plugin.

Create Bucket silently fails if you do not have a local .aws/configuration file

AWS-SDK requires an accessKeyId/secretAccessKey be used when performing S3 bucket operations. Most devs likely use the aws cli and have therefore run aws configure at some point to generate a local configuration file.

Not everyone has done this though, and a serverless-offline project should not have an outside dependency like this.

If the configuration file doesn't exist, and you don't pass these two values through as options when initializing the S3 client, bucket creation within this plugin will silently fail.

To address this, I've opened a PR that adds these two properties to the options and uses them in the getClient method: #33
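A minimal sketch of the idea behind that PR, assuming aws-sdk v2 option names (`clientOptions` is an illustrative helper, not the plugin's actual code):

```javascript
// Fall back to the dummy "S3RVER" credentials when none are supplied,
// so the client never depends on a local ~/.aws configuration file.
// clientOptions is a hypothetical helper; option names follow aws-sdk v2.
function clientOptions(options = {}) {
  return {
    s3ForcePathStyle: true,
    endpoint: "http://localhost:4569",
    accessKeyId: options.accessKeyId || "S3RVER",
    secretAccessKey: options.secretAccessKey || "S3RVER",
  };
}

console.log(clientOptions().accessKeyId); // "S3RVER"
```

With defaults in place, bucket creation no longer fails silently when no AWS configuration exists on the machine.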

Support for serverless plugin install

Serverless now has a new way of installing plugins via the serverless plugin install command; however, it's not currently working for serverless-s3-local: serverless plugin install --name serverless-s3-local

$ serverless plugin install --name serverless-s3-local

  Serverless Error ---------------------------------------

  Plugin "serverless-s3-local" not found. Did you spell it correct?

  Get Support --------------------------------------------
     Docs:          docs.serverless.com
     Bugs:          github.com/serverless/serverless/issues
     Issues:        forum.serverless.com

  Your Environment Information -----------------------------
     OS:                     win32
     Node Version:           10.14.2
     Serverless Version:     1.35.1

S3-local doesn't pass in function-specific environment variables when triggered

When using this project to test S3 uploads, any environment variables set specifically for a function aren't passed in when the function is triggered from a local S3 upload. See the yml fragment below:

emailUploaded:
  handler: handler.handle
  environment:
    command: email:uploaded
  events:
    - s3: bucket-name

The command variable isn't available when the function is run from the local S3 trigger.
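For illustration, the expected behavior (matching what AWS does on deploy) is that function-level variables are merged over provider-level ones before the handler runs. A sketch of that merge, not the plugin's actual code:

```javascript
// Hypothetical sketch: merge function-level environment variables over
// provider-level ones before invoking the handler, as AWS does on deploy.
const providerEnv = { STAGE: "dev" };
const functionEnv = { command: "email:uploaded" };

// Function-level entries win on key collisions.
const lambdaEnv = Object.assign({}, providerEnv, functionEnv);

console.log(lambdaEnv.command); // "email:uploaded"
console.log(lambdaEnv.STAGE);   // "dev"
```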

S3-local doesn't seem to work well with nodeJS 8.10

When I try to use this plugin with a NodeJS 8.10 project, I get all kinds of errors like:
Unexpected token import
When I change all 'import's into require's, the next error appears:
Unexpected token export
And so on. As soon as I remove this plugin, all the errors are gone.

Anyone able to help me with this?

process.env breaks if any asynchronous code is run

You cannot use async/await or promises inside an S3 event handler with this library.

The issue lies here, specifically line 309: https://github.com/ar90n/serverless-s3-local/blob/master/index.js#L285-L311

To reproduce:

provider:
  environment:
    SOMETHING_IMPORTANT: 123
// handler
module.exports = (event, context, callback) => {
  console.log(process.env.SOMETHING_IMPORTANT) // this works
  new Promise(r => {
    console.log(process.env.SOMETHING_IMPORTANT) // this breaks
    callback()
  })
}
Or, perhaps more idiomatically (this is how the bug originally manifested itself):

module.exports = async (event,context) => {
  console.log(process.env.SOMETHING_IMPORTANT) // this works
  await new Promise(r => r())
  console.log(process.env.SOMETHING_IMPORTANT) // this breaks
}

The cause

process.env is set up and the handler is called, but if the handler performs any asynchronous action, control returns to serverless-s3-local, which then reverts process.env to its own context. By the time the asynchronous code runs, process.env has been reverted.

Suggestion:

Either:

  • Run serverless-s3-local in an async context (promises / async)
  • Don't reset the environment ... !

Regarding the latter suggestion, it appears that serverless-offline doesn't bother fully reverting process.env (all modifications to process.env are additive): https://github.com/dherault/serverless-offline/blob/e4724c8aa9bb4ca6c78a495953667f454f30901d/src/index.js#L580
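Here is a stripped-down reproduction of the race, independent of the plugin (`invokeHandler` is illustrative, not the plugin's actual code):

```javascript
// Minimal reproduction of the env-revert race: the caller reverts
// process.env synchronously, before the handler's async work runs.
function invokeHandler(handler, env) {
  Object.assign(process.env, env);
  handler();
  // Control returns here at the handler's first await, so this revert
  // runs before the handler's async continuation does.
  for (const key of Object.keys(env)) delete process.env[key];
}

invokeHandler(async () => {
  console.log(process.env.SOMETHING_IMPORTANT); // "123" (sync part, env still set)
  await Promise.resolve();
  console.log(process.env.SOMETHING_IMPORTANT); // undefined (env already reverted)
}, { SOMETHING_IMPORTANT: "123" });
```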

s3.getObjectTagging method not implemented?

Hello! I'm trying to upload an object with the --tagging parameter set and then retrieve the object tags using the s3.getObjectTagging method. I keep receiving the following error:

{ message: 'A parameter you provided implies functionality that is not implemented',
  code: 'NotImplemented',
  region: null,
  time: 2019-06-17T16:57:47.151Z,
  requestId: null,
  extendedRequestId: undefined,
  cfId: undefined,
  statusCode: 501,
  retryable: true }

Here is my upload command:
AWS_ACCESS_KEY_ID=S3RVER AWS_SECRET_ACCESS_KEY=S3RVER aws --endpoint http://localhost:8000 s3api put-object --bucket local-bucket --key 'test.mp3' --body ./test/test.mp3 --tagging 'TagSet=[{Key=MESSAGE_ID,Value=1540}]'

Here is my handler for the s3 code:

const AWS = require('aws-sdk');

// Local S3 endpoint (the same one used by the upload command above)
const s3 = new AWS.S3({
  s3ForcePathStyle: true,
  accessKeyId: 'S3RVER',
  secretAccessKey: 'S3RVER',
  endpoint: new AWS.Endpoint('http://localhost:8000'),
});

module.exports.s3EventResponse = (event, context, callback) => {
  const key = event.Records[0].s3.object.key;
  const bucket_params = {
    Bucket: 'local-bucket',
    Key: key
  };
  s3.getObjectTagging(bucket_params, function(err, data) {
    if (err) {
      console.error(err);
      return callback(null, 'error');
    }
    console.log(data);
    callback(null, 'ok');
  });
  // console.log(JSON.stringify(event));
};

I'm going to assume this method has not been implemented in serverless-s3-local, or am I missing something in my code? Thank you for the assistance!

Lambda is not triggered if S3 server was already started by another serverless project sharing the same S3 service with noStart=true attribute

Suppose we have two serverless projects, Project 1 and Project 2.
Project 1: executed with serverless-offline; starts the S3 server.
Project 2: sets noStart: true in the custom section of serverless.yml, so it reuses the S3 server started by Project 1. This project contains a lambda with an S3 bucket trigger event source mapping.

When an object is pushed to the bucket, the lambda defined in Project 2 is not triggered.
However, if we remove noStart: true and start Project 2 before Project 1, the lambda of Project 2 is triggered.

This means the lambda will not be triggered if the S3 server was started by another project, even though the server is shared across all projects via noStart: true.

When we execute Project 2 (which has noStart: true) and it contains any S3 event source mappings in the functions section of serverless.yml, we do NOT see this line in the log:

Serverless: Found S3 event listener for

But this line appears if the S3 server was started by Project 2 itself, by running it before Project 1.
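For reference, the setting in question is a flag under the plugin's custom section; a sketch of Project 2's configuration as I understand it:

```yaml
custom:
  s3:
    # Reuse the S3 server already started by another project
    # instead of starting a new one.
    noStart: true
```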

S3 local started ( port:undefined )

Actually, I am new to serverless.

In my serverless.yml I have defined:

service: products-athena # NOTE: update this with your service name

plugins:
  - serverless-s3-local
  - serverless-offline

custom:
  ATHENA_BUCKET_NAME: athenabucketname
  BUCKET_NAME: bucketname
  DATABASE_NAME: products-athena
  serverless-offline:
    port: 4078
  S3:
    host: 0.0.0.0
    port: 7877

functions:
  athenaInit:
    handler: athena.init
  createProduct:
    handler: handler.createProduct
    events:
      - http:
          method: post
          path: product2

provider:
  name: aws
  runtime: nodejs8.10
  environment: "${self:custom}"
  iamRoleStatements:
    - Effect: Allow
      Action:
        #- "s3:GetBucketLocation"
        #- "s3:GetObject"
        #- "s3:ListBucket"
        #- "s3:ListBucketMultipartUploads"
        #- "s3:ListMultipartUploadParts"
        #- "s3:AbortMultipartUpload"
        #- "s3:CreateBucket"
        #- "s3:PutObject"
        - "s3:*"
      Resource:
        - "arn:aws:s3:::${self:custom.BUCKET_NAME}/*"
        - "arn:aws:s3:::${self:custom.BUCKET_NAME}"
        - "arn:aws:s3:::${self:custom.ATHENA_BUCKET_NAME}"
        - "arn:aws:s3:::${self:custom.ATHENA_BUCKET_NAME}/*"
    - Effect: Allow
      Action:
        - "glue:*"
      Resource:
        - "*"
    - Effect: Allow
      Action:
        - "athena:*"
      Resource:
        - "*"

resources:
  Resources:
    AthenaBucket:
      Type: "AWS::S3::Bucket"
      Properties:
        BucketName: "${self:custom.ATHENA_BUCKET_NAME}"
    ProductS3Bucket:
      Type: "AWS::S3::Bucket"
      Properties:
        BucketName: "${self:custom.BUCKET_NAME}"

Even so, the plugin logs S3 local started ( port:undefined ) and opens on the default port.

Happy coding!

Uploaded files path

Where are the files I upload stored? What is the /tmp directory for? I have a tmp folder in my project but files are not uploaded there.

version 0.3.6 has broken environment variables for serverless-offline

For some reason, the plugin is launching all the functions, and at that moment the serverless environment variables are not populated yet.

With sufficiently sophisticated factories, you get errors about missing environment variables that the functions need in order to work.

This is a very critical bug!
I suppose this may be related to lines 145-154 in index.js

Host address documentation is confusing

The documentation shows a value of 0.0.0.0 for the host config parameter, giving the impression that it can be set to 0.0.0.0 to allow connections on all interfaces. This is, however, not how the code works.

In index.js, on line #13, a default value of localhost is set for the config option address. This option is undocumented. It is used as the listening address for the server on line #210. The server is thus always listening on localhost.

On the other hand, on line #284 the host config parameter is being used to set the endpoint on the S3 client that is used to create the buckets.

Setting the host parameter to 0.0.0.0 thus causes the client to connect to 0.0.0.0 and time out, instead of hosting the server on address 0.0.0.0.

It would be nice if the documentation could clarify the effect of setting the host parameter, and if the address parameter could be documented as well.
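To summarize the two parameters as the issue describes them (a sketch based on the behavior above, not on any official documentation):

```yaml
custom:
  s3:
    # Listening address of the local S3 server (undocumented; defaults
    # to localhost). Set this to 0.0.0.0 to accept outside connections.
    address: 0.0.0.0
    # Hostname the plugin's own S3 client connects to when creating
    # buckets; must be reachable from this machine (e.g. localhost).
    host: localhost
    port: 8000
```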

BUG - Meta Tags on PUT Command do not work

Hi,

When creating a PUT command and then generating a pre-signed request, you are given a correct URL containing the meta information in x-amz-meta headers. When you then PUT to this URL, the file is stored (in your tmp bucket), but on reviewing the meta file, the data is not in there.

Steps to reproduce

/** @var CommandInterface $cmd */
$cmd = $this->s3ClientService->client->getCommand('PutObject', [
    'Bucket' => env('S3_BUCKET'),
    'Key' => 'imports/' . $filename,
    'Metadata' => [
        'import_id' => $import->id,
    ],
]);

$preSignedRequest = $this->s3ClientService->client->createPresignedRequest($cmd, '+5 minutes');

This generates the correct pre-signed URL. When you PUT the file to that URL and then check the meta tag file in your /tmp/bucket/keylocation folder, the import id is not there.

NOTE: Locally this does not work.
On a DEV environment using actual AWS, it does work.

TypeError: Object.keys(...).flatMap is not a function

After updating to the latest build it appears that I am getting the following error running node v8.10

TypeError: Object.keys(...).flatMap is not a function

I believe it is coming from MR #54. According to MDN, the flatMap array method is experimental and only available from Node v11: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/flatMap

I'll try to set up a sample repo later, but is this expected? Lambda currently only allows Node v8.10, so I would imagine this excludes a lot of users in that technology pool.
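If that is the cause, a Node 8-compatible replacement for the depth-1 flatMap call could look like this (illustrative only, not the actual patch; the bucket/handler names are made up):

```javascript
// Depth-1 flatMap replacement that works on Node 8, e.g. to replace
// Object.keys(obj).flatMap(fn) in environments predating Node 11.
function flatMap(array, fn) {
  return array.reduce((acc, item, index) => acc.concat(fn(item, index)), []);
}

const handlersByBucket = { photos: ["resize"], videos: ["encode", "notify"] };
const allHandlers = flatMap(Object.keys(handlersByBucket), (k) => handlersByBucket[k]);
console.log(allHandlers); // [ 'resize', 'encode', 'notify' ]
```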

s3rver requires the cors config to be XML

It looks like just setting cors: true in the s3rver options is no longer supported. Perhaps update the cors config to be a path to a local policy file? Then do something like:

const corsPolicy = cors
  ? fs.readFileSync(path.resolve(process.cwd(), cors), 'utf8')
  : false;

this.client = new S3rver({
  port,
  hostname: host,
  silent: false,
  directory,
  cors: corsPolicy,
  // ...remaining options unchanged
});

conflicts when multiple rxjs versions installed

These two lines: https://github.com/ar90n/serverless-s3-local/blob/master/index.js#L10-L11 break when another package installs version 6 of rxjs. Since serverless-s3-local doesn't explicitly declare a dependency on rxjs@5, rxjs@6 may be loaded, and rxjs/add/operator/map doesn't work on that version unless rxjs-compat is also a dependency.

Removing those two lines seems to work, but I'm not sure what the side effects are, so I'll let you find a proper fix ;)

Logs:

Error: Cannot find module 'rxjs-compat/add/operator/map'
    at Function.Module._resolveFilename (module.js:543:15)
    at Function.Module._load (module.js:470:25)
    at Module.require (module.js:593:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (xxxxx/node_modules/rxjs/add/operator/map.js:3:1)
    at Module._compile (module.js:649:30)
    at Object.Module._extensions..js (module.js:660:10)
    at Module.load (module.js:561:32)
    at tryModuleLoad (module.js:501:12)
    at Function.Module._load (module.js:493:3)
    at Module.require (module.js:593:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (xxxxx/node_modules/serverless-s3-local/index.js:10:1)

Integration with webpack and s3-local

Hi,

I'm currently developing a serverless application using serverless-webpack and serverless-s3-local in development. When I change my code, webpack automatically updates lambda functions but not those that are triggered by s3 events.

functions:
  Worker1:
    handler: src/handlers/Worker1.handler
    events:
      - http:
          method: GET
          path: /worker
  Worker2:
    handler: src/handlers/Worker2.handler
    events:
      - s3:
          bucket: my-bucket
          event: s3:ObjectCreated:*

Worker1 puts files to s3 and Worker2 should do stuff with them, but in watch mode, only Worker1 is updated, Worker2 stays unchanged. Using SLS_DEBUG, I can confirm that webpack does its job and does bundle the function.

I think there are missing hooks in s3-local for webpack when handling events so that s3 event callbacks point to the functions they were first created with.

I didn't feel qualified to submit a PR at first (this is my first serverless project), but I've actually managed to make one.

Thank you

Unexpected behavior

Hi and thanks for the great work on this plugin!

I've been encountering a small issue which may be intended behavior; for me it wasn't completely clear until I jumped into the code. On S3.putObject I was getting a "bucket does not exist" error. What made it interesting is that the error named my bucket with a .localhost suffix (my-bucket.localhost), but I never named the bucket like that: it was just my-bucket, both in the serverless.yml file and when configuring the SDK.

I fixed this by creating the folder my-bucket.localhost, but it was unexpected; maybe I missed something in the docs. Just thought someone else might run into this.

Cheers!

Where can I see uploaded files?

Hi, thanks for sharing :)

I installed this package, Here's the settings inside my server.yml file:

service: myProject

provider:
  name: aws
  runtime: nodejs6.10

plugins:
  - serverless-plugin-typescript
  - serverless-s3-local
  - serverless-offline

custom:
  s3:
    port: 8000
    directory: /tmp
    cors: false
    # Uncomment only if you already have a S3 server running locally
    # noStart: true

resources:
  Resources:
    NewResource:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: local-bucket

functions:
  s3hook:
    handler: server.uploadFile
    events:
      - http:
          path: uploadFile
          method: post
      - s3: local-bucket

As you can see I have defined a handler called s3hook which exposes an URL to upload a file. Here's also the handler inside a file called server.ts:

import * as awsServerlessExpress from "aws-serverless-express";
import { app } from "./app";

const server = awsServerlessExpress.createServer(app);

exports.uploadFile = async (event, context) => {
  awsServerlessExpress.proxy(server, event, context);
};

Which uses aws-serverless-express to proxy requests and send the requests to express server, the express implementation is inside a file called app.ts:

import * as express from "express";
import * as AWS from "aws-sdk";
import * as uuid from "uuid";

const app = express();

const S3 = new AWS.S3({
  s3ForcePathStyle: true,
  endpoint: "http://localhost:8000"
});
app.post("/uploadFile", (req, response) => {
  const params = req.body;
  const fileName = uuid.v4();

  S3.putObject(
    {
      Bucket: "local-bucket",
      Key: fileName,
      Body: params
    },
    () => {
      response.send("Uploaded");
    }
  );
});

export { app };

at this point I can use something like curl to post a file:

curl -X POST \
  http://localhost:3000/uploadFile \
  --data @myFile.jpg

After posting the file I can see the "Uploaded" message in the response. But the problem is that I can't see the uploaded file inside the /tmp folder; it's empty.

Any idea?

Add support for Buckets without Properties/BucketName

Hi

I really like your project and tried to use it in a side project, but it won't work as expected. It seems that your code assumes a bucket must have Properties and a BucketName, and it throws otherwise (cannot read BucketName of undefined):

resources:
  Resources:
    SourceBucket:
      Type: AWS::S3::Bucket

I have forked this repo and added some checks and it works for me :)
If this is expected behavior please feel free to close this issue and decline my PR.

Thanks!

ApiGateway S3 Proxy

Hi,

I have created an api gateway as proxy to s3 to allow my client to upload items and download items directly as specified here.

This is all specified in CloudFormation resources, as the serverless framework and plugins don't currently support this. Unfortunately, at present it means this part of my architecture cannot be run and tested locally.

Is it feasible to add functionality to serverless-s3-local that would be able to route the resource based routes to local s3 somehow?

Below is my resource spec

Resources:
  s3ItemResource: 
    DependsOn: 
      - ApiGatewayRestApi
      - ApiGatewayResourceS3Proxy
    Type: AWS::ApiGateway::Resource
    Properties: 
      ParentId:
        Ref: ApiGatewayResourceS3Proxy
      PathPart: "{key}"
      RestApiId:
        Ref: ApiGatewayRestApi
  s3ProxyOptionsMethod:
    Type: "AWS::ApiGateway::Method"
    Properties:
      ResourceId: 
        Ref: s3ItemResource 
      RestApiId:
        Ref: ApiGatewayRestApi # Reference API from serverless.yml using Serverless naming convention https://serverless.com/framework/docs/providers/aws/guide/resources/
      AuthorizationType: NONE
      HttpMethod: OPTIONS
      Integration:
        Type: MOCK
        IntegrationResponses:
          - ResponseParameters:
              method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
              method.response.header.Access-Control-Allow-Methods: "'GET,OPTIONS'"
              method.response.header.Access-Control-Allow-Origin: "'*'" # TODO : Can we restrict this further
            ResponseTemplates:
              application/json: ''
            StatusCode: '200'
        PassthroughBehavior: NEVER
        RequestTemplates:
          application/json: '{"statusCode": 200}'
      MethodResponses:
        - ResponseModels:
            application/json: Empty
          ResponseParameters:
            method.response.header.Access-Control-Allow-Headers: true
            method.response.header.Access-Control-Allow-Methods: true
            method.response.header.Access-Control-Allow-Origin: true
          StatusCode: '200'
  s3ProxyGetMethod:
    Type: "AWS::ApiGateway::Method"
    DependsOn: CustomAuthorizerApiGatewayAuthorizer # Reference function Custom Authorizer from serverless.yml using Serverless naming convention https://serverless.com/framework/docs/providers/aws/guide/resources/
    Properties:
      ApiKeyRequired: false
      AuthorizationType: CUSTOM
      AuthorizerId: 
        Ref: CustomAuthorizerApiGatewayAuthorizer # Reference function Custom Authorizer from serverless.yml using Serverless naming convention https://serverless.com/framework/docs/providers/aws/guide/resources/
      HttpMethod: GET
      Integration:
        Credentials: !GetAtt ArtefactRole.Arn
        IntegrationHttpMethod: GET
        IntegrationResponses:
            - StatusCode: 200
              ResponseParameters:
                method.response.header.Content-Type: integration.response.header.Content-Type
                method.response.header.Content-Disposition: integration.response.header.Content-Disposition
                method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
                method.response.header.Access-Control-Allow-Methods: "'GET,OPTIONS'"
                method.response.header.Access-Control-Allow-Origin: "'*'" # TODO : Can we restrict this further
            - StatusCode: 400
              SelectionPattern: "400"
              ResponseParameters:
                method.response.header.Content-Type: integration.response.header.Content-Type
                method.response.header.Content-Disposition: integration.response.header.Content-Disposition
            - StatusCode: 404
              SelectionPattern: "404"
              ResponseParameters:
                method.response.header.Content-Type: integration.response.header.Content-Type
                method.response.header.Content-Disposition: integration.response.header.Content-Disposition
            - StatusCode: 500
              SelectionPattern: '5\d{2}'
              ResponseParameters:
                method.response.header.Content-Type: integration.response.header.Content-Type
                method.response.header.Content-Disposition: integration.response.header.Content-Disposition
        PassthroughBehavior: WHEN_NO_MATCH
        RequestParameters:
          integration.request.header.Accept: method.request.header.Accept
          integration.request.path.key: method.request.path.key
        Type: AWS
        Uri: arn:aws:apigateway:${self:provider.region}:s3:path/${self:custom.bucketName}/{key}
      MethodResponses:
        - StatusCode: 200
          ResponseParameters:
            method.response.header.Content-Type: integration.response.header.Content-Type
            method.response.header.Content-Disposition: integration.response.header.Content-Disposition
            method.response.header.Access-Control-Allow-Headers: true
            method.response.header.Access-Control-Allow-Methods: true
            method.response.header.Access-Control-Allow-Origin: true
        - StatusCode: 400
          ResponseParameters:
            method.response.header.Content-Type: integration.response.header.Content-Type
            method.response.header.Content-Disposition: integration.response.header.Content-Disposition
        - StatusCode: 404
          ResponseParameters:
            method.response.header.Content-Type: integration.response.header.Content-Type
            method.response.header.Content-Disposition: integration.response.header.Content-Disposition
        - StatusCode: 500
          ResponseParameters:
            method.response.header.Content-Type: integration.response.header.Content-Type
            method.response.header.Content-Disposition: integration.response.header.Content-Disposition
      RequestParameters:
        method.request.header.Accept: false
        method.request.path.key: false
      ResourceId: 
        Ref: s3ItemResource 
      RestApiId:
        Ref: ApiGatewayRestApi # Reference API from serverless.yml using Serverless naming convention https://serverless.com/framework/docs/providers/aws/guide/resources/
  s3Deployment: # Need to redeploy after creating the resource, otherwise it doesn't exist (may need to change the name of this deployment on each run using the build epoch)
    Type: AWS::ApiGateway::Deployment
    DependsOn: 
      - ApiGatewayDeployment${sls:instanceId}
      - s3ProxyGetMethod
    Properties: 
      Description: Deploy with S3 Getter
      RestApiId:
        Ref: ApiGatewayRestApi
      StageName: ${self:custom.stage}

Where do I create the tmp directory?

Hi,

I'm a little confused as to where to create my tmp directory.

The bucket my lambda function accesses is configured via Terraform and just referenced via an environment variable in my serverless YAML.

Following the instructions, for a bucket called mybucket I should just create a folder /tmp/mybucket.

However, when I go to http://localhost:8000 I get a no-buckets response:

<ListAllMyBucketsResult xmlns="http://doc.s3.amazonaws.com/2006-03-01/">
<Owner>
<ID>123456789000</ID>
<DisplayName>S3rver</DisplayName>
</Owner>
<Buckets/>
</ListAllMyBucketsResult>

Using custom bucket configuration, the event handler doesn't match the bucket name

Hi - me again. I might be able to workaround this one by having a simpler config but it may be worth looking into.

With s3 events you can either have the bucket created by the event config for the triggered lambda or you can use custom bucket configuration.

I'm using the latter, like this.

functions:
  uploaded:
    handler: fotos/uploaded.uploadedItem
    events:
      - s3: fotos

Then I add my custom bucket config in the resources section, changing the name to something a bit more realistic for an s3 bucket name (fotopia-web-app-prod):

    S3BucketFotos:
      Type: AWS::S3::Bucket
      DeletionPolicy: Delete
      Properties:
        BucketName: ${self:provider.environment.S3_BUCKET}
        AccessControl: PublicRead

My lambda wasn't firing when I PUT images into the bucket, so I added some logging into serverless-s3-local, which revealed the problem: the event handler has the short (alias?) bucket name rather than the full name specified in resources.

I'll have a dig around; perhaps there's a way via the sls plugin API to get the connection between the shorthand name and the longer name specified in resources.

botocore.exceptions.ClientError: An error occurred (MethodNotAllowed) when calling the AbortMultipartUpload operation: The specified method is not allowed against this resource.

I am trying to do a multipart upload using boto3 and the python module smart_open. However, when I try to do my upload, I get the following error:

botocore.exceptions.ClientError: An error occurred (MethodNotAllowed) when calling the AbortMultipartUpload operation: The specified method is not allowed against this resource.

Just to make it easy for me, I changed my cors.xml document to this:

<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>
   <AllowedMethod>HEAD</AllowedMethod>
 </CORSRule>
</CORSConfiguration>

As expected, this didn't change anything (since my issue is a permissions error, not a CORS one). After looking at the serverless-offline website, I see that it grabs the IAM role from my online AWS profile. However, I have granted the "s3:AbortMultipartUpload" permission to my online profile, but it doesn't seem to get registered with serverless-s3-local, as the same error message persists.

I think a problem might lie in the aws_access_key_id and aws_secret_access_key, as these have to be set to "S3RVER" instead of my profile's credentials. I have tried to make a custom IAM Role for my serverless.yml script, but that doesn't seem to work... Here's the serverless code for my custom IAM Role:

iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - "s3:PutEncryptionConfiguration"
        - "s3:PutObject"
        - "s3:GetObject"
        - "s3:CreateBucket"
        - "s3:DeleteObject"
        - "s3:DeleteBucket"
        - "s3:AbortMultipartUpload"
      Resource:
        Fn::Join:
          - ''
          - - 'arn:aws:s3:::'
            - Ref: ServerlessDeploymentBucket
            - '/*'

Is there any change that should be done to the serverless-s3-local code for this to work?

Usage fails with or without AWS Credentials

I am unable to use this. If I start sls offline without my AWS keys exported, I get this error:

(node:70507) UnhandledPromiseRejectionWarning: CredentialsError: Missing credentials in config
    at ClientRequest.<anonymous> (/my/path/node_modules/aws-sdk/lib/http/node.js:83:34)
    at Object.onceWrapper (events.js:313:30)
    at emitNone (events.js:106:13)
    at ClientRequest.emit (events.js:208:7)
    at Socket.emitTimeout (_http_client.js:718:34)
    at Object.onceWrapper (events.js:313:30)
    at emitNone (events.js:106:13)
    at Socket.emit (events.js:208:7)
    at Socket._onTimeout (net.js:422:8)
    at ontimeout (timers.js:498:11)
    at tryOnTimeout (timers.js:323:5)
    at Timer.listOnTimeout (timers.js:290:5)

But even when I do export my valid AWS credentials, I get this error:

info: PUT /local-bucket/35e6ab220ef8eafb5ae749f77a3924fb 403 10ms -
{ InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
    at Request.extractError (/my/path/node_modules/aws-sdk/lib/services/s3.js:585:35)
    at Request.callListeners (/my/path/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/my/path/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/my/path/node_modules/aws-sdk/lib/request.js:683:14)
    at Request.transition (/my/path/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/my/path/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /my/path/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/my/path/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/my/path/node_modules/aws-sdk/lib/request.js:685:12)
    at Request.callListeners (/my/path/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
    at Request.emit (/my/path/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/my/path/node_modules/aws-sdk/lib/request.js:683:14)
    at Request.transition (/my/path/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/my/path/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /my/path/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/my/path/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/my/path/node_modules/aws-sdk/lib/request.js:685:12)
    at Request.callListeners (/my/path/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
    at callNextListener (/my/path/node_modules/aws-sdk/lib/sequential_executor.js:96:12)
    at IncomingMessage.onEnd (/my/path/node_modules/aws-sdk/lib/event_listeners.js:299:13)
    at emitNone (events.js:111:20)
    at IncomingMessage.emit (events.js:208:7)
    at endReadableNT (_stream_readable.js:1064:12)
    at _combinedTickCallback (internal/process/next_tick.js:139:11)
    at process._tickDomainCallback (internal/process/next_tick.js:219:9)
  message: 'The AWS Access Key Id you provided does not exist in our records.',
  code: 'InvalidAccessKeyId',
  region: null,
  time: 2019-05-16T20:45:16.542Z,
  requestId: null,
  extendedRequestId: undefined,
  cfId: undefined,
  statusCode: 403,
  retryable: false,
  retryDelay: 44.172674758005705 }

My S3 initialization code block looks like this:

// Initialize S3
const {AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION} = Config.get('AWS')
const s3Config = {
  accessKeyId: AWS_ACCESS_KEY_ID,
  secretAccessKey: AWS_SECRET_ACCESS_KEY,
  region: AWS_REGION
}
if (Config.get('NODE_ENV') === 'debug') {
  // Use local S3 for local development
  s3Config.s3ForcePathStyle = true
  s3Config.endpoint = new AWS.Endpoint('http://localhost:4002')
}
const s3 = new AWS.S3(s3Config)

Plugin does not create buckets

I get this in the console but the actual folder is not getting created in /tmp

Serverless: creating bucket: myia-serverless-api-dev-users-selfies

my s3 config is the following:

s3: {port: 8001, directory: /tmp, cors: false}

It seems the plugin clearly gets the list of the required buckets up to line 196 in index.js and passes it to return s3Client.createBucket({ Bucket }).promise();

My env is the following:
macOS High Sierra Version: 10.13.4 (17E202)
node: v8.11.2
npm: 6.1.0
sls: 1.27.3
