
Deprecated

NB: these images are deprecated in favor of AWS' official images, which you can find at:

https://github.com/aws/aws-lambda-base-images

You can also browse them on the ECR Public Gallery, e.g.:

https://gallery.ecr.aws/lambda/python

This project is now archived and will not receive any further updates.

docker-lambda

A sandboxed local environment that replicates the live AWS Lambda environment almost identically – including installed software and libraries, file structure and permissions, environment variables, context objects and behaviors – even the user and running process are the same.

Example usage with java11 runtime

You can use it for running your functions in the same strict Lambda environment, knowing that they'll exhibit the same behavior when deployed live. You can also use it to compile native dependencies knowing that you're linking to the same library versions that exist on AWS Lambda and then deploy using the AWS CLI.


Usage

Running Lambda functions

You can run your Lambdas from local directories using the -v arg with docker run. You can run them in two modes: as a single execution, or as an API server that listens for invoke events. The default is single execution mode, which outputs all logging to stderr and the result of the handler to stdout.

You mount your (unzipped) Lambda code at /var/task and any (unzipped) layer code at /opt. Most runtimes take two arguments – the first for the handler and the second for the event, i.e.:

docker run --rm \
  -v <code_dir>:/var/task:ro,delegated \
  [-v <layer_dir>:/opt:ro,delegated] \
  lambci/lambda:<runtime> \
  [<handler>] [<event>]

(the --rm flag removes the container once it has run, which is usually what you want; the ro,delegated options mount the directories read-only and use the delegated consistency mode for the best mount performance)

You can pass environment variables (eg -e AWS_ACCESS_KEY_ID=abcd) to talk to live AWS services, or modify aspects of the runtime. See below for a list.
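For example, to invoke a function that talks to live AWS services, you might pass your host's credentials through like this (handler and region are illustrative):

```shell
# Forward AWS credentials from the host environment into the container
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
  -e AWS_REGION=us-east-1 \
  -v "$PWD":/var/task:ro,delegated \
  lambci/lambda:nodejs12.x index.handler '{"some": "event"}'
```

Passing `-e VAR` without a value forwards the variable from the host, so the credentials never appear in your shell history.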

Running in "stay-open" API mode

If you pass the environment variable DOCKER_LAMBDA_STAY_OPEN=1 to the container, then instead of executing the event and shutting down, it will start an API server (on port 9001 by default), which you can then call with HTTP following the Lambda Invoke API. This allows you to make fast subsequent calls to your handler without paying the "cold start" penalty each time.

docker run --rm [-d] \
  -e DOCKER_LAMBDA_STAY_OPEN=1 \
  -p 9001:9001 \
  -v <code_dir>:/var/task:ro,delegated \
  [-v <layer_dir>:/opt:ro,delegated] \
  lambci/lambda:<runtime> \
  [<handler>]

(the -d flag will start the container in detached mode, in the background)

You should then see:

Lambda API listening on port 9001...

Then, in another terminal shell/window you can invoke your function using the AWS CLI (or any http client, like curl):

aws lambda invoke --endpoint http://localhost:9001 --no-sign-request \
  --function-name myfunction --payload '{}' output.json

(if you're using AWS CLI v2, you'll need to add --cli-binary-format raw-in-base64-out to the above command)
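So with AWS CLI v2 the full invoke command would look like:

```shell
aws lambda invoke --endpoint http://localhost:9001 --no-sign-request \
  --cli-binary-format raw-in-base64-out \
  --function-name myfunction --payload '{}' output.json
```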

Or just:

curl -d '{}' http://localhost:9001/2015-03-31/functions/myfunction/invocations

It also supports the documented Lambda API headers X-Amz-Invocation-Type, X-Amz-Log-Type and X-Amz-Client-Context.
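For instance, you could fire an asynchronous ("Event") invocation with curl – a sketch using the documented header (the function name segment is arbitrary in local mode):

```shell
# X-Amz-Invocation-Type: Event should return immediately with a 202
# instead of waiting for the handler's result
curl -v -d '{"some": "event"}' \
  -H 'X-Amz-Invocation-Type: Event' \
  http://localhost:9001/2015-03-31/functions/myfunction/invocations
```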

If you want to change the exposed port, eg run on port 3000 on the host, use -p 3000:9001 (then query http://localhost:3000).

You can change the internal Lambda API port from 9001 by passing -e DOCKER_LAMBDA_API_PORT=<port>. You can also change the custom runtime port from 9001 by passing -e DOCKER_LAMBDA_RUNTIME_PORT=<port>.

Developing in "stay-open" mode

docker-lambda can watch for changes to your handler (and layer) code and restart the internal bootstrap process, so you can always invoke the latest version of your code without needing to shut down the container.

To enable this, pass -e DOCKER_LAMBDA_WATCH=1 to docker run:

docker run --rm \
  -e DOCKER_LAMBDA_WATCH=1 -e DOCKER_LAMBDA_STAY_OPEN=1 -p 9001:9001 \
  -v "$PWD":/var/task:ro,delegated \
  lambci/lambda:java11 handler

Then when you make changes to any file in the mounted directory, you'll see:

Handler/layer file changed, restarting bootstrap...

And the next invoke will reload your handler with the latest version of your code.

NOTE: This doesn't work in exactly the same way with some of the older runtimes due to the way they're loaded. Specifically: nodejs8.10 and earlier, python3.6 and earlier, dotnetcore2.1 and earlier, java8 and go1.x. These runtimes will instead exit with error code 2 when they are in watch mode and files in the handler or layer are changed.

That way you can use the --restart on-failure capabilities of docker run to have the container automatically restart instead.

So, for nodejs8.10, nodejs6.10, nodejs4.3, python3.6, python2.7, dotnetcore2.1, dotnetcore2.0, java8 and go1.x, you'll need to run watch mode like this instead:

docker run --restart on-failure \
  -e DOCKER_LAMBDA_WATCH=1 -e DOCKER_LAMBDA_STAY_OPEN=1 -p 9001:9001 \
  -v "$PWD":/var/task:ro,delegated \
  lambci/lambda:java8 handler

When you make changes to any file in the mounted directory, you'll see:

Handler/layer file changed, restarting bootstrap...

And then the docker container will restart. See the Docker documentation for more details. Your terminal may get detached, but the container should still be running and the API should have restarted. You can do docker ps to find the container ID and then docker attach <container_id> to reattach if you wish.

If none of the above strategies work for you, you can use a file-watching utility like nodemon:

# npm install -g nodemon
nodemon -w ./ -e '' -s SIGINT -x docker -- run --rm \
  -e DOCKER_LAMBDA_STAY_OPEN=1 -p 9001:9001 \
  -v "$PWD":/var/task:ro,delegated \
  lambci/lambda:go1.x handler

Building Lambda functions

The build images have a number of extra system packages installed intended for building and packaging your Lambda functions. You can run your build commands (eg, gradle on the java image), and then package up your function using zip or the AWS SAM CLI, all from within the image.

docker run [--rm] -v <code_dir>:/var/task [-v <layer_dir>:/opt] lambci/lambda:build-<runtime> <build-cmd>

You can also use yumda to install precompiled native dependencies using yum install.
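A sketch of the yumda workflow, assuming the lambci/yumda image and its documented /lambda/opt install prefix (the package name and layer directory here are illustrative – check the yumda README for supported packages):

```shell
# Install a precompiled native package into a local directory,
# then mount that directory at /opt when running your function
docker run --rm -v "$PWD"/layer:/lambda/opt lambci/yumda:2 yum install -y imagemagick

docker run --rm \
  -v "$PWD"/fn:/var/task:ro,delegated \
  -v "$PWD"/layer:/opt:ro,delegated \
  lambci/lambda:nodejs12.x index.handler
```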

Run Examples

# Test a `handler` function from an `index.js` file in the current directory on Node.js v12.x
docker run --rm -v "$PWD":/var/task:ro,delegated lambci/lambda:nodejs12.x index.handler

# Using a different file and handler, with a custom event
docker run --rm -v "$PWD":/var/task:ro,delegated lambci/lambda:nodejs12.x app.myHandler '{"some": "event"}'

# Test a `lambda_handler` function in `lambda_function.py` with an empty event on Python 3.8
docker run --rm -v "$PWD":/var/task:ro,delegated lambci/lambda:python3.8 lambda_function.lambda_handler

# Similarly with Ruby 2.7
docker run --rm -v "$PWD":/var/task:ro,delegated lambci/lambda:ruby2.7 lambda_function.lambda_handler

# Test on Go 1.x with a compiled handler named my_handler and a custom event
docker run --rm -v "$PWD":/var/task:ro,delegated lambci/lambda:go1.x my_handler '{"some": "event"}'

# Test a function from the current directory on Java 11
# The directory must be laid out in the same way the Lambda zip file is,
# with top-level package source directories and a `lib` directory for third-party jars
# https://docs.aws.amazon.com/lambda/latest/dg/java-package.html
docker run --rm -v "$PWD":/var/task:ro,delegated lambci/lambda:java11 org.myorg.MyHandler

# Test on .NET Core 3.1 given a test.dll assembly in the current directory,
# a class named Function with a FunctionHandler method, and a custom event
docker run --rm -v "$PWD":/var/task:ro,delegated lambci/lambda:dotnetcore3.1 test::test.Function::FunctionHandler '{"some": "event"}'

# Test with a provided runtime (assumes you have a `bootstrap` executable in the current directory)
docker run --rm -v "$PWD":/var/task:ro,delegated lambci/lambda:provided handler '{"some": "event"}'

# Test with layers (assumes your function code is in `./fn` and your layers in `./layer`)
docker run --rm -v "$PWD"/fn:/var/task:ro,delegated -v "$PWD"/layer:/opt:ro,delegated lambci/lambda:nodejs12.x

# Run custom commands
docker run --rm --entrypoint node lambci/lambda:nodejs12.x -v

# For large events you can pipe them into stdin if you set DOCKER_LAMBDA_USE_STDIN
echo '{"some": "event"}' | docker run --rm -v "$PWD":/var/task:ro,delegated -i -e DOCKER_LAMBDA_USE_STDIN=1 lambci/lambda:nodejs12.x

You can see more examples of how to build docker images and run different runtimes in the examples directory.

Build Examples

To use the build images, for compilation, deployment, etc:

# To compile native deps in node_modules
docker run --rm -v "$PWD":/var/task lambci/lambda:build-nodejs12.x npm rebuild --build-from-source

# To install defined poetry dependencies
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.8 poetry install

# To resolve dependencies on go1.x (working directory is /go/src/handler)
docker run --rm -v "$PWD":/go/src/handler lambci/lambda:build-go1.x go mod download

# For .NET Core, this will publish the compiled code to `./pub`,
# which you can then use to run with `-v "$PWD"/pub:/var/task`
docker run --rm -v "$PWD":/var/task lambci/lambda:build-dotnetcore3.1 dotnet publish -c Release -o pub

# Run custom commands on a build container
docker run --rm lambci/lambda:build-python3.8 aws --version

# To run an interactive session on a build container
docker run -it lambci/lambda:build-python3.8 bash

Using a Dockerfile to build

Create your own Docker image to build and deploy:

FROM lambci/lambda:build-nodejs12.x

ENV AWS_DEFAULT_REGION us-east-1

COPY . .

RUN npm install

RUN zip -9yr lambda.zip .

CMD aws lambda update-function-code --function-name mylambda --zip-file fileb://lambda.zip

And then:

docker build -t mylambda .
docker run --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY mylambda

Node.js module

Using the Node.js module (npm install docker-lambda) – for example in tests:

var dockerLambda = require('docker-lambda')

// Spawns synchronously, uses current dir – will throw if it fails
var lambdaCallbackResult = dockerLambda({event: {some: 'event'}, dockerImage: 'lambci/lambda:nodejs12.x'})

// Manually specify directory and custom args
lambdaCallbackResult = dockerLambda({taskDir: __dirname, dockerArgs: ['-m', '1.5G'], dockerImage: 'lambci/lambda:nodejs12.x'})

Options to pass to dockerLambda():

  • dockerImage
  • handler
  • event
  • taskDir
  • cleanUp
  • addEnvVars
  • dockerArgs
  • spawnOptions
  • returnSpawnResult
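A sketch showing how several of these options combine – the comments reflect my reading of the option semantics, and the image, handler and memory limit are illustrative:

```javascript
var dockerLambda = require('docker-lambda')

// Ask for the full child_process.spawnSync result
// instead of just the parsed handler output
var spawnResult = dockerLambda({
  dockerImage: 'lambci/lambda:nodejs12.x',
  handler: 'index.handler',
  event: {some: 'event'},
  taskDir: __dirname,          // directory mounted at /var/task
  dockerArgs: ['-m', '512M'],  // extra args passed to `docker run`
  returnSpawnResult: true,
})

console.log(spawnResult.status)
console.log(spawnResult.stdout.toString())
```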

Docker tags

These follow the Lambda runtime names:

  • nodejs4.3
  • nodejs6.10
  • nodejs8.10
  • nodejs10.x
  • nodejs12.x
  • python2.7
  • python3.6
  • python3.7
  • python3.8
  • ruby2.5
  • ruby2.7
  • java8
  • java8.al2
  • java11
  • go1.x
  • dotnetcore2.0
  • dotnetcore2.1
  • dotnetcore3.1
  • provided
  • provided.al2
  • build-nodejs4.3
  • build-nodejs6.10
  • build-nodejs8.10
  • build-nodejs10.x
  • build-nodejs12.x
  • build-python2.7
  • build-python3.6
  • build-python3.7
  • build-python3.8
  • build-ruby2.5
  • build-ruby2.7
  • build-java8
  • build-java8.al2
  • build-java11
  • build-go1.x
  • build-dotnetcore2.0
  • build-dotnetcore2.1
  • build-dotnetcore3.1
  • build-provided
  • build-provided.al2

Verifying images

These images are signed using Docker Content Trust, with the following keys:

  • Repository Key: e966126aacd4be5fb92e0160212dd007fc16a9b4366ef86d28fc7eb49f4d0809
  • Root Key: 031d78bcdca4171be103da6ffb55e8ddfa9bd113e0ec481ade78d897d9e65c0e

You can verify/inspect an image using docker trust inspect:

$ docker trust inspect --pretty lambci/lambda:provided

Signatures for lambci/lambda:provided

SIGNED TAG          DIGEST                                                             SIGNERS
provided            838c42079b5fcfd6640d486f13c1ceeb52ac661e19f9f1d240b63478e53d73f8   (Repo Admin)

Administrative keys for lambci/lambda:provided

  Repository Key:	e966126aacd4be5fb92e0160212dd007fc16a9b4366ef86d28fc7eb49f4d0809
  Root Key:	031d78bcdca4171be103da6ffb55e8ddfa9bd113e0ec481ade78d897d9e65c0e

(The DIGEST for a given tag may not match the example above, but the Repository and Root keys should match)
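To enforce signature verification at pull time, you can also enable Docker Content Trust on the client, which makes docker refuse unsigned images:

```shell
# With content trust enabled, `docker pull` verifies signatures
# against the keys listed above before accepting the image
export DOCKER_CONTENT_TRUST=1
docker pull lambci/lambda:provided
```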

Environment variables

  • AWS_LAMBDA_FUNCTION_HANDLER or _HANDLER
  • AWS_LAMBDA_EVENT_BODY
  • AWS_LAMBDA_FUNCTION_NAME
  • AWS_LAMBDA_FUNCTION_VERSION
  • AWS_LAMBDA_FUNCTION_INVOKED_ARN
  • AWS_LAMBDA_FUNCTION_MEMORY_SIZE
  • AWS_LAMBDA_FUNCTION_TIMEOUT
  • _X_AMZN_TRACE_ID
  • AWS_REGION or AWS_DEFAULT_REGION
  • AWS_ACCOUNT_ID
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN
  • DOCKER_LAMBDA_USE_STDIN
  • DOCKER_LAMBDA_STAY_OPEN
  • DOCKER_LAMBDA_API_PORT
  • DOCKER_LAMBDA_RUNTIME_PORT
  • DOCKER_LAMBDA_DEBUG
  • DOCKER_LAMBDA_NO_MODIFY_LOGS
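These variables let you mimic a particular function configuration from your handler's point of view – for example (values illustrative):

```shell
# Report a 512 MB memory size and a 30 s timeout to the handler's context
docker run --rm \
  -e AWS_LAMBDA_FUNCTION_MEMORY_SIZE=512 \
  -e AWS_LAMBDA_FUNCTION_TIMEOUT=30 \
  -e AWS_LAMBDA_FUNCTION_NAME=myfunction \
  -v "$PWD":/var/task:ro,delegated \
  lambci/lambda:python3.8 lambda_function.lambda_handler
```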

Build environment

Yum packages installed on build images:

  • development (group, includes gcc-c++, autoconf, automake, git, vim, etc)
  • aws-cli
  • aws-sam-cli
  • docker (Docker in Docker!)
  • clang
  • cmake

The build images for the older Amazon Linux 1 based runtimes also include:

  • python27-devel
  • python36-devel
  • ImageMagick-devel
  • cairo-devel
  • libssh2-devel
  • libxslt-devel
  • libmpc-devel
  • readline-devel
  • db4-devel
  • libffi-devel
  • expat-devel
  • libicu-devel
  • lua-devel
  • gdbm-devel
  • sqlite-devel
  • pcre-devel
  • libcurl-devel
  • yum-plugin-ovl

Questions

  • When should I use this?

    When you want fast local reproducibility. When you don't want to spin up an Amazon Linux EC2 instance (indeed, network aside, this is closer to the real Lambda environment than a default Amazon Linux instance, which differs in a number of files, permissions and libraries). When you don't want to invoke a live Lambda just to test your Lambda package – you can do it locally from your dev machine, or run tests on your CI system (assuming it has Docker support!).

  • Wut, how?

    By tarring the full filesystem in Lambda, uploading that to S3, and then piping into Docker to create a new image from scratch – then creating mock modules that will be required/included in place of the actual native modules that communicate with the real Lambda coordinating services. Only the native modules are mocked out – the actual parent JS/PY/Java runner files are left alone, so their behaviors don't need to be replicated (like the overriding of console.log, and custom defined properties like callbackWaitsForEmptyEventLoop)

  • What's missing from the images?

    Hard to tell – anything that's not readable – so at least /root/* – but probably a little more than that – hopefully nothing important, after all, it's not readable by Lambda, so how could it be!

  • Is it really necessary to replicate exactly to this degree?

    Not for many scenarios – some compiled Linux binaries work out of the box and an Amazon Linux Docker image can compile some binaries that work on Lambda too, for example – but for testing it's great to be able to reliably verify permissions issues, library linking issues, etc.

  • What's this got to do with LambCI?

    Technically nothing – it's just been incredibly useful during the building and testing of LambCI.


docker-lambda's Issues

access external service such as Localstack and Elasticsearch

I created a Lambda function using the LocalStack Lambda service and triggered it using docker-lambda.

My Lambda is supposed to save objects into the LocalStack S3 service, which is in another container. But I always get this error message. I wonder if anyone could help me fix it.

err: UnknownEndpoint: Inaccessible host: 'test.localstack'. This service may not be available in the 'us-east-1' region.

triggered Lambda by using:

docker run -d --link localstack:localstack --network mynetwork -v "/tmp/localstack/zipfile.283766df":/var/task "lambci/lambda:nodejs6.10" "test.handler"

My docker-compose file looks like the following:

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.2.1
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - mynetwork

lambci:
  image: lambci/lambda:nodejs6.10
  networks:
    - mynetwork

localstack:
  image: localstack/localstack
  ports:
    - "4567-4582:4567-4582"
    - "8080:8080"
  environment:
    - DEFAULT_REGION=us-west-2
    - SERVICES=${SERVICES-lambda, kinesis, s3}
    - DEBUG=1
    - DATA_DIR=${DATA_DIR- }
    - LAMBDA_EXECUTOR=docker
    - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
    - DOCKER_HOST=unix:///var/run/docker.sock
  volumes:
    - "/tmp/localstack:/tmp/localstack"
    - "/var/run/docker.sock:/var/run/docker.sock"
  networks:
    - mynetwork

networks:
  mynetwork:
    driver: bridge

how to mount file credentials from host to container?

Hi,

Can I mount my $HOME/.aws into Docker container to share my AWS config/credentials and have my code like this:

console.log('starting lambda')

var AWS = require("aws-sdk");
AWS.config.update({region: 'us-west-2' });


if (process.env.IN_DOCKER_LAMBDA) {
  var credentials        = new AWS.SharedIniFileCredentials({profile: 'myprofile'});
  AWS.config.credentials = credentials;
  AWS.config.update({region: 'us-west-2' });
}

In this case the docker-lambda will just load the credentials in shared ini while in real AWS lambda it will just retrieve credentials from the EC2 metadata. And I don't have to hardcode my credentials in the code.

idea?

_sqlite3 error

repro:
docker run -ti lambci/lambda:build-python3.6 bash
bash-4.2# pip3 install nltk
.....
Successfully installed nltk-3.2.2

bash-4.2# python -c "import nltk"

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/var/lang/lib/python3.6/site-packages/nltk/__init__.py", line 137, in <module>
    from nltk.stem import *
  File "/var/lang/lib/python3.6/site-packages/nltk/stem/__init__.py", line 29, in <module>
    from nltk.stem.snowball import SnowballStemmer
  File "/var/lang/lib/python3.6/site-packages/nltk/stem/snowball.py", line 24, in <module>
    from nltk.corpus import stopwords
  File "/var/lang/lib/python3.6/site-packages/nltk/corpus/__init__.py", line 66, in <module>
    from nltk.corpus.reader import *
  File "/var/lang/lib/python3.6/site-packages/nltk/corpus/reader/__init__.py", line 105, in <module>
    from nltk.corpus.reader.panlex_lite import *
  File "/var/lang/lib/python3.6/site-packages/nltk/corpus/reader/panlex_lite.py", line 15, in <module>
    import sqlite3
  File "/var/lang/lib/python3.6/sqlite3/__init__.py", line 23, in <module>
    from sqlite3.dbapi2 import *
  File "/var/lang/lib/python3.6/sqlite3/dbapi2.py", line 27, in <module>
    from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'

I found this version in the container for Python 2.7, but nothing for Python 3.6 or 3.4:
/usr/lib64/python2.7/lib-dynload/_sqlite3.so

I have installed sqlite-devel (yum install sqlite-devel) before rebuilding python but still no luck.

I am out of ideas now.

Trying to install postgresql fails

Hi,

I'm trying to use psycopg2 on python3.6, but I am still getting an error in Lambda.
The issue might be that there is no postgresql-devel installed in the Docker image.

But when I try to install it I still get an error (regardless of whether I use yum -y update or not):

FROM lambci/lambda:build-python3.6

RUN yum -y update \
    && yum install -y yum-plugin-ovl \
    && yum install -y postgresql-devel

CMD ["bash"]

Error is (with update):

E: Failed to install umount
mkinitrd failed
warning: %posttrans(kernel-4.9.43-17.39.amzn1.x86_64) scriptlet failed, exit status 1
Non-fatal POSTTRANS scriptlet failure in rpm package kernel-4.9.43-17.39.amzn1.x86_64

or without update:

Rpmdb checksum is invalid: dCDPT(pkg checksums): postgresql92-libs.x86_64 0:9.2.22-1.61.amzn1 - u

Example for compiling PhantomJS

This project is awesome and could not have come at a better time.

You can also use it to compile native dependencies knowing that you're linking to the same library versions that exist on AWS Lambda and then deploy using the AWS CLI.

I'd love to replace https://github.com/18F/pa11y-lambda/blob/eecdd5d283de34e437847e21eed9314f27001aba/app/phantomjs_install.js with a pre-built PhantomJS binary that I know will Just Work in the Lambda environment.

According to the PhantomJS docs, you build it by obtaining the source and then running python build.py. I'm new to both PhantomJS building and Docker, so I was wondering if you could give a rough idea of how that workflow could fit into this Docker toolchain.

AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not being overridden by dotenv

It's quite a weird issue; anyway, I can't figure out why it behaves like that.

I've developed a small lambda function, which I would like to be able to test locally first.
The main goal of the lambda function is to fetch and handle messages from AWS SQS.

While I'm running that function with the help of this Docker image (lambci/lambda), nothing happens – it waits for 10+ seconds and then stops :(

$ docker run -v "$PWD/dist":/var/task lambci/lambda
START RequestId: 915db92a-f5db-11ca-e67e-d25072a4290a Version: $LATEST
END RequestId: 915db92a-f5db-11ca-e67e-d25072a4290a
REPORT RequestId: 915db92a-f5db-11ca-e67e-d25072a4290a	Duration: 11232.60 ms	Billed Duration: 11300 ms	Memory Size: 1536 MB	Max Memory Used: 37 MB
null%                                                                                                                                                                

I'm using the dotenv package to load some environment-specific data to be able to connect to a specific queue etc.,
and it looks like the .env file is loaded correctly (because I can see almost all variables from it), but these two variables can't be overwritten somehow, and I still see your image's default values:

AWS_ACCESS_KEY_ID: 'SOME_ACCESS_KEY_ID',
AWS_SECRET_ACCESS_KEY: 'SOME_SECRET_ACCESS_KEY',

Why so?

P.S. It looks like because of that my function is not able to make the connection to AWS SQS.
P.P.S. Meanwhile, when I'm using this package, everything works well.

using docker-lambda for local function development

I'm trying to set up a basic boilerplate that would simplify getting started with developing functions locally and deploying them to AWS, using the excellent work put in here. The idea is to use docker compose to start up the container, but then wrap the entry point in a nodemon call so that the function continually re-runs when code is changed. Then when a user is done developing, they can sh into the container and run zip / aws commands to deploy, or those commands could be part of npm scripts.

I'm facing an issue with differences between the two images, lambci/lambda and lambci/lambda:build. Using the first image I was able to get this proof of concept working:

-dockerfile-
FROM lambci/lambda

ENV HOME=/home/sbx_user1051

USER root

# create home directory for the user to make sure some node packages work
RUN mkdir -p /home/sbx_user1051 && chown -R sbx_user1051:495 /home/sbx_user1051

ADD . .

RUN npm install

USER sbx_user1051

# nodemon is defined as a devDependency in package.json 
ENTRYPOINT ./node_modules/.bin/nodemon --exec "node --max-old-space-size=1229 --max-semi-space-size=76 --max-executable-size=153 --expose-gc /var/runtime/node_modules/awslambda/index.js $HANDLER $EVENT"

-docker-compose-
version: '2'
services:
  app:
    build: "."
    environment: 
      HANDLER: "index.handler"
      EVENT: "'{\"email\": \"[email protected]\", \"id\": \"30\"}'"
    volumes:
    - ".:/var/task/"
    - "/var/task/node_modules"

The issue is that if I connect to the container using docker exec, none of the extra installed packages are available in /usr/bin (aws, zip). If I use lambci/lambda:build then those packages are available, but the Dockerfile is really complex and is basically just a clone of lambci/lambda – I would have to fork the repo to get it to work.

I can't really tell from the repo how the base image for lambci/lambda:build is generated, so I'm not sure what the difference between these two images is; I'm also not an adequate Linux admin (teehee). Any guidance on how to pull this off correctly would be appreciated, and if any work comes out of this on my end that you want, I'd certainly PR it back into this repo on your terms.

here's the second Dockerfile in case you wanted to see it (uses the same compose)

# basically a copy of lambci/lambda
FROM lambci/lambda:build

ENV PATH=$PATH:/usr/local/lib64/node-v4.3.x/bin:/usr/local/bin:/usr/bin/:/bin \
    LAMBDA_TASK_ROOT=/var/task \
    LAMBDA_RUNTIME_DIR=/var/runtime \
    LANG=en_US.UTF-8

ADD awslambda-mock.js /var/runtime/node_modules/awslambda/build/Release/awslambda.js

# Not sure why permissions don't work just by modifying the owner
RUN rm -rf /tmp && mkdir /tmp && chown -R sbx_user1051:495 /tmp && chmod 700 /tmp

# create home directory for the user to make sure some node packages work
RUN mkdir -p /home/sbx_user1051 && chown -R sbx_user1051:495 /home/sbx_user1051

WORKDIR /var/task

# install nodemon globally
RUN npm install -g nodemon

ADD . .

RUN npm install

USER sbx_user1051

ENTRYPOINT nodemon --exec "node --max-old-space-size=1229 --max-semi-space-size=76 --max-executable-size=153 --expose-gc /var/runtime/node_modules/awslambda/index.js $HANDLER $EVENT"

unable to pass EVENT BODY to RUN python27

Unable to run the example Python Lambda function with any of:

docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY='{}'
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY={}
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY '{}'
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY {}

Fails with

START RequestId: b2d49b12-52d6-4ad0-8b21-93faf7c48dec Version: $LATEST
Unable to parse input as json: No JSON object could be decoded
Traceback (most recent call last):
  File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

END RequestId: b2d49b12-52d6-4ad0-8b21-93faf7c48dec
REPORT RequestId: b2d49b12-52d6-4ad0-8b21-93faf7c48dec Duration: 0 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 14 MB
{"stackTrace": [["/usr/lib64/python2.7/json/__init__.py", 339, "loads", "return _default_decoder.decode(s)"], ["/usr/lib64/python2.7/json/decoder.py", 364, "decode", "obj, end = self.raw_decode(s, idx=_w(s, 0).end())"], ["/usr/lib64/python2.7/json/decoder.py", 382, "raw_decode", "raise ValueError(\"No JSON object could be decoded\")"]], "errorType": "ValueError", "errorMessage": "No JSON object could be decoded"}

pkg-config prefix is incorrect

I'm using the lambci/lambda:build-python3.6 image to build a python C module. The prefix value in python-3.6 pkg-config file is incorrect.

The current value is:

/local/p4clients/pkgbuild-cuFpW/workspace/build/LambdaLangPython36/LambdaLangPython36-x.21.4/AL2012/DEV.STD.PTHREAD/build

It should be /var/lang.

Adding the following to my Dockerfile corrects the issue:

sed -i '/^prefix=/c\prefix=/var/lang' /var/lang/lib/pkgconfig/python-3.6.pc

Here is the full file for reference: /var/lang/lib/pkgconfig/python-3.6.pc

# See: man pkg-config
prefix=/local/p4clients/pkgbuild-cuFpW/workspace/build/LambdaLangPython36/LambdaLangPython36-x.21.4/AL2012/DEV.STD.PTHREAD/build
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: Python
Description: Python library
Requires:
Version: 3.6
Libs.private: -lpthread -ldl  -lutil -lrt
Libs: -L${libdir} -lpython3.6m
Cflags: -I${includedir}/python3.6m

No Python3 headers in build-python3.6 image

The build-python3.6 image seems to be missing headers for Python3:

$ sudo docker run lambci/lambda:build-python3.6 find / -iname '*python*.h'
/usr/include/python2.7/pythonrun.h
/usr/include/python2.7/Python-ast.h
/usr/include/python2.7/Python.h

Yum only shows packages for Python 3.4, not 3.6:

$ sudo docker run lambci/lambda:build-python3.6 yum search python3
============================= N/S matched: python3 =============================
mod24_wsgi-python34.x86_64 : A WSGI interface for Python web applications in
                           : Apache
postgresql92-plpython27.x86_64 : The Python3 procedural language for PostgreSQL
python34.x86_64 : Version 3.4 of the Python programming language aka Python 3000
python34-devel.x86_64 : Libraries and header files needed for Python 3.4
                      : development
python34-docs.noarch : Documentation for the Python programming language
python34-libs.i686 : Python 3.4 runtime libraries
python34-libs.x86_64 : Python 3.4 runtime libraries
python34-pip.noarch : A tool for installing and managing Python packages
python34-setuptools.noarch : Easily build and distribute Python packages
python34-test.x86_64 : The test modules from the main python 3.4 package
python34-tools.x86_64 : A collection of tools included with Python 3.4
python34-virtualenv.noarch : Tool to create isolated Python environments

  Name and summary matches only, use "search all" for everything.

How do you return a JSON result from a python handler?

In a node handler, you can return results with the passed in context.

exports.handler = function(event, context) {
  context.succeed({'Hello':'from handler'});
  return;
};

What is the equivalent in Python, so that I can evaluate the results coming back from a Python Lambda call using dockerLambda? I cannot call context.succeed() on the context passed to a Python handler.

var lambdaCallbackResult = dockerLambda({
                dockerImage: "lambci/lambda:python2.7",
                event: {"some":"data"}});
console.log(lambdaCallbackResult);
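For what it's worth, the Python runtime doesn't use context.succeed(); a Python handler returns its result directly and the runtime serializes it to JSON. A minimal sketch (assuming the conventional lambda_handler entry point):

```python
def lambda_handler(event, context):
    # Return any JSON-serializable value; the runtime serializes it
    # and it becomes the result of the invocation.
    return {'Hello': 'from handler'}
```

The dockerLambda call above should then see that object as its parsed result.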

Connecting to DynamoDB

I was doing some prototyping with AWS Lambda, and successfully ran the code within the Docker container. However, when I wanted to extend the Lambda function to connect to another Docker container for DynamoDB, it doesn't seem to work.

This is what I've done:

docker run -d --name dynamodb deangiberson/aws-dynamodb-local
docker run --links dynamodb:dynamodb -v "$PWD":/var/task lambci/lambda index.handler

But when it attempts to connect, this is what it says:

{"errorMessage":"connect ECONNREFUSED 127.0.0.1:8000","errorType":"NetworkingError","stackTrace":["Object.exports._errnoException (util.js:870:11)","exports._exceptionWithHostPort (util.js:893:20)","TCPConnectWrap.afterConnect [as oncomplete] (net.js:1062:14)"]}

I'm running on Docker 1.13.1 (Docker for Mac).

Anyone else had this issue?

Thanks!
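Note that inside the Lambda container, 127.0.0.1 refers to that container itself, not the linked dynamodb container; with --link dynamodb:dynamodb the database is reachable via the link alias as the hostname. A sketch of building the right endpoint (the helper name and the DYNAMODB_HOST variable are hypothetical, not part of this image):

```python
import os

def dynamodb_endpoint(host=None, port=8000):
    # 127.0.0.1 inside the container is the Lambda container itself;
    # use the docker link alias ("dynamodb") as the hostname instead.
    host = host or os.environ.get('DYNAMODB_HOST', 'dynamodb')
    return 'http://{}:{}'.format(host, port)
```

The AWS SDK client would then be pointed at this endpoint rather than the default.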

Unable to import module 'lambda_function'

I'm trying to use this Docker container to test out a Zappa + Flask deploy and am having some issues. I followed the instructions in the README, and I can't get the Lambda function to run properly.

Is it not importing my Lambda code properly? What is supposed to happen?

docker run -v $PWD:/var/task lambci/lambda:python3.6

START RequestId: de56416b-9dfb-4a9f-b5e6-687af6593b61 Version: $LATEST
Unable to import module 'lambda_function': No module named 'flask_restless'
END RequestId: de56416b-9dfb-4a9f-b5e6-687af6593b61
REPORT RequestId: de56416b-9dfb-4a9f-b5e6-687af6593b61 Duration: 7 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 19 MB

{"errorMessage": "Unable to import module 'lambda_function'"}

Here is the docker-compose.yml file I am using:

version: '3'
services:

  lambda:
    image: lambci/lambda:python3.6
    volumes:
      - $PWD:/var/task
    environment:
      - AWS_LAMBDA_FUNCTION_NAME=application

  mariadb:
    image: mariadb:latest
    volumes:
      - ./schema.sql:/docker-entrypoint-initdb.d/load.sql
    environment:
      - MYSQL_ROOT_PASSWORD=''
      - MYSQL_DATABASE=''
      - MYSQL_USER=''
      - MYSQL_PASSWORD=''

Java test runners are not complete

Trying to run a basic hello-world Python Lambda:

docker run -v "$PWD":/var/task lambci/lambda:python2.7

yields:

recv_start
Traceback (most recent call last):
  File "/var/runtime/awslambda/bootstrap.py", line 364, in <module>
    main()
  File "/var/runtime/awslambda/bootstrap.py", line 344, in main
    (invokeid, mode, handler, suppress_init, credentials) = wait_for_start(int(ctrl_sock))
  File "/var/runtime/awslambda/bootstrap.py", line 135, in wait_for_start
    (invokeid, mode, handler, suppress_init, credentials) = lambda_runtime.recv_start(ctrl_sock)
  File "/var/runtime/awslambda/runtime.py", line 13, in recv_start
    return (invokeid, mode, handler, suppress_init, credentials)
NameError: global name 'invokeid' is not defined

Support for dotnet?

I use C# for my Lambda and want to test it locally. AWS points here as a way to test Lambda locally with this Docker image. Would you please add C#/.NET support too?

Getting the return value out of the function?

Suppose my function is:

export function example (input, context, callback) {
  callback(null, { result: 'success' })
}

And I'm invoking it via:

const { exec } = require('child_process')

let cmd = `docker run --rm -v "$PWD/build/${app}":/var/task lambci/lambda handler.example '{}'`
exec(cmd, (err, stdout, stderr) => {
  // Node's exec callback order is (error, stdout, stderr);
  // the handler result arrives on stdout, logging on stderr
  if (stderr && stderr !== 'null') console.log(`λ: (err)\n${stderr}`)
  if (stdout && stdout !== 'null') console.log(`λ: (out)\n${stdout}`)
  callback(err)
})

How can I get the value returned by the handler: { result: 'success' }?

Support passing in env vars as options.

Now that Lambda supports environment variables, it would be good to be able to pass those into the container. For example:

var dockerLambda = require('docker-lambda')

// Spawns synchronously, uses current dir – will throw if it fails
var lambdaCallbackResult = dockerLambda({
  event: {some: 'event'},
  userEnvVars: { // or a different name ? 
    MY_ENV_VAR: 'foo-bar'
  }
})

Happy to submit a PR if you'd like one.

NSS version mismatch

I run headless Chrome with Puppeteer. It runs correctly on AWS Lambda, but the error below occurs on docker-lambda.

[0918/092739.344468:FATAL:nss_util.cc(627)] NSS_VersionCheck("3.26") failed. NSS >= 3.26 is required. Please upgrade to the latest NSS, and if you still get this error, contact your distribution maintainer.

Question: How would I use docker-lambda to build code?

I am fairly new to Docker, hence the noob question. I have installed Docker and am able to execute a basic Lambda, and it runs and exits promptly as expected.

  1. How would I bundle a bunch of code (C, JS, C++) into this Docker image so it builds and gives me zip artifacts that I can use to deploy to my real Lambdas? An example would be greatly appreciated.

  2. I tried to find out the gcc version in this image, and it reports it has neither gcc nor zip. How would I go about installing them? Or am I using the wrong image?

Here is how I am running it (I have an index.js in the pwd):
sudo docker run -v "$PWD":/var/task lambci/lambda:nodejs6.10 (probably I need to run something other than :nodejs6.10)

Python3.6 image version of awscli doesn't work

[:~] $ sudo docker run --rm -it lambci/lambda:build-python3.6 aws
[sudo] password for dschep: 
Traceback (most recent call last):
  File "/usr/bin/aws", line 19, in <module>
    import awscli.clidriver
  File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 32, in <module>
    from awscli.help import ProviderHelpCommand
  File "/usr/lib/python2.7/dist-packages/awscli/help.py", line 20, in <module>
    from docutils.core import publish_string
  File "/var/runtime/docutils/core.py", line 246
    print('\n::: Runtime settings:', file=self._stderr)
                                         ^
SyntaxError: invalid syntax
[:~] $ sudo docker run --rm -it lambci/lambda:build-python2.7 aws
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: too few arguments

My workaround for now is to remove the existing entrypoint at /usr/bin/aws and reinstall with pip3:

[:~] $ sudo docker run --rm -it lambci/lambda:build-python3.6 bash -c "rm /usr/bin/aws && pip3 install awscli > /dev/null && aws"
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: the following arguments are required: command

Unable to use yum command in lambci/lambda-base

I've created a Dockerfile built from lambci/lambda-base so I can add some custom commands to speed up developer workflow.

We'd like to install git on the image, but when I run:

yum install git

I get:

http://packages.us-east-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.us-west-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.us-west-2.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.eu-west-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.eu-central-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.ap-southeast-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.ap-northeast-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.sa-east-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.ap-southeast-2.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.

One of the configured repositories failed (amzn-main-Base),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:

 1. Contact the upstream for the repository and get them to fix the problem.

 2. Reconfigure the baseurl/etc. for the repository, to point to a working
    upstream. This is most often useful if you are using a newer
    distribution release than is supported by the repository (and the
    packages for the previous distribution release still work).

 3. Disable the repository, so yum won't use it by default. Yum will then
    just ignore the repository until you permanently enable it again or use
    --enablerepo for temporary usage:

        yum-config-manager --disable amzn-main

 4. Configure the failing repository to be skipped, if it is unavailable.
    Note that yum will try to contact the repo. when it runs most commands,
    so will have to try and fail each time (and thus. yum will be be much
    slower). If it is a very temporary problem though, this is often a nice
    compromise:

        yum-config-manager --save --setopt=amzn-main.skip_if_unavailable=true

failure: repodata/repomd.xml from amzn-main: [Errno 256] No more mirrors to try.

yum-config-manager is not available.

Thanks!

Java support?

Should it be possible to support Java-based lambdas with this?

identical to lambda?

I figured that if I could run certain commands inside a Docker container based on docker-lambda, I must also be able to run those commands on Lambda itself. This does not seem to be the case for the following:

This works (docker):

docker run -v "$PWD":/var/task -it lambci/lambda:build bash
easy_install pip
pip install -U certbot

This does not work (lambda):

./lambdash easy_install pip && pip install -U certbot

Results in /bin/sh: easy_install: command not found, while it works just fine with docker-lambda.

Can this run on a local machine?

When running create_build, yum fails to access the amazonaws repos with "The requested URL returned error: 403 Forbidden".
After lots of reading, I think these repos are off-limits to anyone not running in EC2.

Anyone got the build to work on a local machine?

Note: this question came from a total Docker noob. You don't need to build the image to use it; if you're happy with its contents you can just run it, and Docker will download a pre-built one, i.e. just run
docker run -it lambci/lambda:build bash
and within a couple of minutes you will have a terminal session with gcc installed.

How to modify max memory while running docker run?

I'm using the following command to run a lambda function as described in the docs.
docker run -v "$PWD":/var/task lambci/lambda index.myHandler '{"some": "event"}'

By default, it uses a max memory of 1536MB. I tried modifying the max memory with the following:
docker run -v "$PWD":/var/task lambci/lambda index.myHandler '{"some": "event"}' ['-m', '512M']

The output still shows a max memory of 1536MB. I'd appreciate it if anyone can help me change the max memory.

Loading GitHub credentials

I am using the command:
docker run -v "$PWD":/var/task lambci/lambda:build-nodejs4.3 npm install

but getting the error:
Host key verification failed.

The problem lies in attempting to access some of my dependencies through SSHing into GitHub.
Where should I be putting my credentials to make this work?

Running hooked up to a local kinesis stream

I have a local kinesis stream running in docker for testing purposes. I want to make a lambda function that is called when events come through that stream.

From looking at your code here, it seems I could pretty easily use your Docker image to run a little harness that hooks up to Kinesis and forwards messages into my Lambda function using your library. Does that sound right?

Do you know if there is already a tool to help with this? I don't want to re-invent the wheel here if I can avoid it.

Remove default AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars

First of all thanks for this project. Pretty useful to have :)

Would it be possible to remove the default AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars from the images, which are for example defined here:

_GLOBAL_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID', 'SOME_ACCESS_KEY_ID')

I'd like to check/give feedback from the Lambda I'm running on whether these are set, and error if they aren't, but currently can't do so because these defaults are there.

Triggers to run Lambda Function

Hello,

I'm trying to find a solution to run a Lambda function locally based on a "Dynamo Stream" trigger. I've looked at the SAM Local work, but that only allows one-off executions of a function (via the invoke command).

This docker environment looks ideal, but I don't think there is scope here to define a trigger. Am I right? Is there a way of achieving this locally anyone can think of?

How to leverage caching?

I'd love to be able to cache compiled python wheels so we don't have to hit the network/recompile unnecessarily. My current command is as follows:

mkdir -m 777 -p ../.cache
docker run --rm \
    -v "$PWD/../.cache":/tmp/.cache \
    -v "$PWD":/var/task \
    lambci/lambda:build-python2.7 pip install -r requirements.txt --cache-dir /tmp/.cache -vv -t env

Unfortunately, I get the following error:

The directory '/tmp/.cache/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/tmp/.cache' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.

Any idea as to what I can do here to mount a cache directory from the OS properly within the docker container?

nodejs shim is missing reportException() method

I am getting TypeError: awslambda.reportException is not a function when returning a non-null error through the callback method. I suspect the nodejs shim is missing the reportException() function.

I can repro on both nodejs4.3 and nodejs6.10

index.js file

exports.handler = function(event, context, callback) {
    return callback('error')
}

docker output

docker run -v "$PWD":/var/task lambci/lambda:nodejs6.10

START RequestId: fcf8ab72-c8b0-133b-2fc4-225a8173b1fe Version: $LATEST
2017-07-01T06:29:11.741Z	fcf8ab72-c8b0-133b-2fc4-225a8173b1fe	{"errorMessage":"error"}
2017-07-01T06:29:11.745Z	fcf8ab72-c8b0-133b-2fc4-225a8173b1fe	TypeError: awslambda.reportException is not a function

Error when running lambci/lambda:python3.6

Running the command docker run -v "$PWD":/var/task lambci/lambda:python3.6 with the file from examples/python/lambda_function.py, I got this error:

$ docker run -v "$PWD":/var/task lambci/lambda:python3.6

START RequestId: 5218ac6f-6b85-475c-a8e1-0574ab7f1509 Version: $LATEST
Traceback (most recent call last):
  File "/var/runtime/awslambda/bootstrap.py", line 514, in <module>
    main()
  File "/var/runtime/awslambda/bootstrap.py", line 503, in main
    init_handler, request_handler = _get_handlers(handler, mode)
  File "/var/runtime/awslambda/bootstrap.py", line 29, in _get_handlers
    lambda_runtime.report_user_init_start()
AttributeError: module 'runtime' has no attribute 'report_user_init_start'

Do you have any idea?

npm install

Hello, I want to create a lambda function that includes some executables installed via npm, with:

npm install accesslint-cli

If I install this on my Mac, the node_modules folder will contain the node modules, but the paths reference my machine (/Users/jaime/code...).

Can docker-lambda be used to generate this node_modules folder correctly for a Lambda function environment?

Thanks!

How to install python packages?

Hi, I was very impressed with your work here, so helpful! I was wondering how I would go about adding pip packages to these Docker containers; I can't seem to find documentation on it anywhere. I am using this package as part of the serverless-plugin-simulate plugin. I was also wondering what I would have to do to make this work well with the serverless-python-requirements plugin. Thanks!

CI: Invoking Lambda functions from docker image script

I am using GitLab CI to test my code and have been able to make a container that uses your docker image. How do I invoke my functions from a docker image? I haven't quite been able to figure that out.

This is what I have so far:

image: lambci/lambda:build

variables:
  AWS_DEFAULT_REGION: eu-west-1
  AWS_ACCESS_KEY_ID: YOUR_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: YOUR_SECRET_ACCESS_KEY

cache:
  paths:
    - node_modules/

stages:
  - build

build_step:
  stage: build
  only:
    - /^feature\/.*$/
    - develop
    - master
  script:
    - npm install
    - npm run lint
    - docker run -v "$PWD":/var/task lambci/lambda

When I run this I just get issues finding the Docker daemon ('Cannot connect to the Docker daemon. Is the docker daemon running on this host?'). I've also tried using the docker-lambda npm package, and that gives me similar issues. Is it something I'm doing, or a problem with GitLab CI?

Thanks!

Image provides tmpfs on /dev/shm

The Lambda environment unfortunately does not have a tmpfs mounted on /dev/shm, but one is provided by this image.

I can manually fix this by running the container with --privileged, reinstalling util-linux (because /bin/mount is missing) and unmounting /dev/shm.

Python's multiprocessing module uses /dev/shm extensively and does not work properly in AWS Lambda; this is not fully replicated in this Docker image.

See issue on AWS forums.

However, this still runs on docker-lambda, but not on AWS Lambda:

from multiprocessing import Pool

def f(x):
    return x * x

def lambda_handler(event, context):
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))

On AWS Lambda this fails with:
[Errno 38] Function not implemented: OSError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 9, in lambda_handler
    p = Pool(5)
  File "/usr/lib64/python2.7/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 138, in __init__
    self._setup_queues()
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 234, in _setup_queues
    self._inqueue = SimpleQueue()
  File "/usr/lib64/python2.7/multiprocessing/queues.py", line 354, in __init__
    self._rlock = Lock()
  File "/usr/lib64/python2.7/multiprocessing/synchronize.py", line 147, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1)
  File "/usr/lib64/python2.7/multiprocessing/synchronize.py", line 75, in __init__
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 38] Function not implemented
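Since Pool and Queue rely on SemLock, which needs /dev/shm, a common workaround on Lambda is to use Process with Pipe instead, which avoids shared-memory semaphores. A sketch under that assumption (parallel_map is a hypothetical helper, not part of this image):

```python
from multiprocessing import Process, Pipe

def square(conn, x):
    # Each worker sends its result back over a pipe
    # rather than through a shared queue
    conn.send(x * x)
    conn.close()

def parallel_map(target, values):
    # Process + Pipe avoids the SemLock that Pool/Queue need,
    # so this pattern works both here and on AWS Lambda
    pairs = [Pipe() for _ in values]
    procs = [Process(target=target, args=(child, v))
             for (parent, child), v in zip(pairs, values)]
    for p in procs:
        p.start()
    results = [parent.recv() for parent, _ in pairs]
    for p in procs:
        p.join()
    return results
```

Reading each pipe in order preserves the input ordering, so the results line up with the inputs.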

Python package build not working in build-python3.6

Repro:

 docker run lambci/lambda:build-python3.6 pip3 install cryptography

Fails with:

unable to execute 'x86_64-unknown-linux-gnu-gcc': No such file or directory

Full output:

Collecting cryptography
  Downloading cryptography-1.8.1.tar.gz (423kB)
Collecting idna>=2.1 (from cryptography)
  Downloading idna-2.5-py2.py3-none-any.whl (55kB)
Collecting asn1crypto>=0.21.0 (from cryptography)
  Downloading asn1crypto-0.22.0-py2.py3-none-any.whl (97kB)
Collecting packaging (from cryptography)
  Downloading packaging-16.8-py2.py3-none-any.whl
Requirement already satisfied: six>=1.4.1 in /var/runtime (from cryptography)
Requirement already satisfied: setuptools>=11.3 in /var/lang/lib/python3.6/site-packages (from cryptography)
Collecting cffi>=1.4.1 (from cryptography)
  Downloading cffi-1.10.0-cp36-cp36m-manylinux1_x86_64.whl (406kB)
Collecting pyparsing (from packaging->cryptography)
  Downloading pyparsing-2.2.0-py2.py3-none-any.whl (56kB)
Collecting pycparser (from cffi>=1.4.1->cryptography)
  Downloading pycparser-2.17.tar.gz (231kB)
Installing collected packages: idna, asn1crypto, pyparsing, packaging, pycparser, cffi, cryptography
  Running setup.py install for pycparser: started
    Running setup.py install for pycparser: finished with status 'done'
  Running setup.py install for cryptography: started
    Running setup.py install for cryptography: finished with status 'error'
    Complete output from command /var/lang//bin/python3.6 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-77kq7rsi/cryptography/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-fcct1gg2-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.6
    creating build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/utils.py -> build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/__init__.py -> build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/fernet.py -> build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/__about__.py -> build/lib.linux-x86_64-3.6/cryptography
    copying src/cryptography/exceptions.py -> build/lib.linux-x86_64-3.6/cryptography
    creating build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/extensions.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/general_name.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/oid.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/name.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    copying src/cryptography/x509/base.py -> build/lib.linux-x86_64-3.6/cryptography/x509
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat
    copying src/cryptography/hazmat/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/padding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/hmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/hashes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/keywrap.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/serialization.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/constant_time.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    copying src/cryptography/hazmat/primitives/cmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
    copying src/cryptography/hazmat/backends/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
    copying src/cryptography/hazmat/backends/interfaces.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
    copying src/cryptography/hazmat/backends/multibackend.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings
    copying src/cryptography/hazmat/bindings/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    copying src/cryptography/hazmat/primitives/twofactor/utils.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    copying src/cryptography/hazmat/primitives/twofactor/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    copying src/cryptography/hazmat/primitives/twofactor/totp.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    copying src/cryptography/hazmat/primitives/twofactor/hotp.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/interfaces
    copying src/cryptography/hazmat/primitives/interfaces/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/interfaces
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/utils.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/padding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/ec.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/dh.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/rsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    copying src/cryptography/hazmat/primitives/asymmetric/dsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    copying src/cryptography/hazmat/primitives/ciphers/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    copying src/cryptography/hazmat/primitives/ciphers/modes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    copying src/cryptography/hazmat/primitives/ciphers/algorithms.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    copying src/cryptography/hazmat/primitives/ciphers/base.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/x963kdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/scrypt.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/kbkdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/hkdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/concatkdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    copying src/cryptography/hazmat/primitives/kdf/pbkdf2.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/hmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/hashes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/backend.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    copying src/cryptography/hazmat/backends/commoncrypto/ciphers.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/utils.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/hmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/hashes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/x509.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/ec.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/dh.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/backend.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/rsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/decode_asn1.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/encode_asn1.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/dsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/cmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    copying src/cryptography/hazmat/backends/openssl/ciphers.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/commoncrypto
    copying src/cryptography/hazmat/bindings/commoncrypto/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/commoncrypto
    copying src/cryptography/hazmat/bindings/commoncrypto/binding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/commoncrypto
    creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
    copying src/cryptography/hazmat/bindings/openssl/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
    copying src/cryptography/hazmat/bindings/openssl/binding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
    copying src/cryptography/hazmat/bindings/openssl/_conditional.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
    running egg_info
    writing src/cryptography.egg-info/PKG-INFO
    writing dependency_links to src/cryptography.egg-info/dependency_links.txt
    writing entry points to src/cryptography.egg-info/entry_points.txt
    writing requirements to src/cryptography.egg-info/requires.txt
    writing top-level names to src/cryptography.egg-info/top_level.txt
    warning: manifest_maker: standard file '-c' not found
    
    reading manifest file 'src/cryptography.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    no previously-included directories found matching 'docs/_build'
    warning: no previously-included files matching '*' found under directory 'vectors'
    writing manifest file 'src/cryptography.egg-info/SOURCES.txt'
    running build_ext
    generating cffi module 'build/temp.linux-x86_64-3.6/_padding.c'
    creating build/temp.linux-x86_64-3.6
    generating cffi module 'build/temp.linux-x86_64-3.6/_constant_time.c'
    generating cffi module 'build/temp.linux-x86_64-3.6/_openssl.c'
    building '_openssl' extension
    creating build/temp.linux-x86_64-3.6/build
    creating build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6
    x86_64-unknown-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/local/p4clients/pkgbuild-nX_sd/workspace/build/LambdaLangPython36/LambdaLangPython36-x.8.1/AL2012/DEV.STD.PTHREAD/build/private/tmp/brazil-path/build.libfarm/include -I/local/p4clients/pkgbuild-nX_sd/workspace/build/LambdaLangPython36/LambdaLangPython36-x.8.1/AL2012/DEV.STD.PTHREAD/build/private/tmp/brazil-path/build.libfarm/include -fPIC -I/var/lang/include/python3.6m -c build/temp.linux-x86_64-3.6/_openssl.c -o build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6/_openssl.o
    unable to execute 'x86_64-unknown-linux-gnu-gcc': No such file or directory
    error: command 'x86_64-unknown-linux-gnu-gcc' failed with exit status 1
    
    ----------------------------------------
Command "/var/lang//bin/python3.6 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-77kq7rsi/cryptography/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-fcct1gg2-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-77kq7rsi/cryptography/

Provide event data via JSON file

It would be helpful to be able to provide the event data via a file, rather than inline on the command line.

Current workaround:

```sh
docker run -v "$PWD":/var/task lambci/lambda index.handler "$(jq -M -c . event-create.json)"
```
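If `jq` isn't available, plain command substitution with `cat` works the same way, since the event argument is just a JSON string (the file name `event-create.json` and the `nodejs12.x` runtime tag are assumptions for illustration):

```sh
# Pass the contents of a JSON file as the event argument.
# cat passes the file through verbatim; jq above was only compacting it.
docker run --rm -v "$PWD":/var/task \
  lambci/lambda:nodejs12.x index.handler "$(cat event-create.json)"
```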

Add dependencies for Python

How do you add dependencies for Python from pip?

For example, when packaging for Lambda I can run `pip install ... -t lambda` so that my imports are included in the package and all resolve. This doesn't seem to work with docker-lambda.
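One approach, sketched here under the assumption that a `lambci/lambda:build-python3.6` build-image variant is available with a compiler toolchain installed, is to install the dependencies into the mounted task directory from inside the container. This also compiles native wheels such as `cryptography` against the Lambda environment, avoiding the missing-gcc failure shown in the build log above:

```sh
# Install pip dependencies into the mounted task directory using the
# build image, which (unlike the runtime image) ships with gcc.
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.6 \
  pip install -r requirements.txt -t /var/task

# Then run the handler with the runtime image; packages installed
# alongside the handler are importable.
docker run --rm -v "$PWD":/var/task lambci/lambda:python3.6 \
  index.handler '{}'
```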

Allow invoking the Lambda function multiple times

In AWS Lambda, containers are not destroyed after each execution.

In my scenario (tests), I need to invoke a function multiple times. It would be much faster if the whole container, including the Node.js process, didn't have to be recreated before each invocation.

Additionally, this could catch potential production issues, as it is closer to the way AWS Lambda works.
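The API-server mode mentioned in the usage section addresses this. As a sketch (assuming the `DOCKER_LAMBDA_STAY_OPEN` variable and port 9001 used by these images), the container can be kept alive and invoked repeatedly over HTTP:

```sh
# Start the container as a long-running API server on port 9001.
docker run --rm -d -p 9001:9001 \
  -e DOCKER_LAMBDA_STAY_OPEN=1 \
  -v "$PWD":/var/task:ro,delegated \
  lambci/lambda:nodejs12.x index.handler

# Invoke as many times as needed; the container (and the Node.js
# process inside it) stays warm between calls.
curl -d '{"event":"args"}' \
  http://localhost:9001/2015-03-31/functions/function/invocations
```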

permissions

I currently have a Lambda in production that reads and writes from /tmp.
Running:

```sh
docker run -v "$PWD":/var/task lambci/lambda index.handler '{"event":"args"}'
```

throws `EACCES: permission denied, open 'tmp/sample.pdf'`.
Is there an environment variable or something else I can do to change read permissions when running from this Docker instance? Thank you.
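Note that the path in the error, `tmp/sample.pdf`, is relative, so it resolves under the read-only working directory `/var/task` rather than under `/tmp`. A quick way to confirm that the absolute `/tmp` path is writable inside the image (the `sh` entrypoint override here is an assumption about the image):

```sh
# Demonstrate that /tmp is writable inside the container by
# overriding the entrypoint with a shell.
docker run --rm --entrypoint sh lambci/lambda:nodejs12.x -c \
  'touch /tmp/sample.pdf && echo "wrote /tmp/sample.pdf"'
```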
