lambci / docker-lambda
Docker images and test runners that replicate the live AWS Lambda environment
License: MIT License
I created a Lambda function using the LocalStack Lambda service and triggered it using docker-lambda.
My Lambda is supposed to save objects into the LocalStack S3 service, which runs in another container, but I always get the error message below. I wonder if anyone could help me fix it.
err: UnknownEndpoint: Inaccessible host: `test.localstack'. This service may not be available in the `us-east-1' region.
I triggered the Lambda with:
docker run -d --link localstack:localstack --network mynetwork -v "/tmp/localstack/zipfile.283766df":/var/task "lambci/lambda:nodejs6.10" "test.handler"
My docker-compose file looks like the following:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.2.1
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - mynetwork
lambci:
  image: lambci/lambda:nodejs6.10
  networks:
    - mynetwork
localstack:
  image: localstack/localstack
  ports:
    - "4567-4582:4567-4582"
    - "8080:8080"
  environment:
    - DEFAULT_REGION=us-west-2
    - SERVICES=${SERVICES-lambda, kinesis, s3}
    - DEBUG=1
    - DATA_DIR=${DATA_DIR- }
    - LAMBDA_EXECUTOR=docker
    - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
    - DOCKER_HOST=unix:///var/run/docker.sock
  volumes:
    - "/tmp/localstack:/tmp/localstack"
    - "/var/run/docker.sock:/var/run/docker.sock"
  networks:
    - mynetwork
networks:
  mynetwork:
    driver: bridge
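An `UnknownEndpoint` error on a hostname like `test.localstack` usually means the AWS SDK is building a virtual-hosted bucket URL instead of talking to the LocalStack container directly. One common fix is to point the client at the linked container and force path-style addressing. A minimal sketch, assuming a Python handler with boto3 available (the `localstack` hostname and port 4572 are taken from the compose file above; they are assumptions, not project documentation):

```python
# Hedged sketch: build boto3 client settings that point at the LocalStack
# container instead of real AWS. "localstack" is the compose service name,
# and 4572 was LocalStack's default S3 port. Path-style addressing avoids
# virtual-host lookups such as "test.localstack".
def localstack_s3_config(host="localstack", port=4572):
    return {
        "endpoint_url": "http://{}:{}".format(host, port),
        "region_name": "us-east-1",
        "aws_access_key_id": "test",        # LocalStack accepts dummy creds
        "aws_secret_access_key": "test",
    }

# Usage inside the handler (if boto3 is available):
#   import boto3
#   from botocore.client import Config
#   s3 = boto3.client("s3", config=Config(s3={"addressing_style": "path"}),
#                     **localstack_s3_config())
```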
Unable to run the example Python Lambda function with any of:
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY='{}'
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY={}
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY '{}'
docker run -v "$PWD":/var/task lambci/lambda:python2.7 -e AWS_LAMBDA_EVENT_BODY {}
Fails with
START RequestId: b2d49b12-52d6-4ad0-8b21-93faf7c48dec Version: $LATEST
Unable to parse input as json: No JSON object could be decoded
Traceback (most recent call last):
File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
END RequestId: b2d49b12-52d6-4ad0-8b21-93faf7c48dec
REPORT RequestId: b2d49b12-52d6-4ad0-8b21-93faf7c48dec Duration: 0 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 14 MB
{"stackTrace": [["/usr/lib64/python2.7/json/__init__.py", 339, "loads", "return _default_decoder.decode(s)"], ["/usr/lib64/python2.7/json/decoder.py", 364, "decode", "obj, end = self.raw_decode(s, idx=_w(s, 0).end())"], ["/usr/lib64/python2.7/json/decoder.py", 382, "raw_decode", "raise ValueError(\"No JSON object could be decoded\")"]], "errorType": "ValueError", "errorMessage": "No JSON object could be decoded"}
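A likely cause (an observation about docker, not project documentation): everything after the image name is passed to the container as its command, so `-e AWS_LAMBDA_EVENT_BODY=...` placed there never becomes an environment variable; flags must come before the image name, e.g. `docker run -e AWS_LAMBDA_EVENT_BODY='{}' -v "$PWD":/var/task lambci/lambda:python2.7`. The runner then decodes an empty event body, which reproduces the error above; a quick sketch of that failure mode:

```python
import json

# json.loads on an empty event body raises the same "Unable to parse
# input as json" failure shown in the log above (a ValueError; on
# Python 3 it is json.JSONDecodeError, a ValueError subclass).
def parse_event(body):
    try:
        return json.loads(body)
    except ValueError as err:
        return "Unable to parse input as json: {}".format(err)
```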
I just found out about http://docs.aws.amazon.com/lambda/latest/dg/test-sam-local.html
It seems to come with its own Lambda Docker runtimes. Has anyone looked into it more deeply? Maybe it would make sense to replace the current runtimes with those in the future?
The Node.js 4.3.2 runtime was recently added to the offering. This is in addition to, not a replacement of, the 0.10 runtime.
https://aws.amazon.com/blogs/compute/node-js-4-3-2-runtime-now-available-on-lambda/
Hello, I want to create a Lambda function that includes some executables installed via npm, with:
npm install accesslint-cli
If I install this on my Mac, the node_modules folder will contain the modules, but the paths reference my machine (/Users/jaime/code...).
Can docker-lambda be used to generate this node_modules folder correctly for the Lambda function environment?
Thanks!
Hey,
when trying to install yum packages I get errors: https://pastebin.com/H0rgSD6u
It's a known issue on CentOS (CentOS/sig-cloud-instance-images#15 (comment)), but I can't install the package needed to make it work on Alpine.
Feature request: according to the AWS Lambda release of 2017-04-18, Python 3.6 support has finally arrived, so it would be beneficial to add support for this version in lambci as well. Cheers!
This project is awesome and could not have come at a better time.
You can also use it to compile native dependencies knowing that you're linking to the same library versions that exist on AWS Lambda and then deploy using the AWS CLI.
I'd love to replace https://github.com/18F/pa11y-lambda/blob/eecdd5d283de34e437847e21eed9314f27001aba/app/phantomjs_install.js with a pre-built PhantomJS binary that I know will Just Work in the Lambda environment.
According to the PhantomJS docs, you build it by obtaining the source and then running python build.py. I'm new to both building PhantomJS and Docker, so I was wondering if you could give a rough idea of how that workflow could fit into this Docker toolchain.
It would be helpful to provide the event data via a file.
Current workaround:
docker run -v "$PWD":/var/task lambci/lambda index.handler "$(jq -M -c . event-create.json)"
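The same event-compacting step can be done without jq. A hedged Python sketch that reads a JSON event file and returns it as the single-argument string the image expects (the file name comes from the example above):

```python
import json

def event_arg(path):
    """Read a JSON event file and return it as a compact single-line
    string, roughly equivalent to `jq -M -c . event-create.json`."""
    with open(path) as f:
        return json.dumps(json.load(f), separators=(",", ":"))
```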
I run headless Chrome with Puppeteer. It runs correctly on AWS Lambda, but the error below occurs on docker-lambda.
[0918/092739.344468:FATAL:nss_util.cc(627)] NSS_VersionCheck("3.26") failed. NSS >= 3.26 is required. Please upgrade to the latest NSS, and if you still get this error, contact your distribution maintainer.
In a node handler, you can return results with the passed in context.
exports.handler = function(event, context) {
context.succeed({'Hello':'from handler'});
return;
};
What is the equivalent in Python, so that I can evaluate the results coming back from a Python Lambda call using dockerLambda? I cannot call context.succeed() on the context passed to a Python handler.
var lambdaCallbackResult = dockerLambda({
dockerImage: "lambci/lambda:python2.7",
event: {"some":"data"}});
console.log(lambdaCallbackResult);
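In the Python runtime there is no context.succeed(); the handler's return value is what gets serialized and handed back to the caller, so the equivalent is simply to return the result. A minimal sketch (file layout is an assumption):

```python
# handler.py: the Python equivalent of Node's context.succeed({...})
# is returning the value from the handler; the runtime serializes the
# return value and hands it back, so dockerLambda would receive it as
# lambdaCallbackResult, just as in the Node example.
def handler(event, context):
    return {"Hello": "from handler"}
```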
I'm using the lambci/lambda:build-python3.6 image to build a Python C module. The prefix value in the python-3.6 pkg-config file is incorrect.
The current value is:
/local/p4clients/pkgbuild-cuFpW/workspace/build/LambdaLangPython36/LambdaLangPython36-x.21.4/AL2012/DEV.STD.PTHREAD/build
It should be /var/lang.
Adding the following to my Dockerfile corrects the issue:
sed -i '/^prefix=/c\prefix=/var/lang' /var/lang/lib/pkgconfig/python-3.6.pc
Here is the full file for reference: /var/lang/lib/pkgconfig/python-3.6.pc
# See: man pkg-config
prefix=/local/p4clients/pkgbuild-cuFpW/workspace/build/LambdaLangPython36/LambdaLangPython36-x.21.4/AL2012/DEV.STD.PTHREAD/build
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include
Name: Python
Description: Python library
Requires:
Version: 3.6
Libs.private: -lpthread -ldl -lutil -lrt
Libs: -L${libdir} -lpython3.6m
Cflags: -I${includedir}/python3.6m
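The sed one-liner above can also be expressed as a small function, which makes the fix easier to verify or reuse outside a Dockerfile; a sketch (the /var/lang prefix is the one from the issue):

```python
import re

def fix_pkgconfig_prefix(text, prefix="/var/lang"):
    """Rewrite the prefix= line of a pkg-config file, mirroring
    sed -i '/^prefix=/c\\prefix=/var/lang'. Other lines, including
    exec_prefix=${prefix}, are left untouched."""
    return re.sub(r"^prefix=.*$", "prefix=" + prefix, text, count=1, flags=re.M)

# Usage against the real file:
#   pc = "/var/lang/lib/pkgconfig/python-3.6.pc"
#   with open(pc) as f:
#       fixed = fix_pkgconfig_prefix(f.read())
#   with open(pc, "w") as f:
#       f.write(fixed)
```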
I am fairly new to Docker, hence the noob question. I have installed Docker and am able to execute a basic Lambda; it runs and exits promptly as expected.
How would I bundle a bunch of code (C, JS, C++) into this Docker image so it can build and give me zip artifacts that I can use to deploy to my real Lambdas? An example would be greatly appreciated.
I tried to find out the gcc version in this image, and it reports that it has neither gcc nor zip. How would I go about installing them, or am I using the wrong image?
How I am running it (I have an index.js in the pwd):
sudo docker run -v "$PWD":/var/task lambci/lambda:nodejs6.10 (probably I need to run something other than :nodejs6.10)
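One common workflow (an assumption, not project documentation): use lambci/lambda:build, which carries the build toolchain, mount your source into /var/task, run your build there, then zip the results. If zip isn't installed, the archiving step can be plain Python; a sketch:

```python
import os
import zipfile

def make_lambda_zip(src_dir, zip_path):
    """Zip the contents of src_dir (not the directory itself) so the
    handler file sits at the archive root, as Lambda expects."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, src_dir))
```

Write the archive somewhere outside src_dir (e.g. /tmp/lambda.zip) so the zip doesn't try to include itself, then copy it out or deploy it with the AWS CLI.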
[:~] $ sudo docker run --rm -it lambci/lambda:build-python3.6 aws
[sudo] password for dschep:
Traceback (most recent call last):
File "/usr/bin/aws", line 19, in <module>
import awscli.clidriver
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 32, in <module>
from awscli.help import ProviderHelpCommand
File "/usr/lib/python2.7/dist-packages/awscli/help.py", line 20, in <module>
from docutils.core import publish_string
File "/var/runtime/docutils/core.py", line 246
print('\n::: Runtime settings:', file=self._stderr)
^
SyntaxError: invalid syntax
[:~] $ sudo docker run --rm -it lambci/lambda:build-python2.7 aws
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: too few arguments
My workaround for now is to remove the existing entrypoint at /usr/bin/aws and reinstall with pip3:
[:~] $ sudo docker run --rm -it lambci/lambda:build-python3.6 bash -c "rm /usr/bin/aws && pip3 install awscli > /dev/null && aws"
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: the following arguments are required: command
repro:
docker run -ti lambci/lambda:build-python3.6 bash
bash-4.2# pip3 install nltk
.....
Successfully installed nltk-3.2.2
bash-4.2# python -c "import nltk"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/var/lang/lib/python3.6/site-packages/nltk/__init__.py", line 137, in <module>
    from nltk.stem import *
  File "/var/lang/lib/python3.6/site-packages/nltk/stem/__init__.py", line 29, in <module>
    from nltk.stem.snowball import SnowballStemmer
  File "/var/lang/lib/python3.6/site-packages/nltk/stem/snowball.py", line 24, in <module>
    from nltk.corpus import stopwords
  File "/var/lang/lib/python3.6/site-packages/nltk/corpus/__init__.py", line 66, in <module>
    from nltk.corpus.reader import *
  File "/var/lang/lib/python3.6/site-packages/nltk/corpus/reader/__init__.py", line 105, in <module>
    from nltk.corpus.reader.panlex_lite import *
  File "/var/lang/lib/python3.6/site-packages/nltk/corpus/reader/panlex_lite.py", line 15, in <module>
    import sqlite3
  File "/var/lang/lib/python3.6/sqlite3/__init__.py", line 23, in <module>
    from sqlite3.dbapi2 import *
  File "/var/lang/lib/python3.6/sqlite3/dbapi2.py", line 27, in <module>
    from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'
I found the Python 2.7 version of the module in the container, but nothing for Python 3.6 or 3.4:
/usr/lib64/python2.7/lib-dynload/_sqlite3.so
I installed sqlite-devel (yum install sqlite-devel) before rebuilding Python, but still no luck.
I am out of ideas now.
Repro:
docker run lambci/lambda:build-python3.6 pip3 install cryptography
Fails with:
unable to execute 'x86_64-unknown-linux-gnu-gcc': No such file or directory
Full output:
Collecting cryptography
Downloading cryptography-1.8.1.tar.gz (423kB)
Collecting idna>=2.1 (from cryptography)
Downloading idna-2.5-py2.py3-none-any.whl (55kB)
Collecting asn1crypto>=0.21.0 (from cryptography)
Downloading asn1crypto-0.22.0-py2.py3-none-any.whl (97kB)
Collecting packaging (from cryptography)
Downloading packaging-16.8-py2.py3-none-any.whl
Requirement already satisfied: six>=1.4.1 in /var/runtime (from cryptography)
Requirement already satisfied: setuptools>=11.3 in /var/lang/lib/python3.6/site-packages (from cryptography)
Collecting cffi>=1.4.1 (from cryptography)
Downloading cffi-1.10.0-cp36-cp36m-manylinux1_x86_64.whl (406kB)
Collecting pyparsing (from packaging->cryptography)
Downloading pyparsing-2.2.0-py2.py3-none-any.whl (56kB)
Collecting pycparser (from cffi>=1.4.1->cryptography)
Downloading pycparser-2.17.tar.gz (231kB)
Installing collected packages: idna, asn1crypto, pyparsing, packaging, pycparser, cffi, cryptography
Running setup.py install for pycparser: started
Running setup.py install for pycparser: finished with status 'done'
Running setup.py install for cryptography: started
Running setup.py install for cryptography: finished with status 'error'
Complete output from command /var/lang//bin/python3.6 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-77kq7rsi/cryptography/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-fcct1gg2-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/cryptography
copying src/cryptography/utils.py -> build/lib.linux-x86_64-3.6/cryptography
copying src/cryptography/__init__.py -> build/lib.linux-x86_64-3.6/cryptography
copying src/cryptography/fernet.py -> build/lib.linux-x86_64-3.6/cryptography
copying src/cryptography/__about__.py -> build/lib.linux-x86_64-3.6/cryptography
copying src/cryptography/exceptions.py -> build/lib.linux-x86_64-3.6/cryptography
creating build/lib.linux-x86_64-3.6/cryptography/x509
copying src/cryptography/x509/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/x509
copying src/cryptography/x509/extensions.py -> build/lib.linux-x86_64-3.6/cryptography/x509
copying src/cryptography/x509/general_name.py -> build/lib.linux-x86_64-3.6/cryptography/x509
copying src/cryptography/x509/oid.py -> build/lib.linux-x86_64-3.6/cryptography/x509
copying src/cryptography/x509/name.py -> build/lib.linux-x86_64-3.6/cryptography/x509
copying src/cryptography/x509/base.py -> build/lib.linux-x86_64-3.6/cryptography/x509
creating build/lib.linux-x86_64-3.6/cryptography/hazmat
copying src/cryptography/hazmat/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
copying src/cryptography/hazmat/primitives/padding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
copying src/cryptography/hazmat/primitives/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
copying src/cryptography/hazmat/primitives/hmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
copying src/cryptography/hazmat/primitives/hashes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
copying src/cryptography/hazmat/primitives/keywrap.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
copying src/cryptography/hazmat/primitives/serialization.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
copying src/cryptography/hazmat/primitives/constant_time.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
copying src/cryptography/hazmat/primitives/cmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
copying src/cryptography/hazmat/backends/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
copying src/cryptography/hazmat/backends/interfaces.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
copying src/cryptography/hazmat/backends/multibackend.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings
copying src/cryptography/hazmat/bindings/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
copying src/cryptography/hazmat/primitives/twofactor/utils.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
copying src/cryptography/hazmat/primitives/twofactor/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
copying src/cryptography/hazmat/primitives/twofactor/totp.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
copying src/cryptography/hazmat/primitives/twofactor/hotp.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/twofactor
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/interfaces
copying src/cryptography/hazmat/primitives/interfaces/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/interfaces
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
copying src/cryptography/hazmat/primitives/asymmetric/utils.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
copying src/cryptography/hazmat/primitives/asymmetric/padding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
copying src/cryptography/hazmat/primitives/asymmetric/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
copying src/cryptography/hazmat/primitives/asymmetric/ec.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
copying src/cryptography/hazmat/primitives/asymmetric/dh.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
copying src/cryptography/hazmat/primitives/asymmetric/rsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
copying src/cryptography/hazmat/primitives/asymmetric/dsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/asymmetric
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
copying src/cryptography/hazmat/primitives/ciphers/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
copying src/cryptography/hazmat/primitives/ciphers/modes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
copying src/cryptography/hazmat/primitives/ciphers/algorithms.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
copying src/cryptography/hazmat/primitives/ciphers/base.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/ciphers
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
copying src/cryptography/hazmat/primitives/kdf/x963kdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
copying src/cryptography/hazmat/primitives/kdf/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
copying src/cryptography/hazmat/primitives/kdf/scrypt.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
copying src/cryptography/hazmat/primitives/kdf/kbkdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
copying src/cryptography/hazmat/primitives/kdf/hkdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
copying src/cryptography/hazmat/primitives/kdf/concatkdf.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
copying src/cryptography/hazmat/primitives/kdf/pbkdf2.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/primitives/kdf
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
copying src/cryptography/hazmat/backends/commoncrypto/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
copying src/cryptography/hazmat/backends/commoncrypto/hmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
copying src/cryptography/hazmat/backends/commoncrypto/hashes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
copying src/cryptography/hazmat/backends/commoncrypto/backend.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
copying src/cryptography/hazmat/backends/commoncrypto/ciphers.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/commoncrypto
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/utils.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/hmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/hashes.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/x509.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/ec.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/dh.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/backend.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/rsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/decode_asn1.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/encode_asn1.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/dsa.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/cmac.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
copying src/cryptography/hazmat/backends/openssl/ciphers.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/backends/openssl
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/commoncrypto
copying src/cryptography/hazmat/bindings/commoncrypto/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/commoncrypto
copying src/cryptography/hazmat/bindings/commoncrypto/binding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/commoncrypto
creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
copying src/cryptography/hazmat/bindings/openssl/__init__.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
copying src/cryptography/hazmat/bindings/openssl/binding.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
copying src/cryptography/hazmat/bindings/openssl/_conditional.py -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/openssl
running egg_info
writing src/cryptography.egg-info/PKG-INFO
writing dependency_links to src/cryptography.egg-info/dependency_links.txt
writing entry points to src/cryptography.egg-info/entry_points.txt
writing requirements to src/cryptography.egg-info/requires.txt
writing top-level names to src/cryptography.egg-info/top_level.txt
warning: manifest_maker: standard file '-c' not found
reading manifest file 'src/cryptography.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
no previously-included directories found matching 'docs/_build'
warning: no previously-included files matching '*' found under directory 'vectors'
writing manifest file 'src/cryptography.egg-info/SOURCES.txt'
running build_ext
generating cffi module 'build/temp.linux-x86_64-3.6/_padding.c'
creating build/temp.linux-x86_64-3.6
generating cffi module 'build/temp.linux-x86_64-3.6/_constant_time.c'
generating cffi module 'build/temp.linux-x86_64-3.6/_openssl.c'
building '_openssl' extension
creating build/temp.linux-x86_64-3.6/build
creating build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6
x86_64-unknown-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/local/p4clients/pkgbuild-nX_sd/workspace/build/LambdaLangPython36/LambdaLangPython36-x.8.1/AL2012/DEV.STD.PTHREAD/build/private/tmp/brazil-path/build.libfarm/include -I/local/p4clients/pkgbuild-nX_sd/workspace/build/LambdaLangPython36/LambdaLangPython36-x.8.1/AL2012/DEV.STD.PTHREAD/build/private/tmp/brazil-path/build.libfarm/include -fPIC -I/var/lang/include/python3.6m -c build/temp.linux-x86_64-3.6/_openssl.c -o build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6/_openssl.o
unable to execute 'x86_64-unknown-linux-gnu-gcc': No such file or directory
error: command 'x86_64-unknown-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Command "/var/lang//bin/python3.6 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-77kq7rsi/cryptography/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-fcct1gg2-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-77kq7rsi/cryptography/
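The missing x86_64-unknown-linux-gnu-gcc is the compiler name baked into this Python at build time on Amazon's internal hosts; distutils reads it from the interpreter's build-time config when compiling extensions. One workaround (an assumption based on how distutils resolves the compiler, not project documentation) is to override it, e.g. `docker run -e CC=gcc lambci/lambda:build-python3.6 pip3 install cryptography`. A sketch showing where the bad name comes from:

```python
import os
import sysconfig

# sysconfig records the compiler Python itself was built with; in the
# image this is the nonexistent "x86_64-unknown-linux-gnu-gcc".
recorded_cc = sysconfig.get_config_var("CC")
print("recorded CC:", recorded_cc)

# distutils honors the CC environment variable when building extension
# modules, so setting CC=gcc before pip install sidesteps the recorded
# compiler entirely.
effective_cc = os.environ.get("CC", "gcc")
print("effective CC:", effective_cc)
```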
I'm trying to set up a basic boilerplate that simplifies getting started with developing functions locally and deploying them to AWS, using the excellent work put in here. The idea is to use Docker Compose to start the container, but wrap the entry point in a nodemon call so that the function continually re-runs when code changes. Then, when a user is done developing, they can sh into the container and run zip/aws commands to deploy, or those commands could be part of npm scripts.
I'm facing an issue with differences between the two images, lambci/lambda and lambci/lambda:build. Using the first image I was able to get this proof of concept working:
-dockerfile-
FROM lambci/lambda
ENV HOME=/home/sbx_user1051
USER root
# create home directory for the user to make sure some node packages work
RUN mkdir -p /home/sbx_user1051 && chown -R sbx_user1051:495 /home/sbx_user1051
ADD . .
RUN npm install
USER sbx_user1051
# nodemon is defined as a devDependency in package.json
ENTRYPOINT ./node_modules/.bin/nodemon --exec "node --max-old-space-size=1229 --max-semi-space-size=76 --max-executable-size=153 --expose-gc /var/runtime/node_modules/awslambda/index.js $HANDLER $EVENT"
-docker-compose-
version: '2'
services:
app:
build: "."
environment:
HANDLER: "index.handler"
EVENT: "'{\"email\": \"[email protected]\", \"id\": \"30\"}'"
volumes:
- ".:/var/task/"
- "/var/task/node_modules"
The issue is that if I connect to the container using docker exec, none of the extra installed packages (aws, zip) are available in /usr/bin. If I use lambci/lambda:build then those packages are available, but the Dockerfile is really complex and is basically just a clone of lambci/lambda; I would have to fork the repo to get it to work.
I can't really tell from the repo how the base image for lambci/lambda:build is generated, so I'm not sure what the difference between these two images is; I'm also not an adequate Linux admin (teehee). Any guidance on how to pull this off correctly would be appreciated, and if any work comes out of this on my end that you want, I'd certainly PR it back into this repo on your terms.
Here's the second Dockerfile in case you wanted to see it (it uses the same compose file):
# basically a copy of lambci/lambda
FROM lambci/lambda:build
ENV PATH=$PATH:/usr/local/lib64/node-v4.3.x/bin:/usr/local/bin:/usr/bin/:/bin \
LAMBDA_TASK_ROOT=/var/task \
LAMBDA_RUNTIME_DIR=/var/runtime \
LANG=en_US.UTF-8
ADD awslambda-mock.js /var/runtime/node_modules/awslambda/build/Release/awslambda.js
# Not sure why permissions don't work just by modifying the owner
RUN rm -rf /tmp && mkdir /tmp && chown -R sbx_user1051:495 /tmp && chmod 700 /tmp
# create home directory for the user to make sure some node packages work
RUN mkdir -p /home/sbx_user1051 && chown -R sbx_user1051:495 /home/sbx_user1051
WORKDIR /var/task
# install nodemon globally
RUN npm install -g nodemon
ADD . .
RUN npm install
USER sbx_user1051
ENTRYPOINT nodemon --exec "node --max-old-space-size=1229 --max-semi-space-size=76 --max-executable-size=153 --expose-gc /var/runtime/node_modules/awslambda/index.js $HANDLER $EVENT"
Trying to run a basic hello world python lambda:
docker run -v "$PWD":/var/task lambci/lambda:python2.7
yields:
recv_start
Traceback (most recent call last):
File "/var/runtime/awslambda/bootstrap.py", line 364, in <module>
main()
File "/var/runtime/awslambda/bootstrap.py", line 344, in main
(invokeid, mode, handler, suppress_init, credentials) = wait_for_start(int(ctrl_sock))
File "/var/runtime/awslambda/bootstrap.py", line 135, in wait_for_start
(invokeid, mode, handler, suppress_init, credentials) = lambda_runtime.recv_start(ctrl_sock)
File "/var/runtime/awslambda/runtime.py", line 13, in recv_start
return (invokeid, mode, handler, suppress_init, credentials)
NameError: global name 'invokeid' is not defined
I currently have a lambda in production that reads and writes from /tmp.
Running docker run -v "$PWD":/var/task lambci/lambda index.handler '{"event":"args"}' throws:
EACCES: permission denied, open 'tmp/sample.pdf'
Is there an environment variable or something else I can do to change read permissions when running from this docker instance? Thank you.
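One thing worth checking before touching permissions (an observation, not project documentation): the path in the error, 'tmp/sample.pdf', is relative, so it resolves under the working directory /var/task, which is the read-only mounted task volume. Using the absolute /tmp path should hit Lambda's writable scratch space instead; a sketch of the distinction:

```python
import os

# Relative "tmp/..." resolves against the working directory (/var/task
# inside the container, the read-only task mount). Absolute "/tmp/..."
# is the writable scratch space Lambda provides.
def scratch_path(filename):
    return os.path.join("/tmp", filename)
```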
Hi,
I try to use psycopg2 with python3.6 but I am still getting an error in Lambda.
The issue might be that postgresql-devel is not installed in the Docker image.
But when I try to install it I still get an error (regardless of whether I use yum -y update or not):
FROM lambci/lambda:build-python3.6
RUN yum -y update \
&& yum install -y yum-plugin-ovl \
&& yum install -y postgresql-devel
CMD ["bash"]
Error is (with update):
E: Failed to install umount
mkinitrd failed
warning: %posttrans(kernel-4.9.43-17.39.amzn1.x86_64) scriptlet failed, exit status 1
Non-fatal POSTTRANS scriptlet failure in rpm package kernel-4.9.43-17.39.amzn1.x86_64
or without update:
Rpmdb checksum is invalid: dCDPT(pkg checksums): postgresql92-libs.x86_64 0:9.2.22-1.61.amzn1 - u
The build-python3.6 image seems to be missing headers for Python 3:
$ sudo docker run lambci/lambda:build-python3.6 find / -iname '*python*.h'
/usr/include/python2.7/pythonrun.h
/usr/include/python2.7/Python-ast.h
/usr/include/python2.7/Python.h
Yum only shows packages for Python 3.4, not 3.6:
$ sudo docker run lambci/lambda:build-python3.6 yum search python3
============================= N/S matched: python3 =============================
mod24_wsgi-python34.x86_64 : A WSGI interface for Python web applications in
: Apache
postgresql92-plpython27.x86_64 : The Python3 procedural language for PostgreSQL
python34.x86_64 : Version 3.4 of the Python programming language aka Python 3000
python34-devel.x86_64 : Libraries and header files needed for Python 3.4
: development
python34-docs.noarch : Documentation for the Python programming language
python34-libs.i686 : Python 3.4 runtime libraries
python34-libs.x86_64 : Python 3.4 runtime libraries
python34-pip.noarch : A tool for installing and managing Python packages
python34-setuptools.noarch : Easily build and distribute Python packages
python34-test.x86_64 : The test modules from the main python 3.4 package
python34-tools.x86_64 : A collection of tools included with Python 3.4
python34-virtualenv.noarch : Tool to create isolated Python environments
Name and summary matches only, use "search all" for everything.
I use C# for my Lambda and want to test it locally. AWS points here as a way to test Lambdas locally with this Docker image. Would you please add C#/.NET support too?
I am using GitLab CI to test my code and have been able to make a container that uses your docker image. How do I invoke my functions from a docker image? I haven't quite been able to figure that out.
This is what I have so far:
image: lambci/lambda:build
variables:
  AWS_DEFAULT_REGION: eu-west-1
  AWS_ACCESS_KEY_ID: YOUR_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: YOUR_SECRET_ACCESS_KEY
cache:
  paths:
    - node_modules/
stages:
  - build
build_step:
  stage: build
  only:
    - /^feature\/.*$/
    - develop
    - master
  script:
    - npm install
    - npm run lint
    - docker run -v "$PWD":/var/task lambci/lambda
When I run this I just get issues finding the Docker daemon ('Cannot connect to the Docker daemon. Is the docker daemon running on this host?'). I've also tried using the docker-lambda npm package and that gives me similar issues. Is it something I'm doing, or a problem with GitLab CI?
Thanks!
One important artifact that is missing from the Lambda Docker build image is node-gyp.
when running create_build, yum fails on access to the amazonaws repos with "The requested URL returned error: 403 Forbidden".
After lots of reading, I think these repos are off-limits for anyone NOT running in EC2.
Anyone got the build to work on a local machine?
Note: this question came from a total noob to docker - you don't NEED to build it to use it.... if happy with the content you can just run the image, and docker will download a pre-built one.
i.e. just run
docker run -it lambci/lambda:build bash
and within a couple of minutes you will have a terminal session with gcc installed.
I am getting TypeError: awslambda.reportException is not a function when returning a non-null error through the callback. I suspect the nodejs shim is missing the reportException() function. I can repro on both nodejs4.3 and nodejs6.10.
index.js file
exports.handler = function(event, context, callback) {
  return callback('error')
}
docker output
docker run -v "$PWD":/var/task lambci/lambda:nodejs6.10
START RequestId: fcf8ab72-c8b0-133b-2fc4-225a8173b1fe Version: $LATEST
2017-07-01T06:29:11.741Z fcf8ab72-c8b0-133b-2fc4-225a8173b1fe {"errorMessage":"error"}
2017-07-01T06:29:11.745Z fcf8ab72-c8b0-133b-2fc4-225a8173b1fe TypeError: awslambda.reportException is not a function
I'd love to be able to cache compiled python wheels so we don't have to hit the network/recompile unnecessarily. My current command is as follows:
mkdir -m 777 -p ../.cache
docker run --rm \
-v "$PWD/../.cache":/tmp/.cache \
-v "$PWD":/var/task \
lambci/lambda:build-python2.7 pip install -r requirements.txt --cache-dir /tmp/.cache -vv -t env
Unfortunately, I get the following error:
The directory '/tmp/.cache/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/tmp/.cache' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Any idea as to what I can do here to mount a cache directory from the OS properly within the docker container?
I've created a Dockerfile built from lambci/lambda-base so I can add some custom commands to speed up developer workflow.
We'd like to install git on the image, but when I run:
yum install git
I get:
http://packages.us-east-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.us-west-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.us-west-2.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.eu-west-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.eu-central-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.ap-southeast-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.ap-northeast-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.sa-east-1.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
http://packages.ap-southeast-2.amazonaws.com/2015.09/main/201509419456/x86_64/repodata/repomd.xml?instance_id=fail&region=timeout: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
One of the configured repositories failed (amzn-main-Base),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Disable the repository, so yum won't use it by default. Yum will then
just ignore the repository until you permanently enable it again or use
--enablerepo for temporary usage:
yum-config-manager --disable amzn-main
4. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=amzn-main.skip_if_unavailable=true
failure: repodata/repomd.xml from amzn-main: [Errno 256] No more mirrors to try.
yum-config-manager is not available.
Thanks!
Suppose my function is:
export function example (input, context, callback) {
  callback(null, { result: 'success' })
}
And I'm invoking it via:
let cmd = `docker run --rm -v "$PWD/build/${app}":/var/task lambci/lambda handler.example '{}'`
exec(cmd, (err, stdout, stderr) => {
  if (stderr && stderr !== 'null') console.log(`λ: (err)\n${stderr}`)
  if (stdout && stdout !== 'null') console.log(`λ: (out)\n${stdout}`)
  callback(err)
})
How can I get the value returned by the handler, { result: 'success' }?
It's quite a weird issue; anyway, I can't figure out why it behaves like this.
I've developed a small lambda function, which I would like to be able to test locally first.
The main goal of the lambda function is to fetch and handle messages from AWS SQS.
When I run that function with the help of the lambci/lambda docker image, nothing happens; it waits for 10+ seconds and then stops :(
$ docker run -v "$PWD/dist":/var/task lambci/lambda
START RequestId: 915db92a-f5db-11ca-e67e-d25072a4290a Version: $LATEST
END RequestId: 915db92a-f5db-11ca-e67e-d25072a4290a
REPORT RequestId: 915db92a-f5db-11ca-e67e-d25072a4290a Duration: 11232.60 ms Billed Duration: 11300 ms Memory Size: 1536 MB Max Memory Used: 37 MB
null
I'm using the dotenv package to load some environment-specific data so I can connect to a specific queue etc., and it looks like the .env file is loaded fine (because I can see almost all variables from it), but two key variables somehow can't be overwritten, and I still see your image's default values:
AWS_ACCESS_KEY_ID: 'SOME_ACCESS_KEY_ID',
AWS_SECRET_ACCESS_KEY: 'SOME_SECRET_ACCESS_KEY',
Why is that?
P.S. It looks like because of that my function is not able to connect to AWS SQS.
P.P.S. Meanwhile, when I'm using this package, everything works well.
In AWS Lambda, containers are not destroyed after each execution.
In my scenario (tests), I need to invoke a function multiple times. It would be much faster if you didn't need to recreate the whole container, including the Node.js process, before each invocation.
Additionally, this would catch potential production issues, as it is closer to the way AWS Lambda works.
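The behaviour being asked for can be sketched in plain Node: when the process (like a warm Lambda container) survives between invocations, module-level state carries over, which is both the speed win and the source of production-only bugs a fresh container per call would hide.

```javascript
// Sketch of warm-container reuse: module-level state survives between
// invocations because the Node.js process is not restarted.
let invocations = 0 // like a warm container's global state

const handler = (event, context, callback) => {
  invocations += 1
  callback(null, { count: invocations })
}

// Two invocations in the same process, no container recreation:
handler({}, {}, (err, res) => console.log(res.count)) // 1
handler({}, {}, (err, res) => console.log(res.count)) // 2
```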
I have a local kinesis stream running in docker for testing purposes. I want to make a lambda function that is called when events come through that stream.
From looking at your code here, it seems I could pretty easily use your docker image to run a little harness that hooks up to Kinesis and then forwards messages into my lambda function using your library. Does that sound right?
Do you know if there is already a tool to help with this? I don't want to re-invent the wheel here if I can avoid it.
When the lambda function being tested contains logging, the JS runner incorrectly attempts to JSON.parse the entire stdout.
I figured that if I could run certain commands inside a docker container based on docker-lambda, I must also be able to run these commands on lambda itself. This does not seem to be the case for the following:
This works (docker):
docker run -v "$PWD":/var/task -it lambci/lambda:build bash
easy_install pip
pip install -U certbot
This does not work (lambda):
./lambdash easy_install pip && pip install -U certbot
This results in /bin/sh: easy_install: command not found, while it works just fine with docker-lambda.
I'm trying to use this docker container to test out a Zappa + Flask deploy and having some issues. I followed the instructions on the README, and I can't get the Lambda function to run properly.
Is it not importing my Lambda code properly? What is supposed to happen?
docker run -v $PWD:/var/task lambci/lambda:python3.6
START RequestId: de56416b-9dfb-4a9f-b5e6-687af6593b61 Version: $LATEST
Unable to import module 'lambda_function': No module named 'flask_restless'
END RequestId: de56416b-9dfb-4a9f-b5e6-687af6593b61
REPORT RequestId: de56416b-9dfb-4a9f-b5e6-687af6593b61 Duration: 7 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 19 MB
{"errorMessage": "Unable to import module 'lambda_function'"}
Here is the docker-compose.yml file I am using:
version: '3'
services:
  lambda:
    image: lambci/lambda:python3.6
    volumes:
      - $PWD:/var/task
    environment:
      - AWS_LAMBDA_FUNCTION_NAME=application
  mariadb:
    image: mariadb:latest
    volumes:
      - ./schema.sql:/docker-entrypoint-initdb.d/load.sql
    environment:
      - MYSQL_ROOT_PASSWORD=''
      - MYSQL_DATABASE=''
      - MYSQL_USER=''
      - MYSQL_PASSWORD=''
Hi, I was very impressed with your work here, so helpful! I was wondering how I would go about adding pip packages to these docker containers; I can't seem to find documentation on it anywhere. I am using this package as part of the serverless-plugin-simulate plugin. I was also wondering what I would have to do to make this play well with the serverless-python-requirements plugin. Thanks!
How do you add dependencies for Python from pip? For Lambda itself, I can do pip install ... -t lambda and my imports are included in the package and all resolve. This doesn't seem to work with docker-lambda.
I'm using the Go AWS SDK. It allows me to customize the endpoint for Lambda, e.g. localhost:8000. Will the docker container given here respond to aws lambda invoke commands as expected if I do this?
I am using the command:
docker run -v "$PWD":/var/task lambci/lambda:build-nodejs4.3 npm install
but getting the error:
Host key verification failed.
The problem lies in attempting to access some of my dependencies through SSH'ing into GitHub.
Where should I put my credentials to make this work?
I'm using the following command to run a lambda function as described in the docs.
docker run -v "$PWD":/var/task lambci/lambda index.myHandler '{"some": "event"}'
By default, it uses a max memory of 1536MB. I tried modifying the max memory using the following:
docker run -v "$PWD":/var/task lambci/lambda index.myHandler '{"some": "event"}' ['-m', '512M']
The output still shows a max memory of 1536MB. I'd appreciate any help with changing the max memory.
As per title.
First of all thanks for this project. Pretty useful to have :)
Would it be possible to remove the default AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars from the images, which are for example defined here?
I'd like the lambda I'm running to check whether these are set and error if they aren't, but currently I can't do so because these defaults are there.
AWS has published a Docker image for Amazon Linux. It's available via ECR; I don't (yet?) see it on hub.docker.com.
I assume this would be a closer base image than whatever is currently used.
Running the command docker run -v "$PWD":/var/task lambci/lambda:python3.6 with the file in examples/python/lambda_function.py, I got this error:
$ docker run -v "$PWD":/var/task lambci/lambda:python3.6
START RequestId: 5218ac6f-6b85-475c-a8e1-0574ab7f1509 Version: $LATEST
Traceback (most recent call last):
  File "/var/runtime/awslambda/bootstrap.py", line 514, in <module>
    main()
  File "/var/runtime/awslambda/bootstrap.py", line 503, in main
    init_handler, request_handler = _get_handlers(handler, mode)
  File "/var/runtime/awslambda/bootstrap.py", line 29, in _get_handlers
    lambda_runtime.report_user_init_start()
AttributeError: module 'runtime' has no attribute 'report_user_init_start'
Do you have any idea?
Should it be possible to support Java-based lambdas with this?
I was doing some prototyping with AWS Lambda, and successfully ran the code within the docker container. However, when I wanted to extend the lambda functionality to connect to another docker container for Dynamodb, it doesn't seem to work.
This is what I've done:
docker run -d --name dynamodb deangiberson/aws-dynamodb-local
docker run --link dynamodb:dynamodb -v "$PWD":/var/task lambci/lambda index.handler
But when it attempts to connect, this is what it says:
{"errorMessage":"connect ECONNREFUSED 127.0.0.1:8000","errorType":"NetworkingError","stackTrace":["Object.exports._errnoException (util.js:870:11)","exports._exceptionWithHostPort (util.js:893:20)","TCPConnectWrap.afterConnect [as oncomplete] (net.js:1062:14)"]}
I'm running on Docker 1.13.1 (Docker for Mac)
Anyone else had this issue?
Thanks!
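The ECONNREFUSED on 127.0.0.1:8000 suggests the SDK is pointed at localhost, which inside the lambci/lambda container refers to the Lambda container itself, not the linked DynamoDB one. With `--link dynamodb:dynamodb`, the DynamoDB container is reachable by its alias instead. A hypothetical helper (the `DYNAMODB_HOST` env var name is an assumption, not part of either image):

```javascript
// Hypothetical helper: resolve the DynamoDB endpoint from inside the
// lambci/lambda container. Falls back to the `--link` alias "dynamodb";
// DYNAMODB_HOST is an assumed override, not a built-in variable.
function dynamoEndpoint(env) {
  const host = env.DYNAMODB_HOST || 'dynamodb'
  return 'http://' + host + ':8000'
}
```

The result could then be passed to the SDK, e.g. `new AWS.DynamoDB({ endpoint: dynamoEndpoint(process.env) })`.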
Hello,
I'm trying to find a solution to run a Lambda function locally based on a "Dynamo Stream" trigger. I've looked at the SAM Local work, but that only allows one-off executions of a function (via the invoke command).
This docker environment looks ideal, but I don't think there is scope here to define a trigger. Am I right? Is there a way of achieving this locally that anyone can think of?
Hi,
Can I mount my $HOME/.aws into Docker container to share my AWS config/credentials and have my code like this:
console.log('starting lambda')
var AWS = require("aws-sdk");
AWS.config.update({region: 'us-west-2'});
if (process.env.IN_DOCKER_LAMBDA) {
  var credentials = new AWS.SharedIniFileCredentials({profile: 'myprofile'});
  AWS.config.credentials = credentials;
  AWS.config.update({region: 'us-west-2'});
}
In this case docker-lambda will just load the credentials from the shared ini file, while in real AWS Lambda the function will retrieve credentials from the instance metadata, and I don't have to hardcode my credentials in the code.
Thoughts?
The Lambda environment unfortunately does not have a tmpfs mounted on /dev/shm, but it is provided by this image.
I can manually fix this by running the container with --privileged, reinstalling util-linux (because /bin/mount is missing), and unmounting /dev/shm.
Python's multiprocessing module uses /dev/shm extensively and does not work properly in AWS Lambda; this is not fully replicated in this docker image.
See the issue on the AWS forums.
However, this still runs on docker-lambda, but not on AWS Lambda:
from multiprocessing import Pool

def f(x):
    return x*x

p = Pool(5)
print(p.map(f, [1, 2, 3]))
[Errno 38] Function not implemented: OSError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 9, in lambda_handler
    p = Pool(5)
  File "/usr/lib64/python2.7/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 138, in __init__
    self._setup_queues()
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 234, in _setup_queues
    self._inqueue = SimpleQueue()
  File "/usr/lib64/python2.7/multiprocessing/queues.py", line 354, in __init__
    self._rlock = Lock()
  File "/usr/lib64/python2.7/multiprocessing/synchronize.py", line 147, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1)
  File "/usr/lib64/python2.7/multiprocessing/synchronize.py", line 75, in __init__
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 38] Function not implemented
Now that Lambda support Environment Variables, it would be good to be able to pass those into the container. For example:
var dockerLambda = require('docker-lambda')

// Spawns synchronously, uses current dir – will throw if it fails
var lambdaCallbackResult = dockerLambda({
  event: {some: 'event'},
  userEnvVars: { // or a different name?
    MY_ENV_VAR: 'foo-bar'
  }
})
Happy to submit a PR if you'd like one.
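One way the proposed option could be implemented internally is to map the object onto `docker run -e` flags. A sketch (`userEnvVars` is the issue author's suggested name, and `envFlags` is a hypothetical helper, not part of the docker-lambda API):

```javascript
// Hypothetical helper: turn a userEnvVars object into `docker run` -e flags.
function envFlags(userEnvVars) {
  return Object.keys(userEnvVars || {}).reduce(
    (args, key) => args.concat(['-e', key + '=' + userEnvVars[key]]),
    []
  )
}

// envFlags({ MY_ENV_VAR: 'foo-bar' }) → ['-e', 'MY_ENV_VAR=foo-bar']
```

These flags would then be spliced into the argument list before the image name, since docker only honours flags that precede it.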