
localstack-python-client's Introduction

LocalStack Python Client

PyPI version license Apache 2.0 GitHub-Actions-Build PyPI downloads

This is an easy-to-use Python client for LocalStack. The client library provides a thin wrapper around boto3 which automatically configures the target endpoints to use LocalStack for your local cloud application development.

Prerequisites

To make use of this library, you need to have LocalStack installed on your local machine. In particular, the localstack command needs to be available.

Installation

The easiest way to install the LocalStack Python Client is via pip:

pip install localstack-client

Usage

This library provides an API that is identical to boto3's. A minimal way to try it out is to replace import boto3 with import localstack_client.session as boto3. This will allow your boto3 calls to work as normal.

For example, to list all S3 buckets in LocalStack:

import localstack_client.session as boto3
client = boto3.client('s3')
response = client.list_buckets()

The example below uses localstack_client directly. To list the SQS queues in your local (LocalStack) environment, use the following code:

import localstack_client.session

session = localstack_client.session.Session()
sqs = session.client('sqs')
assert sqs.list_queues() is not None

If you use boto3.client directly in your code, you can mock it. Note that the patched calls must run inside a test function: module-level code executes at import time, before the fixture is applied.

import boto3
import localstack_client.session
import pytest


@pytest.fixture(autouse=True)
def boto3_localstack_patch(monkeypatch):
    session_ls = localstack_client.session.Session()
    monkeypatch.setattr(boto3, "client", session_ls.client)
    monkeypatch.setattr(boto3, "resource", session_ls.resource)


def test_list_queues():
    sqs = boto3.client('sqs')
    assert sqs.list_queues() is not None  # list SQS in localstack

Configuration

You can use the following environment variables for configuration:

  • AWS_ENDPOINT_URL: The endpoint URL to connect to (takes precedence over USE_SSL/LOCALSTACK_HOST below)
  • LOCALSTACK_HOST (deprecated): A <hostname>:<port> variable defining where to find LocalStack (default: localhost:4566).
  • USE_SSL (deprecated): Whether to use SSL when connecting to LocalStack (default: False).
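The precedence above can be sketched as a small pure function. This is an illustration only, not the library's actual resolution code; in particular, the set of truthy values accepted for USE_SSL is an assumption.

```python
def resolve_endpoint(env: dict) -> str:
    """Sketch of the documented precedence: AWS_ENDPOINT_URL wins outright,
    otherwise the URL is assembled from USE_SSL and LOCALSTACK_HOST."""
    explicit = env.get("AWS_ENDPOINT_URL")
    if explicit:
        return explicit
    proto = "https" if env.get("USE_SSL", "").lower() in ("1", "true") else "http"
    host = env.get("LOCALSTACK_HOST", "localhost:4566")
    return f"{proto}://{host}"

print(resolve_endpoint({}))  # http://localhost:4566
```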

Enabling Transparent Local Endpoints

The library contains a small enable_local_endpoints() utility function that can be used to transparently run all boto3 requests against the local endpoints.

The following sample illustrates how it can be used - after calling enable_local_endpoints(), the S3 ListBuckets call will be run against LocalStack, even though we're using the default boto3 module.

import boto3
from localstack_client.patch import enable_local_endpoints
enable_local_endpoints()
# the call below will automatically target the LocalStack endpoints
buckets = boto3.client("s3").list_buckets()

The patch can also be unapplied by calling disable_local_endpoints():

from localstack_client.patch import disable_local_endpoints
disable_local_endpoints()
# the call below will target the real AWS cloud again
buckets = boto3.client("s3").list_buckets()

Contributing

If you are interested in contributing to LocalStack Python Client, start by reading our CONTRIBUTING.md guide. You can further navigate our codebase and open issues. We are thankful for all the contributions and feedback we receive.

Changelog

Please refer to CHANGELOG.md to see the complete list of changes for each release.

License

The LocalStack Python Client is released under the Apache License, Version 2.0 (see LICENSE).

localstack-python-client's People

Contributors

ackdav, ajhalaria-godaddy, alex0809, alexrashed, astraluma, bentsku, brettneese, hamishfagg, harshcasper, josephpohlmann, lamarrd, mgagliardo, nickhilton, rmsmani, sfdye, simonrw, smatsumt, usmangani1, viren-nadkarni, whummer


localstack-python-client's Issues

[Request] asyncio support

Would love to see:

  • asyncio version of localstack_client
  • monkeypatch aioboto3 client and resource

Thanks :)

Possible to have multiple and different sessions?

Also related to the latest comment on 1498 in the main localstack repo.

I'm starting multiple localstack instances using the new EDGE_PORT capability with a different edge port for each localstack instance in an effort to mock multiaccount environments per your suggestion in the localstack issue above.

Is it currently possible to configure a localstack-python-client session to point at one specific instance of LocalStack (representing one AWS account) versus another (representing a second account)? It doesn't look like it from a look through the code, but if it is, could I trouble you to point it out?

If not, that would be a great feature, very close to multi-account capability. Many applications are working in multi-account environments since the advent of organizations, mine included.

I suppose it might be possible to address this use case at the session client level instead, using the old port per service capability, but it sounds like EDGE_PORT is the way of the future.
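One way to organize such a multi-account setup, sketched here with invented account names and ports, is a plain mapping from mock account to edge endpoint; each endpoint could then be handed to its own session (for instance via the LOCALSTACK_HOST variable).

```python
# Invented account names and ports, for illustration only.
ACCOUNT_ENDPOINTS = {
    "billing": "localhost:4566",  # first LocalStack instance
    "data": "localhost:4567",     # second instance, different EDGE_PORT
}

def endpoint_for(account: str) -> str:
    """Return the edge endpoint URL for a mock AWS account."""
    return "http://" + ACCOUNT_ENDPOINTS[account]
```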

Help patching boto3

I need some help trying to patch boto3. I've been trying the following:

snippet:

@unittest.mock.patch.object(boto3, 'resource', new=localstack_client.session.resource)
@unittest.mock.patch.object(boto3, 'client', new=localstack_client.session.client)
class MyTestCase(unittest.TestCase):

    @classmethod
    def setUp(cls):
        try:
            infra.start_infra(asynchronous=True)
        except Exception as e:
            infra.stop_infra()
            raise e

    @classmethod
    def tearDownClass(cls):
        infra.stop_infra()

    def test_ls(self):
        sqs = boto3.client('sqs')
        print(sqs.list_queues())

run:

nosetests -s

output:

botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://localhost:4576/"

Suggestion: document environment variable LOCALSTACK_HOST

I am using Celery with a DynamoDB backend. The DynamoDB endpoint configured in Celery is intercepted by localstack_client and localhost is injected instead.

Reading the code, I found the environment variable LOCALSTACK_HOST, which can be used for this purpose. I believe it is worth mentioning that variable somewhere in the README.
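For reference, a minimal sketch of overriding the host via that variable (the hostname here is invented; the http scheme and the localhost:4566 default follow the variable's documented behavior):

```python
import os

# Point the client library at a non-default LocalStack host by setting
# LOCALSTACK_HOST before any session is created (hostname is invented).
os.environ["LOCALSTACK_HOST"] = "dynamo-host:4566"

def dynamodb_endpoint() -> str:
    host = os.environ.get("LOCALSTACK_HOST", "localhost:4566")
    return f"http://{host}"

print(dynamodb_endpoint())  # http://dynamo-host:4566
```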

Can't work it out with sample code in README

Start localstack container

$ curl -L https://raw.githubusercontent.com/localstack/localstack/master/docker-compose.yml -o docker-compose.yml
$ docker-compose up -d
$ docker-compose ps

     Name               Command          State                                                  Ports
----------------------------------------------------------------------------------------------------------------------------------------------------
localstack_main   docker-entrypoint.sh   Up      0.0.0.0:4566->4566/tcp, 4567/tcp, 4568/tcp, 4569/tcp, 4570/tcp, 0.0.0.0:4571->4571/tcp, 4572/tcp,
                                                 4573/tcp, 4574/tcp, 4575/tcp, 4576/tcp, 4577/tcp, 4578/tcp, 4579/tcp, 4580/tcp, 4581/tcp, 4582/tcp,
                                                 4583/tcp, 4584/tcp, 4585/tcp, 4586/tcp, 4587/tcp, 4588/tcp, 4589/tcp, 4590/tcp, 4591/tcp, 4592/tcp,
                                                 4593/tcp, 4594/tcp, 4595/tcp, 4596/tcp, 4597/tcp, 0.0.0.0:8080->8080/tcp

Run the test

# run in virtualenv
$ virtualenv env
$ source env/bin/activate

$ cat test.py

import boto3
import localstack_client.session
import pytest


@pytest.fixture(autouse=True)
def boto3_localstack_patch(monkeypatch):
    session_ls = localstack_client.session.Session()
    monkeypatch.setattr(boto3, "client", session_ls.client)
    monkeypatch.setattr(boto3, "resource", session_ls.resource)

sqs = boto3.client('sqs')
assert sqs.list_queues() is not None  # list SQS in localstack

$ python test.py
Traceback (most recent call last):
  File "a.py", line 13, in <module>
    assert sqs.list_queues() is not None  # list SQS in localstack
  File "xxxx/env/lib/python3.8/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "xxxx/env/lib/python3.8/site-packages/botocore/client.py", line 676, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ExpiredToken) when calling the ListQueues operation: The security token included in the request is expired

Got same error if run with pytest

$ pytest test.py
=============================================================== test session starts ================================================================
platform darwin -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: xxxx
collected 0 items / 1 error

====================================================================== ERRORS ======================================================================
______________________________________________________________ ERROR collecting a.py _______________________________________________________________
a.py:13: in <module>
    assert sqs.list_queues() is not None  # list SQS in localstack
../../../env/lib/python3.8/site-packages/botocore/client.py:357: in _api_call
    return self._make_api_call(operation_name, kwargs)
../../../env/lib/python3.8/site-packages/botocore/client.py:676: in _make_api_call
    raise error_class(parsed_response, operation_name)
E   botocore.exceptions.ClientError: An error occurred (ExpiredToken) when calling the ListQueues operation: The security token included in the request is expired
============================================================= short test summary info ==============================================================
ERROR a.py - botocore.exceptions.ClientError: An error occurred (ExpiredToken) when calling the ListQueues operation: The security token included...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================================================= 1 error in 0.59s =================================================================

Seems it still goes with my local default AWS_PROFILE or local ~/.aws/credentials, whose token is expired.
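The likely cause is ordering, not credentials handling: the sqs lines sit at module level, so they execute at import/collection time, before the autouse fixture has patched boto3. A minimal stand-in (no boto3 involved) demonstrating the ordering:

```python
from unittest import mock

registry = {"client": "real-aws"}

def current_client():
    return registry["client"]

# Module-level code runs before any patch is applied ...
value_at_import = current_client()

# ... while code inside the patch scope sees the replacement,
# just as code inside a test function runs after the fixture.
with mock.patch.dict(registry, {"client": "localstack"}):
    value_in_test = current_client()

print(value_at_import, value_in_test)  # real-aws localstack
```

Moving the assertion into a test function makes it run after the fixture, so it targets LocalStack.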

But I can successfully run the first sample

$ cat first.py

#!/usr/bin/env python3
import localstack_client.session

session = localstack_client.session.Session()
sqs = session.client('sqs')
assert sqs.list_queues() is not None

s3 = session.client('s3')
s3.create_bucket(Bucket="test")
s3.create_bucket(Bucket="test123")
print(s3.list_buckets())

$ python first.py
{'ResponseMetadata': {'RequestId': '253C50774A750C99', 'HostId': 'MzRISOwyjmnup253C50774A750C997/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'HTTPStatusCode': 200, 'HTTPHeaders': {'content-type': 'application/xml; charset=utf-8', 'access-control-allow-origin': '*', 'last-modified': 'Mon, 02 Nov 2020 05:33:37 GMT', 'x-amz-request-id': '253C50774A750C99', 'x-amz-id-2': 'MzRISOwyjmnup253C50774A750C997/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US', 'cache-control': 'no-cache', 'access-control-allow-methods': 'HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH', 'access-control-allow-headers': 'authorization,content-type,content-md5,cache-control,x-amz-content-sha256,x-amz-date,x-amz-security-token,x-amz-user-agent,x-amz-target,x-amz-acl,x-amz-version-id,x-localstack-target,x-amz-tagging', 'access-control-expose-headers': 'x-amz-version-id', 'connection': 'close', 'date': 'Mon, 02 Nov 2020 05:33:37 GMT', 'server': 'hypercorn-h11', 'transfer-encoding': 'chunked'}, 'RetryAttempts': 0}, 'Buckets': [{'Name': 'test', 'CreationDate': datetime.datetime(2020, 11, 2, 5, 2, 32, tzinfo=tzutc())}, {'Name': 'test123', 'CreationDate': datetime.datetime(2020, 11, 2, 5, 2, 59, tzinfo=tzutc())}], 'Owner': {'DisplayName': 'webfile', 'ID': 'bcaf1ffd86f41161ca5fb16fd081034f'}}

when using S3fs with localstack error is thrown about keys

using enable_local_endpoints()

the following error is thrown when uploading to s3 with pandas and S3FS

2024-03-06 11:28:48.862 | ERROR    | tests.integration.test_pipelines:test_analytics:124 - An error has been caught in function 'test_analytics', process 'MainProcess' (9286), thread 'MainThread' (7997807680):
Traceback (most recent call last):

  File "/Users/johnharrison/git/database_flattening/venv/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper
    return await func(*args, **kwargs)
                 │     │       └ {'Key': 'main/monitoring/analytics/mongo_query_date=2024-03-05 00:00:00/2024-03-05 00:00:00.csv', 'Bucket': 'ds-spectrum-tabl...
                 │     └ ()
                 └ <bound method ClientCreator._create_api_method.<locals>._api_call of <aiobotocore.client.S3 object at 0x14fe17280>>
  File "/Users/johnharrison/git/database_flattening/venv/lib/python3.10/site-packages/aiobotocore/client.py", line 408, in _make_api_call
    raise error_class(parsed_response, operation_name)
          │           │                └ 'PutObject'
          │           └ {'Error': {'Code': 'InvalidAccessKeyId', 'Message': 'The AWS Access Key Id you provided does not exist in our records.', 'AWS...
          └ <class 'botocore.exceptions.ClientError'>

botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.


The above exception was the direct cause of the following exception:

sample code to test this

isna__sum.to_csv( f's3://tables/main/monitoring/analytics/{start_date}.csv')

error seems to be coming from aiobotocore

Client Doesn't Honor Region when used as documented.

Simply stated, this doesn't work (as it always assumes region of 'us-east-1'):

import localstack_client.session as boto3
lambda_client = boto3.client('lambda', region_name='us-west-2')
lambda_client.invoke(...)

But this works:

from localstack_client.session import Session
sess = Session(region_name='us-west-2')
lambda_client = sess.client('lambda')

This is because, if you look at the code,

def client(*args, **kwargs):
    return _get_default_session().client(*args, **kwargs)

_get_default_session() always returns a defaulted Session item, not allowing you to modify it. Creating the client() from that doesn't honor the kwargs either.

This all appears to be designed to use a single global Session, but I must admit, I don't see a benefit to that. We should be able to create multiple sessions, into multiple regions, within a given "account".

I believe the fix, assuming you want to keep using the global space, would be to allow multiple global sessions, keyed by the kwargs passed in. Alternatively, you could reserve the global session for client/resource requests with no kwargs:

def client(*args, **kwargs):
    if not kwargs:
        return _get_default_session().client(*args, **kwargs)
    else:
        return Session(**kwargs).client(*args, **kwargs)

With the change above:

import localstack_client.session as boto3
c = boto3.client('lambda', region_name='us-west-2')
c._client_config.region_name
'us-west-2'
c2 = boto3.client('lambda')
c2._client_config.region_name
'us-east-1'

"InvalidClientTokenId" error with nosetests but not direct run?

Building a very simple test case:

import localstack_client.session

def test_connection():
    session = localstack_client.session.Session()
    kinesis = session.client('kinesis')
    print("I got connection!")
    assert kinesis is not None

Got an error:
"ClientError: An error occurred (InvalidClientTokenId) when calling the AssumeRole operation: The security token included in the request is invalid."

But the same code running in python REPL without any problem.

LocalStack now supports a single edge port, 4566.

https://github.com/localstack/localstack#announcements

  • 2020-09-15: A major (breaking) change has been merged in PR #2905 - starting with releases after v0.11.5, all services are now exposed via the edge service (port 4566) only! Please update your client configurations to use this new endpoint.

So all of the ports below can be changed to 4566:

https://github.com/localstack/localstack-python-client/blob/master/localstack_client/config.py#L12-L79

Or maybe you can simplify the code directly
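The suggested simplification could look roughly like this sketch (the service list is an illustrative subset, not the library's actual table):

```python
EDGE_PORT = 4566
SERVICES = ["s3", "sqs", "sns", "dynamodb", "kinesis"]  # illustrative subset

def get_service_endpoints(localstack_host: str = "localhost") -> dict:
    """Map every service to the single edge port instead of keeping
    a per-service port table."""
    return {svc: f"http://{localstack_host}:{EDGE_PORT}" for svc in SERVICES}
```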

Overview dashboard

Hi,
is it possible to expose the web_ui in config.py?

'dashboard': '{proto}://{host}:8080',

I am not sure if that's the right way. If yes I could just do a PR.

[Feature] Support virtual host S3 API calls

After struggling for several hours with Docker Compose, I noticed my problem was actually in this library, which does not seem to support the virtual host addressing for the s3 client.

Looking at boto3 in debug mode, I noticed that when using Virtual host addressing, the HTTP request was aimed towards http://<bucket>.<endpoint_url>:<endpoint_port>. This is not a valid endpoint, and it should be http://<bucket>.s3.<endpoint_url>:<endpoint_port>, as described here.

After delving deeper I noticed that localstack_client.config.get_endpoint does not handle s3 in any special way. I manually patched the function and noticed that a simple change works:

def new_get_service_endpoint(service: str, localstack_host: Optional[str] = None) -> Optional[str]:
    endpoints = localstack_client.config.get_service_endpoints(localstack_host=localstack_host)
    endpoint = endpoints.get(service)
    if service == "s3":
        endpoint = "http://s3." + endpoint.split("http://")[1]
    return endpoint

I know this doesn't handle SSL; it's just a draft. It works for both virtual-host and path-based addressing.

I see three simple ways to go about this:

  1. Do not support virtual host addressing. This is a problem, as in theory AWS is deprecating path based addressing (though it has been deprecating it for 4 years, so...)
  2. Do a hack like the one proposed.
  3. Honor AWS service-specific endpoints through environment variables (for instance AWS_ENDPOINT_URL_S3) and leave it up to the user to set them (with appropriate documentation, at least in the README.md).
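Option 3 could be sketched as a lookup that prefers a service-specific variable over the shared endpoint. The AWS_ENDPOINT_URL_<SERVICE> naming follows the AWS SDK convention; the helper name is invented for this sketch.

```python
import os

def service_endpoint(service: str, default: str) -> str:
    """Prefer e.g. AWS_ENDPOINT_URL_S3 over the shared default endpoint."""
    override = os.environ.get(f"AWS_ENDPOINT_URL_{service.upper()}")
    return override or default
```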

What are your thoughts on this?

Proper way to use mock client in local environment

Is there a recommended way to use this library in a local environment while using boto3 in production? That is, for the same piece of code, do I need logic to detect the current environment and pick a different client per environment, or is there a better way to do it?
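One common pattern, sketched here with an invented APP_ENV variable and invented helper names, is to keep the environment decision in a single factory so the call sites stay identical in both environments:

```python
import os

def backend_for_env(env) -> str:
    """Decide which backend to use; APP_ENV is an invented variable."""
    return "localstack" if env.get("APP_ENV", "local") == "local" else "aws"

def make_client(service: str, env=os.environ):
    """Hypothetical factory: localstack_client locally, plain boto3 otherwise."""
    if backend_for_env(env) == "localstack":
        import localstack_client.session as session_mod
    else:
        import boto3 as session_mod
    return session_mod.client(service)
```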
