
sqlalchemy-aurora-data-api's Introduction

sqlalchemy-aurora-data-api - An AWS Aurora Serverless Data API dialect for SQLAlchemy

This package provides a SQLAlchemy dialect for accessing PostgreSQL and MySQL databases via the AWS Aurora Data API.

Installation

pip install sqlalchemy-aurora-data-api

Prerequisites

  • Set up an AWS Aurora Serverless cluster and enable Data API access for it. If you have previously set up an Aurora Serverless cluster, you can enable Data API with the following AWS CLI command:

    aws rds modify-db-cluster --db-cluster-identifier DB_CLUSTER_NAME --enable-http-endpoint --apply-immediately
    
  • Save the database credentials in AWS Secrets Manager using a format expected by the Data API (a JSON object with the keys username and password):

    aws secretsmanager create-secret --name rds-db-credentials/MY_DB
    aws secretsmanager put-secret-value --secret-id rds-db-credentials/MY_DB --secret-string "$(jq -n '.username=env.PGUSER | .password=env.PGPASSWORD')"
    
  • Configure your AWS command line credentials using standard AWS conventions. You can verify that everything works correctly by running a test query via the AWS CLI:

    aws rds-data execute-statement --resource-arn RESOURCE_ARN --secret-arn SECRET_ARN --sql "select * from pg_catalog.pg_tables"
    
    • Here, RESOURCE_ARN refers to the Aurora RDS database ARN, which can be found in the AWS RDS Console (click on your database, then "Configuration") or in the CLI by running aws rds describe-db-clusters. SECRET_ARN refers to the AWS Secrets Manager secret created above.
    • When running deployed code (on an EC2 instance, ECS/EKS container, or Lambda), you can use the managed IAM policy AmazonRDSDataFullAccess to grant your IAM role permissions to access the RDS Data API (while this policy is convenient for testing, we recommend that you create your own scoped down least-privilege policy for production applications).
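For reference, the same test query can be issued from Python via boto3 (a sketch of the equivalent of the CLI command above; the ARN values are placeholders):

import boto3

# Equivalent of the "aws rds-data execute-statement" test query above.
client = boto3.client("rds-data")
response = client.execute_statement(
    resourceArn="RESOURCE_ARN",  # placeholder: your Aurora cluster ARN
    secretArn="SECRET_ARN",      # placeholder: your Secrets Manager secret ARN
    sql="select * from pg_catalog.pg_tables",
)
print(response["records"])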

Usage

The package registers two SQLAlchemy dialects, mysql+auroradataapi:// and postgresql+auroradataapi://. Two sqlalchemy.create_engine() connect_args keyword arguments are required to connect to the database:

  • aurora_cluster_arn (also referred to as resourceArn in the Data API documentation)
    • If not given as a keyword argument, this can also be specified using the AURORA_CLUSTER_ARN environment variable
  • secret_arn (the database credentials secret)
    • If not given as a keyword argument, this can also be specified using the AURORA_SECRET_ARN environment variable

All connection string contents other than the protocol (dialect) and the database name (path component, my_db_name in the example below) are ignored.

from sqlalchemy import create_engine

cluster_arn = "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-serverless-cluster"
secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:rds-db-credentials/MY_DB"

engine = create_engine('postgresql+auroradataapi://:@/my_db_name',
                       echo=True,
                       connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn))

with engine.connect() as conn:
    for result in conn.execute("select * from pg_catalog.pg_tables"):
        print(result)

Motivation

The RDS Data API is the link between the AWS Lambda serverless environment and the sophisticated features provided by PostgreSQL and MySQL. The Data API tunnels SQL over HTTP, which has advantages in the context of AWS Lambda:

  • It eliminates the need to open database ports to the AWS Lambda public IP address pool
  • It uses stateless HTTP connections instead of stateful internal TCP connection pools used by most database drivers (the stateful pools become invalid after going through AWS Lambda freeze-thaw cycles, causing connection errors and burdening the database server with abandoned invalid connections)
  • It uses AWS role-based authentication, eliminating the need for the Lambda to handle database credentials directly

Debugging

This package uses standard Python logging conventions. To enable debug output, set the package log level to DEBUG:

import logging

logging.basicConfig()
logging.getLogger("aurora_data_api").setLevel(logging.DEBUG)

Bugs

Please report bugs, issues, feature requests, etc. on GitHub.

License

Licensed under the terms of the Apache License, Version 2.0.


sqlalchemy-aurora-data-api's People

Contributors

barnybug, bruceadams, kislyuk, mat100payette, nekonyuu, olinger, ziv-kognitos

sqlalchemy-aurora-data-api's Issues

Add DateTimeRange support (TSRANGE)

Hi! Thanks for the plugin!
I found an issue when trying to use the TSRANGE data type. If you try to use the example from the SQLAlchemy docs:

from datetime import datetime

from psycopg2.extras import DateTimeRange
from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import TSRANGE
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

booking = RoomBooking(
    room=101,
    during=DateTimeRange(datetime(2013, 3, 23), None)
)

You will receive an error saying "'during' is of type tsrange but expression is of type character varying".

So you have to explicitly cast the value: during = cast(DateTimeRange(datetime(2013, 3, 23), None), TSRANGE).
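Spelled out with imports, the workaround looks like this (a sketch; RoomBooking as defined above):

from datetime import datetime

from psycopg2.extras import DateTimeRange
from sqlalchemy import cast
from sqlalchemy.dialects.postgresql import TSRANGE

# Casting explicitly makes the generated SQL send a tsrange instead of varchar.
booking = RoomBooking(
    room=101,
    during=cast(DateTimeRange(datetime(2013, 3, 23), None), TSRANGE),
)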

How to avoid using transactions / Long running processes are using transactions that have expired

Apologies if this is poor form, but this is essentially a duplicate of cloud-utils/aurora-data-api#34 as I'm not certain if my issue lies more in the underlying aurora-data-api implementation, or the SQLAlchemy implementation that uses it.

From the other ticket:
I've been encountering an issue where I have a long-running process that primarily only reads data and is constantly returning (BadRequestException) when calling the ExecuteStatement operation: Transaction is not found.

According to AWS: A transaction times out if there are no calls that use its transaction ID in three minutes (ref)

An example to reproduce this issue:

import time

import aurora_data_api

with aurora_data_api.connect(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn, database=database) as conn:
    with conn.cursor() as cursor:
        while True:
            cursor.execute("select * from pg_catalog.pg_tables")
            print(cursor.fetchall())
            time.sleep(3 * 60 + 1)

This problem stems from the following code:

    def cursor(self):
        if self._transaction_id is None:
            res = self._client.begin_transaction(database=self._dbname,
                                                 resourceArn=self._aurora_cluster_arn,
                                                 # schema="string", TODO
                                                 secretArn=self._secret_arn)
            self._transaction_id = res["transactionId"]
        cursor = AuroraDataAPICursor(client=self._client,
                                     dbname=self._dbname,
                                     aurora_cluster_arn=self._aurora_cluster_arn,
                                     secret_arn=self._secret_arn,
                                     transaction_id=self._transaction_id)

Things to note:

  • I have not started any transaction and have had no requirement or need to call .commit or .rollback as I have made no changes to the database through any UPDATE or DELETE queries
  • Transactions are deemed as optional in the rds-data API:

As per https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html

--transaction-id (optional) - The identifier of a transaction that was started using the begin-transaction CLI command. Specify the transaction ID of the transaction that you want to include the SQL statement in.

According to this information, there is no need to actually create and use transactions to perform database queries.

Suggestion:

  • Update def cursor() to NOT automatically create a transaction if one doesn't currently exist
  • Introduce a begin method that starts a new transaction
  • Users can opt to perform their database work in transactions or not.
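A rough sketch of what the suggested shape could look like (hypothetical code, reusing the names from the snippet above):

def cursor(self):
    # Suggested: no implicit begin_transaction; transaction_id stays None
    # unless the caller opted in via begin().
    return AuroraDataAPICursor(client=self._client,
                               dbname=self._dbname,
                               aurora_cluster_arn=self._aurora_cluster_arn,
                               secret_arn=self._secret_arn,
                               transaction_id=self._transaction_id)

def begin(self):
    # Suggested: explicit, opt-in transaction start.
    res = self._client.begin_transaction(database=self._dbname,
                                         resourceArn=self._aurora_cluster_arn,
                                         secretArn=self._secret_arn)
    self._transaction_id = res["transactionId"]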

Do my suggestions seem to be valid? I'm happy to (try to) contribute this change back.

Filtering a field in_ a list of Enum values breaks PostgreSQL CAST operator

Basically just copy/pasting an issue from the sqlalchemy project that exists in this project at the moment. Credit to @jordan-dimov

When I have Python code like this:

Schema:

from enum import Enum

from sqlalchemy import Column
from sqlalchemy import Enum as SQLEnum
from sqlalchemy.ext.declarative import declarative_base, DeclarativeMeta

Base: DeclarativeMeta = declarative_base()


class EModelState(str, Enum):
    PENDING = "pending"
    STARTING = "starting"


class MyModel(Base):
    state = Column(SQLEnum(EModelState))

Query code:

query.filter(models.MyModel.state.in_(states))

where states is a list of Enum values, it translates to the following SQL:

WHERE models_mymodel.state IN (CAST((:state_1_1, :state_1_2) AS emodelstate))
[parameters: {'state_1_1': 'PENDING', 'state_1_2': 'STARTING'}]

which breaks with:

sqlalchemy.exc.DatabaseError: (common.aurora_data_api.exceptions.DatabaseError) An error occurred (BadRequestException) when calling the ExecuteStatement operation: ERROR: syntax error at or near "CAST"

I tried some things manually in PostgreSQL and noticed that if I do select CAST('PENDING' AS emodelstate), that works fine and gives me an actual DB enum value. But if I try to cast more than one value, select CAST(('PENDING', 'STARTING') AS emodelstate) fails with ERROR: cannot cast type record to emodelstate.

Question: Asynchronous I/O (asyncio) compatibility

I'm wondering about using this plugin with the asyncio support in SQLAlchemy.

Given the +asyncpg in the connection strings in all the examples, I'm assuming that there would need to be modifications made to this project to support it, but maybe I'm wrong. It might just be helpful to document [lack of] support for it (if only in this issue).

Results to JSON

This library looks great! Thanks for the contribution. Does this export results to JSON?

Async options?

From what I can tell, this does not seem to be asynchronous.

If that's inaccurate, can someone point me at the "how to implement async" docs?

If that's accurate, would the project be interested in help implementing async?


Datetime deserialization requires Python 3.7+; emit useful error on earlier Python versions

When loading an object with a DateTime from the DB I get the following error:

[2020-08-06 21:27:11,628] ERROR in app: Error in user confirmation: type object 'datetime.datetime' has no attribute 'fromisoformat'
type object 'datetime.datetime' has no attribute 'fromisoformat': AttributeError
Traceback (most recent call last):
  File "/var/task/app.py", line 48, in handle_post_authentication
    raise e
  File "/var/task/app.py", line 25, in handle_post_authentication
    .filter_by(sub=event["request"]["userAttributes"]["sub"])
  File "/var/task/sqlalchemy/orm/query.py", line 3316, in one_or_none
    ret = list(self)
  File "/var/task/sqlalchemy/orm/loading.py", line 101, in instances
    util.raise_from_cause(err)
  File "/var/task/sqlalchemy/util/compat.py", line 398, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/var/task/sqlalchemy/util/compat.py", line 153, in reraise
    raise value
  File "/var/task/sqlalchemy/orm/loading.py", line 81, in instances
    rows = [proc(row) for row in fetch]
  File "/var/task/sqlalchemy/orm/loading.py", line 81, in <listcomp>
    rows = [proc(row) for row in fetch]
  File "/var/task/sqlalchemy/orm/loading.py", line 574, in _instance
    populators,
  File "/var/task/sqlalchemy/orm/loading.py", line 695, in _populate_full
    dict_[key] = getter(row)
  File "/var/task/sqlalchemy_aurora_data_api/__init__.py", line 75, in process
    return self.py_type.fromisoformat(value) if isinstance(value, str) else value
AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat'

Here is my User model: [screenshot omitted]
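One possible mitigation on older Pythons (a hypothetical sketch, not part of the package; the timestamp format is an assumption) is to fall back to strptime when fromisoformat is unavailable:

import datetime

def parse_timestamp(value):
    # datetime.fromisoformat only exists on Python 3.7+.
    if hasattr(datetime.datetime, "fromisoformat"):
        return datetime.datetime.fromisoformat(value)
    # Fallback for older runtimes; assumes "YYYY-MM-DD HH:MM:SS.ffffff" values.
    return datetime.datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f")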

Allow cluster and secret ARN in the connection string

One issue with the API at the moment is that you can't transparently swap from, say, psycopg Postgres to Data API Postgres using only environment variables. This is desirable behaviour because it means that the underlying code is agnostic to the Data API, and will therefore work identically with native Postgres. The current API means that I need to add some awareness of the Data API into my actual application.

Although the connection string would be messy, I would like to suggest allowing postgresql+auroradataapi://{secret_arn}@{cluster_arn}, which is roughly analogous to
postgresql+psycopg2://{username}:{password}@{host}:{port}/postgres.
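For what it's worth, the README above already documents environment-variable fallbacks for the ARNs, which gets part of the way there (a sketch; the ARN values are placeholders):

import os

from sqlalchemy import create_engine

# With AURORA_CLUSTER_ARN and AURORA_SECRET_ARN set, no connect_args are
# needed and the URL stays generic.
os.environ["AURORA_CLUSTER_ARN"] = "arn:aws:rds:..."            # placeholder
os.environ["AURORA_SECRET_ARN"] = "arn:aws:secretsmanager:..."  # placeholder
engine = create_engine("postgresql+auroradataapi://:@/postgres")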

Double casting UUID field with parameter as_uuid=True

Hello, I've got an error when trying to use something like that:

id = Column(
    UUID(as_uuid=True),  # I need to use id as UUID object
    default=uuid.uuid4,
    primary_key=True,
)

UUID: from sqlalchemy.dialects.postgresql import UUID
I've also tested this logic at dockerized PostgreSQL v10.14 with psycopg2 - all works as expected.

So somewhere it tries to call uuid.UUID(<uuid.UUID object>):

  File "<input>", line 1, in <module>
  File "/usr/local/lib/python3.9/uuid.py", line 174, in __init__
    hex = hex.replace('urn:', '').replace('uuid:', '')
AttributeError: 'UUID' object has no attribute 'replace'
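A possible workaround (an untested sketch, not a confirmed fix) is to keep the column as string UUIDs and convert at the application boundary:

import uuid

from sqlalchemy import Column
from sqlalchemy.dialects.postgresql import UUID

# as_uuid=False keeps values as strings, sidestepping the double
# uuid.UUID(...) conversion described above.
id = Column(
    UUID(as_uuid=False),
    default=lambda: str(uuid.uuid4()),
    primary_key=True,
)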

Inserting a Timezone aware `datetime.datetime` fails.

It seems like the code is not converting the type to a string. I see the following error:

Invalid type for parameter parameters[3].value.stringValue, value: 2019-12-21 16:59:27.415527+00:00, type: <class 'datetime.datetime'>, valid types: <class 'str'>

I see the following SQL generated:

'INSERT INTO verifications (id, verification_type, status, code_expires, number, sid) VALUES (CAST(:id AS UUID), :verification_type, :status, :code_expires, :number, :sid)'

and the params look like:

{'code_expires': datetime.datetime(2019, 12, 21, 16, 59, 27, 415527, tzinfo=datetime.timezone.utc),
 'id': 'ca11e967-e90e-45df-9614-71316bb3581b',
 'number': '<snip>',
 'status': 0,
 'sid': None,
 'verification_type': 1}

Everything else was successfully converted to the correct type (including id which was a UUID).
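Until the driver stringifies timezone-aware datetimes itself, one workaround (a sketch; an application-side assumption, not documented package behavior) is to pass an ISO-8601 string:

from datetime import datetime, timezone

# The Data API parameter must be a stringValue, so serialize explicitly.
code_expires = datetime.now(timezone.utc).isoformat()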

Use in Chalice app

@chanzuckerberg @kislyuk
Hi! Thank you for this package.

Because SQLAlchemy does not ship a wheel (.whl), to install SQLAlchemy into a Lambda function for use with Chalice you must include the contents of SQLAlchemy in your project repo. However, if you do this and then run create_engine against a serverless DB, you get the following error:

{
  "errorMessage": "The 'sqlalchemy' distribution was not found and is required by the application",
  "errorType": "DistributionNotFound",
  "stackTrace": [
    "  File \"/var/lang/lib/python3.7/imp.py\", line 234, in load_module\n    return load_source(name, filename, file)\n",
    "  File \"/var/lang/lib/python3.7/imp.py\", line 171, in load_source\n    module = _load(spec)\n",
    "  File \"<frozen importlib._bootstrap>\", line 696, in _load\n",
    "  File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\n",
    "  File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n",
    "  File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n",
    "  File \"/var/task/app.py\", line 10, in <module>\n    from chalicelib.session import session_scope\n",
    "  File \"/var/task/chalicelib/session.py\", line 10, in <module>\n    engine = create_engine('postgresql+auroradataapi://:@/postgres', connect_args=aws_args, echo=True)\n",
    "  File \"/var/task/sqlalchemy/engine/__init__.py\", line 479, in create_engine\n    return strategy.create(*args, **kwargs)\n",
    "  File \"/var/task/sqlalchemy/engine/strategies.py\", line 61, in create\n    entrypoint = u._get_entrypoint()\n",
    "  File \"/var/task/sqlalchemy/engine/url.py\", line 172, in _get_entrypoint\n    cls = registry.load(name)\n",
    "  File \"/var/task/sqlalchemy/util/langhelpers.py\", line 237, in load\n    return impl.load()\n",
    "  File \"/var/lang/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 2410, in load\n    self.require(*args, **kwargs)\n",
    "  File \"/var/lang/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 2433, in require\n    items = working_set.resolve(reqs, env, installer, extras=self.extras)\n",
    "  File \"/var/lang/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 786, in resolve\n    raise DistributionNotFound(req, requirers)\n"
  ]
}

It seems that this package cannot tell that SQLAlchemy does indeed exist, because it is present as a directory rather than installed via pip.

Bug with json columns

Since version 0.2.7, aurora-data-api loads the JSON string into a Python dictionary, and then SQLAlchemy does the same thing, which results in the following error.

Traceback (most recent call last):
  File "/var/task/handler.py", line 106, in handler
    return app.resolve(event, context)
  File "/opt/python/aws_lambda_powertools/event_handler/api_gateway.py", line 502, in resolve
    return self._resolve().build(self.current_event, self._cors)
  File "/opt/python/aws_lambda_powertools/event_handler/api_gateway.py", line 561, in _resolve
    return self._call_route(route, match_results.groupdict())  # pass fn args
  File "/opt/python/aws_lambda_powertools/event_handler/api_gateway.py", line 615, in _call_route
    return ResponseBuilder(self._to_response(route.func(**args)), route)
  File "/var/task/handler.py", line 102, in handle_rutter
    return JsonResponse({'integration_connection_id': str(integration_connection.id)})
  File "/var/task/sqlalchemy/orm/attributes.py", line 481, in __get__
    return self.impl.get(state, dict_)
  File "/var/task/sqlalchemy/orm/attributes.py", line 941, in get
    value = self._fire_loader_callables(state, key, passive)
  File "/var/task/sqlalchemy/orm/attributes.py", line 972, in _fire_loader_callables
    return state._load_expired(state, passive)
  File "/var/task/sqlalchemy/orm/state.py", line 710, in _load_expired
    self.manager.expired_attribute_loader(self, toload, passive)
  File "/var/task/sqlalchemy/orm/loading.py", line 1451, in load_scalar_attributes
    result = load_on_ident(
  File "/var/task/sqlalchemy/orm/loading.py", line 407, in load_on_ident
    return load_on_pk_identity(
  File "/var/task/sqlalchemy/orm/loading.py", line 541, in load_on_pk_identity
    return result.one()
  File "/var/task/sqlalchemy/engine/result.py", line 1407, in one
    return self._only_one_row(
  File "/var/task/sqlalchemy/engine/result.py", line 558, in _only_one_row
    row = onerow(hard_close=True)
  File "/var/task/sqlalchemy/engine/result.py", line 1271, in _fetchone_impl
    return self._real_result._fetchone_impl(hard_close=hard_close)
  File "/var/task/sqlalchemy/engine/result.py", line 1674, in _fetchone_impl
    row = next(self.iterator, _NO_ROW)
  File "/var/task/sqlalchemy/orm/loading.py", line 147, in chunks
    fetch = cursor._raw_all_rows()
  File "/var/task/sqlalchemy/engine/result.py", line 392, in _raw_all_rows
    return [make_row(row) for row in rows]
  File "/var/task/sqlalchemy/engine/result.py", line 392, in <listcomp>
    return [make_row(row) for row in rows]
  File "/var/task/sqlalchemy/sql/sqltypes.py", line 2656, in process
    return json_deserializer(value)
  File "/var/lang/lib/python3.9/json/__init__.py", line 339, in loads
    raise TypeError(f'the JSON object must be str, bytes or bytearray, '

To fix this, aurora-data-api would have to accept an argument that disables its JSON loading when it is used with SQLAlchemy.
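One possible mitigation from the SQLAlchemy side (a sketch; json_deserializer is a standard create_engine argument, though this exact usage is untested here) is to make the deserializer a no-op for values that already arrive as Python objects:

import json

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+auroradataapi://:@/my_db_name",
    connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn),
    # Skip the second json.loads when aurora-data-api already returned a dict/list.
    json_deserializer=lambda v: v if isinstance(v, (dict, list)) else json.loads(v),
)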

Should we recommend SQLAlchemy NullPool?

I wonder if sqlalchemy-aurora-data-api should recommend using SQLAlchemy NullPool?

As I was figuring out cloud-utils/aurora-data-api#43 I got looking at interactions between SQLAlchemy pooling, sqlalchemy-aurora-data-api and aurora-data-api.

The documentation for SQLAlchemy NullPool says:

A Pool which does not pool connections.

Instead it literally opens and closes the underlying DB-API connection per each connection open/close.

Reconnect-related functions such as recycle and connection invalidation are not supported by this Pool implementation, since no connections are held persistently.

Since aurora-data-api is fundamentally connection-less (part of what makes it great!), having SQLAlchemy make efforts managing connections seems like unneeded effort and, especially in failure situations, potentially confusing.
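For reference, opting into NullPool is a one-line change (a sketch using SQLAlchemy's documented poolclass argument):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

engine = create_engine(
    "postgresql+auroradataapi://:@/my_db_name",
    connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn),
    poolclass=NullPool,  # no persistent connections to manage or invalidate
)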

Unable to create engine using Database URI

I am able to connect to my database using the example provided in the Readme, but I am unable to connect this way:

from sqlalchemy import create_engine

DIALECT = 'mysql'
DRIVER = 'auroradataapi'
USERNAME = 'admin'
PASSWORD = 'random-strings'
HOST = 'blog-aurora-serverless.cluster-ddddddkbe3aq.us-east-1.rds.amazonaws.com'
PORT = '3306'
DATABASE = 'my_db'

SQLALCHEMY_DATABASE_URI = f"{DIALECT}+{DRIVER}://{USERNAME}:{PASSWORD}@{HOST}:{PORT}/{DATABASE}"
engine = create_engine(SQLALCHEMY_DATABASE_URI)

with engine.connect() as conn:
    for result in conn.execute("select * from pg_catalog.pg_tables"):
        print(result)

I am getting this error

TypeError: connect() got an unexpected keyword argument 'port'
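Per the README above, everything in the URL except the dialect and the database name is ignored, and the ARNs must be supplied via connect_args or environment variables. A sketch of a working form (the ARN values are placeholders):

from sqlalchemy import create_engine

engine = create_engine(
    "mysql+auroradataapi://:@/my_db",
    connect_args=dict(
        aurora_cluster_arn="arn:aws:rds:us-east-1:...:cluster:...",    # placeholder
        secret_arn="arn:aws:secretsmanager:us-east-1:...:secret:...",  # placeholder
    ),
)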

--name argument is required

Thanks for this nice api. Any thoughts?

 % aws secretsmanager create-secret --secret-id rds-db-credentials/XXX

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: the following arguments are required: --name
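The create-secret subcommand takes --name, not --secret-id (compare the README command above); the corrected invocation, keeping the same placeholder, would be:

    aws secretsmanager create-secret --name rds-db-credentials/XXX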

Error when using with alembic

Traceback (most recent call last):
  File "/var/task/aurora_data_api/__init__.py", line 208, in execute
    res = self._client.execute_statement(**execute_statement_args)
  File "/var/task/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/task/botocore/client.py", line 676, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the ExecuteStatement operation: Database error code: 1146. Message: Table 'truth_team_db.alembic_version' doesn't exist

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "alembic/env.py", line 47, in run_migrations_online
    context.run_migrations()
  File "<string>", line 8, in run_migrations
  File "/var/task/alembic/runtime/environment.py", line 846, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/var/task/alembic/runtime/migration.py", line 498, in run_migrations
    heads = self.get_current_heads()
  File "/var/task/alembic/runtime/migration.py", line 435, in get_current_heads
    if not self._has_version_table():
  File "/var/task/alembic/runtime/migration.py", line 448, in _has_version_table
    self.connection, self.version_table, self.version_table_schema
  File "/var/task/alembic/util/sqla_compat.py", line 54, in _connectable_has_table
    connectable, tablename, schemaname
  File "/var/task/sqlalchemy/dialects/mysql/base.py", line 2482, in has_table
    ).execute(st)
  File "/var/task/sqlalchemy/engine/base.py", line 1006, in execute
    return self._execute_text(object, multiparams, params)
  File "/var/task/sqlalchemy/engine/base.py", line 1181, in _execute_text
    parameters,
  File "/var/task/sqlalchemy/engine/base.py", line 1318, in _execute_context
    e, statement, parameters, cursor, context
  File "/var/task/sqlalchemy/engine/base.py", line 1515, in _handle_dbapi_exception
    util.raise_(exc_info[1], with_traceback=exc_info[2])
  File "/var/task/sqlalchemy/util/compat.py", line 178, in raise_
    raise exception
  File "/var/task/sqlalchemy/engine/base.py", line 1278, in _execute_context
    cursor, statement, parameters, context
  File "/var/task/sqlalchemy/engine/default.py", line 593, in do_execute
    cursor.execute(statement, parameters)
  File "/var/task/aurora_data_api/__init__.py", line 218, in execute
    raise self._get_database_error(e) from e

Transaction lost on "Database returned more than the allowed response size limit"

Hi there,

I have a table with a JSON column and 20 records. The JSON values are large and the 20 records add up to a bit over 1 MB. This causes the Aurora Data API to throw an exception: "Database returned more than the allowed response size limit".

The AuroraDataAPICursor.execute() method catches that error and attempts to paginate using a SCROLL CURSOR; however, when it tries to execute that in _start_paginated_query(), it gets a 400 code back on line 169 of __init__.py (version 0.4.0):
self._client.execute_statement(**execute_statement_args)

Error is:

botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the ExecuteStatement operation: Transaction [transaction id] is not found

It isn't clear to me why the transaction is lost when it tries to execute the SCROLL CURSOR statement, but I can see in the RDS Data API logs that the error is being thrown from the Data API itself.

I can keep trying to dig into it, but I thought I'd post the issue since it might be useful here.

Thanks and let me know if I can provide any further detail. Thanks!

SAWarning: Dialect mysql+aurora_data_api does *not* support Decimal objects natively

The full warning message is:

SAWarning: Dialect mysql+aurora_data_api does not support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.

And, yet, aurora_data_api does support native Decimal.

Experimenting, I see that setting supports_native_decimal = True works correctly and avoids the SAWarning. I will make a pull request for this.
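For anyone needing a stopgap before that lands, a monkey-patch sketch (the dialect class name is an assumption, inferred by analogy with the AuroraPostgresDataAPIDialect name seen elsewhere on this page):

import sqlalchemy_aurora_data_api

# Hypothetical: flip the flag so SQLAlchemy keeps Decimals lossless.
sqlalchemy_aurora_data_api.AuroraMySQLDataAPIDialect.supports_native_decimal = True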

Transaction error in ExecuteStatement: DatabaseError

Hello,

Firstly, many thanks for the excellent contribution 👍

I'm seeing unpredictable transaction errors arising from the aurora_data_api during our commits and queries.

DatabaseError: (aurora_data_api.exceptions.DatabaseError) An error occurred (BadRequestException) when calling the ExecuteStatement operation: Transaction [...] is not found 

What I've tried:

  1. Handling the DatabaseError in my application and trying to force the commit to execute in a new transaction as follows:

from functools import wraps

from aurora_data_api.exceptions import DatabaseError
from botocore.exceptions import ClientError

def handle_invalid_transaction_id(func):
    retries = 3

    @wraps(func)
    def inner(*args, **kwargs):
        for i in range(retries):
            try:
                return func(*args, **kwargs)
            except (DatabaseError, ClientError):
                if i != retries:
                    # The aim here is to try and force a new transaction 
                    # If an error occurs and retry
                    db.session.close()
                else:
                    raise

    return inner

And then in my models doing something like this:

class MyModel(db.Model):
    @classmethod
    @handle_invalid_transaction_id
    def create(cls, **kwargs):
        instance = cls(**kwargs)
        db.session.add(instance)
        db.session.commit()
        db.session.close()
        return kwargs

Unfortunately no joy here.

  2. Turning off sleeps in Aurora Serverless

I thought perhaps the issue was due to the instance hibernating mid-transaction and so experimented with turning off the sleep as per this blog. However, sadly the issue still persists :(

  3. I've attempted to fine-tune the commit with no_autoflush, but again I'm hitting the issue.

N.B. there's also a Stack Overflow question about this issue here.

I don't know if this is a user error or an issue with the driver, but any help would be greatly appreciated as this is blocking a deployment.

Many thanks

NoRegionError: You must specify a region

I have an Aurora MySQL instance up and running which I can connect and interact with normally via MySQL Workbench and the database integration tool with PyCharm Pro. When I go to create my sqlalchemy engine and connect as follows:

cluster_arn = "arn:aws:rds:us-east-1:123456789:cluster:sensitive-name-ue1-dev-cluster"
secret_arn = "arn:aws:secretsmanager:us-east-1:123456789:secret:my-special-secret"
conn_uri = 'mysql+auroradataapi://myuser:mypassword@myhost.rds.amazonaws.com:3306/mydb'
engine = create_engine(conn_uri, connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn))

I get the following error:

botocore.exceptions.NoRegionError: You must specify a region.

The region is specified in all the usual places to my knowledge, and I don't see a way to pass it explicitly.
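A likely fix (hedged; this is boto3's standard region resolution, nothing specific to this package) is to set the region explicitly before creating the engine:

import os

# boto3 reads AWS_DEFAULT_REGION (or AWS_REGION / ~/.aws/config) when building
# the rds-data client, so export it before create_engine runs.
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"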

Foreign key conflicts on `merge` sometimes

I've experienced a problem which only happens when running code in AWS using sqlalchemy-aurora-data-api, and never when running against a real Postgres server locally.

I use SQLAlchemy's merge to upsert a record, and I get a DatabaseError:

Traceback (most recent call last):
  File "/var/task/aurora_data_api/__init__.py", line 248, in execute
    res = self._client.execute_statement(**execute_statement_args)
  File "/var/runtime/botocore/client.py", line 530, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 960, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the ExecuteStatement operation: ERROR: duplicate key value violates unique constraint "episodes_pk"
  Detail: Key (episode_pid)=(m001q2mq) already exists.; SQLState: 23505

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/var/task/sqlalchemy/engine/base.py", line 1965, in _exec_single_context
    self.dialect.do_execute(
  File "/var/task/sqlalchemy/engine/default.py", line 921, in do_execute
    cursor.execute(statement, parameters)
  File "/var/task/aurora_data_api/__init__.py", line 258, in execute
    raise self._get_database_error(e) from e
aurora_data_api.exceptions.DatabaseError: An error occurred (BadRequestException) when calling the ExecuteStatement operation: ERROR: duplicate key value violates unique constraint "episodes_pk"
  Detail: Key (episode_pid)=(m001q2mq) already exists.; SQLState: 23505

This shouldn't be possible with a merge - it should simply run an UPDATE rather than an INSERT.

I've got one project that uses sqlalchemy-aurora-data-api and this problem almost never happens. Another project this happens on every single run. I even tried a get (SELECT) followed by add (INSERT) or merge (UPDATE) depending on the result of the get, but that still ended up trying and failing to insert.

For context, this is:

  • AWS lambda function (Python 3.9 runtime)
  • sqlalchemy 2.0.25
  • aurora_data_api 0.5.0
  • sqlalchemy_aurora_data_api-0.4.1.dist-info
  • aurora serverless v1 postgres engine v11.21

Is anyone else experiencing this?

Is there anything I can try, to avoid this problem?

Flask-SQLAlchemy integration

Trying to use this with Flask-SQLAlchemy:

    # configure options for create_engine
    engine_opts = app.config.get("SQLALCHEMY_ENGINE_OPTIONS", {})

    if app.config.get("AURORA_DATA_API_ENABLED"):
        # configure sqlalchemy-aurora-data-api
        rds_secret_arn = app.config.get("RDS_SECRET_ARN")
        aurora_cluster_arn = app.config.get("AURORA_CLUSTER_ARN")
        db_name = app.config.get("DATABASE_NAME")
        conn_url = f"postgresql+auroradataapi://:@/{db_name}"
        app.config["DATABASE_URL"] = conn_url

        # augment connect_args
        connect_args = engine_opts.get("connect_args", {})
        connect_args["aurora_cluster_arn"] = aurora_cluster_arn
        connect_args["secret_arn"] = rds_secret_arn
        engine_opts["connect_args"] = connect_args
    app.config["SQLALCHEMY_ENGINE_OPTIONS"] = engine_opts

Getting this error:

    "errorMessage": "(psycopg2.ProgrammingError) invalid dsn: invalid connection option \"aurora_cluster_arn\"\n\n(Background on this error at: http://sqlalche.me/e/f405)",
    "errorType": "ProgrammingError",
    "stackTrace": [
        "  File \"/var/task/TEMPLATE/commands.py\", line 55, in init_handler\n    from TEMPLATE.app import app\n",
        "  File \"/var/task/TEMPLATE/app.py\", line 4, in <module>\n    app = create_app()\n",
        "  File \"/var/task/TEMPLATE/create_app.py\", line 36, in create_app\n    configure_database(app)\n",
        "  File \"/var/task/TEMPLATE/create_app.py\", line 108, in configure_database\n    raise ex\n",
        "  File \"/var/task/TEMPLATE/create_app.py\", line 105, in configure_database\n    db.session.execute(\"SELECT 1\").scalar()\n",
        "  File \"/var/task/sqlalchemy/orm/scoping.py\", line 162, in do\n    return getattr(self.registry(), name)(*args, **kwargs)\n",
        "  File \"/var/task/aws_xray_sdk/ext/sqlalchemy/util/decorators.py\", line 61, in wrapper\n    res = func(*args, **kw)\n",
        "  File \"/var/task/sqlalchemy/orm/session.py\", line 1268, in execute\n    return self._connection_for_bind(bind, close_with_result=True).execute(\n",
        "  File \"/var/task/sqlalchemy/orm/session.py\", line 1130, in _connection_for_bind\n    engine, execution_options\n",
        "  File \"/var/task/sqlalchemy/orm/session.py\", line 431, in _connection_for_bind\n    conn = bind._contextual_connect()\n",
        "  File \"/var/task/sqlalchemy/engine/base.py\", line 2242, in _contextual_connect\n    self._wrap_pool_connect(self.pool.connect, None),\n",
        "  File \"/var/task/sqlalchemy/engine/base.py\", line 2280, in _wrap_pool_connect\n    e, dialect, self\n",
        "  File \"/var/task/sqlalchemy/engine/base.py\", line 1547, in _handle_dbapi_exception_noconnection\n    util.raise_from_cause(sqlalchemy_exception, exc_info)\n",
        "  File \"/var/task/sqlalchemy/util/compat.py\", line 398, in raise_from_cause\n    reraise(type(exception), exception, tb=exc_tb, cause=cause)\n",
        "  File \"/var/task/sqlalchemy/util/compat.py\", line 152, in reraise\n    raise value.with_traceback(tb)\n",
        "  File \"/var/task/sqlalchemy/engine/base.py\", line 2276, in _wrap_pool_connect\n    return fn()\n",
        "  File \"/var/task/sqlalchemy/pool/base.py\", line 363, in connect\n    return _ConnectionFairy._checkout(self)\n",
        "  File \"/var/task/sqlalchemy/pool/base.py\", line 760, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n",
        "  File \"/var/task/sqlalchemy/pool/base.py\", line 492, in checkout\n    rec = pool._do_get()\n",
        "  File \"/var/task/sqlalchemy/pool/impl.py\", line 139, in _do_get\n    self._dec_overflow()\n",
        "  File \"/var/task/sqlalchemy/util/langhelpers.py\", line 68, in __exit__\n    compat.reraise(exc_type, exc_value, exc_tb)\n",
        "  File \"/var/task/sqlalchemy/util/compat.py\", line 153, in reraise\n    raise value\n",
        "  File \"/var/task/sqlalchemy/pool/impl.py\", line 136, in _do_get\n    return self._create_connection()\n",
        "  File \"/var/task/sqlalchemy/pool/base.py\", line 308, in _create_connection\n    return _ConnectionRecord(self)\n",
        "  File \"/var/task/sqlalchemy/pool/base.py\", line 437, in __init__\n    self.__connect(first_connect_check=True)\n",
        "  File \"/var/task/sqlalchemy/pool/base.py\", line 639, in __connect\n    connection = pool._invoke_creator(self)\n",
        "  File \"/var/task/sqlalchemy/engine/strategies.py\", line 114, in connect\n    return dialect.connect(*cargs, **cparams)\n",
        "  File \"/var/task/sqlalchemy/engine/default.py\", line 482, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n",
        "  File \"/opt/python/psycopg2/__init__.py\", line 125, in connect\n    dsn = _ext.make_dsn(dsn, **kwargs)\n",
        "  File \"/opt/python/psycopg2/extensions.py\", line 174, in make_dsn\n    parse_dsn(dsn)\n"
    ]
}
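One observation (a guess, not a confirmed diagnosis): the traceback bottoms out in psycopg2, which suggests Flask-SQLAlchemy never saw the auroradataapi URL. Flask-SQLAlchemy reads SQLALCHEMY_DATABASE_URI, not DATABASE_URL:

# Hypothetical fix for the snippet above: set the key Flask-SQLAlchemy uses.
app.config["SQLALCHEMY_DATABASE_URI"] = conn_url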

Bug: Postgres Numeric type fails

First, thank you for contributing this package.

Using a Numeric(asdecimal=True) causes the following error:

Traceback (most recent call last):
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/sql/type_api.py", line 497, in _cached_result_processor
    return dialect._type_memos[self][coltype]
KeyError: <class 'str'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 988, in execute
    return meth(self, multiparams, params)
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 287, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1107, in _execute_clauseelement
    distilled_params,
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1272, in _execute_context
    result = context.get_result_proxy()
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 1126, in get_result_proxy
    return result.ResultProxy(self)
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 720, in __init__
    self._init_metadata()
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 752, in _init_metadata
    self._metadata = ResultMetaData(self, cursor_description)
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 240, in __init__
    textual_ordered,
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 418, in _merge_cursor_description
    for idx, (key, name, obj, type_) in enumerate(result_columns)
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 418, in <listcomp>
    for idx, (key, name, obj, type_) in enumerate(result_columns)
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 1089, in get_result_processor
    return type_._cached_result_processor(self.dialect, coltype)
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/sql/type_api.py", line 503, in _cached_result_processor
    d[coltype] = rp = d["impl"].result_processor(dialect, coltype)
  File "/home/USER/.pyenv/versions/APP/lib/python3.7/site-packages/sqlalchemy/sql/sqltypes.py", line 666, in result_processor
    "storage." % (dialect.name, dialect.driver)
AttributeError: 'AuroraPostgresDataAPIDialect' object has no attribute 'driver'

I didn't include a test case due to the simplicity of the situation. You should be able to add this field to your existing test cases.

Note that this error does not occur with asdecimal=False.

Support non-String ARRAYs in Postgresql dialect

Right now this project only supports string arrays. However, postgresql supports arrays of many other types [1].

A demonstrative example of this behavior is below:

import enum

from sqlalchemy import create_engine, Column, Enum, Integer, String
from sqlalchemy.dialects.postgresql import JSONB, ARRAY
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class SockColor(enum.Enum):
    red = 1
    green = 2
    black = 3


class User(Base):
    __tablename__ = "user"

    id = Column(Integer, primary_key=True)
    name = Column(String)
    friends = Column(ARRAY(String))
    lucky_numbers = Column(ARRAY(Integer))  # this is not okay
    docs = Column(ARRAY(JSONB))  # this is not okay
    socks = Column(ARRAY(Enum(SockColor)))  # this is not okay


class TestPostgresDialect:
    engine = create_engine(
        "postgresql+auroradataapi://:@/TEST_DB_NAME",
        connect_args=dict(
            aurora_cluster_arn="TEST_CLUSTER_ARN", secret_arn="TEST_SECRET_ARN"
        ),
    )

    def test_orm(self):
        friends = ["Scarlett O'Hara", 'Ada "Hacker" Lovelace']
        lucky_numbers = [2, 3, 5, 7, 11, 13, 17, 19]
        docs = [
            {"fizz": [1, 2, 3]},
            {"buzz": [4, 5, 6]},
            {"bazz": [7, 8, 9]},
        ]
        socks = [SockColor.red, SockColor.red, SockColor.green]
        Base.metadata.create_all(self.engine)

        ed_user = User(
            name="ed",
            friends=friends,
            lucky_numbers=lucky_numbers,
            docs=docs,
            socks=socks,
        )

        Session = sessionmaker(bind=self.engine)
        session = Session()

        # clear data table
        session.query(User).delete()
        session.commit()

        # add record for test user
        session.add(ed_user)
        session.commit() # <--- test breaks here

        # query user record that we just inserted
        u = session.query(User).filter(User.name.like("%ed")).first()

        self.assertEqual(u.friends, friends)
        self.assertEqual(u.lucky_numbers, lucky_numbers)
        self.assertEqual(u.docs, docs)
        self.assertEqual(u.socks, socks)

Expected Result

Test should pass

Actual Result

value = [2, 3, 5, 7, 11, 13, ...]

    def process(value):
        # FIXME: escape strings properly here
>       return "\v".join(value) if isinstance(value, list) else value
E       sqlalchemy.exc.StatementError: (builtins.TypeError) sequence item 0: expected str instance, int found
E       [SQL: INSERT INTO "user" (name, friends, lucky_numbers, docs, socks) VALUES (:name, string_to_array(:friends, :string_to_array_1), string_to_array(:lucky_numbers, :string_to_array_2), string_to_array(:docs, :string_to_array_3), string_to_array(:socks, :string_to_array_4)) RETURNING "user".id]
E       [parameters: [{'lucky_numbers': [2, 3, 5, 7, 11, 13, 17, 19], 'docs': [{'fizz': [1, 2, 3]}, {'buzz': [4, 5, 6]}, {'bazz': [7, 8, 9]}], 'name': 'ed', 'friends': ["Scarlett O'Hara", 'Ada "Hacker" Lovelace'], 'socks': [<SockColor.red: 1>, <SockColor.red: 1>, <SockColor.green: 2>]}]]

Inserting just one of the un-supported array types produces similar error messages:

(builtins.TypeError) sequence item 0: expected str instance, SockColor found
(builtins.TypeError) sequence item 0: expected str instance, dict found

Bug: NotImplementedError() when I use MySQL

Problem

from sqlalchemy import create_engine

cluster_arn = 'arn:aws:rds:ap-northeast-1:1234567:cluster:XXXXXXXXXXX'
secret_arn = 'arn:aws:secretsmanager:ap-northeast-1:1234567:secret:rds-db-credentials/zzzzzzzzzzzzz'


def sql():
    engine = create_engine('mysql+auroradataapi://:@/XXXXXXXXXXX',
                           echo=True,
                           connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn))

    with engine.connect() as conn:
        for result in conn.execute("select * from user"):
            print(result)

fails with NotImplementedError in sqlalchemy\dialects\mysql\base.py as follows

  File "<string>", line 1, in <module>
  File "D:\Hyuma\test_aurora_sqlalchemy.py", line 25, in sql
    with engine.connect() as conn:
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\engine\base.py", line 2206, in connect
    return self._connection_cls(self, **kwargs)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\engine\base.py", line 103, in __init__
    else engine.raw_connection()
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\engine\base.py", line 2306, in raw_connection
    self.pool.unique_connection, _connection
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\engine\base.py", line 2275, in _wrap_pool_connect
    return fn()
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\pool\base.py", line 303, in unique_connection
    return _ConnectionFairy._checkout(self)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\pool\base.py", line 760, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\pool\base.py", line 492, in checkout
    rec = pool._do_get()
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\pool\impl.py", line 139, in _do_get
    self._dec_overflow()
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\util\langhelpers.py", line 68, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\util\compat.py", line 153, in reraise
    raise value
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\pool\impl.py", line 136, in _do_get
    return self._create_connection()
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\pool\base.py", line 308, in _create_connection
    return _ConnectionRecord(self)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\pool\base.py", line 437, in __init__
    self.__connect(first_connect_check=True)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\pool\base.py", line 649, in __connect
    ).exec_once_unless_exception(self.connection, self)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\event\attr.py", line 314, in exec_once_unless_exception
    self._exec_once_impl(True, *args, **kw)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\event\attr.py", line 285, in _exec_once_impl
    self(*args, **kw)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\event\attr.py", line 322, in __call__
    fn(*args, **kw)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\util\langhelpers.py", line 1485, in go
    return once_fn(*arg, **kw)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\engine\strategies.py", line 199, in first_connect
    dialect.initialize(c)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\dialects\mysql\base.py", line 2374, in initialize
    self._connection_charset = self._detect_charset(connection)
  File "C:\Users\hyuma\Anaconda3\envs\general\lib\site-packages\sqlalchemy\dialects\mysql\base.py", line 2784, in _detect_charset
    raise NotImplementedError()
NotImplementedError

My Environment:

  • Windows 10 with Python 3.7.3 64-bit
  • sqlalchemy-aurora-data-api==0.1.2
  • SQLAlchemy==1.3.10
  • aurora-data-api==0.1.1
  • boto3==1.10.5
  • botocore==1.13.5
  • mysqlclient==1.4.4

Support for Flask-SQLAlchemy

Hey, this is an awesome library and works flawlessly with SQLAlchemy.
But if you could also give an example of how to use this with Flask-SQLAlchemy, it would be very helpful.
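Until a documented example lands, a minimal sketch (assuming Flask-SQLAlchemy's standard SQLALCHEMY_ENGINE_OPTIONS config key; the ARN values are placeholders):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql+auroradataapi://:@/my_db_name"
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
    "connect_args": {
        "aurora_cluster_arn": "arn:aws:rds:...",     # placeholder
        "secret_arn": "arn:aws:secretsmanager:...",  # placeholder
    }
}
db = SQLAlchemy(app)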

Use alembic with sqlalchemy-aurora-data-api

Hey!

Is it possible to use alembic migration with this tool? I have an Aurora serverless cluster with generated secret, but serverless clusters are not public. It would be a huge step to migrate my db using alembic and data api.

Thanks, Tamas
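It should be possible in principle, since Alembic only needs a SQLAlchemy Engine. A hypothetical env.py fragment (names like context and target_metadata follow Alembic's standard template; the ARN values are placeholders):

from alembic import context
from sqlalchemy import create_engine

target_metadata = None  # or your models' MetaData, per Alembic's template

def run_migrations_online():
    connectable = create_engine(
        "postgresql+auroradataapi://:@/my_db_name",
        connect_args={
            "aurora_cluster_arn": "arn:aws:rds:...",     # placeholder
            "secret_arn": "arn:aws:secretsmanager:...",  # placeholder
        },
    )
    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()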

continueAfterTimeout support

Is there a way to add a continue_after_timeout flag? Something like:

engine = create_engine(
    f'postgresql+auroradataapi://:@/{db.name}',
    echo=False,
    connect_args={
        'aurora_cluster_arn': aurora_cluster_arn,
        'secret_arn': aurora_secret_arn,
        'continue_after_timeout': True
    },
    future=True,
)

SQLAlchemy Version 2.0.0 breaks reflected tables

I am having an issue since SQLAlchemy was updated yesterday as part of dependency resolution; since then it appears to install version 2.0.0 of the main library:

sqlalchemy-2.0.0 
sqlalchemy-aurora-data-api-0.4.1 

vs what it was doing a day ago

sqlalchemy-1.4.46 
sqlalchemy-aurora-data-api-0.4.1 

After this we have been forced to update from the old table reflection method to the new one.

OLD

metadata = MetaData(bind=None)
firm_table = Table(
    'firm',
    metadata,
    autoload=True,
    autoload_with=engine
)

NEW (from what I could gather from the docs):

metadata = MetaData()
metadata.reflect(engine)
firm_table = Table(
     'firm',
     metadata,
     autoload_with=engine
 )

However, it's causing this long error:

chalice local --host=0.0.0.0 --no-autoreload
Found credentials in shared credentials file: ~/.aws/credentials
Traceback (most recent call last):
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1410, in execute
    meth = statement._execute_on_connection
AttributeError: 'str' object has no attribute '_execute_on_connection'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/helloworld/venv/lib/python3.8/site-packages/chalice/cli/__init__.py", line 636, in main
    return cli(obj={})
  File "/root/helloworld/venv/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/root/helloworld/venv/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/root/helloworld/venv/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/root/helloworld/venv/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/root/helloworld/venv/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/root/helloworld/venv/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/root/helloworld/venv/lib/python3.8/site-packages/chalice/cli/__init__.py", line 135, in local
    run_local_server(factory, host, port, stage)
  File "/root/helloworld/venv/lib/python3.8/site-packages/chalice/cli/__init__.py", line 154, in run_local_server
    server = create_local_server(factory, host, port, stage)
  File "/root/helloworld/venv/lib/python3.8/site-packages/chalice/cli/__init__.py", line 143, in create_local_server
    app_obj = config.chalice_app
  File "/root/helloworld/venv/lib/python3.8/site-packages/chalice/config.py", line 141, in chalice_app
    app = v()
  File "/root/helloworld/venv/lib/python3.8/site-packages/chalice/cli/factory.py", line 277, in load_chalice_app
    app = importlib.import_module('app')
  File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/root/helloworld/app.py", line 4, in <module>
    from chalicelib import firm_private, firm_public, common, solicitation_public, award_private, award_public, comm_private, user_private
  File "/root/helloworld/chalicelib/firm_private.py", line 1, in <module>
    from chalicelib.rei_utils import *
  File "/root/helloworld/chalicelib/rei_utils.py", line 18, in <module>
    metadata.reflect(engine)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/sql/schema.py", line 5452, in reflect
    with inspection.inspect(bind)._inspection_context() as insp:
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/inspection.py", line 111, in inspect
    ret = reg(subject)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/engine/reflection.py", line 304, in _engine_insp
    return Inspector._construct(Inspector._init_engine, bind)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/engine/reflection.py", line 237, in _construct
    init(self, bind)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/engine/reflection.py", line 248, in _init_engine
    engine.connect().close()
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3245, in connect
    return self._connection_cls(self)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 145, in __init__
    self._dbapi_connection = engine.raw_connection()
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3269, in raw_connection
    return self.pool.connect()
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 452, in connect
    return _ConnectionFairy._checkout(self)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 1255, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
    rec = pool._do_get()
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 169, in _do_get
    self._dec_overflow()
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 166, in _do_get
    return self._create_connection()
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
    return _ConnectionRecord(self)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 678, in __init__
    self.__connect()
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 914, in __connect
    pool.dispatch.connect.for_modify(
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/event/attr.py", line 473, in _exec_w_sync_on_first_run
    self(*args, **kw)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/event/attr.py", line 487, in __call__
    fn(*args, **kw)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 1912, in go
    return once_fn(*arg, **kw)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/engine/create.py", line 746, in first_connect
    dialect.initialize(c)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/dialects/mysql/base.py", line 2748, in initialize
    self._connection_charset = self._detect_charset(connection)
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy_aurora_data_api/__init__.py", line 138, in _detect_charset
    return connection.execute("SHOW VARIABLES LIKE 'character_set_client'").fetchone()[1]
  File "/root/helloworld/venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1412, in execute
    raise exc.ObjectNotExecutableError(statement) from err
sqlalchemy.exc.ObjectNotExecutableError: Not an executable object: "SHOW VARIABLES LIKE 'character_set_client'"

Any ideas as to why it fails now? Are we performing the table reflections incorrectly or is there some kind of library issue with the new version?

Docs:
https://docs.sqlalchemy.org/en/20/core/reflection.html#reflecting-all-tables-at-once
https://docs.sqlalchemy.org/en/14/changelog/migration_20.html
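For what it's worth, the last frame of the traceback points into this package: SQLAlchemy 2.0 no longer executes bare strings, so the _detect_charset call would need wrapping in text(). A sketch of the 2.0-style call (not a released fix):

from sqlalchemy import text

# 2.0-compatible form of the failing statement in _detect_charset:
charset = connection.execute(
    text("SHOW VARIABLES LIKE 'character_set_client'")
).fetchone()[1]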
