
advanced-alchemy's Introduction

Advanced Alchemy


Check out the project documentation 📚 for more information.

About

A carefully crafted, thoroughly tested, optimized companion library for SQLAlchemy, offering features such as:

  • Sync and async repositories, featuring common CRUD and highly optimized bulk operations

  • Integration with major web frameworks including Litestar, Starlette, FastAPI, and Sanic.

  • Custom-built Alembic configuration and CLI with optional framework integration

  • Utility base classes with audit columns, primary keys, and utility functions

  • Optimized JSON types including a custom JSON type for Oracle.

  • Integrated support for UUID6 and UUID7 using uuid-utils (install with the uuid extra)

  • Pre-configured base classes with audit columns, UUID or Big Integer primary keys, and a sentinel column.

  • Synchronous and asynchronous repositories featuring:

    • Common CRUD operations for SQLAlchemy models
    • Bulk inserts, updates, upserts, and deletes with dialect-specific enhancements
    • Use of lambda_stmt where possible for improved query-building performance
    • Integrated counts, pagination, sorting, and filtering with LIKE, IN, and before/after date comparisons.
  • Tested support for multiple database backends including:

Usage

Installation

pip install advanced-alchemy

Important

Check out the installation guide in our official documentation!

Repositories

Advanced Alchemy includes a set of asynchronous and synchronous repository classes for easy CRUD operations on your SQLAlchemy models.

Click to expand the example
from advanced_alchemy.base import UUIDBase
from advanced_alchemy.filters import LimitOffset
from advanced_alchemy.repository import SQLAlchemySyncRepository
from sqlalchemy import create_engine
from sqlalchemy.orm import Mapped, sessionmaker


class User(UUIDBase):
    # you can optionally override the generated table name by manually setting it.
    __tablename__ = "user_account"  # type: ignore[assignment]
    email: Mapped[str]
    name: Mapped[str]


class UserRepository(SQLAlchemySyncRepository[User]):
    """User repository."""

    model_type = User


# use any compatible sqlalchemy engine.
engine = create_engine("duckdb:///:memory:")
session_factory = sessionmaker(engine, expire_on_commit=False)

# Initializes the database.
with engine.begin() as conn:
    User.metadata.create_all(conn)

with session_factory() as db_session:
    repo = UserRepository(session=db_session)
    # 1) Create multiple users with `add_many`
    bulk_users = [
        {"email": '[email protected]', 'name': 'Cody'},
        {"email": '[email protected]', 'name': 'Janek'},
        {"email": '[email protected]', 'name': 'Peter'},
        {"email": '[email protected]', 'name': 'Jacob'}
    ]
    objs = repo.add_many([User(**raw_user) for raw_user in bulk_users])
    db_session.commit()
    print(f"Created {len(objs)} new objects.")

    # 2) Select paginated data and total row count. Pass additional filters as kwargs.
    created_objs, total_objs = repo.list_and_count(LimitOffset(limit=10, offset=0), name="Cody")
    print(f"Selected {len(created_objs)} records out of a total of {total_objs}.")

    # 3) Let's remove the batch of records selected.
    deleted_objs = repo.delete_many([new_obj.id for new_obj in created_objs])
    print(f"Removed {len(deleted_objs)} records out of a total of {total_objs}.")

    # 4) Let's count the remaining rows
    remaining_count = repo.count()
    print(f"Found {remaining_count} remaining records after delete.")

For a full standalone example, see the sample here

Services

Advanced Alchemy includes an additional service class to make working with a repository easier. This class is designed to accept data as a dictionary or SQLAlchemy model, and it will handle the type conversions for you.

Here's the same example from above but using a service to create the data:
from advanced_alchemy.base import UUIDBase
from advanced_alchemy.filters import LimitOffset
from advanced_alchemy import SQLAlchemySyncRepository, SQLAlchemySyncRepositoryService
from sqlalchemy import create_engine
from sqlalchemy.orm import Mapped, sessionmaker


class User(UUIDBase):
    # you can optionally override the generated table name by manually setting it.
    __tablename__ = "user_account"  # type: ignore[assignment]
    email: Mapped[str]
    name: Mapped[str]


class UserRepository(SQLAlchemySyncRepository[User]):
    """User repository."""

    model_type = User


class UserService(SQLAlchemySyncRepositoryService[User]):
    """User repository."""

    repository_type = UserRepository


# use any compatible sqlalchemy engine.
engine = create_engine("duckdb:///:memory:")
session_factory = sessionmaker(engine, expire_on_commit=False)

# Initializes the database.
with engine.begin() as conn:
    User.metadata.create_all(conn)

with session_factory() as db_session:
    service = UserService(session=db_session)
    # 1) Create multiple users with `add_many`
    objs = service.create_many([
        {"email": '[email protected]', 'name': 'Cody'},
        {"email": '[email protected]', 'name': 'Janek'},
        {"email": '[email protected]', 'name': 'Peter'},
        {"email": '[email protected]', 'name': 'Jacob'}
    ])
    print(objs)
    print(f"Created {len(objs)} new objects.")

    # 2) Select paginated data and total row count. Pass additional filters as kwargs.
    created_objs, total_objs = service.list_and_count(LimitOffset(limit=10, offset=0), name="Cody")
    print(f"Selected {len(created_objs)} records out of a total of {total_objs}.")

    # 3) Let's remove the batch of records selected.
    deleted_objs = service.delete_many([new_obj.id for new_obj in created_objs])
    print(f"Removed {len(deleted_objs)} records out of a total of {total_objs}.")

    # 4) Let's count the remaining rows
    remaining_count = service.count()
    print(f"Found {remaining_count} remaining records after delete.")

Web Frameworks

Advanced Alchemy works with nearly all Python web frameworks. Several helpers for popular libraries are included, and additional PRs to support others are welcome.

Litestar

Advanced Alchemy is the official SQLAlchemy integration for Litestar.

In addition to installing with pip install advanced-alchemy, it can also be installed as a Litestar extra with pip install litestar[sqlalchemy].

Litestar Example
from litestar import Litestar
from litestar.plugins.sqlalchemy import SQLAlchemyPlugin, SQLAlchemyAsyncConfig
# alternatively...
# from advanced_alchemy.extensions.litestar.plugins import SQLAlchemyPlugin
# from advanced_alchemy.extensions.litestar.plugins.init.config import SQLAlchemyAsyncConfig

alchemy = SQLAlchemyPlugin(
  config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"),
)
app = Litestar(plugins=[alchemy])

For a full Litestar example, check here

FastAPI

FastAPI Example
from fastapi import FastAPI

from advanced_alchemy.config import SQLAlchemyAsyncConfig
from advanced_alchemy.extensions.starlette import StarletteAdvancedAlchemy

app = FastAPI()
alchemy = StarletteAdvancedAlchemy(
    config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"), app=app,
)

For a full FastAPI example, see here

Starlette

Pre-built Example Apps
from starlette.applications import Starlette

from advanced_alchemy.config import SQLAlchemyAsyncConfig
from advanced_alchemy.extensions.starlette import StarletteAdvancedAlchemy

app = Starlette()
alchemy = StarletteAdvancedAlchemy(
    config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"), app=app,
)

Sanic

Pre-built Example Apps
from sanic import Sanic
from sanic_ext import Extend

from advanced_alchemy.config import SQLAlchemyAsyncConfig
from advanced_alchemy.extensions.sanic import SanicAdvancedAlchemy

app = Sanic("AlchemySanicApp")
alchemy = SanicAdvancedAlchemy(
    sqlalchemy_config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"),
)
Extend.register(alchemy)

Contributing

All Jolt projects are community-centered and open to contributions of any size.

Before contributing, please review the contribution guide.

If you have any questions, reach out to us on Discord, our org-wide GitHub discussions page, or the project-specific GitHub discussions page.


A Jolt Organization Project

advanced-alchemy's People

Contributors

abdulhaq-e, alc-alc, cbscsm, cemrehancavdar, cofin, darinkishore, gazorby, geeshta, guacs, jacobcoffee, mbeijen, peterschutt, provinzkraut, rseeley, sfermigier, tspnn, wer153



advanced-alchemy's Issues

ConflictError is a misleading name

Description

I get confused every time I see a ConflictError. This is usually the result of an uninitialised field, not of any kind of "conflict".

Per the source code:

class ConflictError(RepositoryError):
    """Data integrity error."""

I believe ConflictError should be renamed IntegrityError or DataIntegrityError.

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

advanced-alchemy 0.6.1

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)

Funding

  • If you would like to see an issue prioritized, make a pledge towards it!
  • We receive the pledge once the issue is completed & verified
Fund with Polar

Enhancement: `upsert_many` should use a `merge` statement when possible.

Summary

Implement the Merge operation for Oracle and Postgres 15+
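For reference, an upsert expressed as a MERGE statement on PostgreSQL 15+ might look like the following (a hedged sketch; the table and columns are hypothetical, not the library's generated SQL):

```sql
MERGE INTO user_account AS target
USING (VALUES ('cody@example.com', 'Cody')) AS source (email, name)
ON target.email = source.email
WHEN MATCHED THEN
    UPDATE SET name = source.name
WHEN NOT MATCHED THEN
    INSERT (email, name) VALUES (source.email, source.name);
```

Oracle supports a similar MERGE syntax, which is presumably why the issue scopes the enhancement to those two backends.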

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: select queries on repositories are not always idempotent

Description

When executing twice the same select query with a whereclause on a relationship, the second query gives an empty result.

Ex select(State).where(State.country == usa) (see below for complete reproducible code).

URL to code causing the issue

No response

MCVE

from advanced_alchemy.base import UUIDBase
from advanced_alchemy.repository import SQLAlchemySyncRepository
from sqlalchemy import create_engine, select, ForeignKey, func
from sqlalchemy.orm import Mapped, Session, sessionmaker, mapped_column, relationship


class Country(UUIDBase):
    name: Mapped[str]


class State(UUIDBase):
    name: Mapped[str]
    country_id: Mapped[str] = mapped_column(ForeignKey(Country.id))

    country = relationship(Country)


class USStateRepository(SQLAlchemySyncRepository[State]):
    model_type = State


engine = create_engine("sqlite:///:memory:", future=True, echo=True)
# engine = create_engine("postgresql://localhost/sandbox", future=True)
session_factory: sessionmaker[Session] = sessionmaker(engine, expire_on_commit=False)


def run_script() -> None:
    with engine.begin() as conn:
        State.metadata.create_all(conn)

    with session_factory() as db_session:
        usa = Country(name="United States of America")
        france = Country(name="France")
        db_session.add(usa)
        db_session.add(france)

        california = State(name="California", country=usa)
        oregon = State(name="Oregon", country=usa)
        ile_de_france = State(name="Île-de-France", country=france)

        repo = USStateRepository(session=db_session)
        repo.add(california)
        repo.add(oregon)
        repo.add(ile_de_france)
        db_session.commit()

        print("\n" + "-" * 80 + "\n")

        # Using only the ORM, this works fine:

        stmt = select(State).where(State.country_id == usa.id).with_only_columns(func.count())
        count = db_session.execute(stmt).scalar_one()
        assert count == 2, f"Expected 2, got {count}"
        count = db_session.execute(stmt).scalar_one()
        assert count == 2, f"Expected 2, got {count}"

        stmt = select(State).where(State.country == usa).with_only_columns(func.count())
        count = db_session.execute(stmt).scalar_one()
        assert count == 2, f"Expected 2, got {count}"
        count = db_session.execute(stmt).scalar_one()
        assert count == 2, f"Expected 2, got {count}"

        print("\n" + "-" * 80 + "\n")

        # Using the repository, this works:
        stmt1 = select(State).where(State.country_id == usa.id)

        print("First query")
        count = repo.count(statement=stmt1)
        assert count == 2, f"Expected 2, got {count}"

        print("Second query")
        count = repo.count(statement=stmt1)
        assert count == 2, f"Expected 2, got {count}"

        print("\n" + "-" * 80 + "\n")

        # But this fails (only after the second query):
        stmt2 = select(State).where(State.country == usa)

        print("First query")
        count = repo.count(statement=stmt2)
        assert count == 2, f"Expected 2, got {count}"

        print("Second query")
        count = repo.count(statement=stmt2)
        assert count == 2, f"Expected 2, got {count}"

        # It also fails with
        states = repo.list(statement=stmt2)
        count = len(states)
        assert count == 2, f"Expected 2, got {count}"



if __name__ == "__main__":
    run_script()

Steps to reproduce

No response

Screenshots

No response

Logs

First query
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine SELECT count(state.id) AS count_1
FROM state
WHERE ? = state.country_id
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine [generated in 0.00005s] (<memory at 0x1072a13c0>,)
Second query
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine SELECT count(state.id) AS count_1
FROM state
WHERE ? = state.country_id
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine [cached since 0.0002423s ago] (None,)
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine ROLLBACK
Traceback (most recent call last):
  File "/Users/fermigier/projects/abilian-analytics/sandbox/debug_aa.py", line 97, in <module>
    run_script()
  File "/Users/fermigier/projects/abilian-analytics/sandbox/debug_aa.py", line 87, in run_script
    assert count == 2, f"Expected 2, got {count}"
AssertionError: Expected 2, got 0

Jolt Project Version

0.7.0

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: Add a `drop_all` function

Summary

There should be a way to drop all current tables in your database including the alembic_versioning table. This is really useful for rapid prototyping.

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: CollectionFilter returns all entries if values is empty

Description

Basically what the title says: when initializing a CollectionFilter with an empty list, the resulting query returns all entries instead of no entries, which is what I'd expect.
I know it's a special case that needs to be handled differently, but this seems like very inconsistent behaviour.
Here's the specific line which leads to that bug (currently in advanced-alchemy project, although the code is unchanged since it was part of litestar)

Issue was first raised on the litestar discord
The bug is replicated in the sync repo as well

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

master

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug: `touch_updated_timestamp` event handler gets registered unknowingly via unrelated import

Description

I'm making use of the SQLAlchemyPlugin for Litestar without making use of the provided bases (UUIDBase, BigIntBase). The plugin gets imported from advanced_alchemy.extensions.litestar.plugins

Through an internal chain of imports, advanced_alchemy.base eventually gets loaded and registers the touch_updated_timestamp event handler, which in turn modified my own updated_at columns, which are timezone-naive, and caused flush operations to fail.

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

0.5.5

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: ChoicesFilter/BooleanFilter

Summary

It would be nice to have these filters.

Basic Example

def provide_boolean_filter(
    some_field: bool = Parameter(title="Boolean filter", query="Boolean", default=True),
) -> bool:
    return some_field

Drawbacks and Impact

No response

Unresolved questions

No response



Enhancement: Add BigQuery support to the repository

Summary

Add formal tests for BigQuery to the repository.

This enhancement is currently blocked by this issue

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: Incorrect datetime in Postgresql for AuditColumns class

Description

Hello,

I do not know if it's an issue or not.

In the AuditColumns class, when the created_at and updated_at fields are populated, the datetime is inserted into the database with the timezone configured in the database.

updated_at: Mapped[datetime] = mapped_column(
        DateTimeUTC(timezone=True),
        default=lambda: datetime.now(timezone.utc),
    )

I got the following entry in postgres:
2023-10-23 17:16:36.71099+02 with the timezone +2

What I expect to have is the following in the database 2023-10-23 15:16:36.71099

In Postgres, that can be achieved with the function timezone('utc', now()).
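One hedged workaround, assuming the goal is to store naive UTC values regardless of the server timezone, is a custom TypeDecorator that normalizes aware datetimes before binding (the class name here is hypothetical, not the library's DateTimeUTC):

```python
from datetime import datetime, timedelta, timezone

from sqlalchemy.types import DateTime, TypeDecorator


class NaiveUTCDateTime(TypeDecorator):
    """Store timezone-aware datetimes as naive UTC (hypothetical workaround)."""

    impl = DateTime(timezone=False)
    cache_ok = True

    def process_bind_param(self, value, dialect):
        # Normalize aware datetimes to UTC and strip tzinfo before storage,
        # so the server timezone can no longer influence the stored value.
        if value is not None and value.tzinfo is not None:
            value = value.astimezone(timezone.utc).replace(tzinfo=None)
        return value


# 17:16:36 at UTC+2 becomes 15:16:36 naive UTC.
aware = datetime(2023, 10, 23, 17, 16, 36, tzinfo=timezone(timedelta(hours=2)))
naive = NaiveUTCDateTime().process_bind_param(aware, dialect=None)
```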

URL to code causing the issue

No response

MCVE

from advanced_alchemy.base import UUIDAuditBase as TimestampedDatabaseModel

class Customer(TimestampedDatabaseModel):

    """Customer Model."""

    __tablename__ = "customer"  # type: ignore[assignment]
    __table_args__ = {"comment": "Customer for data retrieval"}

Steps to reproduce

1. Create a class that inherits AuditColumns
2. Create a migrations schema
3. Create a new entity through this model
4. The database entry got the timezone of the server instead of the timezone defined in the AuditColumns (UTC)

Screenshots

psql

Logs

No response

Jolt Project Version

advanced-alchemy 0.3.0

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Unintuitive API

Description

I have recently been bitten by the two following issues, in sequence:

  • With repository.get(): in Python, get is one of the most commonly used methods, and it's easy to assume that it means "return something if found, or None (or some other default value) otherwise". In advanced-alchemy's case, it raises an exception when nothing is found.

  • With repository.get_one_or_none(), it's easy to assume that it has a similar signature to get. But no: one has to provide the primary key by name (e.g. get(id) vs. get_one_or_none(id=id)). This can also lead to confusion.

In other words, there is a lack of consistency between:

  • repository.get() and dict.get()
  • repository.get() and repository.get_one_or_none()

It's probably too late to change the API now, but I suggest anyway:

  • renaming get to get_one, and introducing a get method similar to the current version of get that returns None instead of raising an exception.
  • renaming the current get_* methods using a different verb ("fetch"?, "retrieve"? ...)

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

0.6.1.

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: Add Quart

Summary

Quart is Flask made async: https://quart.palletsprojects.com/en/latest/

Wondering if there is any interest in adding it as a potential target?

Basic Example

No response

Drawbacks and Impact

More development maintenance

Unresolved questions

What other targets could we add?



Bug: Impossible to use unique() method for results after executing

Description

I am encountering a sqlalchemy.exc.InvalidRequestError when using the lazy='joined' loading strategy across my models. The error states: "The unique() method must be invoked on this Result, as it contains results that include joined eager loads against collections." However, applying unique() is not feasible in my current implementation; there is no obvious way to make the results unique() when using lazy='joined'.

URL to code causing the issue

No response

MCVE

# Here you would put a simplified version of your code that still produces the error.
# For example:

from sqlalchemy.orm import joinedload
from myapp.models import Parent, Child

# Example query that leads to the error
session.execute(select(Parent).options(joinedload(Parent.children))).scalars()

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

0.0.9

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: Consider supporting UUIDv7 and/or ULID

Summary

Determine if adding a ULID data type is feasible at this point.
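For context, a ULID is a 48-bit millisecond timestamp plus 80 bits of randomness, rendered as 26 Crockford base32 characters; a minimal stdlib sketch of the encoding (an illustration, not a proposed library API):

```python
import os
import time

# Crockford base32 alphabet: ASCII-sorted, so string order matches numeric order.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"


def ulid() -> str:
    """Minimal ULID: 48-bit ms timestamp + 80 random bits, as 26 base32 chars."""
    ts = int(time.time() * 1000)                  # 48-bit millisecond timestamp
    rand = int.from_bytes(os.urandom(10), "big")  # 80 bits of randomness
    value = (ts << 80) | rand
    chars = []
    for _ in range(26):                           # 26 * 5 = 130 bits
        chars.append(ALPHABET[value & 31])
        value >>= 5
    return "".join(reversed(chars))
```

Because the timestamp occupies the most significant bits, ULIDs sort lexicographically by creation time, which is the main appeal over random UUIDs for primary keys.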

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



Enhancement: litestar `create_all` config attribute

Summary

We can probably abstract the:

    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)

boilerplate away.

Basic Example

db_config = SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///todo.sqlite", create_all=metadata)

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: If a column has positional name model_from_dict can't find column

Description

This is our Model definition for some reason :)

class InventoryModel(BigIntBase):
    __tablename__ = "MV_ERP_INVENTORY"

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    building_name: Mapped[str] = mapped_column(
        "BUILDINGNAME", String(45), nullable=False
    )
# .../advanced_alchemy/repository/_util.py

def model_from_dict(model: ModelT, **kwargs: Any) -> ModelT:
    """Return ORM Object from Dictionary."""
    data = {}
    for column in model.__table__.columns:
        column_val = kwargs.get(column.name, None)
        if column_val is not None:
            data[column.name] = column_val
    return model(**data) 

Iterating over model.__table__.columns returns:

[ 
 Column('id', Integer(), table=<MV_ERP_INVENTORY>, primary_key=True, nullable=False),
 Column('BUILDINGNAME', String(length=45), table=<MV_ERP_INVENTORY>, nullable=False)
]

so column.name returns BUILDINGNAME for the building_name column.

So whenever we pass a dict like:

await self.update(
    data={"building_name": "some_building_name"},
    item_id=inventory_id,
    auto_commit=True,
)

As a workaround we tried using BUILDINGNAME as the key in the data dict, but as expected it throws other errors (invalid key for the model).

So changing model.__table__.columns to model.__mapper__.columns.keys() would yield building_name instead, and would build the model correctly for models whose columns have a positional name.

We can also create a pull request for this.

w/ @ysnbyzli

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

1. Define a model with one or more columns with positional name
2. Use repository.update(data={"column_with_positional_name": "updated_data"}) to update data
3. Debug through model_from_dict 
4. Observe that the column name comes from the positional name instead of the model attribute key
5. It won't update the column

Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

None

Jolt Project Version

advanced-alchemy = "^0.3.3"

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug(docs): Documentation changes cause Litestar docs to encounter buildtime errors

Description

When building docs, changes in AA are sometimes causing Litestar build steps to warn/fail.

A solution would be to set up a CI workflow that checks out the develop branch of Litestar and builds the docs against the latest AA codebase, failing if any build errors are encountered.

This way we can ensure (up?) downstream build success.

URL to code causing the issue

https://github.com/litestar-org/litestar/actions/runs/8206934469/job/22447123447?pr=3169

Bug: unsupported operand for lambda

Description

Using the latest version, I encounter the following error:

advanced_alchemy/repository/_async.py", line 1225, in _filter_by_where
    statement += lambda s: s.where(field == value)
TypeError: unsupported operand type(s) for +=: 'Select' and 'function'

    def _filter_by_where(
        self,
        statement: StatementLambdaElement,
        field_name: str | InstrumentedAttribute,
        value: Any,
    ) -> StatementLambdaElement:
        field = get_instrumented_attr(self.model_type, field_name)
        statement += lambda s: s.where(field == value)
        return statement

URL to code causing the issue

No response

MCVE

# Your MCVE code here

Steps to reproduce

1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

No response

Jolt Project Version

0.3.4

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: Create a custom encrypted field SQLAlchemy type.

Summary

Create a new encrypted field that supports multiple backends, including cryptography and tink
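A TypeDecorator skeleton for such a field might look like this (a hedged sketch; the reversible base64 transform below is a stand-in for illustration only, and a real backend would plug in cryptography or tink):

```python
import base64
from typing import Callable

from sqlalchemy.types import String, TypeDecorator


class EncryptedString(TypeDecorator):
    """String column transformed by pluggable encrypt/decrypt callables (sketch)."""

    impl = String
    cache_ok = True

    def __init__(self, encrypt: Callable[[str], str], decrypt: Callable[[str], str], **kw):
        super().__init__(**kw)
        self._encrypt = encrypt
        self._decrypt = decrypt

    def process_bind_param(self, value, dialect):
        # Transform on the way into the database.
        return self._encrypt(value) if value is not None else None

    def process_result_value(self, value, dialect):
        # Reverse the transform on the way out.
        return self._decrypt(value) if value is not None else None


# Stand-in transform for illustration only -- NOT encryption.
enc = lambda s: base64.b64encode(s.encode()).decode()
dec = lambda s: base64.b64decode(s.encode()).decode()

col_type = EncryptedString(encrypt=enc, decrypt=dec)
stored = col_type.process_bind_param("secret", dialect=None)
restored = col_type.process_result_value(stored, dialect=None)
```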

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: queries using cached argument values

Description

I've tried the following with both the get_one and get_one_or_none methods, in the context of a Litestar dev (local) environment.
The first query on my user table with an arbitrary kwarg fetches the correct result. Querying for a different user thereafter results in SQLAlchemy using a cached argument value. The log below might explain this better:

# initial query
# await user_repository.get_one(id_number="1234567890")

2024-01-16 09:38:45,451 INFO sqlalchemy.engine.Engine SELECT "user".name
FROM "user" 
WHERE "user".id_number = $1::VARCHAR
2024-01-16 09:38:45,453 INFO sqlalchemy.engine.Engine [generated in 0.00203s] ('1234567890',)
------
# second query
# await user_repository.get_one(id_number="789")

2024-01-16 09:41:39,580 INFO sqlalchemy.engine.Engine SELECT "user".name
FROM "user" 
WHERE "user".id_number = $1::VARCHAR
2024-01-16 09:41:39,581 INFO sqlalchemy.engine.Engine [cached since 174.1s ago] ('1234567890',)

This is across two separate requests with their own AsyncSession instances.

Overriding the method with my own select() resolves the issue.

I don't have an MCVE as this is all part of a larger project; I'll work on something.

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

0.6.2

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug: TestClient used in pytest fails with `KeyError: 'session_maker_class'`

Description

I've developed a failing test that exercises the Litestar example in this repo: main...sherbang:advanced-alchemy:litestar_test

URL to code causing the issue

https://github.com/sherbang/advanced-alchemy/tree/litestar_test

MCVE

# Your MCVE code here

Steps to reproduce

1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

/home/sherbang/devel/advanced-alchemy/tests/examples/test_litestar.py::test_create failed: test_client = <litestar.testing.client.sync_client.TestClient object at 0x7582d98e9c50>

    async def test_create(test_client: TestClient[Litestar]) -> None:
        author = AuthorCreate(name="foo")
    
        response = test_client.post(
            "/authors",
            json=author.model_dump(mode="json"),
        )
>       assert response.status_code == 200, response.text
E       AssertionError: Traceback (most recent call last):
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/middleware/exceptions/middleware.py", line 218, in __call__
E             await self.app(scope, receive, send)
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 82, in handle
E             response = await self._get_response_for_request(
E                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 134, in _get_response_for_request
E             return await self._call_handler_function(
E                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 154, in _call_handler_function
E             response_data, cleanup_group = await self._get_response_data(
E                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 191, in _get_response_data
E             cleanup_group = await parameter_model.resolve_dependencies(request, kwargs)
E                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/_kwargs/kwargs_model.py", line 394, in resolve_dependencies
E             await resolve_dependency(next(iter(batch)), connection, kwargs, cleanup_group)
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/_kwargs/dependencies.py", line 65, in resolve_dependency
E             value = await dependency.provide(**dependency_kwargs)
E                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/di.py", line 101, in __call__
E             value = self.dependency(**kwargs)
E                     ^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/advanced_alchemy/extensions/litestar/plugins/init/config/asyncio.py", line 193, in provide_session
E             session_maker = cast("Callable[[], AsyncSession]", state[self.session_maker_app_state_key])
E                                                                ~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/datastructures/state.py", line 90, in __getitem__
E             return self._state[key]
E                    ~~~~~~~~~~~^^^^^
E         KeyError: 'session_maker_class'
E         
E       assert 500 == 200
E        +  where 500 = <Response [500 Internal Server Error]>.status_code

tests/examples/test_litestar.py:28: AssertionError

Jolt Project Version

0.8.1

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)

Funding

  • If you would like to see an issue prioritized, make a pledge towards it!
  • We receive the pledge once the issue is completed & verified
Fund with Polar

Bug: UUIDPrimaryKey.id type is `Unknown | UUID` when uuid_utils is not installed.

Description

  • It's a type checking issue. I'm using pyright for type checking.
  • When uuid_utils is not installed, pyright can't resolve the type of uuid_utils.UUID, so it is treated as Unknown.
  • UUID_UTILS_INSTALLED = find_spec("uuid_utils")
    if UUID_UTILS_INSTALLED:
        from uuid_utils import UUID, uuid4, uuid6, uuid7
    else:
        from uuid import UUID, uuid4  # type: ignore[assignment]

        uuid6 = uuid4  # type: ignore[assignment]
        uuid7 = uuid4  # type: ignore[assignment]
  • The type of UUIDPrimaryKey.id: Mapped[UUID] is then inferred as Mapped[Unknown | uuid.UUID].
  • When I access the ORM's id field, it causes the type checking error reportUnknownMemberType.
error: Type of "id" is partially unknown
    Type of "id" is "Unknown | UUID" (reportUnknownMemberType)
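One common way to keep the annotation stable under type checkers (an assumption on my part, not the library's actual fix) is to pin the stdlib uuid types for static analysis while still preferring uuid_utils at runtime:

```python
# Hypothetical pattern: type checkers always see the stdlib uuid.UUID
# (a single, fully known type), while the runtime uses uuid_utils when
# it is available.
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from uuid import UUID, uuid4

    uuid6 = uuid4
    uuid7 = uuid4
else:
    try:
        from uuid_utils import UUID, uuid4, uuid6, uuid7
    except ImportError:
        from uuid import UUID, uuid4

        uuid6 = uuid4
        uuid7 = uuid4

# Either branch yields callables returning the UUID bound above.
print(isinstance(uuid7(), UUID))  # True
```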

URL to code causing the issue

No response

MCVE

from advanced_alchemy.base import UUIDAuditBase


class User(UUIDAuditBase):
    pass


def foo(user: User) -> None:
    user.id  # pyright error: reportUnknownMemberType

Steps to reproduce

  1. Install pyright (I'm using 1.1.350) and advanced-alchemy (>0.7.0).
  2. Run pyright type checking on the above code.

Screenshots

No response

Logs

No response

Jolt Project Version

0.7.3

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)

Funding

  • If you would like to see an issue prioritized, make a pledge towards it!
  • We receive the pledge once the issue is completed & verified
Fund with Polar

refactor: remove reliance on deprecated litestar utils

We recently refactored the way that litestar manages state within the ASGI connection scope.

Part of that was deprecation of the {get,set,delete}_litestar_scope_state() utility functions.

The reason for this deprecation is that the namespace we use in scope state, and the things that we store within it, are meant to be an implementation detail.

If plugins need to store state for their own operations, it is better to do this inside their own namespace to reduce coupling and future breakages.


Funding

  • If you would like to see an issue prioritized, make a pledge towards it!
  • We receive the pledge once the issue is completed & verified
Fund with Polar

Enhancement: Create a file field type for SQLAlchemy

Summary

Create a custom field type for storing file references. It should support multiple cloud backends and the local filesystem.
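One possible shape for this (purely a sketch of the idea, not a proposed API) is a TypeDecorator that persists a backend-qualified reference string such as "s3://bucket/key" or "file:///tmp/x" in a VARCHAR column:

```python
# Hypothetical sketch: store file references as "backend://path" strings.
# Class and attribute names are illustrative.
from sqlalchemy.types import String, TypeDecorator


class FileReference:
    def __init__(self, backend: str, path: str) -> None:
        self.backend = backend
        self.path = path

    def __str__(self) -> str:
        return f"{self.backend}://{self.path}"


class StoredFile(TypeDecorator):
    impl = String
    cache_ok = True

    def process_bind_param(self, value, dialect):
        # Serialize the reference to "backend://path" on the way in.
        return None if value is None else str(value)

    def process_result_value(self, value, dialect):
        # Rebuild the FileReference on the way out.
        if value is None:
            return None
        backend, _, path = value.partition("://")
        return FileReference(backend, path)


ref = FileReference("s3", "bucket/report.pdf")
print(str(ref))  # s3://bucket/report.pdf
```

Resolving the backend (boto3, fsspec, plain open(), ...) would then be the application's concern; the column only stores the reference.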

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response


Funding

  • If you would like to see an issue prioritized, make a pledge towards it!
  • We receive the pledge once the issue is completed & verified
Fund with Polar

Bug: RepositoryServices and Repositories have different list_and_count/count filter signatures

Description

Hi,

First of all, thanks for the package. I've encountered a mypy error with filters, as the list_and_count and count methods in *RepositoryService (*filters: FilterTypes) and *Repository (*filters: FilterTypes | ColumnElement[bool]) classes have different signatures.

Is there a specific reason to have a drift in signatures as they are only forwarding them? If not I can prepare a PR with a fix. Thanks.

URL to code causing the issue

No response

MCVE

filters: list[FilterTypes | ColumnElement[bool]] = []

repository = UserRepository(session=db_session)
repository.list_and_count(*filters) # OK
repository.count(*filters) # OK

service = UserService(session=db_session)
service.list_and_count(*filters) # Error, expects: *filters: FilterTypes
service.count(*filters) # Error, expects: *filters: FilterTypes
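The fix the reporter hints at is just widening the service signature to match the repository's. A minimal stand-alone sketch (class and type names are illustrative, not advanced-alchemy's actual definitions):

```python
# Sketch of aligning a forwarding service's *filters signature with the
# repository it wraps, so type checkers see identical accepted types.
from __future__ import annotations

from typing import Any, Union

from sqlalchemy import Column, Integer
from sqlalchemy.sql import ColumnElement

FilterTypes = Any  # stand-in for advanced_alchemy.filters.FilterTypes


class Repository:
    def count(self, *filters: Union[FilterTypes, ColumnElement[bool]]) -> int:
        # Toy implementation: just count the filters it received.
        return len(filters)


class Service:
    def __init__(self, repository: Repository) -> None:
        self.repository = repository

    # Same variadic annotation as the repository: no signature drift.
    def count(self, *filters: Union[FilterTypes, ColumnElement[bool]]) -> int:
        return self.repository.count(*filters)


col = Column("age", Integer)
svc = Service(Repository())
print(svc.count(col > 18))  # 1
```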

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

v0.5.5

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)

Funding

  • If you would like to see an issue prioritized, make a pledge towards it!
  • We receive the pledge once the issue is completed & verified
Fund with Polar
