🧭 GraphQL framework for SeaORM
License: Apache License 2.0
It's alright to generate code for a Poem web API as a proof of concept. But the code generator should be structured in a way that makes it easy to generate code for the various web APIs listed below:
I think we don't need to support all of them at this stage. Picking 2 would be enough.
Right now the generator crate depends on the discoverer crate.
Ideally, we should cut the dependency and only glue them together in the cli crate.
The reason being, we might want to utilize the generator without doing a schema discovery, e.g. 1) reusing existing SeaORM entities, or 2) when the schema comes from another source, say being transpiled from a Prisma schema.
Hey @karatakis, feel free to add your name and email inside Cargo.toml
(all core & root crates) under the authors section.
https://doc.rust-lang.org/cargo/reference/manifest.html#the-authors-field
Cheers!!
The expanded generator produces all the Rust code. It's easier to modify, but harder to maintain long term. In case a new version of the generator is released, you have to regenerate the whole project and apply your custom modifications again.
I saw the line above from the docs, SeaQL/seaql.github.io#41.
I agree we should generate the async-graphql types in expanded format. However, for SeaORM entities, compact format should be the default (of course, we should make it configurable: generating compact / expanded format).
I'm just brainstorming parts in seaography that are worth testing. I don't have a concrete plan on how and what to test. So, this is a discussion rather than an actionable test plan.
Parts that are worth testing:
async-graphql
types
sea_query::TableCreateStatement
proc_macro2::TokenStream
Feel free to edit the list. Comments wanted, @karatakis @tyt2y3
Currently the project supports only SQLite, with partial MySQL/PostgreSQL support.
The aim is to fully support MySQL and PostgreSQL to expand the use cases the tool can cover.
Complete the support for enumeration type generation and for the MySQL and PostgreSQL generators.
async-graphql supports an array of common types guarded by feature flags, e.g. chrono, time, uuid, decimal, etc.
In seaography, we should respect these feature flags as well, because we need these type crates to implement our internal types, while we can't assume all feature flags are enabled.
Lines 39 to 59 in f5a91c7
So, we can "pass on" the feature flags we receive in seaography to async-graphql, just like what we did below in SeaORM, where we "pass on" the with-* feature flags to sea-query and sqlx.
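A minimal sketch of what such a pass-on could look like in seaography's own Cargo.toml. The exact feature and crate names here are assumptions for illustration, not the project's actual manifest:

```toml
# Hypothetical: forward seaography's `with-*` flags to async-graphql and sea-orm,
# so enabling one seaography feature lights up the matching type support everywhere.
[features]
with-chrono = ["chrono", "async-graphql/chrono", "sea-orm/with-chrono"]
with-decimal = ["rust_decimal", "async-graphql/decimal", "sea-orm/with-rust_decimal"]

[dependencies]
chrono = { version = "0.4", optional = true }
rust_decimal = { version = "1", optional = true }
```

This mirrors how SeaORM forwards its own with-* flags to sea-query and sqlx.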
SeaORM added support for cursor-based pagination in 0.9 (SeaQL/sea-orm#822),
which I think is generally the preferred way to perform pagination in GraphQL.
I think we should handle either offset-based or cursor-based pagination in a single query (make them mutually exclusive).
And ideally we only allow cursoring on an indexed column.
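A std-only sketch of the mutual-exclusivity check described above. The input type and field names are assumptions for illustration, not the actual seaography API:

```rust
/// Hypothetical pagination input: a query may be offset-based (limit/page)
/// OR cursor-based (cursor), but never both at once.
#[derive(Debug)]
struct PaginationInput {
    limit: Option<u64>,
    page: Option<u64>,
    cursor: Option<String>,
}

fn validate(input: &PaginationInput) -> Result<(), String> {
    let offset_based = input.limit.is_some() || input.page.is_some();
    let cursor_based = input.cursor.is_some();
    if offset_based && cursor_based {
        // Reject queries that mix the two pagination styles.
        Err("pagination must be either offset-based or cursor-based, not both".into())
    } else {
        Ok(())
    }
}
```

The index requirement on the cursor column would be a separate check at schema-discovery time.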
Currently the project has some unit tests and some integration tests.
The drawbacks are that unit tests are simple and integration tests require infrastructure.
The advantage is that we can inspect SQL queries without needing a database to execute them.
Sorting plays an important role in various scenarios, and the SQL spec supports it for that reason.
Example query:
query QueryZoo {
queryZoo(pagination: {limit: 10, page: 7}, sorting: { name: ASC, year: DESC }) {
data {
name
year
animals {
name
}
}
pages
current
}
}
Bring code from old repository
I'm trying Seaography for the first time.
On the first run it creates this file, which makes me think that it is also considering the migration tables.
#[derive(Debug, seaography::macros::QueryRoot)]
#[seaography(entity = "crate::entities::player")]
#[seaography(entity = "crate::entities::seaql_migrations")]
#[seaography(entity = "crate::entities::sqlx_migrations")]
pub struct QueryRoot;
How can they be excluded?
Can we exclude them by default?
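A std-only sketch of how such a default exclusion could work in the generator. The table names come from the output above; the function names are assumptions, not the actual seaography code:

```rust
/// Hypothetical default filter: skip well-known migration bookkeeping tables
/// when generating the `QueryRoot` entity list.
fn is_migration_table(table: &str) -> bool {
    matches!(table, "seaql_migrations" | "sqlx_migrations")
}

/// Keep only tables that should become GraphQL entities.
fn filter_entities<'a>(tables: &[&'a str]) -> Vec<&'a str> {
    tables
        .iter()
        .copied()
        .filter(|t| !is_migration_table(t))
        .collect()
}
```

A CLI flag could still opt back in for users who genuinely want those tables exposed.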
You are doing a wonderful job! Congratulations to all!
I think that after an initial but comprehensive documentation pass, this project could be made public, so as to receive feedback, PRs and more from Rust lovers and proceed even faster.
What do you think?
I could help write some docs but I'm still stuck with projects and haven't started using this yet.
The following command should output the source code to a mysql
folder in the current directory. However, if the directory doesn't exist, it results in an error.
➜ cargo r mysql://root:root@localhost/sakila mysql mysql
Finished dev [unoptimized + debuginfo] target(s) in 0.80s
Running `target/debug/seaography 'mysql://root:root@localhost/sakila' mysql mysql`
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: IoError(Os { code: 2, kind: NotFound, message: "No such file or directory" })', src/main.rs:19:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
I think we should always ensure the target directory exists before writing the Cargo.toml
file to it. Perhaps add std::fs::create_dir_all(path.as_ref())?;
before writer::write_cargo_toml(path, crate_name, &sql_version)?;
at:
seaography/generator/src/lib.rs
Lines 80 to 85 in 28ae5ce
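A std-only sketch of the proposed fix. Apart from `std::fs::create_dir_all`, the function and writer names mirror the snippet above and are assumptions, not the actual generator code:

```rust
use std::fs;
use std::path::Path;

/// Hypothetical entry point: make sure the output directory exists
/// before any file is written into it.
fn write_project<P: AsRef<Path>>(path: P) -> std::io::Result<()> {
    // `create_dir_all` is a no-op when the directory already exists,
    // so it is safe to call unconditionally.
    fs::create_dir_all(path.as_ref())?;
    // ... then write Cargo.toml and the generated sources, e.g.:
    // writer::write_cargo_toml(path, crate_name, &sql_version)?;
    Ok(())
}
```

This turns the `NotFound` panic into a transparent "create it if missing" behavior.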
In its current state the project generates lots of code for every entity in order to work. This creates lots of entropy (it's easier for errors to hide in plain sight), it's harder to maintain (because you have to re-generate the project and compile it), and our work cannot be used outside of the current project.
Allow API to generate pagination for nested queries
query QueryZoo {
zoo {
data {
name
year
animals(pagination: {limit: 4, page: 2}) {
name
}
}
pages
current
}
}
We could run cargo fmt on the generated files, just like below.
It seems to me all of these belong to the core of seaography. I think we could move them into seaography instead of placing them in the user's application crate.
use sea_orm::prelude::*;
pub mod entities;
pub mod query_root;
pub use query_root::QueryRoot;
pub struct OrmDataloader {
pub db: DatabaseConnection,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, async_graphql :: Enum)]
pub enum OrderByEnum {
Asc,
Desc,
}
pub type BinaryVector = Vec<u8>;
#[derive(async_graphql :: InputObject, Debug)]
#[graphql(concrete(name = "StringFilter", params(String)))]
#[graphql(concrete(name = "TinyIntegerFilter", params(i8)))]
#[graphql(concrete(name = "SmallIntegerFilter", params(i16)))]
#[graphql(concrete(name = "IntegerFilter", params(i32)))]
#[graphql(concrete(name = "BigIntegerFilter", params(i64)))]
#[graphql(concrete(name = "TinyUnsignedFilter", params(u8)))]
#[graphql(concrete(name = "SmallUnsignedFilter", params(u16)))]
#[graphql(concrete(name = "UnsignedFilter", params(u32)))]
#[graphql(concrete(name = "BigUnsignedFilter", params(u64)))]
#[graphql(concrete(name = "FloatFilter", params(f32)))]
#[graphql(concrete(name = "DoubleFilter", params(f64)))]
#[graphql(concrete(name = "DateFilter", params(Date)))]
#[graphql(concrete(name = "DateTimeFilter", params(DateTime)))]
#[graphql(concrete(name = "DateTimeUtcFilter", params(DateTimeUtc)))]
#[graphql(concrete(name = "DecimalFilter", params(Decimal)))]
#[graphql(concrete(name = "BinaryFilter", params(BinaryVector)))]
#[graphql(concrete(name = "BooleanFilter", params(bool)))]
pub struct TypeFilter<T: async_graphql::InputType> {
pub eq: Option<T>,
pub ne: Option<T>,
pub gt: Option<T>,
pub gte: Option<T>,
pub lt: Option<T>,
pub lte: Option<T>,
pub is_in: Option<Vec<T>>,
pub is_not_in: Option<Vec<T>>,
pub is_null: Option<bool>,
}
The async-graphql enum is just a wrapper around the SeaORM enum. So, it's confusing at a glance to see SeaORM attributes appear on it.
seaography/examples/mysql/src/graphql/enums/rating.rs
Lines 4 to 18 in 44283e1
The above can be simplified to:
use crate::orm::sea_orm_active_enums;
use async_graphql::*;
use sea_orm::entity::prelude::*;
#[derive(Debug, Copy, Clone, Eq, PartialEq, Enum)]
#[graphql(remote = "sea_orm_active_enums::Rating")]
pub enum Rating {
G,
Pg,
Pg13,
R,
Nc17,
}
Code is a single monolith. A proposal is to separate it into 3 crates.
Imagine you have the following schema:
struct MovieCategory {
    id: String,
    title: String,
    movies: Vec<Movie>,
}
struct Movie {
    id: String,
    category_id: String,
    duration: u64,
    rating: i64,
}
I want to find the categories where (category.title == 'Romance' || category.title == 'Comedy') and get movies where (category.movies.duration > 120).
Add filters on related queries.
Personally, I find the generated Cargo.toml
a bit difficult to read.
[package]
edition = '2021'
name = 'generated_expanded'
version = '0.1.0'
[dependencies.async-graphql]
version = '4.0.10'
features = [
'decimal',
'chrono',
'dataloader',
]
[dependencies.async-graphql-poem]
version = '4.0.10'
[dependencies.async-trait]
version = '0.1.53'
[dependencies.heck]
version = '0.4.0'
[dependencies.itertools]
version = '0.10.3'
[dependencies.poem]
version = '1.3.29'
[dependencies.sea-orm]
version = '0.7.0'
features = [
'sqlx-sqlite',
'runtime-async-std-native-tls',
]
[dependencies.seaography_derive]
path = '../derive'
[dependencies.tokio]
version = '1.17.0'
features = [
'macros',
'rt-multi-thread',
]
[dependencies.tracing]
version = '0.1.34'
[dependencies.tracing-subscriber]
version = '0.3.11'
[dev-dependencies.serde_json]
version = '1.0.82'
[workspace]
members = []
I'd prefer it to define dependencies in the short form (one-line format):
Lines 8 to 11 in ea575d6
I'd suggest generating it with a template (Cargo.toml).
Add new CLI parameters that limit query depth and query complexity.
Both parameters by default will be None.
- depth: (default: None) positive integer; when applied, we limit the depth of queries
- complexity: (default: None) positive integer; when applied, we limit the complexity of queries
More information here: https://async-graphql.github.io/async-graphql/en/depth_and_complexity.html
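A std-only sketch of parsing these optional limits from the CLI. The flag semantics follow the proposal above; the function itself is illustrative, not the actual seaography CLI code:

```rust
/// Hypothetical parser for an optional positive-integer CLI value.
/// `None` (flag absent) means "no limit", matching the proposed default.
fn parse_limit(raw: Option<&str>) -> Result<Option<usize>, String> {
    match raw {
        None => Ok(None),
        Some(s) => match s.parse::<usize>() {
            // Zero would disallow every query, so require a positive value.
            Ok(n) if n > 0 => Ok(Some(n)),
            _ => Err(format!("expected a positive integer, got {s:?}")),
        },
    }
}
```

The parsed values would then be handed to async-graphql's schema builder, which exposes depth and complexity limiting as described at the link above.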
As discussed on the Discord chat.
The background:
I did a hacky version of PartialEq here:
seaography/derive/src/relation.rs
Lines 207 to 262 in bc36f4c
The proposed solution:
Can we hop into sea-query and implement Eq for sea_query::Value?
A PR that might be helpful for implementing Eq for the generic K (representing the key) in Loader (https://docs.rs/async-graphql/latest/async_graphql/dataloader/trait.Loader.html).
Add generation path for update mutations
TODO: future work is to support update of related entities using filters
Currently, I see that seaography assumes there exists a schema residing in one of the supported database engines. Then, SeaORM entities and async-graphql types are generated by seaography.
But what if an existing SeaORM user wishes to introduce async-graphql support into their existing codebase? What's the suggested way of doing it? Should we provide a way to generate async-graphql types only, instead of only providing a CLI that generates both at the same time?
I saw the examples, the Cargo.toml template, the seaography root and the codegen all depend on sea-orm v0.7.
We might as well update it to the latest version, aka v0.9.
SeaORM supports chrono as well as time, and a bunch of others. It would be great if Seaography supported them too.
Currently the project supports page-driven pagination. The problem is that when a new item is added/removed and we fetch the next page, we might skip something or see a duplicate item.
We depend on this feature being merged: SeaQL/sea-orm#822
In SeaORM codegen there is a heuristic to check whether a table is a junction table https://github.com/SeaQL/sea-orm/blob/4475a662c123fe5a3f9ed6453d30325b96aac7be/sea-orm-codegen/src/entity/transformer.rs#L138
My question is: is it possible to remove the filmActor
junction so we can simply do:
{
film {
data {
title
actor {
firstName
lastName
}
}
}
}
Given that we already detected it is a many-to-many relation? We already have some support (via
) for it in SeaORM.
Hey @karatakis, I saw we have two examples now. One for MySQL and the other for SQLite. But I'm wondering why the first one is based on the sakila
sample schema while the latter is based on chinook
? Is there any reason behind that?
Btw... great work! :D
There's an operator in SeaORM called contains, which basically is '%WORD%'. We should add a contains attribute.
And by the way, why is there an isNull
but no isNotNull
? (for completeness's sake)
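A std-only sketch of the LIKE pattern the contains operator would produce. The suggested filter fields mirror the TypeFilter shown earlier, but the additions themselves are assumptions, and escaping of % and _ in the search term is left out:

```rust
/// Hypothetical additions to `TypeFilter<T>`:
///     pub contains: Option<String>,
///     pub is_not_null: Option<bool>,
///
/// Build the SQL LIKE pattern for a `contains` filter,
/// i.e. `contains: "WORD"` would translate to `LIKE '%WORD%'`.
fn contains_pattern(word: &str) -> String {
    format!("%{word}%")
}
```

`is_not_null` would simply be the negation of the existing `is_null` condition when mapped to SQL.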
Currently many project crates require basic dummy types from the types crate to do some unit testing.
Those types are duplicated around the project, and duplicated code is sometimes harder to maintain.
Pick the types crate's associated data structures and place them under a types/test_cfg module.
Write integration tests with a real database and queries, and check if they return the expected results.
Write GitHub Actions for better quality assurance.
The steps will be:
async-graphql supports cache_control
#[Object(cache_control(max_age = 60))]
impl Query {
but seaography doesn't support it.
#[seaography(entity = "crate::entities::store",cache_control(max_age = 60))]
pub struct QueryRoot;
https://github.com/SeaQL/seaography/blob/main/derive/src/root_query.rs#L101
Add generator path for delete mutation using filters
We have to do the following tasks in order to have a working prototype:
I think we need an "adoption guide" (feel free to rename it) for existing SeaORM users.
For example,
query_root.rs
has to be added as well:
use sea_orm::entity::prelude::*;
#[derive(
Clone,
Debug,
PartialEq,
DeriveEntityModel,
+ async_graphql::SimpleObject,
+ seaography::macros::Filter,
)]
#[sea_orm(table_name = "albums")]
#[graphql(complex)]
#[graphql(name = "Albums")]
pub struct Model {
#[sea_orm(column_name = "AlbumId", primary_key)]
pub album_id: i32,
#[sea_orm(column_name = "Title")]
pub title: String,
#[sea_orm(column_name = "ArtistId")]
pub artist_id: i32,
}
#[derive(
Copy,
Clone,
Debug,
EnumIter,
+ DeriveRelation,
+ seaography::macros::RelationsCompact,
)]
pub enum Relation {
#[sea_orm(
belongs_to = "super::artists::Entity",
from = "Column::ArtistId",
to = "super::artists::Column::ArtistId",
on_update = "NoAction",
on_delete = "NoAction"
)]
Artists,
#[sea_orm(has_many = "super::tracks::Entity")]
Tracks,
}
impl Related<super::artists::Entity> for Entity {
fn to() -> RelationDef {
Relation::Artists.def()
}
}
impl Related<super::tracks::Entity> for Entity {
fn to() -> RelationDef {
Relation::Tracks.def()
}
}
impl ActiveModelBehavior for ActiveModel {}
As a result, some generated relation methods are not prefixed with their own table name.
For example,
seaography/examples/sqlite/src/graphql/entities/albums.rs
Lines 72 to 82 in 44283e1
Such behaviour can be changed at:
seaography/generator/src/generator/entity.rs
Lines 204 to 209 in 44283e1
Some dependencies shouldn't be imported in user space. They're only used inside derive macros. So, we can re-export those dependencies in seaography
and avoid leaking them into userspace.
sea-orm = { version = "^0.9", features = ["sqlx-sqlite", "runtime-async-std-native-tls"] }
seaography = { version = "^0.1", features = [ "with-decimal", "with-chrono" ] }
async-graphql = { version = "4.0.10", features = ["decimal", "chrono", "dataloader"] }
async-graphql-poem = { version = "4.0.10" }
async-trait = { version = "0.1.53" }
dotenv = { version = "0.15.0" }
heck = { version = "0.4.0" } # Should be re-exported by `seaography`
itertools = { version = "0.10.3" } # Should be re-exported by `seaography`
poem = { version = "1.3.29" }
tokio = { version = "1.17.0", features = ["macros", "rt-multi-thread"] }
tracing = { version = "0.1.34" }
tracing-subscriber = { version = "0.3.11" }
The release_year column of the table film is to blame.
I think it's more a problem in SeaORM though.
I monkey patched sakila-data.sql
to make it work for now.
Add generation path for create mutations
TODO: future work is to support creation of related entities too
Documentation is essential to showcase our tool and make it easier for users to understand how to use it.
Use https://www.sea-ql.org/StarfishQL/docs/index/ as reference and write the documentation for the project
Test that code generator is predictable and deterministic
I saw we're using some magic to inject GraphQL derives into Model, Relation and ActiveEnum. It's not future-proof and depends heavily on the output of the SeaORM codegen. Let's say the sea_orm attribute comes before derive; then the inject_graphql code will fail.
// `inject_graphql` expects this as the input
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "actor")]
pub struct Model {
#[sea_orm(primary_key)]
pub actor_id: u16,
pub first_name: String,
pub last_name: String,
pub last_update: DateTimeUtc,
}
// How about this?
#[sea_orm(table_name = "actor")]
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
pub struct Model {
#[sea_orm(primary_key)]
pub actor_id: u16,
pub first_name: String,
pub last_name: String,
pub last_update: DateTimeUtc,
}
We can provide an extension inside SeaORM codegen. For example, EntityWriterContext can store an Option<Box<dyn ExtendModelStructWriter>> where we can specify additional derives and attributes for the generated Model. The trait will take some context provided by the SeaORM codegen. Feel free to rename it or even take the context as &EntityWriterContext.
pub trait ExtendModelStructWriter {
    fn expanded_model_extra_derives(entity: &Entity, with_serde: &WithSerde, date_time_crate: &DateTimeCrate) -> TokenStream;
    fn compact_model_extra_attributes(entity: &Entity, with_serde: &WithSerde, date_time_crate: &DateTimeCrate, schema_name: &Option<String>) -> TokenStream;
}
Right now we read plain token streams from the SeaORM generator.
We should read the JSON produced by sea-schema instead, to be more future-proof.
This removes the SeaORM generator dependency and makes things easier to test.
Right now, running the codegen multiple times results in a different ordering of things, e.g. in query_root.rs.
I think it is due to the pseudo-random nature of HashMap. We should replace HashMap with BTreeMap, which is ordered.
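A std-only illustration of the difference: BTreeMap iterates in sorted key order on every run, while HashMap's iteration order is unspecified and randomized per process. The entity names here are invented for the example:

```rust
use std::collections::BTreeMap;

/// Entities keyed by table name. BTreeMap guarantees deterministic,
/// sorted iteration order, so generated output is stable across runs,
/// unlike HashMap whose order can change between invocations.
fn ordered_entities() -> Vec<String> {
    let mut map = BTreeMap::new();
    map.insert("player".to_string(), ());
    map.insert("album".to_string(), ());
    map.insert("track".to_string(), ());
    map.keys().cloned().collect()
}
```

Swapping the codegen's internal maps this way also makes "generator is deterministic" testable with a simple golden-output comparison.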