
seaography's People

Contributors

billy1624, jsievenpiper, karatakis, nicompte, skopz356, tyt2y3, xiniha, yinnx


seaography's Issues

Decouple discoverer from generator

Right now the generator crate depends on the discoverer crate.

Ideally, we should cut the dependency and only glue them together in the cli crate.

The reason is that we might want to use the generator without doing a schema discovery, e.g. 1) reusing existing SeaORM entities, or 2) when the schema comes from another source, say transpiled from a Prisma schema.

Why the generated SeaORM entity files are in expanded format?

The expanded generator produces all the Rust code. It's easier to modify, but harder to maintain long term. If a new version of the generator is released, you have to regenerate the whole project and apply your custom modifications again.

I saw the line above from the docs, SeaQL/seaql.github.io#41.

I agree we should generate the async-graphql types in expanded format. However, for the SeaORM entities, compact format should be the default (of course, we should make it configurable: generating compact or expanded format).

Test Suite

I'm just brainstorming the parts of seaography that are worth testing. I don't have a concrete plan on how and what to test, so this is a discussion rather than an actionable test plan.

Parts worth testing:

  1. Codegen: generating async-graphql types
    • input: sea_query::TableCreateStatement
    • output: proc_macro2::TokenStream
  2. Compile and work as expected: integration tests where we compile the generated code and perform GraphQL queries on it to make sure it behaves as expected

Feel free to edit the list. Comments wanted @karatakis @tyt2y3

Complete support for mysql, pgsql

Motivation

Currently the project supports only SQLite, with partial MySQL/PostgreSQL support.

The aim is to fully support MySQL and PostgreSQL, expanding the use cases the tool can serve.

Proposed Solutions

Complete the support for enumeration type generation and for the MySQL and PostgreSQL generators.

Support of common type crates behind feature flags

async-graphql supports an array of common types guarded by feature flags, e.g. chrono, time, uuid, decimal, etc.

In seaography, we should respect these feature flags as well, because we need these type crates to implement our internal types, and we can't assume all feature flags are enabled.

  • seaography/src/lib.rs

    Lines 39 to 59 in f5a91c7

    #[graphql(concrete(name = "DateFilter", params(sea_orm::prelude::Date)))]
    #[graphql(concrete(name = "DateTimeFilter", params(sea_orm::prelude::DateTime)))]
    #[graphql(concrete(name = "DateTimeUtcFilter", params(sea_orm::prelude::DateTimeUtc)))]
    // TODO #[graphql(concrete(name = "TimestampFilter", params()))]
    // TODO #[graphql(concrete(name = "TimestampWithTimeZoneFilter", params()))]
    #[graphql(concrete(name = "DecimalFilter", params(sea_orm::prelude::Decimal)))]
    // TODO #[graphql(concrete(name = "UuidFilter", params(uuid::Uuid)))]
    #[graphql(concrete(name = "BinaryFilter", params(BinaryVector)))]
    #[graphql(concrete(name = "BooleanFilter", params(bool)))]
    // TODO #[graphql(concrete(name = "EnumFilter", params()))]
    pub struct TypeFilter<T: async_graphql::InputType> {
        pub eq: Option<T>,
        pub ne: Option<T>,
        pub gt: Option<T>,
        pub gte: Option<T>,
        pub lt: Option<T>,
        pub lte: Option<T>,
        pub is_in: Option<Vec<T>>,
        pub is_not_in: Option<Vec<T>>,
        pub is_null: Option<bool>,
    }

So, we can "pass on" the feature flags we receive in seaography to async-graphql, just like what we do in SeaORM, where we "pass on" the with-* feature flags to sea-query and sqlx.
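As a sketch, the pass-through could look like this in seaography's Cargo.toml (the feature names shown are illustrative, not seaography's actual manifest; the sea-orm and async-graphql feature names are real):

```toml
# Hypothetical feature wiring: enabling seaography's "with-chrono"
# turns on the matching features in its dependencies.
[features]
with-chrono = ["sea-orm/with-chrono", "async-graphql/chrono"]
with-decimal = ["sea-orm/with-rust_decimal", "async-graphql/decimal"]
with-uuid = ["sea-orm/with-uuid", "async-graphql/uuid"]
```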

Cursor based pagination

SeaORM added support for cursor-based pagination in 0.9 (SeaQL/sea-orm#822), which I think is generally the preferred way to paginate in GraphQL.

I think we should handle either offset-based or cursor-based pagination in a single query (make them mutually exclusive).

And ideally we only allow cursoring on an indexed column.
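To illustrate why cursoring on an indexed column avoids offset drift, here is a std-only sketch (the row shape and function are hypothetical, not seaography's API): instead of skipping an offset, the client passes the last key it saw, and the page is everything strictly after it.

```rust
// Cursor-based pagination over rows sorted by an indexed key: filter
// on `id > cursor` rather than skipping a fixed offset, so rows
// inserted or deleted before the cursor cannot shift the page.
fn next_page(rows: &[(u32, &str)], after: Option<u32>, limit: usize) -> Vec<(u32, String)> {
    rows.iter()
        .filter(|(id, _)| after.map_or(true, |cursor| *id > cursor))
        .take(limit)
        .map(|(id, name)| (*id, name.to_string()))
        .collect()
}
```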

Add API that receives GraphQL query input and returns generated SQL statements

Motivation

Currently the project has some unit tests and some integration tests.

The drawbacks are that the unit tests are simplistic and the integration tests require infrastructure.

Proposed Solutions

  • Create an API that receives GraphQL query input and returns the generated SQL statements
  • Use the proposed API to test various scenarios

Additional Information

The advantage is that we can inspect SQL queries without needing a database to execute them.
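A minimal std-only sketch of what such an API could enable (the `Filter` type and SQL shape are illustrative, not seaography's actual types): a pure function from filter input to SQL text that tests can assert on directly.

```rust
// Hypothetical filter input, standing in for a parsed GraphQL query.
struct Filter<'a> {
    column: &'a str,
    eq: Option<&'a str>,
}

// Render the SQL a query would generate, without touching a database.
fn to_sql(table: &str, filter: &Filter) -> String {
    match filter.eq {
        Some(value) => format!("SELECT * FROM {} WHERE {} = '{}'", table, filter.column, value),
        None => format!("SELECT * FROM {}", table),
    }
}
```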

Add sorting

Motivation

Sorting plays an important role in various scenarios, and the SQL spec supports it for that reason.

Proposed Solutions

  • Add query sorting parameter

Additional Information

Example query:

query QueryZoo {
  queryZoo(pagination: {limit: 10, page: 7}, sorting: { name: ASC, year: DESC }) {
    data {
      name
      year
      animals {
        name
      }
    }
    pages
    current
  }
}

Can we exclude migrations tables?

I'm trying Seaography for the first time.

On the first run it creates this file, which makes me think it is also considering the migration tables.

#[derive(Debug, seaography::macros::QueryRoot)]
#[seaography(entity = "crate::entities::player")]
#[seaography(entity = "crate::entities::seaql_migrations")]
#[seaography(entity = "crate::entities::sqlx_migrations")]
pub struct QueryRoot;

  1. How can they be excluded?

  2. Can we exclude them by default?

Make it public?

You are doing a wonderful job! Congratulations to all!

I think that after an initial but comprehensive round of documentation, this project could be made public to receive feedback, PRs and more from Rust lovers, and proceed even faster.

What do you think?

I could help write some docs, but I'm still tied up with other projects and haven't started using this yet.

[codegen] create the root folder if not exists

The following command should output the source code to a mysql folder in the current directory. However, if the directory doesn't exist, it results in an error.

➜ cargo r mysql://root:root@localhost/sakila mysql mysql
    Finished dev [unoptimized + debuginfo] target(s) in 0.80s
     Running `target/debug/seaography 'mysql://root:root@localhost/sakila' mysql mysql`
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: IoError(Os { code: 2, kind: NotFound, message: "No such file or directory" })', src/main.rs:19:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I think we should always ensure the target directory exists before writing the Cargo.toml file into it. Perhaps add std::fs::create_dir_all(path.as_ref())?; before writer::write_cargo_toml(path, crate_name, &sql_version)?; at:

  • let (tables, sql_version) =
    seaography_discoverer::extract_database_metadata(&database_url).await?;
    writer::write_cargo_toml(path, crate_name, &sql_version)?;
    std::fs::create_dir_all(&path.as_ref().join("src/entities"))?;
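A small std-only sketch of the fix (the function name and signature are illustrative): ensure the directory tree exists before writing into it.

```rust
use std::fs;
use std::path::Path;

// Create the target directory (and any missing parents) before
// writing, so a non-existent output folder no longer causes the
// NotFound panic shown above.
fn write_cargo_toml(root: &Path, contents: &str) -> std::io::Result<()> {
    fs::create_dir_all(root)?;
    fs::write(root.join("Cargo.toml"), contents)
}
```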

Add macro derive based code generation

Motivation

In its current state the project generates lots of code for every entity in order to work. This creates lots of entropy (it's easier for errors to hide in plain sight), it's harder to maintain (because you have to re-generate the project and compile it), and our work cannot be reused outside of the current project.

Proposed Solutions

  • design a derive macro library that implements various macros achieving the same goal as the generator
  • create a new generator that depends on the derive macro library

Add pagination on nested queries

Allow the API to generate pagination for nested queries:

query QueryZoo {
  zoo {
    data {
      name
      year
      animals(pagination: {limit: 4, page: 2}) {
        name
      }
    }
    pages
    current
  }
}

Move helper types from application into seaography

It seems to me that all of these belong to the core of seaography. I think we could move them into seaography instead of placing them in the user's application crate.

use sea_orm::prelude::*;
pub mod entities;
pub mod query_root;
pub use query_root::QueryRoot;

pub struct OrmDataloader {
    pub db: DatabaseConnection,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, async_graphql::Enum)]
pub enum OrderByEnum {
    Asc,
    Desc,
}

pub type BinaryVector = Vec<u8>;

#[derive(async_graphql::InputObject, Debug)]
#[graphql(concrete(name = "StringFilter", params(String)))]
#[graphql(concrete(name = "TinyIntegerFilter", params(i8)))]
#[graphql(concrete(name = "SmallIntegerFilter", params(i16)))]
#[graphql(concrete(name = "IntegerFilter", params(i32)))]
#[graphql(concrete(name = "BigIntegerFilter", params(i64)))]
#[graphql(concrete(name = "TinyUnsignedFilter", params(u8)))]
#[graphql(concrete(name = "SmallUnsignedFilter", params(u16)))]
#[graphql(concrete(name = "UnsignedFilter", params(u32)))]
#[graphql(concrete(name = "BigUnsignedFilter", params(u64)))]
#[graphql(concrete(name = "FloatFilter", params(f32)))]
#[graphql(concrete(name = "DoubleFilter", params(f64)))]
#[graphql(concrete(name = "DateFilter", params(Date)))]
#[graphql(concrete(name = "DateTimeFilter", params(DateTime)))]
#[graphql(concrete(name = "DateTimeUtcFilter", params(DateTimeUtc)))]
#[graphql(concrete(name = "DecimalFilter", params(Decimal)))]
#[graphql(concrete(name = "BinaryFilter", params(BinaryVector)))]
#[graphql(concrete(name = "BooleanFilter", params(bool)))]
pub struct TypeFilter<T: async_graphql::InputType> {
    pub eq: Option<T>,
    pub ne: Option<T>,
    pub gt: Option<T>,
    pub gte: Option<T>,
    pub lt: Option<T>,
    pub lte: Option<T>,
    pub is_in: Option<Vec<T>>,
    pub is_not_in: Option<Vec<T>>,
    pub is_null: Option<bool>,
}

async-graphql enum should simply be a ordinary enum

The async-graphql enum is just a wrapper of the SeaORM enum, so it's confusing at a glance to see SeaORM attributes appear on it.

#[derive(Debug, Copy, Clone, Eq, PartialEq, EnumIter, DeriveActiveEnum, Enum)]
#[graphql(remote = "sea_orm_active_enums::Rating")]
#[sea_orm(rs_type = "String", db_type = "Enum", enum_name = "Rating")]
pub enum Rating {
    #[sea_orm(string_value = "G")]
    G,
    #[sea_orm(string_value = "PG")]
    Pg,
    #[sea_orm(string_value = "PG-13")]
    Pg13,
    #[sea_orm(string_value = "R")]
    R,
    #[sea_orm(string_value = "NC-17")]
    Nc17,
}

The above can be simplified to:

use crate::orm::sea_orm_active_enums;
use async_graphql::*;
use sea_orm::entity::prelude::*;

#[derive(Debug, Copy, Clone, Eq, PartialEq, Enum)]
#[graphql(remote = "sea_orm_active_enums::Rating")]
pub enum Rating {
    G,
    Pg,
    Pg13,
    R,
    Nc17,
}

Add filters on related queries

Motivation

Imagine you have the following schema

struct MovieCategory {
  id: String,
  title: String,
  movies: Vec<Movie>
}

struct Movie {
  id: String,
  category_id: String,
  duration: u64,
  rating: i64
}

I want to find the categories where (category.title == 'Romance' || category.title == 'Comedy') and get their movies where (movie.duration > 120).

Proposed Solutions

Add filters on related queries.

Additional Information

none

Generate `Cargo.toml` from template

Personally, I find the generated Cargo.toml a bit difficult to read.

[package]
edition = '2021'
name = 'generated_expanded'
version = '0.1.0'
[dependencies.async-graphql]
version = '4.0.10'
features = [
    'decimal',
    'chrono',
    'dataloader',
]

[dependencies.async-graphql-poem]
version = '4.0.10'

[dependencies.async-trait]
version = '0.1.53'

[dependencies.heck]
version = '0.4.0'

[dependencies.itertools]
version = '0.10.3'

[dependencies.poem]
version = '1.3.29'

[dependencies.sea-orm]
version = '0.7.0'
features = [
    'sqlx-sqlite',
    'runtime-async-std-native-tls',
]

[dependencies.seaography_derive]
path = '../derive'

[dependencies.tokio]
version = '1.17.0'
features = [
    'macros',
    'rt-multi-thread',
]

[dependencies.tracing]
version = '0.1.34'

[dependencies.tracing-subscriber]
version = '0.3.11'
[dev-dependencies.serde_json]
version = '1.0.82'

[workspace]
members = []

I'd prefer it to define dependencies in the short form (one-line format):

  • seaography/Cargo.toml

    Lines 8 to 11 in ea575d6

    [dependencies]
    seaography_generator = { path = "./generator" }
    clap = { version = "3.2.6", features = ["derive"] }
    async-std = { version = "1.12.0", features = [ "attributes", "tokio1" ] }

I'd suggest generating it from a template.
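A std-only sketch of the idea (the template contents and function are illustrative; a real implementation might use a templating crate such as tera or handlebars):

```rust
// Render Cargo.toml from a template string with simple placeholder
// substitution, keeping dependencies in the short one-line form.
fn render_cargo_toml(crate_name: &str, sea_orm_version: &str) -> String {
    let template = r#"[package]
name = "{crate_name}"
version = "0.1.0"
edition = "2021"

[dependencies]
sea-orm = { version = "{sea_orm_version}", features = ["sqlx-sqlite", "runtime-async-std-native-tls"] }
"#;
    template
        .replace("{crate_name}", crate_name)
        .replace("{sea_orm_version}", sea_orm_version)
}
```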

Better implementation for `K: Eq` in `async_graphql::Loader`

As discussed in the Discord chat.

Background:

I did a hacky version of PartialEq here:

#[derive(Clone, Debug)]
pub struct #foreign_key_name(pub sea_orm::Value);

impl PartialEq for #foreign_key_name {
    fn eq(&self, other: &Self) -> bool {
        // TODO temporary hack to solve the following problem
        // let v1 = TestFK(sea_orm::Value::TinyInt(Some(1)));
        // let v2 = TestFK(sea_orm::Value::Int(Some(1)));
        // println!("Result: {}", v1.eq(&v2));
        fn split_at_nth_char(s: &str, p: char, n: usize) -> Option<(&str, &str)> {
            s.match_indices(p).nth(n).map(|(index, _)| s.split_at(index))
        }
        let a = format!("{:?}", self.0);
        let b = format!("{:?}", other.0);
        let a = split_at_nth_char(a.as_str(), '(', 1).map(|v| v.1);
        let b = split_at_nth_char(b.as_str(), '(', 1).map(|v| v.1);
        a.eq(&b)
    }
}

impl Eq for #foreign_key_name {}

impl std::hash::Hash for #foreign_key_name {
    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
        // TODO this is a hack
        fn split_at_nth_char(s: &str, p: char, n: usize) -> Option<(&str, &str)> {
            s.match_indices(p).nth(n).map(|(index, _)| s.split_at(index))
        }
        let a = format!("{:?}", self.0);
        let a = split_at_nth_char(a.as_str(), '(', 1).map(|v| v.1);
        a.hash(state)
        // TODO else do the following
        // match self.0 {
        //     sea_orm::Value::TinyInt(int) => int.unwrap().hash(state),
        //     sea_orm::Value::SmallInt(int) => int.unwrap().hash(state),
        //     sea_orm::Value::Int(int) => int.unwrap().hash(state),
        //     sea_orm::Value::BigInt(int) => int.unwrap().hash(state),
        //     sea_orm::Value::TinyUnsigned(int) => int.unwrap().hash(state),
        //     sea_orm::Value::SmallUnsigned(int) => int.unwrap().hash(state),
        //     sea_orm::Value::Unsigned(int) => int.unwrap().hash(state),
        //     sea_orm::Value::BigUnsigned(int) => int.unwrap().hash(state),
        //     sea_orm::Value::String(str) => str.unwrap().hash(state),
        //     sea_orm::Value::Uuid(uuid) => uuid.unwrap().hash(state),
        //     _ => format!("{:?}", self.0).hash(state)
        // }
    }
}

The proposed solution:

Can we hop into sea-query and implement Eq for sea_query::Value?

A PR that might be helpful for implementing Eq for the generic key type K in Loader: https://docs.rs/async-graphql/latest/async_graphql/dataloader/trait.Loader.html
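To make the obstacle concrete, here is a std-only sketch with a stand-in for sea_orm::Value: once every payload type implements Eq and Hash, the derives come for free and distinct variants compare unequal (unlike the Debug-string hack above, where TinyInt(1) and Int(1) compare equal). The real sea_query::Value also carries f32/f64 payloads, which do not implement Eq; that is the part that needs a decision upstream.

```rust
// A stand-in for sea_orm::Value with only Eq-friendly payloads;
// deriving PartialEq/Eq/Hash then replaces the string-formatting hack.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum KeyValue {
    TinyInt(Option<i8>),
    Int(Option<i32>),
    Text(Option<String>),
}
```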

Add update mutation

Add a generation path for update mutations.

TODO: future work is to support updating related entities using filters.

What's the suggested way for an existing SeaORM user to adopt seaography?

Currently, seaography assumes a schema already exists in one of the supported database engines; SeaORM entities and async-graphql types are then generated from it.

But what if an existing SeaORM user wishes to introduce async-graphql support into their existing codebase? What's the suggested way of doing it? Should we provide a way to generate only the async-graphql types, instead of only providing a CLI that generates both at the same time?

Support more types

SeaORM supports chrono as well as time, and a bunch of others. It would be great if Seaography supported them too.

Add cursor oriented pagination

Motivation

Currently the project supports page-driven pagination. The problem is that when a new item is added or removed and we fetch the next page, we might skip an item or see a duplicate.

Proposed Solutions

  • Use cursor based pagination

Additional Information

We depend on this feature being merged: SeaQL/sea-orm#822

Remove the junction in many-to-many relations

In SeaORM codegen there is a heuristic to check whether a table is a junction table https://github.com/SeaQL/sea-orm/blob/4475a662c123fe5a3f9ed6453d30325b96aac7be/sea-orm-codegen/src/entity/transformer.rs#L138

My question is: is it possible to remove the filmActor junction so we can simply write:

{
  film {
    data {
      title
      actor {
        firstName
        lastName
      }
    }
  }
}

Given that we have already detected it is a many-to-many relation? We already have some support for it in SeaORM.

StringFilter should support LIKE

There's an operator in SeaORM called contains, which is basically LIKE '%WORD%'. We should add a contains attribute.

And by the way, why is there isNull but no isNotNull? (For completeness's sake.)
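As a sketch of the mapping (std-only; the escaping policy shown is an assumption on my part, not necessarily what SeaORM's contains does), a contains input would translate to a LIKE pattern like this:

```rust
// Build the LIKE pattern for a `contains` filter, escaping the SQL
// wildcard characters in user input (escaping policy is illustrative).
fn contains_pattern(input: &str) -> String {
    let escaped = input
        .replace('\\', "\\\\")
        .replace('%', "\\%")
        .replace('_', "\\_");
    format!("%{}%", escaped)
}
```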

Add test_cfg into types crate

Motivation

Currently, many project crates require basic dummy types from the types crate in order to do unit testing.

Those types are duplicated around the project, and duplicated code is harder to maintain.

Proposed Solutions

Pick the data structures associated with the types crate and place them under a types/test_cfg module.

Add CI

Write GitHub Actions workflows for better quality assurance.

The steps will be:

  1. code linting
  2. code building
  3. unit tests
  4. code generation
  5. integration tests
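A minimal sketch of such a workflow (action versions and job layout are illustrative, not a committed design):

```yaml
name: CI
on: [push, pull_request]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # 1. code linting
      - run: cargo fmt --all -- --check
      - run: cargo clippy --all -- -D warnings
      # 2. code building
      - run: cargo build --all
      # 3. unit tests
      - run: cargo test --all
      # 4. code generation and 5. integration tests would run against a
      #    sample database in a follow-up job.
```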

Work for prototype

We have to do the following tasks in order to have a working prototype:

  • bring code from other repository
  • break up code into 3 crates (cli, generation, schema discovery)
  • use json as input for code generation
  • add unit tests for generated code
  • add integration tests that check queries complete and return the expected results
  • add CI/CD

Docs: adoption guide for existing SeaORM users

I think we need an "adoption guide" (feel free to rename it) for existing SeaORM users.

For example,

  1. they have to add extra derives on Model and Relation
  2. a query_root.rs has to be added as well
  3. all necessary dependencies are also required

use sea_orm::entity::prelude::*;

#[derive(
    Clone,
    Debug,
    PartialEq,
    DeriveEntityModel,
+   async_graphql::SimpleObject,
+   seaography::macros::Filter,
)]
#[sea_orm(table_name = "albums")]
#[graphql(complex)]
#[graphql(name = "Albums")]
pub struct Model {
    #[sea_orm(column_name = "AlbumId", primary_key)]
    pub album_id: i32,
    #[sea_orm(column_name = "Title")]
    pub title: String,
    #[sea_orm(column_name = "ArtistId")]
    pub artist_id: i32,
}

#[derive(
    Copy,
    Clone,
    Debug,
    EnumIter,
+   DeriveRelation,
+   seaography::macros::RelationsCompact,
)]
pub enum Relation {
    #[sea_orm(
        belongs_to = "super::artists::Entity",
        from = "Column::ArtistId",
        to = "super::artists::Column::ArtistId",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    Artists,
    #[sea_orm(has_many = "super::tracks::Entity")]
    Tracks,
}

impl Related<super::artists::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::Artists.def()
    }
}

impl Related<super::tracks::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::Tracks.def()
    }
}

impl ActiveModelBehavior for ActiveModel {}

Name of relation method doesn't consider reverse relation

As a result, some generated relation methods are not prefixed with their own table name.

For example,

pub async fn artist_artists<'a>(
    &self,
    ctx: &async_graphql::Context<'a>,
) -> crate::orm::artists::Model {
    let data_loader = ctx
        .data::<async_graphql::dataloader::DataLoader<OrmDataloader>>()
        .unwrap();
    let key = ArtistArtistsFK(self.artist_id.clone());
    let data: Option<_> = data_loader.load_one(key).await.unwrap();
    data.unwrap()
}

Such behaviour can be changed at:

let key_items: Vec<TokenStream> = source_columns
    .iter()
    .map(|col: &ColumnMeta| col.snake_case_ident())
    .collect();

Dependency of internal / derived types should be re-exported

Motivation

Some dependencies shouldn't be imported into user space; they're only used inside derive macros. So we can re-export those dependencies from seaography and avoid leaking them into userspace.

sea-orm = { version = "^0.9", features = ["sqlx-sqlite", "runtime-async-std-native-tls"] }
seaography = { version = "^0.1", features = [ "with-decimal", "with-chrono" ] }
async-graphql = { version = "4.0.10", features = ["decimal", "chrono", "dataloader"] }
async-graphql-poem = { version = "4.0.10" }
async-trait = { version = "0.1.53" }
dotenv = { version = "0.15.0" }
heck = { version = "0.4.0" }          # Should be re-exported by `seaography`
itertools = { version = "0.10.3" }    # Should be re-exported by `seaography`
poem = { version = "1.3.29" }
tokio = { version = "1.17.0", features = ["macros", "rt-multi-thread"] }
tracing = { version = "0.1.34" }
tracing-subscriber = { version = "0.3.11" }

Extension based `inject_graphql`

I saw we're using some magic to inject graphql derives into Model, Relation and ActiveEnum. It's not future proof and depends heavily on the output of SeaORM codegen. Say the sea_orm attribute comes before derive; then the inject_graphql code will fail.

// `inject_graphql` expect this as the input
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "actor")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub actor_id: u16,
    pub first_name: String,
    pub last_name: String,
    pub last_update: DateTimeUtc,
}

// How about this?
#[sea_orm(table_name = "actor")]
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
pub struct Model {
    #[sea_orm(primary_key)]
    pub actor_id: u16,
    pub first_name: String,
    pub last_name: String,
    pub last_update: DateTimeUtc,
}

We can provide an extension point inside SeaORM codegen. For example, EntityWriterContext can store an Option<Box<dyn ExtendModelStructWriter>> where we can specify additional derives and attributes for the generated Model. The trait will take some context provided by the SeaORM codegen. Feel free to rename it or even take the context as &EntityWriterContext.

pub trait ExtendModelStructWriter {
    fn expanded_model_extra_derives(entity: &Entity, with_serde: &WithSerde, date_time_crate: &DateTimeCrate) -> TokenStream;

    fn compact_model_extra_attributes(entity: &Entity, with_serde: &WithSerde, date_time_crate: &DateTimeCrate, schema_name: &Option<String>) -> TokenStream;
}

Use json as input for code generation

Right now we read plain token streams from the SeaORM generator.

We should instead read JSON produced by sea-schema, to be more future proof.

This removes the SeaORM generator dependency and makes testing easier.

Replace HashMap with BTreeMap

Right now, running the codegen multiple times results in a different ordering of things, e.g. in query_root.rs.

I think it is due to the pseudo-random iteration order of HashMap. We should replace HashMap with BTreeMap, which iterates in key order.
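A quick std-only illustration of the difference: BTreeMap iterates in sorted key order regardless of insertion order, so repeated codegen runs emit entities in a stable sequence.

```rust
use std::collections::BTreeMap;

// Collect entity names into a BTreeMap so iteration order is
// deterministic (sorted), independent of insertion order.
fn ordered_entities(names: &[&str]) -> Vec<String> {
    let map: BTreeMap<&str, ()> = names.iter().map(|n| (*n, ())).collect();
    map.keys().map(|k| k.to_string()).collect()
}
```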
