onyxframework / sql
A delightful SQL ORM ☺️
Home Page: https://api.onyxframework.com/sql
License: MIT License
class User
  include Core::Schema
  include Core::Query
end

user = repo.query(User.where(id: 42)) # It's just repo.query(<Core::Query(User)>)
It's like Active Record, but not magic ✨🤔
validate_presence(:field) - calls .nil? on :field
validate_size(:string_field, 3) or validate_size(:string_field, (3..5))
Depends on #11
Of course you could do users.each { |u| repo.delete(u) }, but that issues one query per instance and hurts performance; it's better to implement methods which accept an array of instances and compose a single query. Obviously.
insert
update
delete
Compare by primary key by default; allow defining a custom comparison.
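A minimal sketch of such a bulk delete, assuming hypothetical table_name and primary_key_value helpers on the model (neither name is confirmed by the source):

```crystal
class Repository
  # Hypothetical bulk delete: composes one query for all instances
  # instead of one query per instance.
  def delete(instances : Array)
    return if instances.empty?
    table = instances.first.class.table_name  # assumed helper
    ids = instances.map(&.primary_key_value)  # assumed helper
    # Composes e.g. DELETE FROM users WHERE id IN ($1, $2, $3)
    placeholders = (1..ids.size).map { |i| "$#{i}" }.join(", ")
    db.exec("DELETE FROM #{table} WHERE id IN (#{placeholders})", args: ids)
  end
end
```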
Split Query#to_s into:
Query#to_s - to just build the SQL without params
Query#build - to only fill up #params (questionable method name though)
In the real world, models can have fields which are NOT NULL in the database; but in Core the developer still has to write model.field.not_nil! when working with such a field.
Proposal:
class User < Core::Model
  schema do
    field :username, String
    field :age, Int32?
  end
end
Should expand to:
class User < Core::Model
  getter! username : String
  getter age : Int32?
end
Is this the recommended way to convert from JSON for a complex type like PG::Geo::Point?
struct PointConverter
  def self.from_rs(rs)
    rs.read(PG::Geo::Point)
  end

  def self.to_json(value, json)
    json.object do
      json.field "x", value.x
      json.field "y", value.y
    end
  end

  def self.from_json(pull)
    pull.read_next
    pull.read_next
    # not checking order of keys
    x = pull.read_float.to_f64
    pull.read_next
    y = pull.read_float.to_f64
    PG::Geo::Point.new(x, y)
  end
end
class Place < Core::Model
  schema do
    table_name "places"
    primary_key :id
    field :name, String
    field :location, PG::Geo::Point, converter: PointConverter
    created_at_field :created_at
    reference :referrer, Place, key: :referrer_id, foreign_key: :id
  end
end
place = Place.from_json "{...}"
Using JSON.mapping (and not Core) to decode from JSON looks like this:
class Point
  JSON.mapping({
    x: Float64,
    y: Float64,
  })
end

class Place
  JSON.mapping({
    id:         Int32,
    name:       String,
    location:   Point,
    created_at: Time,
  })
end
place = Place.from_json "{...}"
With Core, is there a way to specify a class with a JSON.mapping instead of having to define a converter with from_json and work with the PullParser?
field :base, String, validate: {size: (1..8)}
field :quote, String, validate: {
  size:     (1..8),
  presence: true,
  regex:    //,
  custom:   [->(quote : String) {
    raise "Nope" if quote == "false"
  }],
}
Or keep validations split from schema?
validate do
  validate_size(:base, (1..8))
  validate_size(:quote, (1..8))
  validate_presence(:quote)
  validate_regex(:quote, //)
  raise "Nope" if quote == "false"
end
Which is better? 🤔
.where("users.id = ? AND payments.status = ?", [self.id, Payment::Status::Confirmed])
.where("users.id = ? AND payments.status = ?", self.id, Payment::Status::Confirmed)
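The splat variant could be sketched like this (the @where_clauses/@params internals are illustrative, not the library's actual fields):

```crystal
struct Query(T)
  # Append a raw WHERE clause; the splat lets callers pass
  # params directly instead of wrapping them in an array.
  def where(clause : String, *params)
    @where_clauses << clause
    params.each { |p| @params << p }
    self
  end
end
```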
If a developer writes migrations on their own, which requires SQL knowledge, why use the handy Query builder at all? Let 'em write raw SQL queries!
Pros:
Query(T) allows getting rid of the Model class, turning a model into a set of modules
Cons:
JOINs

user = repo.query_one(Query(User).where(id: 42))
turns into
query = <<-SQL
SELECT * FROM users WHERE id = $1
SQL
user = repo.query_one(User, query, 42)
and
posts = repo.query(Query(Post).join(:author).order_by(:created_at))
turns into
query = <<-SQL
SELECT posts.*, '' AS _author, author.*
FROM posts
JOIN users AS author ON author.id = posts.author_id
ORDER BY created_at
SQL
posts = repo.query(Post, query)
rfc 🤔
DB (sql) drivers only

I haven't tested it yet, but it looks like calling return errors.push({:field => "blah"}) in a validation do block will result in a positive validation. This should be fixed.
Basically, Query(Post).where(user: user) has been broken since the first release 😕
Current behavior:

user = User.new(name: "kek")
user.name = "kek"
user.changes # => {:name => "kek"}

Proposed behavior:

user = User.new(name: "kek")
user.name = "kek"
user.changes # => {}
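One way to get the proposed behavior is to compare against the current value in the generated setter; a sketch (the real code is macro-generated, so this is only the idea):

```crystal
class User
  # Record a change only when the new value actually differs
  # from the current one.
  def name=(value : String)
    @changes[:name] = value unless @name == value
    @name = value
  end
end
```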
Maybe we need a query method to fetch by a primary key. ATM I use Query(User).where(id: id) and it seems simple enough.

Query(User)[id] - a handy method, but it looks ugly
Query(User).find(id) - quite handy and not so ugly, but logically interferes with where
Query(User).by_pk(id) - ugly
Query(User).by_id(id) or Query(User).id(id) - ugly, and would involve the Model::Schema::CORE__PRIMARY_KEY_FIELD constant. I don't want to use Schema's constants in Query.

Any comments?
Post.join(:author, select: nil)
Table name is essential, so make it #schema's argument just like Crecto? 🤔
It just seems more appropriate.
posts = repo.query(Query(Post).all.join(:author, select: [:id, :name]))
# SELECT posts.*, '' AS _author, author.id, author.name
# FROM posts
# JOIN users AS author ON users.id = posts.author_id
It would be handy to call something like:
repo.delete(Query(Post).where(status: Declined))
repo.update(Query(Post).where(status: Approved).set(visibility: true))
But ATM the Query implementation is strictly tied to SELECT. And that's logically valid, because querying always means getting something back.
I could implement methods that transform a select query into a delete or update query:
struct Query
  def delete
    # Flag this query as a delete query; this will affect building.
    # If called again, ignore.
  end

  def update
    # ditto
  end

  def select
    # Return to the normal SELECT state
  end
end

class Repository
  def update(query : Query)
    query.update # Just to ensure
    db.exec(query.to_s, query.params)
  end

  def delete(query : Query)
    query.delete # Just to ensure
    # ditto
  end
end
repo.query(User.where(id: 42)).first?
vs repo.query_one?(User.where(id: 42))
This underscore is ugly. It may also suggest that _one somehow affects the query (e.g. automatically limiting it to 1), which is false.
Eventually you'll end up with a query which returns a scalar value, i.e.:
Query(User).join(:payments).select(:"COALESCE(SUM(payments.amount), 0.0) as sum").where("users.id = ?", user.id)
ATM your code looks like this:
db.scalar(repository.prepare_query(q.to_s), q.params).as(PG::Numeric).to_f64
While it could be enhanced to this:
repository.scalar(query, PG::Numeric).to_f64
Other methods like #query(Tuple), #query_one, #query_one(Tuple) could be implemented as well.
Bonus: you'll also see query logging!
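The proposed Repository#scalar could be sketched like this (the generic cast argument is illustrative, not a confirmed signature):

```crystal
class Repository
  # Execute the query and cast the single returned value to T,
  # so callers don't repeat the prepare/params/cast dance.
  def scalar(query : Query, type : T.class) : T forall T
    db.scalar(prepare_query(query.to_s), args: query.params).as(T)
  end
end

# Usage, as proposed above:
# repository.scalar(query, PG::Numeric).to_f64
```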
Progress:
#query_all (alias of #query)
#query_one?
#query_one
#exec
#scalar
TIL you can pass arrays as SQL parameters, e.g. WHERE id = ANY($1). Should refactor queries so they become bakeable:
Repository#insert, #delete, #update
Query#where, #join etc.

Allow initializing Repository without the query_logger argument.
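For reference, the array-parameter trick mentioned above looks roughly like this with the pg shard (a sketch; not tested against a live database):

```crystal
ids = [1, 2, 3]
# A single array parameter replaces N positional placeholders:
db.exec("UPDATE users SET active = false WHERE id = ANY($1)", args: [ids])
```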
It's more commonly used IMO.
Core::Model is a simple entity which is the result of database querying.

Proposal: if you want to represent a model as JSON, use separate Decorators; therefore .to_json should be removed. There are practically no use cases when you need to cast a model from JSON, therefore .from_json should be removed as well.

This will definitely increase boilerplate code, especially if you're writing a JSON API, but in most cases JSON representations should have some fields hidden from unauthorized users. Crystal doesn't allow defining JSON mappings dynamically. That's why you'll write your own Decorator sooner or later, and that's why Core::Model doesn't need its own JSON mapping.
As @RX14 said,
Inheritance instead of modules doesn't even gain you anything at all, as the base class cannot be instantiated on its own and it doesn't define any common instance variables. Please, use include and modules here.
Therefore, I should remove the Model class.
Core::Query has Core::Schema as its generic type
Core::Model is removed

At the moment, a developer has to create a separate Repository object for each Model class:
post_repo = Repository(Post).new(db, query_logger)
user_repo = Repository(User).new(db, query_logger)
user_repo.query(Query(User).last)
Generics in Repository allow automatically casting to ModelType in query:
https://github.com/vladfaust/core.cr/blob/292a30b3d4522c2811a741c4c5b0f0a82bfa2bc8/src/core/repository.cr#L56
And... that's all the generic ModelType is actually needed for in Repository. #update, #insert and even #delete could grab the model information from the instance, so what's the point?
Also, separate databases are unlikely to be used in a single project; and even if they are, how could two models from different databases interfere with each other?
Make Repository non-generic, so only one repository instance will be needed. #insert, #delete and #update will stay the same. Example implementation:
# We would have to pass the model class explicitly.
#
# ```
# repo.query(User, "SQL QUERY", params)
# ```
def query(model : Model.class, query : String, *params)
  rs.read(model)
end

# Or we could extract the model class from Query, because it's still a generic.
#
# ```
# repo.query(Query(User).last)
# ```
def query(query : Query)
  query(query.model_class, query.to_s, query.params)
end
What do you think? Note that it's in your hands to build an ideal ORM 😊
Constantly improving...
Current:

post = repo.query(Query(Post).join(:author)).first
post.author # => nil

Expected:

post = repo.query(Query(Post).join(:author)).first
post.author.class # => User
Current implementation:
query = Query(User).all
query.limit(4).offset(2) # Currently affects `query`
Proposal:
query = Query(User).all
new_query = query.limit(4).offset(2) # Returns new query instance
Query is a struct now. We need to implement a clone method which will duplicate all the clauses.
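A clone could duplicate each clause container so the copy doesn't share state with the original; a sketch with illustrative field names:

```crystal
struct Query(T)
  # Return a copy whose clause arrays are duplicated, so that
  # chained calls on the copy don't mutate the original query.
  def clone
    copy = dup # shallow copy of the struct
    copy.where_clauses = @where_clauses.dup
    copy.join_clauses = @join_clauses.dup
    copy.params = @params.dup
    copy
  end
end
```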
Thanks for the lib! I'm finding the more lightweight approach easier to wrap my head around than Crecto, but I'm struggling with something as it stands (sorry, I'm a typed-languages noob; it could be simple).
I want to use BigRational to represent financial data, but my initial attempts have been thwarted. A bit of trial and error has helped, but I'm still banging my head against the wall.
I'm hoping to take advantage of the PG::Numeric extensions to pull data in and out of a Postgres DECIMAL(20,8). My initial attempt was:
require "big_rational"

module Models
  class Order < Core::Model
    schema do
      primary_key :id
      field :rate, BigRational
      field :quantity, BigRational
    end
  end
end
> instance variable '@changes' of Models::Order must be Hash(Symbol, Number), not (Hash(Symbol, BigRational | Int32) | Hash(Symbol, Number))
Which I didn't expect to be a problem, since I thought BigRational was a Number.
I've also tried to define my own converter, but I'm not confident about what I should be converting to and from, to be honest!
class Core::Model::Validation::InvalidModelError
end
user.valid! # Raises InvalidModelError if invalid
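A minimal sketch of the raising variant on top of the existing #valid? (the error-message details are assumptions):

```crystal
# Raise instead of returning false, so callers can rely on
# having a valid model after the call.
def valid!
  raise Core::Model::Validation::InvalidModelError.new unless valid?
  self
end
```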
Not sure if I'm missing it, or if core.cr doesn't support transactions.
status SMALLINT NOT NULL DEFAULT 0
field :status, Int16, insert_empty: true
# ...
model = Model.new
model.valid? # Should return true even if status is nil
model = repository.query(q).first
model.status = nil
model.valid? # Should return false in this case
As a result, if status is Nil at the moment of insertion, it will not be included in the INSERT clause, allowing the database to apply the DEFAULT value.
For validation, I think some kind of @fresh : Bool variable, set to true on explicit initialization, could allow bypassing validation.
The Bool type is not working at the moment. The offending line is in model/schema/mapping.cr at line 80:
@{{field[:name].id}} = value.as({{field[:type].id}}) || {{field[:default] || nil.id}}
The || operator does not work for a false value: it replaces false with nil (or the default value). A possible solution is to replace the line with this:
@{{field[:name].id}} = value.nil? ? {{field[:default] || nil.id}} : value.as({{field[:type].id}})
Currently, Validation can be omitted as a module, but the validations option in field declarations is hardcoded. We need to find a way to include dynamic options in INTERNAL__CORE_FIELDS, so the schema can be extended (e.g. field :name, my_option: true).
Using macros:
id = 42
repo.query(User.where(id: id))
# Would be expanded to:
id = 42
repo.query(User, "SELECT users.* FROM users WHERE id = ?", id)
It's a shame, but currently you can do neither Query(User).where_not(status: Status::Disabled) nor Query(User).where(confirmed: true).and_not(status: Status::Disabled).
Ditto