absinthe-graphql / dataloader
DataLoader for Elixir
License: MIT License
Ecto: 2.2.11
Dataloader: 1.0.4
In our system, we have the following contexts:
User:
has_many :institution_users, InstitutionUser
has_many :institutions, through: [:institution_users, :institution]
has_many :institution_spaces, through: [:institutions, :spaces]
Institution:
has_many :spaces, Space
has_many :institution_users, InstitutionUser
has_many :users, through: [:institution_users, :user]
Our goal is to serialize the institution_spaces as part of our user object in the schema:
object :user do
  field :name, :string

  field :institution_spaces, list_of(:institution_space) do
    resolve dataloader(Dataloaders.User, fn _, _, res ->
      {:institution_spaces, %{user_id: Util.Resolution.user_id(res)}}
    end)
  end
end
The associations work correctly via raw Ecto when preloading. Serializing them via the schema, however, results in:
** (CaseClauseError) no case clause matching: %Ecto.Association.HasThrough{cardinality: :many, field: :institutions, on_cast: nil, owner: Api.Users.User, owner_key: :id, relationship: :child, through: [:institution_users, :institution], unique: true}
stemming from the stack trace of:
stacktrace:
(dataloader) lib/dataloader/ecto.ex:327: Dataloader.Source.Dataloader.Ecto.chase_down_queryable/2
(dataloader) lib/dataloader/ecto.ex:337: Dataloader.Source.Dataloader.Ecto.get_keys/2
(dataloader) lib/dataloader/ecto.ex:247: Dataloader.Source.Dataloader.Ecto.fetch/3
(dataloader) lib/dataloader/ecto.ex:287: Dataloader.Source.Dataloader.Ecto.load/3
(elixir) lib/enum.ex:1899: Enum."-reduce/3-lists^foldl/2-0-"/3
(dataloader) lib/dataloader.ex:123: Dataloader.load_many/4
(absinthe) lib/absinthe/resolution/helpers.ex:255: Absinthe.Resolution.Helpers.do_dataloader/6
(absinthe) lib/absinthe/resolution.ex:209: Absinthe.Resolution.call/2
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:209: Absinthe.Phase.Document.Execution.Resolution.reduce_resolution/1
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:168: Absinthe.Phase.Document.Execution.Resolution.do_resolve_field/4
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:153: Absinthe.Phase.Document.Execution.Resolution.do_resolve_fields/6
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:72: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:257: Absinthe.Phase.Document.Execution.Resolution.build_result/4
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:153: Absinthe.Phase.Document.Execution.Resolution.do_resolve_fields/6
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:72: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:53: Absinthe.Phase.Document.Execution.Resolution.perform_resolution/3
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:24: Absinthe.Phase.Document.Execution.Resolution.resolve_current/3
(absinthe) lib/absinthe/pipeline.ex:269: Absinthe.Pipeline.run_phase/3
(absinthe_plug) lib/absinthe/plug.ex:414: Absinthe.Plug.run_query/4
(absinthe_plug) lib/absinthe/plug.ex:240: Absinthe.Plug.call/2
It looks like it goes through https://github.com/absinthe-graphql/dataloader/blob/master/lib/dataloader/ecto.ex#L310 correctly and matches the HasThrough in the case. Then it runs again at https://github.com/absinthe-graphql/dataloader/blob/master/lib/dataloader/ecto.ex#L326. Based on how our associations are set up, this returns an additional HasThrough struct, which then results in the error.
Basically, my question is this:
I want to be able to do the following query in absinthe:
query {
categories {
products(limit: 4) {
...
}
}
}
The resolver for products in the categories is a dataloader/1 function delegating to the query function, as per the tutorial. In the query/2 function I receive the limit parameter, as expected, but when I go to limit the query like

  from p in Product, limit: 4

Dataloader gives 4 objects in total, not 4 per category. How can I get around this limitation? StackOverflow turned up a couple of solutions like https://stackoverflow.com/questions/1124603/grouped-limit-in-postgresql-show-the-first-n-rows-for-each-group, but I really don't know where to plug them.
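The "grouped limit" from those StackOverflow answers can be expressed in Ecto with a window function that ranks rows per parent. A sketch under assumptions from the question (a Product schema with a category_id column, Ecto >= 3.0 for window functions; the helper name is hypothetical, and it would still need to be wired into a custom run_batch so Dataloader batches by category_id):

```elixir
import Ecto.Query

# Rank products within each category, then keep only the first
# `limit` rows per category: the per-parent limit the default
# query/2 callback cannot express.
def top_products_per_category(category_ids, limit) do
  ranked =
    from p in Product,
      where: p.category_id in ^category_ids,
      select: %{
        id: p.id,
        rn: row_number() |> over(partition_by: p.category_id, order_by: p.id)
      }

  from p in Product,
    join: r in subquery(ranked),
    on: r.id == p.id,
    where: r.rn <= ^limit
end
```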
This might be a bit vague but I believe it's to do with the batching functionality here #93 (which is what I upgraded to get).
My app uses Postgres schemas for multi-tenancy, and in 1.0.7 this worked fine. As far as I can tell, I haven't needed to manually specify the schema for Dataloader anywhere: my Absinthe resolvers use the correct prefix (e.g. loading data using Repo.one(query, prefix: "my_prefix")), and then Dataloader magically does the rest.
Upon upgrading to 1.0.8 for the new lateral join queries, the database prefix information is now missing when preloading data (and thus erroring, because the default schema has no data in it).
If there's any more information I can provide, please let me know!
Writing this up as discussed in Slack, although I'm not sure it is a bug per se. I think this may be just a change in behavior that should be documented as a breaking change (in 1.0.3). I'll prepare a PR to that effect.
This complicates investigation and breaks all links from hexdocs.
The model:
defmodule Database.Models.Account do
  @moduledoc false
  use Ecto.Schema
  import Ecto.Changeset

  @primary_key {:id, :binary_id, autogenerate: true}
  @foreign_key_type :binary_id

  schema "accounts" do
    has_many(:organization_memberships, Database.Models.OrganizationMembership)
    has_many(:organizations, through: [:organization_memberships, :organization])
  end

  @spec data :: Dataloader.Ecto.t()
  def data() do
    Dataloader.Ecto.new(Database.Repository)
  end
end
The type:
defmodule Graphql.Types.Account do
  @moduledoc false
  use Absinthe.Schema.Notation
  import Absinthe.Resolution.Helpers

  object :account do
    field :id, non_null(:id)
    field :organizations, list_of(:organization),
      resolve: dataloader(Database.Models.Organization)
  end
end
The context with the dataloader:

def context(context) do
  context
  |> Map.merge(%{
    loader:
      Enum.reduce(
        [
          Database.Models.Account,
          Database.Models.Organization
        ],
        Dataloader.new(),
        fn model, loader -> Dataloader.add_source(loader, model, model.data()) end
      )
  })
end
The exception:
attempting to cast or change association `organization_memberships` from `Database.Models.Account` that was not loaded. Please preload your associations before manipulating them through changesets
I attempted to also add dataloader sources for OrganizationMembership, but that didn't change anything.
Why can this error be produced? It happens when we use a big graph query with connections.
I'm trying to filter an object's association with the id of my current user, which I put in the context via middleware. I am trying, and failing, to find a way to pass this to the dataloader method I have set up in the context.
def query(UserExerciseFavourite, args) do
  # I want the user id in args
  # ...
end
I can pass arguments this way:
object :exercise do
  field :user_exercise_favourites, list_of(:user_exercise_favourite) do
    resolve(dataloader(Exercises, :user_exercise_favourites, args: %{foo: "bar"}))
  end
end
And I can access the id this way, but I can't find a way to then pass it to the method like I can above!
field :user_exercise_favourites, list_of(:user_exercise_favourite) do
  resolve fn exercise, _, %{context: %{loader: loader} = context} ->
    user_id = context |> Map.get(:current_user)
    Logger.debug("\n\n User: #{inspect user_id}\n\n")

    # How can I take my user out of the context and pass it to the Dataloader?
    loader
    |> on_load(fn loader ->
      {:ok, Dataloader.load(loader, Exercises, :user_exercise_favourites, exercise)}
    end)
  end
end
Any tips please!
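One pattern that works here: since the Absinthe context and the Dataloader sources are both built per request, the source can be created with the current user baked in via the default_params option, which Dataloader.Ecto merges into the args every query/2 call receives. A sketch, assuming the Exercises source and schema names from the question:

```elixir
def context(ctx) do
  source =
    Dataloader.Ecto.new(Repo,
      query: &Exercises.query/2,
      default_params: %{current_user: ctx[:current_user]}
    )

  Map.put(ctx, :loader, Dataloader.add_source(Dataloader.new(), Exercises, source))
end

# default_params is merged into the args of every query/2 call:
def query(UserExerciseFavourite, %{current_user: user}) do
  from f in UserExerciseFavourite, where: f.user_id == ^user.id
end

def query(queryable, _args), do: queryable
```

With this in place the field can keep using the plain dataloader/2 helper, since the user no longer needs to travel through the resolver.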
Hi,
we use Absinthe, Ecto, and Dataloader to fetch data from the DB. In a table we have a string column, but in the GraphQL interface we need to present the string as an enum (with String.to_atom or something). Is there a place for such a transformation?
Thanks.
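One place for such a transformation is the resolver itself: wrap the load in on_load/2 and map the loaded value before returning it. A sketch with hypothetical names (Source as the dataloader source, a :record association whose :status column holds the string):

```elixir
field :status, :status_enum do
  resolve fn parent, _args, %{context: %{loader: loader}} ->
    loader
    |> Dataloader.load(Source, :record, parent)
    |> on_load(fn loader ->
      record = Dataloader.get(loader, Source, :record, parent)
      # to_existing_atom avoids leaking atoms from arbitrary DB strings
      {:ok, record && String.to_existing_atom(record.status)}
    end)
  end
end
```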
https://github.com/absinthe-graphql/dataloader/blob/master/lib/dataloader/ecto.ex#L281
Can this be changed from Map.t() to map()?
I pulled down the repo but could not push a branch upstream.
Thanks!
Hi guys, given a schema like this:
schema "communities" do
  field(:title, :string)
  field(:desc, :string)
  ...

  many_to_many(
    :posts,
    Post,
    join_through: "communities_posts",
    join_keys: [community_id: :id, post_id: :id]
  )

  timestamps(type: :utc_datetime)
end
Absinthe schema:

object :community do
  field(:id, :id)
  field(:title, :string)
  field(:desc, :string)
  ...

  field :posts_count, :integer do
    arg(:count, :count_type, default_value: :count)
    arg(:type, :community_type, default_value: :community)
    resolve(dataloader(CMS, :posts))
  end
end
The dataloader query part:

def query(Post, args) do
  Post
  |> select([p], count(p.id))
end
What I want is very simple: get the total post count of the community, but I got GROUP BY errors:
[debug] ABSINTHE schema=MastaniServerWeb.Schema variables=%{}
---
{
community(id: 3) {
title
postsCount
}
}
---
[debug] QUERY OK source="communities" db=5.4ms
SELECT c0."id", c0."title", c0."desc", c0."user_id", c0."inserted_at", c0."updated_at" FROM "communities" AS c0 WHERE (c0."id" = $1) [3]
[debug] QUERY ERROR source="cms_posts" db=20.1ms
SELECT c1."id", count(c0."id") FROM "cms_posts" AS c0 INNER JOIN "communities" AS c1 ON c1."id" = ANY($1) INNER JOIN "communities_posts" AS c2 ON c2."community_id" = c1."id" WHERE (c2."post_id" = c0."id") ORDER BY c1."id" [[3]]
[info] Sent 200 in 909ms
[error] Task #PID<0.487.0> started from #PID<0.485.0> terminating
** (Postgrex.Error) ERROR 42803 (grouping_error): column "c1.id" must appear in the GROUP BY clause or be used in an aggregate function
(ecto) lib/ecto/adapters/sql.ex:431: Ecto.Adapters.SQL.execute_and_cache/7
(ecto) lib/ecto/repo/queryable.ex:133: Ecto.Repo.Queryable.execute/5
(ecto) lib/ecto/repo/queryable.ex:37: Ecto.Repo.Queryable.all/4
(elixir) lib/enum.ex:1294: Enum."-map/2-lists^map/1-0-"/2
(dataloader) lib/dataloader/ecto.ex:398: Dataloader.Source.Dataloader.Ecto.run_batch/2
(elixir) lib/task/supervised.ex:88: Task.Supervised.do_apply/2
(elixir) lib/task/supervised.ex:38: Task.Supervised.reply/5
(stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3
Function: &:erlang.apply/2
Args: [#Function<5.80139747/1 in Dataloader.Source.Dataloader.Ecto.run/1>, [{{:assoc, MastaniServer.CMS.Community, #PID<0.475.0>, :posts, MastaniServer.CMS.Post, %{count: :count, type: :community}}, #MapSet<[{[3], %MastaniServer.CMS.Community{__meta__: #Ecto.Schema.Metadata<:loaded, "communities">, author: #Ecto.Association.NotLoaded<association :author is not loaded>, desc: "js community", editors: #Ecto.Association.NotLoaded<association :editors is not loaded>, id: 3, inserted_at: #DateTime<2018-02-06 11:25:46.662701Z>, posts: #Ecto.Association.NotLoaded<association :posts is not loaded>, subscribers: #Ecto.Association.NotLoaded<association :subscribers is not loaded>, title: "js", updated_at: #DateTime<2018-02-06 11:25:46.662785Z>, user_id: 1}}]>}]]
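Aggregates like this don't fit Dataloader's default association batching, which selects whole rows per parent; the mechanism the Dataloader.Ecto docs describe for counts is a custom run_batch callback. A sketch adapted to the schema in this issue (the join table and column names come from the question; the exact wiring into the resolver is an assumption):

```elixir
import Ecto.Query

# Batch a count per community in a single grouped query.
def run_batch(Post, _query, :community_id, community_ids, repo_opts) do
  counts =
    from(p in Post,
      join: cp in "communities_posts", on: cp.post_id == p.id,
      where: cp.community_id in ^community_ids,
      group_by: cp.community_id,
      select: {cp.community_id, count(p.id)}
    )
    |> Repo.all(repo_opts)
    |> Map.new()

  # run_batch must return one result per requested id, in order
  for id <- community_ids, do: [Map.get(counts, id, 0)]
end

# Fall back to the default implementation for everything else.
def run_batch(queryable, query, col, inputs, repo_opts) do
  Dataloader.Ecto.run_batch(Repo, queryable, query, col, inputs, repo_opts)
end
```

The GROUP BY error above comes from putting the aggregate in query/2, where Dataloader still joins in the parent rows; run_batch keeps the aggregate query under your control.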
Right now we can only load by id or a named association. It would be nice if we could support the full Ecto functionality by having the ability to load any queryable. We cannot do batching on queryables, so that benefit is gone, but we can execute the queries in parallel, reducing waterfalls, and apply simple caching. Any thoughts?
Given the following Ecto schema (copied from the tests)
schema "posts" do
  belongs_to(:user, Dataloader.User)
  has_many(:likes, Dataloader.Like)
  has_many(:liking_users, through: [:likes, :user])
  field(:title, :string)
  field(:deleted_at, :utc_datetime)
end
and a query/2 function like

def query(User, _args) do
  User |> order_by(desc: :id)
end
The end result of

loader
|> Dataloader.load(Test, :liking_users, post1)
|> Dataloader.run()

is not ordered by user.id DESC. In other words, the end result is returned in the order of the intermediate query, not honoring the ORDER BY clause given in query/2.
I'm not very familiar with the code, but from what I've seen in it, it seems to me that instead of keeping the intermediate query results for %Ecto.Association.HasThrough{through: through}, these results have to be replaced by the end query in order to respect the given ORDER BY.
We've been seeing ballooning memory usage on our production servers.
The memory is taken up by the Task.Supervisor spawned here: https://github.com/absinthe-graphql/dataloader/blob/master/lib/dataloader.ex#L232. Each time async_safely is run, a new supervisor is spawned which is never cleaned up.
For large queries, this supervisor can take up considerable memory (I'm not sure why; this may be another, different bug). In our case it takes 32MB per supervisor.
Certain queries can cause Absinthe.run to be called upwards of 30 times, which gives rather large memory requirements.
When I try to update to Ecto 3.0 in my codebase I get a conflict:
Failed to use "db_connection" because
  ecto (versions 2.2.0 to 2.2.11) requires ~> 1.1
  ecto_sql (version 3.0.0) requires ~> 2.0
The v1.0.8 tag is missing on GitHub (maybe it was simply not pushed), and therefore the hexdocs links are broken for the 1.0.8 documentation.
How can I use Dataloader load_many to load all records (without giving any IDs)?
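For what it's worth, Dataloader.Ecto also accepts a {:many, Schema} batch key with a keyword list of params instead of a parent record (the same shape appears in the MentionTag test referenced elsewhere in this tracker), which can express "all records matching these params". A sketch with placeholder names (Source, Post):

```elixir
results =
  loader
  |> Dataloader.load(Source, {:many, Post}, published: true)
  |> Dataloader.run()
  |> Dataloader.get(Source, {:many, Post}, published: true)
```

To load truly all records, the params can be left empty and query/2 can return the full queryable.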
We can already override run_batch, but preload is hard-coded. It would be nice to be able to add a customizable preload here.
I have a Dataloader with several data sources. Some are slow Dataloader.KV sources, so I set their timeouts to 30 seconds. Others are fast Dataloader.Ecto sources, for which I don't specify a timeout. The problem is that dataloader_timeout/1 returns nil instead of 30_000, so the dataloader doesn't respect the timeout of the slow data sources.
I think this is an issue, feel free to close otherwise.
I might have misunderstood what dataloader is trying to do, but I thought I could potentially use it to load data across sources (In this case two distinct databases).
An example could be:
IAM database: contains basic user information.
Core database: contains user preferences etc.
Example of what I would like to achieve in Absinthe:
object :user do
  field(:id, :string)
  field(:email, :string)
  field(:preferences, :user_preferences, resolve: dataloader(CoreDataSource))
end
If I try to do something similar to this I get the following error:
Valid association user_preferences not found on schema IAM.User.Schemas.User
It makes sense, I just hoped there was a way to use the ID provided to the dataloader to effectively load data from other sources. This case is trivial to solve, but the value of doing this with dataloader comes in handy when requesting a list of objects where you're loading subfields.
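For cross-database loads like this, a Dataloader.KV source sidesteps the Ecto-association requirement: its load function receives the batched parent structs and can query any repo. A sketch under assumptions from the example (CoreRepo and UserPreference are hypothetical names for the second database):

```elixir
import Ecto.Query

preferences_source =
  Dataloader.KV.new(fn _batch_key, users ->
    ids = Enum.map(users, & &1.id)

    prefs =
      from(p in UserPreference, where: p.user_id in ^ids)
      |> CoreRepo.all()
      |> Map.new(&{&1.user_id, &1})

    # The KV contract: return a map keyed by the batched inputs.
    Map.new(users, fn user -> {user, Map.get(prefs, user.id)} end)
  end)
```

With that source registered under CoreDataSource, the schema field can stay resolve: dataloader(CoreDataSource, :preferences).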
https://hexdocs.pm/ecto/Ecto.Schema.html#has_many/3-has_many-has_one-through
Right now a RuntimeError saying the association cannot be found in the schema is raised.
Dataloader.Ecto currently throws an UndefinedFunctionError if you try to load a non-Ecto struct. However, I believe this is a scenario that Dataloader.Ecto should be able to handle, because that non-Ecto struct could be derived from data that is in the database. Here is an example error for an example non-Ecto struct named %Dataloader.MentionTag{} (which represents entries that can be mentioned in a chat program like Slack):
** (UndefinedFunctionError) function Dataloader.MentionTag.__schema__/1 is undefined or private
code: |> Dataloader.load(Test, {:many, MentionTag}, name: "everything")
stacktrace:
(dataloader) Dataloader.MentionTag.__schema__(:primary_key)
(dataloader) lib/dataloader/ecto.ex:367: Dataloader.Source.Dataloader.Ecto.normalize_value/2
(dataloader) lib/dataloader/ecto.ex:343: Dataloader.Source.Dataloader.Ecto.get_keys/2
(dataloader) lib/dataloader/ecto.ex:247: Dataloader.Source.Dataloader.Ecto.fetch/3
(dataloader) lib/dataloader/ecto.ex:287: Dataloader.Source.Dataloader.Ecto.load/3
(elixir) lib/enum.ex:1925: Enum."-reduce/3-lists^foldl/2-0-"/3
(dataloader) lib/dataloader.ex:128: Dataloader.load_many/4
test/dataloader/ecto_test.exs:372: (test)
Is this a scenario that Dataloader.Ecto could be expanded to support?
Let's say I have a :fruits_basket object that can contain fruits and can be placed in another basket. Something like:
object :fruits_basket do
  field :id, non_null(:id)
  field :slug, non_null(:string)
  field :fruits, list_of(:delicious_fruit), resolve: dataloader(:db, :fruits)
  field :parent, :fruits_basket, resolve: dataloader(:db, :parent_basket)
end
As of today a request like:
basket(slug: "grandma-basket") {
  id
  fruits {
    name
    color
  }
  parent {
    id # We only ask for the parent's id and no other field
  }
}
...would make two queries like:
SELECT (...) FROM fruits_baskets WHERE slug = "grandma-basket" -- Get basket
SELECT (...) FROM fruits_baskets WHERE id = 42 -- Get parent basket
The thing is, that second query is not necessary: we already have the parent id in grandma's basket. I think it is a very common pattern to query only for the id of an association. In this example, the request could be used to show a "Go to parent" link on the frontend. My real use case is a comment system where comments can reply to each other via a reply_to field. I would love to see Dataloader return the reply_to_id field instead of preloading the full reply_to when only the id is requested.
The easiest way to achieve this with the actual system is to add a field for the id:

...
field :reply_to_id, :id
field :reply_to, :comment, resolve: dataloader(:db, :reply_to)
...

But that doesn't look very neat to me.
I only see hexdocs for 1.0.2 and not for 1.0.3: https://hexdocs.pm/dataloader/1.0.3/Dataloader.html is a 404 and https://hexdocs.pm/dataloader/Dataloader.html shows the 1.0.2 docs.
Hi devs, I'm reporting this issue after a lot of unsuccessful debugging, even with :debugger. The problem is that when you create a migration without an :id PK, such as
defmodule MyApp.Repo.Migrations.CreatePagaresAcreedores do
  use Ecto.Migration

  def change do
    create table("pagares_acreedores", primary_key: false) do
      add(:pagare_id, references("pagares"), null: false, on_delete: :delete_all)
      add(:person_id, references("people"), null: false)
      timestamps()
    end

    # avoid repeated person acreedor in pagare
    create(unique_index("pagares_acreedores", [:pagare_id, :person_id]))
  end
end
... and try to load it, only the first record is loaded. This is fixed as soon as you put the id in the migration. Unfortunately I can't paste the rest of the code here, but I'm leaving this issue to keep a record of it. I suspect the problem starts in

defp run_batch({{:assoc, schema, pid, field, queryable, opts} = key, records}, source)

but I couldn't put a breakpoint in it because it times out the batch and kills the process. For the record, the SQL query performed is correct; perhaps it's just the fetch logic that needs to support assocs without :id.
Thanks in advance.
Upgraded to 1.0.8 of dataloader and am getting this error now:
** (Dataloader.GetError) {%Ecto.SubQueryError{exception: %Ecto.QueryError{message: "cannot preload associations in subquery in query:\n\nfrom u0 in Server.Accounts.User,\n join: u1 in Server.Accounts.UserProfile,\n on: u1.user_id == u0.id,\n where: u0.id == parent_as(:input).id,\n limit: ^40,\n offset: ^0,\n preload: [profile: u1]\n"}, message: "the following exception happened when compiling a subquery.\n\n ** (Ecto.QueryError) cannot preload associations in subquery in query:\n \n from u0 in Server.Accounts.User,\n join: u1 in Server.Accounts.UserProfile,\n on: u1.user_id == u0.id,\n where: u0.id == parent_as(:input).id,\n limit: ^40,\n offset: ^0,\n preload: [profile: u1]\n \n\nThe subquery originated from the following query:\n\nfrom u0 in subquery(from u0 in Server.Accounts.User,\n where: u0.id in ^[\"4cb62820-1d0e-4265-852a-e62086d98264\"],\n distinct: true,\n select: [:id]),\n as: :input,\n join_lateral: u1 in subquery(from u0 in Server.Accounts.User,\n join: p1 in assoc(u0, :profile),\n where: u0.id == parent_as(:input).id,\n limit: ^40,\n offset: ^0,\n preload: [profile: p1]),\n on: true,\n select: u1\n"}, [{Ecto.Repo.Queryable, :execute, 4, [file: 'lib/ecto/repo/queryable.ex', line: 176]}, {Ecto.Repo.Queryable, :all, 3, [file: 'lib/ecto/repo/queryable.ex', line: 17]}, {Dataloader.Ecto, :run_batch, 6, [file: 'lib/dataloader/ecto.ex', line: 328]}, {Dataloader.Source.Dataloader.Ecto, :run_batch, 2, [file: 'lib/dataloader/ecto.ex', line: 643]}, {Dataloader.Source.Dataloader.Ecto, :"-run_batches/1-fun-1-", 2, [file: 'lib/dataloader/ecto.ex', line: 601]}, {Task.Supervised, :invoke_mfa, 2, [file: 'lib/task/supervised.ex', line: 90]}, {Task.Supervised, :reply, 5, [file: 'lib/task/supervised.ex', line: 35]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 226]}]} (dataloader 1.0.8) lib/dataloader.ex:223: Dataloader.do_get/2
Didn't see anything mentioned in the changelog about this - happy to dig in and provide more information if helpful!
Commit 8a53e31 changed the optional dependency on ecto to ecto_sql. Since my project is still on Ecto 2.x, the optional dependency on ecto_sql has no effect. This means that dataloader may be compiled before ecto, which in turn means that the if Code.ensure_loaded?(Ecto) check fails in dataloader_ecto.ex, and then my code breaks since it uses Dataloader.Ecto.
I'm not sure what the best course is for this. Some options/thoughts:
- Extract Dataloader.Ecto into a separate project that has a direct dependency on ecto.
- Keep the optional dependency on ecto, since the functionality Dataloader.Ecto uses is actually from ecto and not ecto_sql.
It seems that if we make multiple duplicate loads, Dataloader constructs a SQL query with duplicate IDs. For example, we construct a loader like this:

loader =
  loader
  |> Dataloader.load(:nest_db, NestDB.BuyerOffer, 290)
  |> Dataloader.load(:nest_db, NestDB.BuyerOffer, 290)
  |> Dataloader.load(:nest_db, NestDB.BuyerOffer, 290)
Now we get a struct like so:
%Dataloader{
  options: [],
  sources: %{
    nest_db: %Dataloader.Ecto{
      batches: %{
        {:queryable, #PID<0.1196.0>, NestDB.BuyerOffer, %{}} => [
          {290, 290},
          {290, 290},
          {290, 290}
        ]
      },
      default_params: %{},
      options: [],
      query: #Function<0.77438868/2 in Eggl.Loader.dataloader/0>,
      repo: NestDB.Repo,
      repo_opts: [],
      results: %{}
    }
  }
}
Which results in a query that looks like this:
...
WHERE (b0."id" = ANY($1)) [[290, 290, 290, 290]]
Could we use a MapSet to remove this duplication?
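For reference, the batch entries are plain {id, value} tuples in a list, so duplicates survive until the query is built; collapsing them before interpolation would shrink the IN list. A minimal illustration of the idea (not the library's actual internals):

```elixir
batch = [{290, 290}, {290, 290}, {290, 290}]

# Enum.uniq preserves insertion order; a MapSet works too:
# batch |> MapSet.new() |> MapSet.to_list()
deduped = Enum.uniq(batch)

IO.inspect(deduped) # [{290, 290}]
```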
I'm getting an error from Ecto when using dataloader with a batched GraphQL query request. Specifically I have two Ecto schemas, Foo and Bar, which both reference a third, Baz, via association.
The batched query operates against both Foo and Bar, utilizing the same dataloader(Baz).
This line in the ecto source triggers the error I'm receiving:
https://github.com/elixir-ecto/ecto/blob/master/lib/ecto/repo/preloader.ex#L133
Not sure how to fix this, or if this is something that can be handled in the dataloader code.
The traceback:
** (ArgumentError) expected a homogeneous list containing the same struct, got: Foo and Bar
(elixir) lib/enum.ex:1899: Enum."-reduce/3-lists^foldl/2-0-"/3
(elixir) lib/enum.ex:1294: Enum."-map/2-lists^map/1-0-"/2
(dataloader) lib/dataloader/ecto.ex:297: Dataloader.Source.Dataloader.Ecto.run_batch/2
(elixir) lib/task/supervised.ex:88: Task.Supervised.do_apply/2
(elixir) lib/task/supervised.ex:38: Task.Supervised.reply/5
(stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3
https://github.com/absinthe-graphql/dataloader/blob/master/lib/dataloader/ecto.ex#L297
1.0.2
Craft GraphQL APIs In Elixir
Chap 10, Complex Queries
schema.ex
query do
  field :menu_item, :menu_item do
    arg :id, non_null(:id)
    resolve &Resolvers.Menu.get_item/3
  end
end
resolvers/menu.ex
defmodule PlateSlateWeb.Resolvers.Menu do
  alias PlateSlate.Menu
  import Absinthe.Resolution.Helpers

  def get_item(_, %{id: id}, %{context: %{loader: loader}}) do
    loader
    |> Dataloader.load(Menu, Menu.Item, id)
    |> on_load(fn loader ->
      {:ok, Dataloader.get(loader, Menu, Menu.Item, id)}
    end)
  end
end
GraphiQL
{
  menuItem(id: "1") {
    name
  }
}

{
  "data": {
    "menuItem": {
      "name": "Reuben"
    }
  }
}
It works with dataloader 1.0.0 or 1.0.1. On 1.0.2 it returns:

{
  "data": {
    "menuItem": null
  }
}
Sorry to submit an issue here due to the temporary shutdown of Pragmatic Forum.
The FOSTA-SESTA act of 2017 makes us legally responsible for all content posted here by anyone at any time. This act removes Section 230 Safe Harbor protections.
We cannot possibly monitor all posts made, in real time, and decide if they break any particular interpretation of a vague and imprecise law. That is logistically ludicrous and philosophically objectionable. Our only option is to remove all access to the forums until the US legislature restores reasonable safe harbor law
I've been following this guide for setting up absinthe + dataloader + ecto.
https://hexdocs.pm/absinthe/ecto.html
I cannot for the life of me figure out why none of the examples in that guide (or the many others I've searched) work for my application. When I make this GraphQL query:

{ families { id, name, traits { id } }}

I get an exception that reads: "The given atom - :traits - is not a module."
** (exit) an exception was raised:
** (Dataloader.GetError) The given atom - :traits - is not a module.
This can happen if you intend to pass an Ecto struct in your call to
`dataloader/4` but pass something other than a struct.
(dataloader 1.0.7) lib/dataloader/ecto.ex:468: Dataloader.Source.Dataloader.Ecto.validate_queryable/1
(dataloader 1.0.7) lib/dataloader/ecto.ex:442: Dataloader.Source.Dataloader.Ecto.get_keys/2
(dataloader 1.0.7) lib/dataloader/ecto.ex:370: Dataloader.Source.Dataloader.Ecto.load/3
(elixir 1.10.3) lib/enum.ex:2111: Enum."-reduce/3-lists^foldl/2-0-"/3
(dataloader 1.0.7) lib/dataloader.ex:128: Dataloader.load_many/4
(absinthe 1.4.16) lib/absinthe/resolution/helpers.ex:255: Absinthe.Resolution.Helpers.do_dataloader/6
(absinthe 1.4.16) lib/absinthe/resolution.ex:209: Absinthe.Resolution.call/2
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:209: Absinthe.Phase.Document.Execution.Resolution.reduce_resolution/1
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:168: Absinthe.Phase.Document.Execution.Resolution.do_resolve_field/4
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:153: Absinthe.Phase.Document.Execution.Resolution.do_resolve_fields/6
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:72: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:98: Absinthe.Phase.Document.Execution.Resolution.walk_results/6
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:87: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:257: Absinthe.Phase.Document.Execution.Resolution.build_result/4
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:153: Absinthe.Phase.Document.Execution.Resolution.do_resolve_fields/6
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:72: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:53: Absinthe.Phase.Document.Execution.Resolution.perform_resolution/3
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:24: Absinthe.Phase.Document.Execution.Resolution.resolve_current/3
(absinthe 1.4.16) lib/absinthe/pipeline.ex:274: Absinthe.Pipeline.run_phase/3
(absinthe_plug 1.4.7) lib/absinthe/plug.ex:421: Absinthe.Plug.run_query/4
Here are the relevant parts of my Schema module...
The query block:
query do
  @desc "Get all families"
  field :families, list_of(:family) do
    resolve &Resolvers.Catalog.list_families/3
  end
end
The resolver:

def list_families(_parent, _args, _resolution) do
  families =
    Family
    |> Repo.all()
    |> Enum.map(&Map.drop(&1, [:__meta__, :__struct__, :traits]))

  {:ok, families}
end
The family object:

object :family do
  field :id, :id
  field :name, :string
  field :traits, list_of(:trait), resolve: dataloader(Catalog)
end
The dataloader:

def data() do
  Dataloader.Ecto.new(Repo)
end
The trait object:

object :trait do
  field :id, :id
  field :name, :string
end
I am under the impression that Dataloader should be able to figure out which Ecto schema I'm referring to, because the association name matches the name of the GraphQL field.
Here is the schema for family
schema "families" do
  field :name, :string
  timestamps()

  many_to_many :traits, Trait, join_through: "families_traits"
end
I don't know if my exception is an issue with Dataloader or not, but I do believe there is an issue with the exception message. It refers to a function dataloader/4, which I do not see anywhere. Should that say dataloader/3?
Also, when I try to remedy the exception by using Absinthe.Resolution.Helpers.dataloader/3 and explicitly passing the Ecto schema to use, I get another exception that makes even less sense to me.
Here is the family object with dataloader/3:

object :family do
  field :id, :id
  field :name, :string
  field :traits, list_of(:trait), resolve: dataloader(Catalog, Trait, [])
end
And here's the exception:
Request: POST /api/graphql
** (exit) an exception was raised:
** (Dataloader.GetError) {{:badmatch, :error}, [{Dataloader.Source.Dataloader.Ecto, :"-run_batch/2-fun-1-", 3, [file: 'lib/dataloader/ecto.ex', line: 548]}, {Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 2111]}, {Dataloader.Source.Dataloader.Ecto, :run_batch, 2, [file: 'lib/dataloader/ecto.ex', line: 547]}, {Task.Supervised, :invoke_mfa, 2, [file: 'lib/task/supervised.ex', line: 90]}, {Task.Supervised, :reply, 5, [file: 'lib/task/supervised.ex', line: 35]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]}
(dataloader 1.0.7) lib/dataloader.ex:198: Dataloader.do_get/2
(absinthe 1.4.16) lib/absinthe/resolution/helpers.ex:257: anonymous fn/5 in Absinthe.Resolution.Helpers.do_dataloader/6
(absinthe 1.4.16) lib/absinthe/middleware/dataloader.ex:33: Absinthe.Middleware.Dataloader.get_result/2
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:209: Absinthe.Phase.Document.Execution.Resolution.reduce_resolution/1
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:168: Absinthe.Phase.Document.Execution.Resolution.do_resolve_field/4
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:98: Absinthe.Phase.Document.Execution.Resolution.walk_results/6
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:77: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:98: Absinthe.Phase.Document.Execution.Resolution.walk_results/6
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:87: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:98: Absinthe.Phase.Document.Execution.Resolution.walk_results/6
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:77: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:53: Absinthe.Phase.Document.Execution.Resolution.perform_resolution/3
(absinthe 1.4.16) lib/absinthe/phase/document/execution/resolution.ex:24: Absinthe.Phase.Document.Execution.Resolution.resolve_current/3
(absinthe 1.4.16) lib/absinthe/pipeline.ex:274: Absinthe.Pipeline.run_phase/3
(absinthe_plug 1.4.7) lib/absinthe/plug.ex:421: Absinthe.Plug.run_query/4
(absinthe_plug 1.4.7) lib/absinthe/plug.ex:247: Absinthe.Plug.call/2
(phoenix 1.4.17) lib/phoenix/router/route.ex:41: Phoenix.Router.Route.call/2
(phoenix 1.4.17) lib/phoenix/router.ex:288: Phoenix.Router.__call__/2
(myapp 0.1.0) lib/myapp_web/endpoint.ex:1: MyappWeb.Endpoint.plug_builder_call/2
(myapp 0.1.0) lib/plug/debugger.ex:132: MyappWeb.Endpoint."call (overridable 3)"/2
Analogous to graphql/dataloader#42
Very useful with the KV source, where the load function is calling batch APIs with max batch sizes.
I may be able to help implement this. If I were to implement to this, should the "max batch size" functionality be in the Dataloader or KV level? I think it makes sense at the Dataloader level to allow any source to use this functionality.
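In the meantime, one way to get this behavior for a KV source is to wrap the load function so it chunks the ids itself. This is only a sketch of the idea (the `BatchLimit` module name and `max_size` parameter are hypothetical, not part of dataloader):

```elixir
defmodule BatchLimit do
  # Wrap a KV load function so each call to the underlying batch API
  # receives at most `max_size` ids; the per-chunk result maps are
  # merged back into the single map dataloader expects.
  def wrap(load_fun, max_size) do
    fn batch_key, ids ->
      ids
      |> Enum.chunk_every(max_size)
      |> Enum.map(&load_fun.(batch_key, &1))
      |> Enum.reduce(%{}, &Map.merge(&2, &1))
    end
  end
end

# Usage with a KV source (MyAPI.fetch_many/2 is a placeholder):
# Dataloader.KV.new(BatchLimit.wrap(&MyAPI.fetch_many/2, 100))
```

Building it into Dataloader itself, as proposed, would let every source share the option instead of each KV user hand-rolling a wrapper like this.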
Even recompiling absinthe doesn't fix it.
{:phoenix, "~> 1.4.15"},
{:phoenix_pubsub, "~> 1.1"},
{:phoenix_ecto, "~> 4.0"},
{:gettext, "~> 0.11"},
{:nuki, in_umbrella: true},
{:jason, "~> 1.0"},
{:plug_cowboy, "~> 2.0"},
{:dataloader, "~> 1.0"},
{:absinthe_plug, "~> 1.4"},
{:absinthe, "~> 1.4"},
{:absinthe_ecto, "~> 0.1.3"}
I'm in an umbrella application. I added absinthe and dataloader to the phoenix app within the umbrella, and only dataloader to the logic app within the umbrella.
I tried deleting _build and deps, but got the same result.
defmodule NukiWeb.Schema.ContentTypes do
import Absinthe.Resolution.Helpers, only: [dataloader: 1]
use Absinthe.Schema.Notation
# ====
$ iex -S mix phx.server
Erlang/OTP 21 [erts-10.1] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe]
==> nuki_web
Compiling 12 files (.ex)
== Compilation error in file lib/nuki_web/schema/content_types.ex ==
** (CompileError) lib/nuki_web/schema/content_types.ex:12: undefined function dataloader/1
(elixir 1.10.2) src/elixir_locals.erl:114: anonymous fn/3 in :elixir_locals.ensure_no_undefined_local/3
(stdlib 3.6) erl_eval.erl:680: :erl_eval.do_apply/6
(elixir 1.10.2) lib/kernel/parallel_compiler.ex:304: anonymous fn/4 in Kernel.ParallelCompiler.spawn_workers/7
I'm wondering if this is caused by my core app only having dataloader in mix.exs for:
defmodule Nuki.DataloaderRepo do
def data() do
Dataloader.Ecto.new(Nuki.Repo, query: &query/2)
end
def query(queryable, _params) do
queryable
end
end
And my phoenix app having dataloader and absinthe libs.
I'd appreciate any insight from anyone who has used this library in an umbrella setting.
When we have a limit and offset in a child queryable of a parent that has composite primary keys, we get a badmatch exception because of this:
dataloader/lib/dataloader/ecto.ex
Line 688 in 19345b4
I'm happy to help with a PR, I'm just not sure what the best way is to use the primary keys in the where clause below to get the results:
dataloader/lib/dataloader/ecto.ex
Line 706 in 19345b4
Any ideas?
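One possible shape for that where clause (just a sketch; `by_composite_keys/3` and its `pks`/`tuples` arguments are names I'm making up) is to build the condition with Ecto's dynamic/2, OR-ing together one AND-group per composite key tuple:

```elixir
import Ecto.Query

# Matches rows whose composite primary key equals any of the given
# value tuples, e.g. by_composite_keys(Item, [:a_id, :b_id], [[1, 2], [3, 4]]).
def by_composite_keys(queryable, pks, tuples) do
  condition =
    Enum.reduce(tuples, dynamic(false), fn values, acc ->
      # AND together pk1 == v1 and pk2 == v2 ... for this tuple
      row_match =
        pks
        |> Enum.zip(values)
        |> Enum.reduce(dynamic(true), fn {pk, val}, inner ->
          dynamic([q], ^inner and field(q, ^pk) == ^val)
        end)

      # OR this tuple's match into the overall condition
      dynamic([q], ^acc or ^row_match)
    end)

  where(queryable, ^condition)
end
```

Databases that support row-value syntax could do this more compactly with a fragment, but the dynamic/2 form works everywhere Ecto does.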
I've run into a problem where loading the same item from the same source with the same batch key, but after dataloader has already run, causes the record to be fetched again. Example:
post(id: 1) {
children {
parentPost { ... }
}
}
In the resolver for the "post" query, I load the record with dataloader instead of just using raw Ecto because I want its children to be able to read it without fetching it again. When I run the above query, it runs SELECT ... FROM posts WHERE id = ANY([1, ...]). Then it runs the query to get its children, and when resolving the parentPost field, it runs the SELECT ... FROM posts WHERE id = ANY([1, ...]) query again. The example isn't great because it looks like fetching parentPost is unnecessary, but I don't actually have that field; I just need to get the parent to perform some logic on each child post.
Looking at dataloader's code, it looks like when run_batches/1 calls the run_batch/2 function (not the customizable run_batch/5), run_batch/2 never checks whether the record has already been loaded; that check only happens in load:
dataloader/lib/dataloader/ecto.ex
Line 452 in 532d67e
EDIT: I opened up the debugger and it looks like the check in load should be enough; now I just have to figure out why fetched?/3 is returning false.
Calling Dataloader.run/1, then Dataloader.load/4 with new items, and then Dataloader.run/1 again should fetch results for the new items. When Dataloader.KV is the source, the second call to Dataloader.run/1 does not fetch results for the new items.
It's possible that the :load_function I'm passing to Dataloader.KV.new/1 is flawed. However, it should be easy to reproduce this behavior with a proven :load_function of your own.
With Dataloader.KV as the source:
iex(1)> loader = Dataloader.new() |> Dataloader.add_source(Data, kv_source)
iex(2)> loader = Dataloader.load(loader, Data, kv_batch, kv_item_1)
iex(3)> Dataloader.pending_batches?(loader)
true
iex(4)> loader = Dataloader.run(loader)
iex(5)> Dataloader.pending_batches?(loader)
false
iex(6)> loader = Dataloader.load(loader, Data, kv_batch, kv_item_2)
iex(7)> Dataloader.pending_batches?(loader)
false
On the other hand, if I use Dataloader.Ecto, I get a different result:
iex(1)> loader = Dataloader.new() |> Dataloader.add_source(EctoData, ecto_source)
iex(2)> loader = Dataloader.load(loader, EctoData, ecto_batch, ecto_item_1)
iex(3)> Dataloader.pending_batches?(loader)
true
iex(4)> loader = Dataloader.run(loader)
iex(5)> Dataloader.pending_batches?(loader)
false
iex(6)> loader = Dataloader.load(loader, EctoData, ecto_batch, ecto_item_2)
iex(7)> Dataloader.pending_batches?(loader)
true
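For what it's worth, even a trivially correct :load_function (a placeholder, not the one from my app) should be enough to reproduce the KV session above:

```elixir
# Placeholder load function: Dataloader.KV.new/1 expects a function
# that takes (batch_key, ids) and returns a map of id => result.
load_fun = fn _batch_key, ids ->
  Map.new(ids, fn id -> {id, %{id: id}} end)
end

kv_source = Dataloader.KV.new(load_fun)
```

With this source, every id resolves, so the missing pending batch after the second load can't be blamed on the load function.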
When you run a GraphQL query like:
query StarWarsHuman($humanID:ID!){
human(id: $humanID) {
name
friends {
name
friends {
name
}
}
}
}
You get back:
{
"data": {
"human": {
"name": "Luke Skywalker",
"friends": [
{
"name": "Han Solo",
"friends": null
},
{
"name": "Leia Organa",
"friends": null
},
{
"name": "C-3PO",
"friends": null
},
{
"name": "R2-D2",
"friends": null
}
]
}
}
}
You can traverse one depth level of the graph, but no further.
dataloader/lib/dataloader/ecto.ex
Line 540 in 56f7cb4
By default the ID type in Absinthe is a String. If one has regular integer id fields in their Ecto schema, a user might pass a string that can't be cast to ID; the line above will then fail, raising a match error.
iex(1)> Ecto.Type.cast(:id, "123")
{:ok, 123}
iex(2)> Ecto.Type.cast(:id, "foo")
:error
Setting get_policy to :return_nil_on_error or :tuples will still make dataloader raise.
The workaround seems to be validating the id arguments before calling the dataloader, which defeats the purpose of the dataloading helpers in Absinthe.Resolution.Helpers.
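Until dataloader handles the :error tuple itself, that pre-check can at least stay small. A hypothetical guard (safe_id/1 is my name, not an existing helper) built on the same Ecto.Type.cast/2 call shown above:

```elixir
# Cast user input with Ecto.Type.cast/2 before it reaches dataloader,
# turning an uncastable id into a resolver error instead of a raise.
def safe_id(raw) do
  case Ecto.Type.cast(:id, raw) do
    {:ok, id} -> {:ok, id}
    :error -> {:error, "invalid id: #{inspect(raw)}"}
  end
end
```

A resolver can then bail out with `{:error, _}` before ever touching the loader, which is exactly the boilerplate the helpers were supposed to remove.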
As discussed with @benwilson512, the assoc macro from absinthe_ecto checks whether the value for the field is already present in the parent (in case it was preloaded) and simply returns it if that is the case.
The Ecto dataloader should mimic this behaviour, which is super useful when a field may or may not already be preloaded.
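A sketch of the check being described (use_preloaded/2 is a made-up name): an association that has not been preloaded shows up as an %Ecto.Association.NotLoaded{} struct, so the loader could fall back to a query only in that case:

```elixir
# Return the association straight from the parent when it is already
# preloaded; %Ecto.Association.NotLoaded{} marks the not-loaded case.
def use_preloaded(parent, assoc_field) do
  case Map.get(parent, assoc_field) do
    %Ecto.Association.NotLoaded{} -> :not_loaded
    value -> {:ok, value}
  end
end
```

On :not_loaded the resolver would hand the parent to dataloader as usual; on {:ok, value} it can return immediately without a batch.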
This is more of a question, like is it a good idea / possible...
We have a struct that does not always have an id; the struct is generated from a sql query and we map the result into a struct (that might at some point get saved to the db).
That struct also has relations, and I was going to use dataloader to get its relations, but dataloader batches by the parent id (as far as I can tell). Meaning if we generate 3 different structs which all have a null id, dataloader says "I've seen this struct before" and thinks all of them are the same thing, when they are not.
Is there a way to tell dataloader not to do that? Can I say "Hey dataloader, use these key(s) to batch on"? Is this even a good idea? Are bagels just doughnuts in disguise??
The Ecto loader caches the existing associations if they are already loaded. I think a better behaviour is to explicitly cache them such as the Absinthe helper looks to support with this code:
def use_parent(loader, resource, parent, args, opts) do
with true <- Keyword.get(opts, :use_parent, false),
{:ok, val} <- is_map(parent) && Map.fetch(parent, resource) do
Dataloader.put(loader, Ectoloader, {resource, args}, parent, val)
else
_ -> loader
end
end
However, even if use_parent is false, the value will still be cached.
In the code below, if parent already contains resource, a query will not be made:
loader
|> Dataloader.load(Context, {resource, args}, parent)
|> on_load(fn loader ->
{:ok, Dataloader.get(loader, Context, {resource, args}, parent)}
end)
I'm using dataloader to resolve a field:
field :foos, list_of(:foo) do
resolve(dataloader(Foo, :foos, args: %{}))
end
When I query this field, I eventually hit my custom query() function. I want to be able to explicitly access values from the parent so I can use them in certain clauses within my query, similar to what's shown below:
def query(Foo, _params, %{bar: bar, baz: baz} = _foo_parent) do
from(f in Foo,
where: f.bar == ^bar,
where: f.baz != ^baz
)
end
Is there a way to achieve this? If not, it would be helpful functionality to have added.
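Since the query/2 callback only receives the queryable and the params, one workaround today is to thread the parent values into the batch params from the resolver. A sketch using the callback form of the dataloader helper (the field and the bar/baz values are from the example above; exact source names are assumptions):

```elixir
field :foos, list_of(:foo) do
  # Derive the batch params from the parent so query/2 sees them.
  resolve dataloader(Foo, fn parent, args, _res ->
    {:foos, Map.merge(args, %{bar: parent.bar, baz: parent.baz})}
  end)
end

# In the source module, pattern-match those params in query/2:
def query(Foo, %{bar: bar, baz: baz}) do
  from(f in Foo, where: f.bar == ^bar and f.baz != ^baz)
end
```

Note the trade-off: parents with different bar/baz values land in different batches, so this weakens batching compared with a query/3 that received the parent directly.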
%Dataloader{
options: [],
sources: %{
RedOcean.Entities => %Dataloader.Ecto{
batches: %{},
default_params: %{},
options: [],
query: #Function<0.78108663/2 in RedOcean.Entities.data/1>,
repo: RedOcean.Repo,
repo_opts: [],
results: %{
{:assoc, RedOcean.Entities.Instance, #PID<0.622.0>, :entity_facts,
RedOcean.Entities.Fact,
%{dataset: "public"}} => %{
["03954047-7a76-46a1-a31a-53a6f3039f4f"] => []
}
}
},
}
}
Dataloader.get(
loader,
RedOcean.Entities,
:entity_facts,
%RedOcean.Entities.Instance{id: "03954047-7a76-46a1-a31a-53a6f3039f4f"}
) # => nil
I'd expect an empty array to be returned.
Note: I did scrub some irrelevant assocs out of the dataloader struct.
I use v1.0.7.
All associations use custom batches.
Also, I have a big query like:
posts(first: 10) {
user {
id
}
comments(first: 10) {
user {
id
}
replies {
user {
id
}
uploads {
user {
id
}
}
}
}
}
But it looks like when I try to load many dataloader batches I get the error:
"Unable to find item [\"a83e3944-1b45-4f70-b400-0f119d989fb0\"] in batch"
When I remove one field that uses dataloader from the query, everything is OK, but after adding it back the query fails again.
Any thoughts?
One example:
def dataloader_one(source, queryable, type, map_fn \\ & &1) do
fn parent, _args, %{context: %{loader: loader}} = _resolution ->
parent_id = Map.get(parent, :id) || Map.get(parent, :__parent_id)
loader
|> Dataloader.load(source, {:one, queryable}, [{type, parent_id}])
|> on_load(fn loader ->
loader
|> Dataloader.get(source, {:one, queryable}, [{type, parent_id}])
|> map_fn.()
|> wrap_with_ok
end)
end
end
def run_batch(_, query, :quests_count, user_ids, repo_opts, %{current_user: current_user} = params) do
User.quests_count(current_user.id, user_ids, true)
|> run_batch_wrapper(params[:repo], user_ids, repo_opts, 0)
end
def run_batch_wrapper(query, repo, entities_ids, repo_opts, default_value \\ nil, map_fn \\ &(&1)) do
repo = repo || Repo
results =
query
|> repo.all(repo_opts)
|> Map.new
for id <- entities_ids, do: map_fn.(Map.get(results, id, default_value))
end
Would it be possible to add the association name to the options passed to the query function when using a custom query?
For example:
Dataloader.load(loader, Post, :replies, post) would pass %{assoc_name: :replies} or something similar to the opts parameter in the query function.
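To illustrate the requested behaviour (this is not current dataloader API, just what the params could look like if the option existed):

```elixir
# Hypothetical: query/2 receives the association name in its params
# and can branch on it per association.
def query(Reply, %{assoc_name: :replies}) do
  from(r in Reply, order_by: [desc: r.inserted_at])
end

def query(queryable, _params), do: queryable
```

Today the same effect requires a separate source or custom batch key per association, which is noticeably more boilerplate.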
Howdy all,
We recently ran into a production bug which breaks queries that touch the same field (backed by a custom KV store) in two different places in the query.
The underlying bug was fixed in #75 and merged in commit 4c4d536
Currently we are depending on that commit specifically for builds/deployments, but would greatly appreciate if it would be possible to cut a new release that could be published to hex.
@bruce said to post an issue and tag @benwilson512, so here you go!
Thanks for everything, dataloader is magic.
Hello, this repo needs a license.
I have a GraphQL query that looks like this (simplified):
posts { # Type of [Post]
body
viewerHasLiked
comments { # Type of [Post] (but just has parent_id column or something)
body
viewerHasLiked
replies { # Type of [Post]
body
viewerHasLiked
}
}
}
I am using Dataloader for the associations and for the viewerHasLiked field. Unfortunately, when this query is run, Dataloader runs the query for viewerHasLiked twice: once for the post and its comments, and once for the comments' replies. I have been trying to figure out a way to merge this all into one query, and it ended up bringing me to Absinthe.Middleware.Dataloader. It looks like, if I understand correctly, before the field is resolved, the plugin calls Dataloader.run/1 on the loader in the context. Then when the middleware is called, it checks if batches are still pending (i.e. whether the Dataloader has finished running). If none are pending, it returns the result from Dataloader; if there are still batches, it sets the field's state to :suspended and adds a middleware to that field to call it again, and that is when it returns the result.
The issue is that when Dataloader.run/1 in before_resolution/1 is called, it fetches all the existing batches (including for viewerHasLiked), but it has only resolved posts and comments so far. I'm not really sure how this would be approached, because Dataloader has to load the data for comments and replies in order to have the ids to use for loading viewerHasLiked.
It could be some kind of priority system where things with a higher priority (posts, comments, replies) would be resolved first, and something with a lower priority (viewerHasLiked) would be resolved last. This might require extensive rearchitecting of Absinthe, because it would have to resolve in multiple passes, depth first instead of breadth first. I'm not sure though.
We could also have a Dataloader.run/2 that selectively fetches. It could group things together by their object and look at both the schema and what's being requested in the query or mutation. With this information, it could do passes where it tries to resolve as much as possible until it can't anymore, and then resolve the fields with Dataloader middleware by looking at what child fields are requested. Here's an example using the query above:
1. It resolves posts.
2. It resolves body.
3. It tries to resolve viewerHasLiked, but it's a Dataloader field so it suspends.
4. It tries to resolve comments, but it's a Dataloader field so it suspends.
5. It sees that one of comments's child fields is viewerHasLiked, so it resolves (and fetches from the DB) comments first.
6. It resolves comments' children. It can resolve body, but not viewerHasLiked.
7. It tries to resolve replies, but it's a Dataloader field, so it can't. It has nothing more to resolve, so it starts on the Dataloader fields.
8. It looks at what replies are, and it sees viewerHasLiked, so it resolves (and fetches) replies first.
9. It resolves replies' body.
10. It resolves viewerHasLiked with the ids from every level, running a single query. It can resolve the replies' viewerHasLiked,
11. then posts' viewerHasLiked, and then comments' viewerHasLiked.
I'm pretty familiar with Dataloader's internals, but not as much with Absinthe's. If someone who knows more about how Absinthe works could provide their input, it would be much appreciated.
Currently we can only configure prefix in repo_opts. Is it possible to make the option also available in Dataloader.Ecto.load? For example:
Dataloader.Ecto.load_many(Repo, :assoc_key, record,
repo_opts: [prefix: "my_prefix"], other_option: true)
ecto_sql 3.0 already supports prefix well; I hope dataloader can support it too.
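For comparison, today the prefix can only be fixed once per source when it is created, via repo_opts; a per-load option like the one proposed above does not exist:

```elixir
# Current API: this prefix applies to every query made through the
# source, so multi-tenant schemas need one source per prefix.
source =
  Dataloader.Ecto.new(Repo,
    query: &query/2,
    repo_opts: [prefix: "my_prefix"]
  )
```

A per-call repo_opts on load/load_many would remove the need to register a separate source for each tenant prefix.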
Using dataloader 1.3 on:
elixir -v
Erlang/OTP 21 [erts-10.0.4] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [hipe]
Elixir 1.6.6 (compiled with OTP 21)
When my data functions look like:
def data(default_params) do
Dataloader.Ecto.new(Repo, query: &query/2, default_params: default_params)
end
Dialyzer complains with the "Function data/1 has no local return" message, which trickles up to its callers (terminating with the dataloader setup function).
If the code looks like:
def data(default_params) do
Dataloader.Ecto.new(Repo, query: &query/2)
end
(ie no default_params), then I get no dialyzer warnings (at all, in a 310-file project, how did that happen?). Yay! Of course, I need the default params in order to pass auth information to the queries.
Looking in Dataloader.Ecto.new I see plenty of opportunities for things to go wrong, but I'm not convinced I can fix it before the heat-death of the universe. Any dialyzer gurus fancy taking a look?
Thanks, Rob.