absinthe_relay's Introduction

Absinthe.Relay

Support for the Relay framework from Elixir, using the Absinthe package.

Installation

Install from Hex.pm:

def deps do
  [
    {:absinthe_relay, "~> 1.5.0"}
  ]
end

Note: Absinthe.Relay requires Elixir 1.11 or higher.

Upgrading

See CHANGELOG for upgrade steps between versions.

You may want to look for the specific upgrade guide in the Absinthe documentation.

Documentation

See "Usage," below, for basic usage information and links to specific resources.

Related Projects

See the GitHub organization.

Usage

Schemas should use Absinthe.Relay.Schema, optionally providing what flavor of Relay they'd like to support (:classic or :modern):

defmodule Schema do
  use Absinthe.Schema
  use Absinthe.Relay.Schema, :modern

  # ...

end

For a type module, use Absinthe.Relay.Schema.Notation:

defmodule Schema.Types do
  use Absinthe.Schema.Notation
  use Absinthe.Relay.Schema.Notation, :modern

  # ...

end

Note that if you do not provide a flavor option, it will choose the default of :classic, but warn you that this behavior will change to :modern in absinthe_relay v1.5.

See the documentation for Absinthe.Relay.Node, Absinthe.Relay.Connection, and Absinthe.Relay.Mutation for specific usage information.

Node Interface

Relay requires an interface, "Node", be defined in your schema to provide a simple way to fetch objects using a global ID scheme.

See the Absinthe.Relay.Node module documentation for specific instructions on how to design a schema that makes use of nodes.
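Putting the pieces together, a minimal sketch might look like the following (the :person type, module name, and in-memory data are assumptions for illustration, not from this README):

```elixir
defmodule MyAppWeb.Schema do
  use Absinthe.Schema
  use Absinthe.Relay.Schema, :modern

  # Hypothetical in-memory data, for illustration only.
  @people %{"1" => %{id: "1", name: "Ada"}}

  # Tell the Node interface how to map a resolved value to its type.
  node interface do
    resolve_type fn
      %{name: _}, _ -> :person
      _, _ -> nil
    end
  end

  # An object implementing Node; its :id is exposed as an opaque global ID.
  node object :person do
    field :name, :string
  end

  query do
    # The node field decodes an incoming global ID into its type and
    # internal ID so the record can be fetched.
    node field do
      resolve fn
        %{type: :person, id: id}, _ -> {:ok, Map.get(@people, id)}
        _, _ -> {:error, "Unknown node"}
      end
    end
  end
end
```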

Connection

Relay uses Connection (and other related) types to provide a standardized way of slicing and paginating a one-to-many relationship.

Support in this package is designed to match the Relay Cursor Connection Specification.

See the Absinthe.Relay.Connection module documentation for specific instructions on how to design a schema that makes use of connections.
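As an illustration, here is a sketch of a connection field (the type and field names are assumptions; it also assumes absinthe_relay >= 1.4, where from_list/2 returns an {:ok, connection} tuple):

```elixir
# Generates the :location_connection and :location_edge types.
connection node_type: :location

object :person do
  field :name, :string

  # Exposes a paginated field that accepts first/last/after/before.
  connection field :locations, node_type: :location do
    resolve fn pagination_args, %{source: person} ->
      # Slice an in-memory list according to the pagination arguments.
      Absinthe.Relay.Connection.from_list(person.locations, pagination_args)
    end
  end
end
```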

Mutation

Relay supports mutation via a contract involving single input object arguments (optional for Relay Modern) with client mutation IDs (only for Relay Classic).

See the Absinthe.Relay.Mutation module documentation for specific instructions on how to design a schema that makes use of mutations.
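For example, a sketch of a Relay Modern mutation using the payload macro (the field and type names are assumptions):

```elixir
mutation do
  # payload generates the input object, the payload object, and the
  # single `input` argument contract that Relay expects.
  payload field :create_user do
    input do
      field :name, non_null(:string)
    end
    output do
      field :user, :user
    end
    resolve fn %{name: name}, _ ->
      # Persistence elided; echo a user map back for illustration.
      {:ok, %{user: %{name: name}}}
    end
  end
end
```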

Supporting the Babel Relay Plugin

To generate a schema.json file for use with the Babel Relay Plugin, run the absinthe.schema.json Mix task, built into Absinthe.

In your project, check out the documentation with:

mix help absinthe.schema.json

Community

The project is under constant improvement by a growing list of contributors, and your feedback is important. Please join us in Slack (#absinthe-graphql under the Elixir Slack account) or the Elixir Forum (tagged absinthe).

Please remember that all interactions in our official spaces follow our Code of Conduct.

Contributing

Please follow contribution guide.

License

See LICENSE.md.

absinthe_relay's People

Contributors

amrnt, avitex, benwilson512, bgentry, binaryseed, bruce, dolfinus, dpehrson, flowerett, hanrelan, johnnoone, jparise, jsteiner, kianmeng, kyasu1, maennchen, mckayb, ndreynolds, rewritten, ronaldcurtis, rzane, simunkaracic, skosch, ssbb, superhawk610, thaterikperson, tmsch, un3qual, v-kolesnikov, vtm9

absinthe_relay's Issues

Connection and edge types non_null?

Is it possible to make connection and edge types generated from:

connection node_type: :pet

be wrapped with Non-Null?

This would be useful because relay-compiler now generates flow types automatically so using connections can be type checked without having to:

const nodes = (
  connection &&
  connection.edges &&
  connection.edges.map((edge: any) => edge.node) // edge.node is possibly null
) || []

Thanks!

Use Absinthe.Relay.Node.ParseIDs with input_object

Is there any way to use Absinthe.Relay.Node.ParseIDs with an input_object? How can I convert id and type_id from global IDs?

  input_object :address_for_save do
    field :id, :id
    field :value, :integer
    field :type_id, :id
  end

Thanks
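For what it's worth, ParseIDs accepts nested rules for arguments that are input objects, so something along these lines may work (a sketch; the field, argument, and node-type names here are assumptions):

```elixir
field :save_address, :address_result do
  arg :address, :address_for_save

  # Nested rules: parse both global IDs inside the :address input object.
  middleware Absinthe.Relay.Node.ParseIDs,
    address: [id: :address, type_id: :address_type]

  resolve &AddressResolver.save/2
end
```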

Question about RANGE_ADD mutation_type

In the Star Wars Relay example, there is an AddShipMutation which has the following config

// ...
  getConfigs() {
    return [{
      type: 'RANGE_ADD',
      parentName: 'faction',
      parentID: this.props.faction.id,
      connectionName: 'ships',
      edgeName: 'newShipEdge',
      rangeBehaviors: {
        '': 'append',
        'orderby(oldest)': 'prepend',
      },
    }];
  }
// ....

Notice that RANGE_ADD needs to be specified with an edgeName

The schema.js in the example imports cursorFromObjectInConnection() from graphql-relay and uses that function to calculate a cursor in the outputFields of the mutation.

To implement the same mutation in Absinthe.Relay I had to get the cursor manually.

Do you think we need a cursor_from_object_in_connection/2 in Absinthe.Relay to handle RANGE_ADD output the same way as the graphql-relay JavaScript implementation?

absinthe.schema.graphql task complains about ID type

The schema.graphql is created fine, but the following warning is shown when the Relay node interface is used:

[warn] Absinthe: Elixir.Absinthe.Adapter.LanguageConventions could not adapt %Absinthe.Language.InterfaceDefinition{fields: [%Absinthe.Language.FieldDefinition{arguments: [], loc: %{start: nil}, name: "id", type: %Absinthe.Language.NonNullType{loc: %{start: nil}, type: %Absinthe.Language.NamedType{loc: %{start: nil}, name: "ID"}}}], loc: %{start: nil}, name: "Node"}

compile error in umbrella project

I consistently get a compilation error when trying to use absinthe_relay in an umbrella project.

$ mix compile
==> absinthe_relay
Compiling 12 files (.ex)

== Compilation error on file lib/absinthe/relay/connection.ex ==
** (CompileError) lib/absinthe/relay/connection.ex:335: cannot use ^limit outside of match clauses
    (elixir) expanding macro: Kernel.|>/2
    lib/absinthe/relay/connection.ex:337: Absinthe.Relay.Connection.from_query/4
    (elixir) lib/kernel/parallel_compiler.ex:117: anonymous fn/4 in Kernel.ParallelCompiler.spawn_compilers/1

could not compile dependency :absinthe_relay, "mix compile" failed. You can recompile this dependency with "mix deps.compile absinthe_relay", update it with "mix deps.update absinthe_relay" or clean it with "mix deps.clean absinthe_relay"

I'm using this configuration in mix.exs

  defp deps do
    [
     {:phoenix, "~> 1.2.4"},
     {:phoenix_pubsub, "~> 1.0"},
     {:gettext, "~> 0.11"},
     {:cowboy, "~> 1.0"},
     {:absinthe, "~> 1.3.1"},
     {:absinthe_plug, "~> 1.3.0"},
     {:absinthe_relay, "~> 1.3.0"},
    ]
  end

The mix.lock of the umbrella project looks like this:

%{"absinthe": {:hex, :absinthe, "1.3.2", "1d16ee5ceeb8b90e37f936b924caacc099b9da667e9c7e6a4d729f41197123d4", [:mix], [], "hexpm"},
  "absinthe_plug": {:hex, :absinthe_plug, "1.3.1", "e97c9faa6a2e29be181205d994c3ca2e2c5c5dec38aea69fcd324407f6c163c6", [:mix], [{:absinthe, "~> 1.3.0", [hex: :absinthe, repo: "hexpm", optional: false]}, {:plug, "~> 1.3.2 or ~> 1.4", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm"},
  "absinthe_relay": {:hex, :absinthe_relay, "1.3.1", "3e64bc7d6622c898b061730ad773c3f7e000d5e726580af5b8ed2b3775ba366d", [:mix], [{:absinthe, "~> 1.3.0", [hex: :absinthe, repo: "hexpm", optional: false]}, {:ecto, "~> 1.0 or ~> 2.0", [hex: :ecto, repo: "hexpm", optional: true]}], "hexpm"},
  "base64url": {:hex, :base64url, "0.0.1", "36a90125f5948e3afd7be97662a1504b934dd5dac78451ca6e9abf85a10286be", [:rebar], [], "hexpm"},
  "bunt": {:hex, :bunt, "0.2.0", "951c6e801e8b1d2cbe58ebbd3e616a869061ddadcc4863d0a2182541acae9a38", [:mix], [], "hexpm"},
  "cachex": {:hex, :cachex, "2.1.0", "fad49b4e78d11c6c314e75bd8c9408f5b78cb065c047442798caed10803ee3be", [:mix], [{:eternal, "~> 1.1", [hex: :eternal, repo: "hexpm", optional: false]}], "hexpm"},
  "certifi": {:hex, :certifi, "1.2.1", "c3904f192bd5284e5b13f20db3ceac9626e14eeacfbb492e19583cf0e37b22be", [:rebar3], [], "hexpm"},
  "con_cache": {:git, "https://github.com/sasa1977/con_cache.git", "f5a8cf18be9861c1e5dc833fbd357606e9169bad", [tag: "0.12.0"]},
  "connection": {:hex, :connection, "1.0.4", "a1cae72211f0eef17705aaededacac3eb30e6625b04a6117c1b2db6ace7d5976", [:mix], [], "hexpm"},
  "coverex": {:hex, :coverex, "1.4.13", "d90833b82bdd6a1ec05a6d971283debc3dd9611957489010e4b1ab0071a9ee6c", [:mix], [{:hackney, "~> 1.5", [hex: :hackney, repo: "hexpm", optional: false]}, {:poison, "~> 1.5 or ~> 2.0 or ~> 3.0", [hex: :poison, repo: "hexpm", optional: false]}], "hexpm"},
  "cowboy": {:hex, :cowboy, "1.1.2", "61ac29ea970389a88eca5a65601460162d370a70018afe6f949a29dca91f3bb0", [:rebar3], [{:cowlib, "~> 1.0.2", [hex: :cowlib, repo: "hexpm", optional: false]}, {:ranch, "~> 1.3.2", [hex: :ranch, repo: "hexpm", optional: false]}], "hexpm"},
  "cowlib": {:hex, :cowlib, "1.0.2", "9d769a1d062c9c3ac753096f868ca121e2730b9a377de23dec0f7e08b1df84ee", [:make], [], "hexpm"},
  "credo": {:hex, :credo, "0.8.1", "137efcc99b4bc507c958ba9b5dff70149e971250813cbe7d4537ec7e36997402", [:mix], [{:bunt, "~> 0.2.0", [hex: :bunt, repo: "hexpm", optional: false]}], "hexpm"},
  "decimal": {:hex, :decimal, "1.4.0", "fac965ce71a46aab53d3a6ce45662806bdd708a4a95a65cde8a12eb0124a1333", [:mix], [], "hexpm"},
  "deppie": {:hex, :deppie, "1.1.0", "cfb6fcee7dfb64eb78cb8505537971a0805131899326ad469ef10df04520f451", [:mix], [], "hexpm"},
  "ecto": {:hex, :ecto, "1.0.7", "d8deca5d5a03e7c9ed4821e6f1f85dad736aca960785f154e3a8c1bee1aeaa13", [:mix], [{:decimal, "~> 1.0", [hex: :decimal, repo: "hexpm", optional: false]}, {:mariaex, "~> 0.4.1", [hex: :mariaex, repo: "hexpm", optional: true]}, {:poison, "~> 1.0", [hex: :poison, repo: "hexpm", optional: true]}, {:poolboy, "~> 1.4", [hex: :poolboy, repo: "hexpm", optional: false]}, {:postgrex, "~> 0.9.1", [hex: :postgrex, repo: "hexpm", optional: true]}, {:sbroker, "~> 0.7", [hex: :sbroker, repo: "hexpm", optional: true]}], "hexpm"},
  "ecto_enum": {:hex, :ecto_enum, "0.3.1", "28771a73c195553b32b434f926302092ba072ba2b50224b8d63081cad5e0846b", [:mix], [{:ecto, ">= 0.13.1", [hex: :ecto, repo: "hexpm", optional: false]}, {:mariaex, ">= 0.3.0", [hex: :mariaex, repo: "hexpm", optional: true]}, {:postgrex, ">= 0.8.3", [hex: :postgrex, repo: "hexpm", optional: true]}], "hexpm"},
  "eternal": {:hex, :eternal, "1.1.4", "3a40fd9b9708f79216a6ec8ae886f2b17685dc26b119b9c0403c2b0d3dc1ac69", [:mix], [{:deppie, "~> 1.1", [hex: :deppie, repo: "hexpm", optional: false]}], "hexpm"},
  "ex_machina": {:hex, :ex_machina, "0.6.2", "2d25802d269b21ecb3df478c3609f3b162ef6d1c952d75770e0969f8971611de", [:mix], [], "hexpm"},
  "ex_statsd": {:git, "https://github.com/timCF/ex_statsd.git", "2fe285fd46d59d7e6b680371f9d37bfbc8f618f6", []},
  "exactor": {:hex, :exactor, "2.2.3", "a6972f43bb6160afeb73e1d8ab45ba604cd0ac8b5244c557093f6e92ce582786", [:mix], [], "hexpm"},
  "exfswatch": {:hex, :exfswatch, "0.4.2", "d88a63b5c2f8f040230d22010588ff73286fd1aef32564115afa3051eaa4391d", [:mix], [], "hexpm"},
  "exprotobuf": {:git, "https://github.com/bitwalker/exprotobuf.git", "20b75fdf1c9ee539f3a40a018d27449acd097bfb", [tag: "1.2.7"]},
  "exsync": {:hex, :exsync, "0.1.4", "f5800f5c3137271bf7c0f5ca623919434f91798e1be1b9d50fc2c59168d44f17", [:mix], [{:exfswatch, "~> 0.4", [hex: :exfswatch, repo: "hexpm", optional: false]}], "hexpm"},
  "gelf_logger": {:git, "https://github.com/jschniper/gelf_logger.git", "0c1d7eb2e755fd0b3fc6cebf3986fee963f9f34e", []},
  "gettext": {:hex, :gettext, "0.13.1", "5e0daf4e7636d771c4c71ad5f3f53ba09a9ae5c250e1ab9c42ba9edccc476263", [:mix], [], "hexpm"},
  "gpb": {:hex, :gpb, "3.27.0", "bc0e6d2a3eb11dc7c0f37a061e8dc110b8a4c0af49d3e564dff89890c1d11e04", [:make, :rebar], [], "hexpm"},
  "gproc": {:git, "https://github.com/uwiger/gproc.git", "1d16f5e6d7cf616eec4395f2385e3a680a4ffc9f", [tag: "0.6.1"]},
  "guardian": {:hex, :guardian, "0.14.4", "331e659e59d8dd2f0a4f05168fbdf511a025359d3c79b2a5406a87c7708393ca", [:mix], [{:jose, "~> 1.8", [hex: :jose, repo: "hexpm", optional: false]}, {:phoenix, "~> 1.2", [hex: :phoenix, repo: "hexpm", optional: true]}, {:plug, "~> 1.3", [hex: :plug, repo: "hexpm", optional: false]}, {:poison, ">= 1.3.0", [hex: :poison, repo: "hexpm", optional: false]}, {:uuid, ">=1.1.1", [hex: :uuid, repo: "hexpm", optional: false]}], "hexpm"},
  "hackney": {:hex, :hackney, "1.8.6", "21a725db3569b3fb11a6af17d5c5f654052ce9624219f1317e8639183de4a423", [:rebar3], [{:certifi, "1.2.1", [hex: :certifi, repo: "hexpm", optional: false]}, {:idna, "5.0.2", [hex: :idna, repo: "hexpm", optional: false]}, {:metrics, "1.0.1", [hex: :metrics, repo: "hexpm", optional: false]}, {:mimerl, "1.0.2", [hex: :mimerl, repo: "hexpm", optional: false]}, {:ssl_verify_fun, "1.1.1", [hex: :ssl_verify_fun, repo: "hexpm", optional: false]}], "hexpm"},
  "httpoison": {:git, "https://github.com/edgurgel/httpoison.git", "ed017faa592bfeecb50f0b9632c8c60ca9d69ad9", [tag: "v0.11.2"]},
  "idna": {:hex, :idna, "5.0.2", "ac203208ada855d95dc591a764b6e87259cb0e2a364218f215ad662daa8cd6b4", [:rebar3], [{:unicode_util_compat, "0.2.0", [hex: :unicode_util_compat, repo: "hexpm", optional: false]}], "hexpm"},
  "jose": {:hex, :jose, "1.8.4", "7946d1e5c03a76ac9ef42a6e6a20001d35987afd68c2107bcd8f01a84e75aa73", [:mix, :rebar3], [{:base64url, "~> 0.0.1", [hex: :base64url, repo: "hexpm", optional: false]}], "hexpm"},
  "junit_formatter": {:hex, :junit_formatter, "1.3.0", "e4321e3275f48daecadb3116bc814e1a743645f2549c6526b1a32cd6c8dd1833", [:mix], [], "hexpm"},
  "logger_datadog_backend": {:git, "https://github.com/timCF/logger_datadog_backend.git", "34e2cf1038e6bba861b747b8ec3ff976ff789da3", []},
  "meck": {:hex, :meck, "0.8.7", "ebad16ca23f685b07aed3bc011efff65fbaf28881a8adf925428ef5472d390ee", [:rebar3], [], "hexpm"},
  "metrics": {:hex, :metrics, "1.0.1", "25f094dea2cda98213cecc3aeff09e940299d950904393b2a29d191c346a8486", [:rebar3], [], "hexpm"},
  "mime": {:hex, :mime, "1.1.0", "01c1d6f4083d8aa5c7b8c246ade95139620ef8effb009edde934e0ec3b28090a", [:mix], [], "hexpm"},
  "mimerl": {:hex, :mimerl, "1.0.2", "993f9b0e084083405ed8252b99460c4f0563e41729ab42d9074fd5e52439be88", [:rebar3], [], "hexpm"},
  "mock": {:hex, :mock, "0.2.1", "bfdba786903e77f9c18772dee472d020ceb8ef000783e737725a4c8f54ad28ec", [:mix], [{:meck, "~> 0.8.2", [hex: :meck, repo: "hexpm", optional: false]}], "hexpm"},
  "mongodb": {:hex, :mongodb, "0.1.1", "8737d8f57466b4171f8216f76aa1ab3855d55220b19a1a248fedc96c9f50c274", [:mix], [{:connection, "~> 1.0", [hex: :connection, repo: "hexpm", optional: false]}, {:poolboy, "~> 1.5", [hex: :poolboy, repo: "hexpm", optional: true]}], "hexpm"},
  "mongodb_ecto": {:hex, :mongodb_ecto, "0.1.5", "b52f996538fc6c7a80897946694e337170bd2d60fa5bfb66bd9abb5a7ebb1cd9", [:mix], [{:ecto, "~> 1.0.0", [hex: :ecto, repo: "hexpm", optional: false]}, {:mongodb, "~> 0.1.0", [hex: :mongodb, repo: "hexpm", optional: false]}], "hexpm"},
  "phoenix": {:hex, :phoenix, "1.2.4", "4172479b5e21806a5e4175b54820c239e0d4effb0b07912e631aa31213a05bae", [:mix], [{:cowboy, "~> 1.0", [hex: :cowboy, repo: "hexpm", optional: true]}, {:phoenix_pubsub, "~> 1.0", [hex: :phoenix_pubsub, repo: "hexpm", optional: false]}, {:plug, "~> 1.4 or ~> 1.3.3 or ~> 1.2.4 or ~> 1.1.8 or ~> 1.0.5", [hex: :plug, repo: "hexpm", optional: false]}, {:poison, "~> 1.5 or ~> 2.0", [hex: :poison, repo: "hexpm", optional: false]}], "hexpm"},
  "phoenix_pubsub": {:hex, :phoenix_pubsub, "1.0.2", "bfa7fd52788b5eaa09cb51ff9fcad1d9edfeb68251add458523f839392f034c1", [:mix], [], "hexpm"},
  "plug": {:hex, :plug, "1.3.5", "7503bfcd7091df2a9761ef8cecea666d1f2cc454cbbaf0afa0b6e259203b7031", [:mix], [{:cowboy, "~> 1.0.1 or ~> 1.1", [hex: :cowboy, repo: "hexpm", optional: true]}, {:mime, "~> 1.0", [hex: :mime, repo: "hexpm", optional: false]}], "hexpm"},
  "poison": {:hex, :poison, "1.5.2", "560bdfb7449e3ddd23a096929fb9fc2122f709bcc758b2d5d5a5c7d0ea848910", [:mix], [], "hexpm"},
  "poolboy": {:hex, :poolboy, "1.5.1", "6b46163901cfd0a1b43d692657ed9d7e599853b3b21b95ae5ae0a777cf9b6ca8", [:rebar], [], "hexpm"},
  "ranch": {:hex, :ranch, "1.3.2", "e4965a144dc9fbe70e5c077c65e73c57165416a901bd02ea899cfd95aa890986", [:rebar3], [], "hexpm"},
  "sentry": {:hex, :sentry, "4.0.3", "73d1b6a0ef79ddc5b499190ac03d6cdd5a81783835c6b8e655437ec6e71ed96a", [:mix], [{:hackney, "~> 1.8.0 or 1.6.5", [hex: :hackney, repo: "hexpm", optional: false]}, {:plug, "~> 1.0", [hex: :plug, repo: "hexpm", optional: true]}, {:poison, "~> 1.5 or ~> 2.0 or ~> 3.0", [hex: :poison, repo: "hexpm", optional: false]}, {:uuid, "~> 1.0", [hex: :uuid, repo: "hexpm", optional: false]}], "hexpm"},
  "sentry_logger_backend": {:hex, :sentry_logger_backend, "0.1.1", "6e45b18e6c868c84ee8eae2ce315401505e746899f464ba6d7bab577164076e0", [:mix], [{:sentry, "~> 4.0", [hex: :sentry, repo: "hexpm", optional: false]}], "hexpm"},
  "ssl_verify_fun": {:hex, :ssl_verify_fun, "1.1.1", "28a4d65b7f59893bc2c7de786dec1e1555bd742d336043fe644ae956c3497fbe", [:make, :rebar], [], "hexpm"},
  "uelli": {:git, "https://github.com/timCF/uelli.git", "1654ea8efafc9371ead24a330290991770334ce8", []},
  "unicode_util_compat": {:hex, :unicode_util_compat, "0.2.0", "dbbccf6781821b1c0701845eaf966c9b6d83d7c3bfc65ca2b78b88b8678bfa35", [:rebar3], [], "hexpm"},
  "uuid": {:hex, :uuid, "1.1.7", "007afd58273bc0bc7f849c3bdc763e2f8124e83b957e515368c498b641f7ab69", [:mix], [], "hexpm"},
  "websocket_client": {:hex, :websocket_client, "1.2.4", "14ec1ca4b6d247b44ccd9a80af8f6ca98328070f6c1d52a5cb00bc9d939d63b8", [:rebar3], [], "hexpm"},
  "wobserver": {:hex, :wobserver, "0.1.7", "377b9a2903728b62e4e89d4e200ec17d60669ccdd3ed72b23a2ab3a2c079694d", [:mix], [{:cowboy, "~> 1.1", [hex: :cowboy, repo: "hexpm", optional: false]}, {:httpoison, "~> 0.11", [hex: :httpoison, repo: "hexpm", optional: false]}, {:plug, "~> 1.3", [hex: :plug, repo: "hexpm", optional: false]}, {:poison, "~> 2.0 or ~> 3.0", [hex: :poison, repo: "hexpm", optional: false]}, {:websocket_client, "~> 1.2", [hex: :websocket_client, repo: "hexpm", optional: false]}], "hexpm"}}

`clientMutationId` as an optional feature?

From what I can see, apollo-client does not require clientMutationId, making it a pretty redundant field.
A potential fix for this would be to provide, in the initial absinthe config, a switch to disable it for the clients that don't need it.

I'm happy to craft a PR after discussion. Any thoughts?

Add facility to pull out internal IDs from cursors

Similar to Node.from_global_id, support a mechanism to parse out internal IDs from cursor values. While the cursors are opaque to the client, IDs may be needed to support accessing data from backends.
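Until such a facility exists, note that the default cursors are simply Base64-encoded "arrayconnection:<offset>" strings (visible in the iex examples elsewhere on this page), so a decoder can be sketched as follows (hypothetical helper, relying only on the default cursor format):

```elixir
def offset_from_cursor(cursor) do
  # Default cursors decode to "arrayconnection:<offset>".
  with {:ok, decoded} <- Base.decode64(cursor),
       "arrayconnection:" <> offset <- decoded do
    {:ok, String.to_integer(offset)}
  else
    _ -> {:error, :invalid_cursor}
  end
end
```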

Allow null for after argument

Using Absinthe 1.4.0-beta.3, absinthe_relay 1.3.4.

Passing null for the after argument is apparently how the latest Relay Modern spec handles initial calls.

shifts(first:10, after: null)

However, this breaks:

 ** (FunctionClauseError) no function clause matching in Base.decode64/2
        (elixir) lib/base.ex:309: Base.decode64(nil, [])
        (absinthe_relay) lib/absinthe/relay/connection.ex:466: Absinthe.Relay.Connection.cursor_to_offset/1
        (absinthe_relay) lib/absinthe/relay/connection.ex:422: Absinthe.Relay.Connection.offset/1
        (absinthe_relay) lib/absinthe/relay/connection.ex:373: Absinthe.Relay.Connection.offset_and_limit_for_query/2
        (absinthe_relay) lib/absinthe/relay/connection.ex:344: Absinthe.Relay.Connection.from_query/4
        (absinthe) lib/absinthe/resolution.ex:184: Absinthe.Resolution.call/2

If I remove the after argument, the query works.
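As a stopgap, nil cursor arguments can be stripped in the resolver before they reach Connection.from_query/4 (a sketch; Shift and Repo are assumed names from my app):

```elixir
resolve fn pagination_args, _ ->
  # Drop nil values so `after: null` behaves like an absent argument.
  args =
    pagination_args
    |> Enum.reject(fn {_key, value} -> is_nil(value) end)
    |> Map.new()

  Absinthe.Relay.Connection.from_query(Shift, &Repo.all/1, args)
end
```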

node field resolution errors when schema-level middleware puts an error result in

I was attempting to apply a schema-wide middleware that would restrict access to all mutations/query fields by default, white-listing only certain ones to non-authenticated/administrative users.

The middleware in question looks like this:

defmodule MyApp.Graph.Private.Middleware.AdminEnforcement do
  @behaviour Absinthe.Middleware

  def call(resolution, _config) do
    case resolution.context do
      %{current_user: %MyApp.User{is_admin: true}} -> resolution
      _ -> Absinthe.Resolution.put_result(resolution, {:error, "Internal error"})
    end
  end
end

I attempted to configure the middleware at the schema level to only allow non-admins to do the following:

  • Query only this portion of the graph: query { viewer { isAuthenticated, isAuthorized } } which would allow a client to check only whether they were logged in, and if so, whether they were in any way authorized to use this graph
  • Perform only the mutations necessary to authenticate/deauthenticate via mutation { authenticate } and mutation { relinquishToken }

The following middleware/3 definition was put in place at the schema level with help from @benwilson512

def middleware(middleware, field, object) do
  middleware
  |> apply_middleware(:admin_enforcement, field, object)
end

# Restrict all root query fields except `viewer` to administrators
defp apply_middleware(middleware, :admin_enforcement, %{identifier: identifier}, %{identifier: :query})
  when identifier not in [:viewer]
do
  [Middleware.AdminEnforcement | middleware]
end

# Restrict all viewer fields except `isAuthenticated` and `isAuthorized` to administrators
defp apply_middleware(middleware, :admin_enforcement, %{identifier: identifier}, %{identifier: :viewer})
  when identifier not in [:is_authorized, :is_authenticated]
do
  [Middleware.AdminEnforcement | middleware]
end

# Restrict all mutations except `authenticate` and `relinquishToken` to administrators
defp apply_middleware(middleware, :admin_enforcement, %{identifier: identifier}, %{identifier: :mutation})
  when identifier not in [:authenticate, :relinquish_token]
do
  [Middleware.AdminEnforcement | middleware]
end

defp apply_middleware(middleware, _name, _field, _object), do: middleware

However, when issuing the following query:

query { node(id: "...") { id } }

The following error is thrown:

** (FunctionClauseError) no function clause matching in Absinthe.Relay.Node.resolve_with_global_id/2
    (absinthe_relay) lib/absinthe/relay/node.ex:116: Absinthe.Relay.Node.resolve_with_global_id(#Absinthe.Resolution<...>)

To work around this, I was able to exempt node from the enforcement at the schema-level just like viewer and then apply the same middleware manually within the node field definition in the query root definition.

For some reason applying this middleware at the field-level works, but applying it at the schema-level does not. After talking with @benwilson512 the assumption is that somewhere in absinthe_relay's node resolution code, it is not checking to see if resolution has been halted.

I hope this is enough to understand the potential issue, let me know if I can answer any other questions.

Inconsistency in Connection.offset/1

Current behavior

When the pagination is :forward the offset is changed, but when the pagination is :backward the offset remains the same as in the request. Example:

iex(2)> Connection.offset(%{first: 2, after: Connection.offset_to_cursor(2)})
3
iex(3)> forward_args = %{first: 2, after: Connection.offset_to_cursor(2)}
%{after: "YXJyYXljb25uZWN0aW9uOjI=", first: 2}
iex(4)> Connection.offset(forward_args)
3
iex(5)> Connection.limit(forward_args) 
{:ok, :forward, 2}
iex(6)> backward_args = %{last: 2, before: Connection.offset_to_cursor(2)}
%{before: "YXJyYXljb25uZWN0aW9uOjI=", last: 2}
iex(7)> Connection.offset(backward_args)                                  
2
iex(8)> Connection.limit(backward_args) 
{:ok, :backward, 2}

Expected behavior

iex(2)> Connection.offset(%{first: 2, after: Connection.offset_to_cursor(2)})
3
iex(3)> forward_args = %{first: 2, after: Connection.offset_to_cursor(2)}
%{after: "YXJyYXljb25uZWN0aW9uOjI=", first: 2}
iex(4)> Connection.offset(forward_args)
3
iex(5)> Connection.limit(forward_args) 
{:ok, :forward, 2}
iex(6)> backward_args = %{last: 2, before: Connection.offset_to_cursor(2)}
%{before: "YXJyYXljb25uZWN0aW9uOjI=", last: 2}
iex(7)> Connection.offset(backward_args)                                  
0
iex(8)> Connection.limit(backward_args) 
{:ok, :backward, 2}

Internally the calculation actually happens at https://github.com/absinthe-graphql/absinthe_relay/blob/master/lib/absinthe/relay/connection.ex#L279, but the function seems to behave differently depending on the direction; it would be simpler if it returned the proper offset for the query in both cases.

The documentation for the function also suggests that this is the expected behavior. I can submit a PR if that makes sense.

Boolean fields in a mutation's input type are dropped when false

I'm working on a project where we want to supply a boolean value as a field on a mutation's input type, for example:

    payload field :create_user do
      input do
        field :name, non_null(:string)
        field :admin, non_null(:boolean)
      end
      output do
        field :user, :user
      end
      resolve &Resolver.User.create/2
    end

With the Absinthe v1.2 branch (and the corresponding branch for Absinthe Relay), if the supplied value for admin is false, it gets dropped from the args before it reaches the resolver. I've set up an example Phoenix app to illustrate the issue. This is what is logged:

[info] POST /docs
[info] Variables: %{"input" => %{"admin" => false, "clientMutationId" => "0", "name" => "Not Test User"}}
[debug] GraphQL Document:
mutation testing($input:CreateUserInput!) {
  createUser(input:$input) {
    user {
      id
      name
      admin
    }
  }
}

[info] args for create: %{client_mutation_id: "0", name: "Not Test User"}
[info] Sent 200 in 24ms

When I define a mutation without using any of the Relay Notation, the admin value is not dropped.

Mutation definition:

    field :other_create_user, type: :user do
      arg :name, non_null(:string)
      arg :admin, non_null(:boolean)
      resolve fn args, ctx ->
        with {:ok, user} <- Resolver.User.create(args, ctx),
          do: {:ok, user[:user]}
      end
    end

Log:

[info] POST /docs
[info] Variables: %{"admin" => false, "name" => "Not a Test User"}
[debug] GraphQL Document:
mutation testing($admin: Boolean!, $name:String!) {
  otherCreateUser(admin: $admin, name:$name) {
    id
    name
    admin
  }
}

[info] args for create: %{admin: false, name: "Not a Test User"}
[info] Sent 200 in 167ms

As a result, I think the issue must be somewhere inside the Absinthe Relay support. All cases work with Absinthe 1.1 and the corresponding version of Absinthe Relay. You can see my full app at https://github.com/redjohn/absinthe_relay_boolean_mutation_field. There are branches named 1.1 and 1.2 with tests that illustrate the issue.

I'm willing to help fix it, but could use a pointer about where to start looking in the code. Hope this is helpful.

Default value for pagination

When not including pagination variables in the GraphQL query, Absinthe throws an exception:

Code

connection field :cases, node_type: :case do
  resolve fn pagination_vars, _ ->
    {:ok, Absinthe.Relay.Connection.from_query(Case, &Repo.all/1, pagination_vars)}
  end
end

GraphQL query

query {
    cases {
        edges {
            title
        }
    }
}

Exception

Request: POST /graphql
** (exit) an exception was raised:
    ** (CaseClauseError) no case clause matching: 0
        (absinthe_relay) lib/absinthe/relay/connection.ex:324: Absinthe.Relay.Connection.from_query/4

When the query includes pagination parameters, it resolves just fine:

query {
    cases(first: 10) {
        edges {
            title
        }
    }
}

Maybe it would be a good idea to include a default first parameter, e.g. first: 10. I would like to submit a pull request for that, but I could use some direction toward a proper fix.
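In the meantime, a default can be supplied in the resolver itself; a sketch along the lines of the code above:

```elixir
connection field :cases, node_type: :case do
  resolve fn pagination_args, _ ->
    # Fall back to the first 10 records when the client sends no
    # pagination arguments.
    args = Map.put_new(pagination_args, :first, 10)
    {:ok, Absinthe.Relay.Connection.from_query(Case, &Repo.all/1, args)}
  end
end
```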

Use with Dataloader?

How would I use Dataloader with absinthe_relay? Specifically, without Dataloader I might return:

Post
|> Connection.from_query(
  &(&1
  |> Query.join_user_and_group
  |> Query.preload_user_and_group
  |> Query.only_posts_in_users_groups(user)
  |> Repo.all
  ),
  pagination_args, 
  PostUtils.sort_to_order(pagination_args.sort),
  &PostUtils.order_to_where/3
)

With Dataloader, however, I need to return a Dataloader.load_many or Dataloader.load, and if I do that the result will not be formatted as Relay requires, with nodes, edges, cursors, etc.

(Note: Connection.from_query here is modified from the one provided by absinthe_relay, hence the extra arguments.)
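One pattern that can work is to batch-load through Dataloader and only shape the result into a connection once the data is available, inside on_load (a sketch; the :posts source name and schema details are assumptions):

```elixir
import Absinthe.Resolution.Helpers, only: [on_load: 2]

connection field :posts, node_type: :post do
  resolve fn pagination_args, %{source: user, context: %{loader: loader}} ->
    loader
    |> Dataloader.load(:posts, :posts, user)
    |> on_load(fn loader ->
      posts = Dataloader.get(loader, :posts, :posts, user)
      # Shape the batched result into a Relay connection.
      Absinthe.Relay.Connection.from_list(posts, pagination_args)
    end)
  end
end
```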

`Could not find schema_node for x` when using ParseIDs in payloads (v1.4, modern)

Relay version: 1.4.0
Relay Schema Flavor: Modern

When querying, the code below fails with Could not find schema_node for foo_id.

payload field :some_mutation do
  input do
    field :foo_id, non_null(:id)
  end
  output do
    # ...
  end
  middleware Absinthe.Relay.Node.ParseIDs, foo_id: :foo
  # ...
end

Not 100% sure if this is the correct solution, but ensuring __parse_ids_root is set to :input (after this line) solves the issue for me.

If this is the correct fix, I have a branch with the repair and a test ready for a PR.

Support outputs containing edge fields

As far as I can tell, it's not possible at this point to use RANGE_ADD, since the client expects a new edge containing the new node, instead of just the new node itself.

Maybe it's possible to add a new piece of notation, e.g. edge(:type) that wraps the final output in edges { node {...}} or similar (although I'm not sure how robust that approach would be, especially in light of facebook/relay#538 ... hopefully RANGE_ADD will soon be usable with multiple new nodes as well).

Support custom connection object types

To support multiple connections to a node type, we need custom connection type names, so that connection/edge types can carry different properties.

Something like:

connection :lived_locations, node_type: :location
connection :visited_locations, node_type: :location
object :thing do
  # TODO: need some notation to use the right connection type...
  connection field :lived_locations, node_type: :location do
    # ...
  end
  connection field :visited_locations, node_type: :location do
    # ...
  end
end

Support for meta when using the payload macro

Absinthe lets us define meta on pretty much any type, but this does not seem to be supported by the payload macro.

Example:

payload field :my_mutation do
  meta foo: 5
  input do
    field :test_id, non_null(:id), meta: [bar: "nice"]
  end
  output do
   field :my_payload, :my_object
  end
end

Splitting up mutations

I'm creating quite a few large mutations and I want to split the input and output of a mutation into different modules.

My approach is something like this:

  input_object :customer_input do
    field :client_mutation_id, non_null(:string)
    field :name, non_null(:string)
  end

  object :customer_output do
    field :id, :integer
  end
  object :customer_payloads do
    payload field :customer do
      input do: import_fields :customer_input
      output do: import_fields :customer_output
      resolve fn _, _ -> {:ok, %{}} end
    end
  end

To get it working, I've just had to add the client mutation id to the input object.

Is this good enough, or is it missing something important?

Or - is there a better way to do this?

Ideally, there would be something like this:

payload field :customer, :customer_input, :customer_output do
  resolve ...
end

Or some similar combination.

Of course, you can do this today by defining the whole payload elsewhere and importing the fields into the root schema.

Node query returns wrong object

Here's my situation: Viewer has a connection years; each Year has a connection emails.

node object :viewer do
  connection field :years, node_type: :year do
    resolve fn
      pagination_args, %{source: _} ->
        {:ok, list} = App.Resolver.Year.all(%{}, %{})
        {:ok, Absinthe.Relay.Connection.from_list(list, pagination_args)}
    end
  end
end

connection node_type: :year
node object :year do
  # ...
  connection field :emails, node_type: :email do
    resolve fn
      pagination_args, %{source: year} ->
        with {:ok, list} <- App.Resolver.Email.all_for_year(%{year_id: year.id}, %{}) do
          {:ok, Absinthe.Relay.Connection.from_list(list, pagination_args)}
        end
    end
  end
end

Let's get a year's emails:

query {
  yearByYear(year: 2016) {  # just a shortcut to some year object, so I can grab $year from the URL
    emails(first: 1) {
      edges {
        node {
          id
        }
      }
    }
  }
}

... that's no problem at all:

{
  "data": {
    "yearByYear": {
      "emails": {
        "edges": [
          {
            "node": {
              "id": "RW1haWw6MQ=="
            }
          }
        ]
      }
    }
  }
}

Alright, now let's ask for that email node directly by its global id:

query {
  node(id: "RW1haWw6MQ==") {
    id
    __typename
  }
}

... now this returns

{
  "data": {
    "node": {
      "id": "WWVhcjoxOA==",
      "__typename": "Year"
    }
  }
}

Wat. It's now returning the year the email belongs to.

Just to be clear, the Email.find resolver is being called; and the email is being returned, but then somewhere along the way Absinthe decides that it's dealing with a Year instead. This happens with all other "children" of Year but not with those of the viewer, which leads me to conjecture (though I haven't tested it) that this has to do with second-level connections.

There is a distinct possibility that this is due to some misconfiguration on my part, but I can't figure out what's wrong. It's insidious insofar as Relay often inserts ... on Node (see facebook/relay#782 etc.) into mutation queries, and as a result those often come up empty-handed when they shouldn't.
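For anyone debugging a similar mixup: the default global IDs in this package are just Base64-encoded "Type:internal_id" strings, so you can decode them in IEx to see which object a node query actually returned. A quick sketch:

```elixir
# Default Absinthe.Relay global IDs are Base64 of "Type:internal_id",
# so decoding makes mixups like the one above easy to spot.
email_id = Base.encode64("Email:1")        # the node we asked for
year_id  = Base.decode64!("WWVhcjoxOA==")  # the id that came back

IO.puts(email_id) # RW1haWw6MQ==
IO.puts(year_id)  # Year:18 -- a Year, not an Email
```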

Root Viewer query support

Hey @bruce

How are root viewer queries supported in this package? For example: reading a list of posts from the server on an index page.

Here is how I am implementing it in a Phoenix app. Does this look OK?

object :post do
  field :id, :id
  field :title, :string
  field :description, :string
end

connection node_type: :post

1. viewer_type.ex

# A viewer type with a connection to posts
object :viewer do
  field :id, :id
  connection field :posts, node_type: :post do
    resolve fn
      pagination_args, %{source: _viewer} ->
        Connection.from_list(
          Repo.all(Post),
          pagination_args
        )
    end
  end
end

2. viewer.ex

# A simple struct with an ID to hack the root query
defmodule Blog.Viewer do
  defstruct id: "root"
end

3. schema.ex

# Main schema - initializing the viewer struct
query do
  field :root,
    type: :viewer,
    resolve: fn
      _, _ -> {:ok, %Viewer{}}
    end
end

This is the client-side Relay query:

const PostsContainer = Relay.createContainer(Posts, {
  initialVariables: {
    count: 2,
    order: '-id',
  },
  fragments: {
    posts: () => Relay.QL`
      fragment on Viewer {
        id,
        posts(first: $count){
          pageInfo {
            hasPreviousPage
            hasNextPage
          }
          edges {
            node {
              id,
              title
            }
          }
        }
      }
    `,
  },
});

query FindPosts{root{...F0}} fragment F0 on Viewer{id,_posts4jOdEP:posts(first:2){pageInfo{hasPreviousPage,hasNextPage},edges{node{id,title},cursor}}}

but it seems that something is wrong in between. Neither the client nor the server throws an error, only this warning on the client: (program):245 Invalid count value : Relay Pending query Tracker

If I remove the posts connection from the query, the ID of the struct is returned fine:

posts: Object
__dataID__:"root"
id:"root"

It seems the response shape doesn't match what Relay is expecting. Any pointers?

Attempting to declare a connection field on an interface results in an error

Use case

Multiple node objects in my schema can be commented on; they should share a comments field that is a Relay connection of comment nodes, declared on an interface to allow for polymorphic querying.

Expected results

I should be able to declare an interface with a field that is a Relay connection, like so:

interface :commentable do
  connection field :comments, node_type: :comment
end

Actual results

Errors 😭

== Compilation error on file web/graph/schema.ex ==
** (FunctionClauseError) no function clause matching in Access.fetch/2
    (elixir) lib/access.ex:147: Access.fetch({:field, [line: 41], [:comments, [node_type: :comment]]}, :node_type)
    (elixir) lib/access.ex:179: Access.get/3
    (absinthe_relay) lib/absinthe/relay/connection/notation.ex:41: Absinthe.Relay.Connection.Notation.naming_from_attrs!/1
    (absinthe_relay) expanding macro: Absinthe.Relay.Connection.Notation.connection/1
    web/graph/schema.ex:41: App.Graph.Schema (module)
    (absinthe) expanding macro: Absinthe.Schema.Notation.interface/2
    web/graph/schema.ex:40: App.Graph.Schema (module)
    (elixir) lib/kernel/parallel_compiler.ex:117: anonymous fn/4 in Kernel.ParallelCompiler.spawn_compilers/1

Extend automatic global/internal id mapping to mutation input fields

An oft-repeated pattern in my code:

input do
  field :id, :string  # global id for the :item
end

followed in the resolver by something like

def find(%{id: item_global_id}, _info) do
  {:ok, %{id: id}} = Absinthe.Relay.Node.from_global_id(item_global_id, App.Schema)
...

It would be great if we could instead define

input do
  field :id, global_id(:item)
end

... and get access to the whole item in the resolver, including its internal id. (Or maybe just a :global_id type that gets mapped onto an internal id, and the user can choose to pull the item's data if necessary.)

issue with has_previous_page

I think I found a little problem in the logic of has_previous_page parameter while using it with an ecto query.

In this line, the parameter is true if offset > 0 (right), but also if last is not nil. This is not OK, because with a query using the parameter first together with one of after and before, it is always false.
I think offset > 0 is enough to tell whether a query slice has a previous page.

What do you think?
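To make the comparison concrete, here is a minimal sketch paraphrasing my reading of the logic described above (not the library's exact code):

```elixir
# Paraphrase of the current check described above: a previous page is
# claimed whenever `last` is given, even at offset 0.
current = fn offset, last -> offset > 0 or last != nil end

# Proposed: the offset alone says whether a previous slice exists.
proposed = fn offset, _last -> offset > 0 end

current.(0, 10)  # true -- claims a previous page that doesn't exist
proposed.(0, 10) # false
```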

Use connection with subscriptions

I'm looking for some suggestions on how to use connections with subscriptions. I'm using Apollo as the client, but I don't think Relay (modern) behaves differently on this topic.

We have a Notification type and with that we generated NotificationConnection and NotificationEdge types. On User object there is notifications field that utilizes NotificationConnection type for pagination. So far so good.

Now we want to add realtime notifications using a subscription. With the subscription we get a new Notification object, which we need to manually insert at the beginning of the user.notifications cache on the client side. The difficulty is that user.notifications is a connection type, but the subscription gives us the node object itself. To update the client-side cache properly we need to:

  • Generate a notification edge from the notification. This requires generating a cursor for the edge on the client side.
  • Update pageInfo's cursor info
  • Since absinthe_relay generates offset-based cursors, we need to update every existing edge to a new cursor with its new offset (old offset + 1)

Is this the right approach? It's quite a bit of logic in the frontend to maintain its state, but I couldn't come up with a way to avoid it. Relay's documentation seems to imply this is the right approach, although I don't think a Relay-generated client ID is recognizable by Absinthe.
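On the cursor point: if memory serves, the offset-based cursors this package emits are just Base64 of "arrayconnection:<offset>" (cf. Absinthe.Relay.Connection.offset_to_cursor/1), which is exactly why prepending an edge invalidates every cached cursor. A sketch of the assumed scheme:

```elixir
# Sketch of the offset-based cursor scheme (assumed format: Base64 of
# "arrayconnection:<offset>", mirroring offset_to_cursor/1).
offset_to_cursor = fn offset -> Base.encode64("arrayconnection:#{offset}") end

cursor_to_offset = fn cursor ->
  "arrayconnection:" <> offset = Base.decode64!(cursor)
  String.to_integer(offset)
end

# Prepending one edge shifts every existing offset by one, so each
# cached edge needs a freshly computed cursor:
shift = fn cursor -> offset_to_cursor.(cursor_to_offset.(cursor) + 1) end
```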

Incorrect documentation around connection definitions?

I'm not sure if this is just incorrect documentation or a larger error at connection.ex#L52.

The resolve function returns {:ok, connection} but that gives me the following BadMapError.

** (BadMapError) expected a map, got: {:ok, %{edges: [%{cursor: "Y…

If I return the connection only instead of the tuple, the error goes away and everything seems to work correctly.

From my limited Absinthe experience, all resolve functions return an {:ok, X} tuple, which leads me to believe the documentation is correct and the handling of the connection's resolve function is incorrect.

Here's my full, working schema:

defmodule MyApp.Web.GraphQL.Schema do
  use Absinthe.Schema
  use Absinthe.Relay.Schema

  node interface do
    resolve_type fn
      %{email: _}, _ ->
        :user
      %{name: _}, _ ->
        :policy
      _, _ ->
        nil
    end
  end

  @desc "A Policy"
  node object :policy do
    field :id, non_null(:id)
    field :name, non_null(:string)
  end

  connection node_type: :policy

  @desc "A User"
  node object :user do
    field :id, non_null(:id)
    field :email, non_null(:string)
    field :name, non_null(:string)
    connection field :policies, node_type: :policy do
      resolve fn (pagination_args, %{source: _user}) ->
        connection = Absinthe.Relay.Connection.from_list(
          Policies.list_policies,
          pagination_args
        )
        connection
      end
    end
  end

  query do
    field :user, :user do
      arg :id, non_null(:id)
      resolve fn %{id: id}, %{context: _context} ->
        {:ok, Accounts.get_user(id)}
      end
    end
  end
end

Access information from parent node in custom edge resolver

Hi! I have an Organization type that has a field members, which is a connection of Users, but with a custom edge type. The custom edge type has a role field, apart from the standard node field.

This is the query I would like to perform:

{
  organization(name: "org1") {
    members(first: 10) {
      edges {
        role
        node {
          username
          email
        }
      }
    }
  }
}

And this is my current connection type definition:

connection node_type: :organization_member do
  edge do
    field :node, non_null(:user)

    field :role, non_null(:string) do
      resolve fn
        %{node: user}, _, _ ->
          # `organization_id` is not in scope here -- that's the problem
          {:ok, Organizations.get_member_role(organization_id, user.id)}
      end
    end
  end 
end

I need the organization_id inside the resolver function of the edge, but I couldn't find a way to access the parent node information. Is there a way I can do that?

Just in case, this is my Organization type:

node object :organization do
  # ...

  connection field :members, node_type: :organization_member do
    resolve &Resolvers.Organizations.resolve_members/3
  end

  # ...
end

Thanks for the awesome work!

Add middleware to specific node fields

How do I add middleware to specific resolvers? For example, if I have

query do
    node field do
      resolve fn
        %{type: :post, id: id}, _ ->
          Web.PostResolver.find(nil, %{id: id}, nil)
        %{type: :user, id: id}, _ ->
          # I only want my middleware to apply to this one
          # I can't just add
          middleware MyApp.Middleware.EnsureAuthenticated # <-- Doesn't work
          # Here because it's inside the resolve block
          Web.UserResolver.find(%{id: id}, nil)
      end
    end
end

I could add it above resolve fn, but then it would apply to all of the resolvers.

Return all edges in connection (Connection.from_list/3)

Currently, when you call Connection.from_list without pagination args, it fails with the error

** (exit) an exception was raised:
    ** (CaseClauseError) no case clause matching: 0
        lib/absinthe/relay/connection.ex:215: Absinthe.Relay.Connection.from_list/3
....

You can reproduce the error with a query like the following.

query{
  node(id: "RmFjdGlvbjoy") {
    id
    ... on Faction{
      name
      ships {
        edges {
          node {
            id
          }
        }
      }
    }
  }
}

However everything works if you pass pagination args as in:

query{
  node(id: "RmFjdGlvbjoy") {
    id
    ... on Faction{
      name
      ships(first: 10) {
        edges {
          node {
            id
          }
        }
      }
    }
  }
}

Do you think we can change the behaviour of Connection.from_list to return all edges in the connection if no pagination args are specified?

Pluggable / Extendable Global IDs

I have a pretty extensive use case that I'm battling with where I need to use global ids that are more than a combination of type and id (specifically, they include shard-resolvable information for a multi-tenant environment).

It doesn't look like I can customize this global id currently. Would love to see a more extensible approach! Let me know your thoughts and I'll see if I can help.

Thanks!

Parsing nested ids

Absinthe currently has the ParseIDs middleware, which converts ids for the top level object. It would be nice to have an easy way to parse the ids of nested objects as well. For example if you had

{
  foo {
    id
    baz {
      id
    }
    bars {
      id
    }
  }
}

It would parse the id of foo, baz and all the bars (assuming it's a has_many).

Error with schemas defined via macro with modern payloads

Based on this comment.

With this macro

defmodule SupportingSchema do
  defmacro __using__(flavor) do
    quote do
      use Absinthe.Schema
      use Absinthe.Relay.Schema, unquote(flavor)

      # ...
      
      mutation do
        payload field :update_foo do
          input do
            field :foo, :string
          end
          output do
            field :foo, :string
          end
          resolve fn %{foo: foo}, _ -> {:ok, %{foo: foo}} end
        end
      end

      # ...

    end
  end
end

And this definition

defmodule SchemaModern do
  use SupportingSchema, :modern # Classic works (assuming you also account for clientMutationId)
end

This query

"""
mutation updateFoo {
  updateFoo(input: {foo: "bar"}) {
    foo
  }
}
"""
|> Absinthe.run(SchemaModern)

Fails with

{:ok,
 %{errors: [%{locations: [],
      message: "Unknown argument \"input\" on field \"updateFoo\" of type \"RootMutationType\"."}]}}

More nuanced error message when internal id can't be found

The error No source non-global ID value could be fetched from the source object (from absinthe_relay/lib/absinthe/relay/node.ex) can mean different things, including

  • you forgot to add a matching pattern to node field
  • you're asking for the id of an associated sub-field, but you've forgotten to call preload before returning a result (info.source is Ecto.Association.NotLoaded)

... a more verbose error message (and one that goes out to Logger.warn at the very least) would be helpful.

ParseID middleware defined directly on mutation doesn't find the schema_node anymore

Error: In field \"updateParentLocalMiddleware\": Could not find schema_node for parent

Try this in the test:

        payload field :update_parent_local_middleware do
          input do
            field :parent, :parent_input
          end

          output do
            field :parent, :parent
          end

          middleware Absinthe.Relay.Node.ParseIDs, parent: [
            id: :parent,
            children: [id: :child],
            child: [id: :child]
          ]

          resolve &resolve_parent/2
        end

and call it the same way

 it "parses nested ids with local middleware" do
    encoded_parent_id = Base.encode64("Parent:1")
    encoded_child1_id = Base.encode64("Child:1")
    encoded_child2_id = Base.encode64("Child:2")
    result =
      """
      mutation FoobarLocal {
        updateParentLocalMiddleware(input: {
          clientMutationId: "abc",
          parent: {
            id: "#{encoded_parent_id}",
            children: [{ id: "#{encoded_child1_id}"}, {id: "#{encoded_child2_id}"}],
            child: { id: "#{encoded_child2_id}"}
          }
        }) {
          parent {
            id
            children { id }
            child { id }
            }
          }
      }
      """
      |> Absinthe.run(Schema)

    expected_parent_data = %{
      "parent" => %{
        "id" => encoded_parent_id, # The output re-converts everything to global_ids.
        "children" => [%{"id" => encoded_child1_id}, %{"id" => encoded_child2_id}],
        "child" => %{
          "id" => encoded_child2_id
        }
      }
    }
    assert {:ok, %{data: %{"updateParentLocalMiddleware" => expected_parent_data}}} == result
  end

Allow null on ParseIDs

Using absinthe 1.4.0-rc.2 and absinthe_relay 1.3.

ParseIDs does not accept nil values. Maybe it should skip decoding nil values...

%FunctionClauseError{args: nil, arity: 2, clauses: nil, function: :decode64, kind: nil, module: Base},
stack: [{Base, :decode64, [nil, []], [file: 'lib/base.ex', line: 391]}, ...]

thanks,

parsing_node_ids blows up when a non-last node id in a list of passed node ids fails to parse

Given a resolver using parsing_node_ids for multiple args, like so:

resolve parsing_node_ids(&my_resolver/2, id1: :foo, id2: :foo)

If any of the non-last arguments (here, id1) fails to parse, an exception is thrown:

** (BadMapError) expected a map, got: {:error, "Invalid node type for argument id1; should be foo, was other_foo"}

I believe this is because the args variable in the implementation of parsing_node_ids is used for multiple purposes within the function: on the first error inside the Enum.reduce, the original args map is replaced with the error tuple from the first parsing failure, causing any subsequent call to Map.get to fail.
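The clobbering can be reproduced outside the library in a few lines (a paraphrase of the suspected pattern, not the actual implementation):

```elixir
# Paraphrase of the suspected bug: parse results are folded back into
# `args`, so the first {:error, _} replaces the map and the next
# Map.get raises BadMapError.
parse = fn
  :bad -> {:error, "invalid node type"}
  value -> {:ok, value}
end

clobbering_reduce = fn args, keys ->
  Enum.reduce(keys, args, fn key, acc ->
    case parse.(Map.get(acc, key)) do
      {:ok, value} -> Map.put(acc, key, value)
      {:error, _} = error -> error  # acc is no longer a map after this
    end
  end)
end

# A failure on the *last* key is survivable; a failure on an earlier
# key leaves a tuple behind for the next Map.get call:
clobbering_reduce.(%{id1: :ok, id2: :bad}, [:id1, :id2])
# => {:error, "invalid node type"}
# clobbering_reduce.(%{id1: :bad, id2: :ok}, [:id1, :id2])
# => ** (BadMapError) expected a map, got: {:error, "invalid node type"}
```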

And finally, a replacement node_test.exs that illustrates the problem, with additional (passing) tests around parsing_node_ids:

defmodule Absinthe.Relay.NodeTest do
  use Absinthe.Relay.Case, async: true

  alias Absinthe.Relay.Node

  defmodule Schema do
    use Absinthe.Schema
    use Absinthe.Relay.Schema

    @foos %{
      "1" => %{id: "1", name: "Bar 1"},
      "2" => %{id: "2", name: "Bar 2"}
    }

    node interface do
      resolve_type fn
        _, _  ->
          # We just resolve :foos for now
          :foo
      end
    end

    node object :foo do
      field :name, :string
    end

    node object :other_foo, name: "FancyFoo" do
      field :name, :string
    end

    query do
      field :single_foo, :foo do
        arg :id, non_null(:id)
        resolve parsing_node_ids(&resolve_foo/2, id: :foo)
      end

      field :dual_foo, list_of(:foo) do
        arg :id1, non_null(:id)
        arg :id2, non_null(:id)
        resolve parsing_node_ids(&resolve_foos/2, id1: :foo, id2: :foo)
      end
    end

    defp resolve_foo({:error, _msg} = error, _info), do: error
    defp resolve_foo(%{id: id}, _) do
      {:ok, Map.get(@foos, id)}
    end

    defp resolve_foos(%{id1: id1, id2: id2}, _) do
      {:ok, [
        Map.get(@foos, id1),
        Map.get(@foos, id2)
      ]}
    end

  end

  @foo1_id Base.encode64("Foo:1")
  @foo2_id Base.encode64("Foo:2")

  describe "to_global_id" do

    it "works given an atom for an existing type" do
      assert !is_nil(Node.to_global_id(:foo, 1, Schema))
    end

    it "returns an atom for an non-existing type" do
      assert is_nil(Node.to_global_id(:not_foo, 1, Schema))
    end

    it "works given a binary and internal ID" do
      assert Node.to_global_id("Foo", 1)
    end

    it "gives the same global ID for different type, equivalent references" do
      assert Node.to_global_id("FancyFoo", 1) == Node.to_global_id(:other_foo, 1, Schema)
    end

    it "gives the different global ID for different type, equivalent references" do
      assert Node.to_global_id("FancyFoo", 1) != Node.to_global_id(:foo, 1, Schema)
    end

    it "fails given a bad ID" do
      assert is_nil(Node.to_global_id("Foo", nil))
    end

  end

  describe "parsing_node_id_args" do

    it "parses one id correctly" do
      result = """
      { singleFoo(id: "#{@foo1_id}") { id name } }
      """ |> Absinthe.run(Schema)
      assert {:ok, %{data: %{"singleFoo" => %{"name" => "Bar 1", "id" => @foo1_id}}}} == result
    end

    it "handles one incorrect id" do
      result = """
      { singleFoo(id: "#{Node.to_global_id(:other_foo, 1, Schema)}") { id name } }
      """ |> Absinthe.run(Schema)
      assert {:ok, %{data: %{}, errors: [
        %{message: "In field \"singleFoo\": Invalid node type for argument `id`; should be foo, was other_foo"}
      ]}} = result
    end

    it "parses multiple ids correctly" do
      result = """
      { dualFoo(id1: "#{@foo1_id}", id2: "#{@foo2_id}") { id name } }
      """ |> Absinthe.run(Schema)
      assert {:ok, %{data: %{"dualFoo" => [
        %{"name" => "Bar 1", "id" => @foo1_id},
        %{"name" => "Bar 2", "id" => @foo2_id}
      ]}}} == result
    end

    # This never succeeds.
    # The current implementation of `parsing_node_ids` clobbers the `args` variable on the first failure of `id1`
    # causing the next iteration of the `Enum.reduce` when processing `id2` to blow up when it tries to `Map.get`
    # on the now-clobbered `args` that is actually now a `{:error, ...}` tuple resulting from `id1`
    it "handles multiple incorrect ids" do
      result = """
      { dualFoo(id1: "#{Node.to_global_id(:other_foo, 1, Schema)}", id2: "#{Node.to_global_id(:other_foo, 2, Schema)}") { id name } }
      """ |> Absinthe.run(Schema)
      assert {:ok, %{data: %{}, errors: [
        %{message: "In field \"dualFoo\": Invalid node type for argument `id1`; should be foo, was other_foo"},
        %{message: "In field \"dualFoo\": Invalid node type for argument `id2`; should be foo, was other_foo"}
      ]}} = result
    end

  end

end

Provide plug that accepts batched queries

As per Slack discussion with @benwilson512, I'd like to suggest that Absinthe should accept arrays of queries in the style of:

[{
  "id": "q1",
  "query": "query Index {vie ...",
  "variables": {}
}, {
  "id": "q2",
  "query": "query ...",
  "variables": {}
}]

The results should mirror this structure, as noted in the README of https://github.com/nodkz/react-relay-network-layer. The new plug can be documented as a solution to facebook/relay#724, when used in combination with react-relay-network-layer.

ParseIDs should accept a list of ids

Example usage:

arg :features, list_of(:id)

middleware Absinthe.Relay.Node.ParseIDs, features: :feature

Gives an error:

    ** (FunctionClauseError) no function clause matching in Base.decode64/2
        (elixir) lib/base.ex:309: Base.decode64([], [])
        lib/absinthe/relay/node.ex:161: Absinthe.Relay.Node.from_global_id/2
        lib/absinthe/relay/node/parse_ids.ex:119: anonymous fn/4 in Absinthe.Relay.Node.ParseIDs.parse/3
        (stdlib) lists.erl:1263: :lists.foldl/3
        lib/absinthe/relay/node/parse_ids.ex:117: Absinthe.Relay.Node.ParseIDs.parse/3
        lib/absinthe/relay/node/parse_ids.ex:104: Absinthe.Relay.Node.ParseIDs.call/2
        (absinthe) lib/absinthe/phase/document/execution/resolution.ex:191: Absinthe.Phase.Document.Execution.Resolution.reduce_resolution/1
        (absinthe) lib/absinthe/phase/document/execution/resolution.ex:161: Absinthe.Phase.Document.Execution.Resolution.do_resolve_field/4
        (absinthe) lib/absinthe/phase/document/execution/resolution.ex:147: Absinthe.Phase.Document.Execution.Resolution.do_resolve_fields/6
        (absinthe) lib/absinthe/phase/document/execution/resolution.ex:87: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
        (absinthe) lib/absinthe/phase/document/execution/resolution.ex:57: Absinthe.Phase.Document.Execution.Resolution.perform_resolution/3
        (absinthe) lib/absinthe/phase/document/execution/resolution.ex:25: Absinthe.Phase.Document.Execution.Resolution.resolve_current/3
        (absinthe) lib/absinthe/pipeline.ex:247: Absinthe.Pipeline.run_phase/3
        (absinthe_plug) lib/absinthe/plug.ex:250: Absinthe.Plug.run_query/3
        (absinthe_plug) lib/absinthe/plug.ex:178: Absinthe.Plug.execute/2
        (absinthe_plug) lib/absinthe/plug.ex:139: Absinthe.Plug.call/2
        (phoenix) lib/phoenix/router/route.ex:161: Phoenix.Router.Route.forward/4
        (phoenix) lib/phoenix/router.ex:278: Phoenix.Router.__call__/1
        (exr) lib/exr/web/endpoint.ex:1: EXR.Web.Endpoint.plug_builder_call/2
        (exr) lib/plug/debugger.ex:123: EXR.Web.Endpoint."call (overridable 3)"/2

`BadMapError` when using `parsing_node_ids` with ID of bad type

When using parsing_node_ids to parse an ID of an unexpected type, the result sent to the user isn't "Invalid node type for argument #{key}; should be #{type}, was #{bad_type}" but a 500 BadMapError.

Example

With the following mutation:

  mutation do
    payload field :user do
      input do
        field :id, non_null(:id)
      end

      output do
        field :user, :user
      end

      resolve parsing_node_ids(&resolver/2, user_id: :user)
    end
  end

We can make a GraphQL request:

mutation {
  user(input: {
    id: "NOT-USER-ID, EG. CLIENT-ID",
    clientMutationId: "0",
  }) {
    user {
      id
    }
  }
}

This results in a Phoenix 500 HTML error page (“BadMapError at POST”) instead of the GraphQL error “Invalid node type for argument id; should be user, was client”.

hasNextPage lies

The implemented way of checking the hasNextPage value is wrong when the number of edges in a connection is divisible by the limit value.

has_next_page: !(length(records) < limit),

Imagine a connection with 30 records, paginated by 10 -- so length(records) is 10 on every page:

  1. We see records 1...10, hasNextPage: !(10 < 10) == true. Correct.
  2. We see records 11...20, hasNextPage: !(10 < 10) == true. Correct.
  3. We see records 21...30, hasNextPage: !(10 < 10) == true, while there are no more records.
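One common fix (a sketch of the general over-fetch technique, not this package's actual implementation) is to fetch limit + 1 records and let the extra record's presence decide hasNextPage:

```elixir
# Fetch one record beyond the limit; the extra record's presence is
# definitive proof that a next page exists, even when the total count
# is an exact multiple of the limit.
paginate = fn records, offset, limit ->
  fetched = records |> Enum.drop(offset) |> Enum.take(limit + 1)

  %{
    edges: Enum.take(fetched, limit),
    has_next_page: length(fetched) > limit,
    has_previous_page: offset > 0
  }
end

page = paginate.(Enum.to_list(1..30), 20, 10)
page.has_next_page # false -- records 21..30 really are the last page
```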

Add totalCount field to connections

How do you add total_count info to the connection?

I am stuck here

  connection node_type: :pet do
    field :count, :integer do
      resolve fn _, %{source: conn} ->
        count = # how do you get the total count of the pets?
        {:ok, count}
      end
    end
  end

And here is the resolver:

def pets(args) do
  conn = Pet |> Relay.Connection.from_query(&Repo.all/1, args)
  {:ok, conn}
end
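One possible approach (an assumption on my part, not an official API): compute the total in the parent resolver, merge it into the connection map, and read it back from source in the :count field. A sketch, assuming an Ecto Repo and the Pet schema from above:

```elixir
def pets(args) do
  conn = Pet |> Relay.Connection.from_query(&Repo.all/1, args)
  {:ok, Map.put(conn, :total_count, Repo.aggregate(Pet, :count, :id))}
end

# ...then in the connection definition:
#   field :count, :integer do
#     resolve fn _, %{source: conn} -> {:ok, conn.total_count} end
#   end
```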

Thanks!
