tirumaraiselvan / graphql-engine

This project is a fork of hasura/graphql-engine.


Blazing fast, instant realtime GraphQL APIs on Postgres with fine-grained access control

Home Page: https://hasura.io

License: Apache License 2.0

Shell 0.59% Dockerfile 0.04% Makefile 0.11% HTML 0.20% Go 4.11% JavaScript 10.78% CSS 2.76% Haskell 23.75% PLpgSQL 0.18% Python 2.91% PowerShell 0.01% Vue 0.14% TypeScript 53.39% Java 0.02% Ruby 0.01% Svelte 0.01% SCSS 0.95% Procfile 0.01% Nix 0.04% Dhall 0.01%

graphql-engine's People

Contributors

0x777, abhi40308, arvi3411301, codingkarthik, daniel-chambers, danieljharvey, ecthiender, hasura-bot, jberryman, m-bilal, marionschleifer, paf31, paritosh-08, plcplc, praveenweb, rakeshkky, rikinsk, robertjdominguez, samirtalwar, sassela, scriptnull, scriptonist, shahidhk, solomon-b, sordina, soupi, tirumaraiselvan, varun-choudhary, vijayprasanna13, wawhal


graphql-engine's Issues

update remote schema api

Currently, modifying a remote schema is implemented as a "delete and create", which makes the metadata consistency checks very strict; e.g., you can't modify a header if there is a remote relationship on the schema.
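A dedicated update API would avoid the drop/re-create. A hypothetical sketch of what such a metadata call could look like (the name update_remote_schema and the argument shape are assumptions here, mirroring the create call):

type: update_remote_schema
args:
  name: my-remote-schema
  definition:
    url: https://remote.example.com/graphql
    headers:
      - name: X-Custom-Header
        value: some-value
    forward_client_headers: false
    timeout_seconds: 60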

Use streaming JSON parser for combining JSON results

Currently we use aeson's decode to parse responses into Value structures, look up the data field, and then union the HashMaps. This is convenient but less efficient than parsing the object boundaries with the attoparsec combinators, locating the data key boundaries, and simply concatenating the bytestrings together at the right point as a Builder (EncJSON already works this way). Here is an example of parsing directly.
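For reference, a minimal sketch of the current approach described above (assuming aeson < 2, where Object wraps a HashMap Text Value; mergeData is a hypothetical name):

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Value (..), decode)
import qualified Data.ByteString.Lazy as BL
import qualified Data.HashMap.Strict as HM

-- Fully decode both responses, pull out their "data" objects, and union
-- them. The proposed optimisation would instead locate the byte ranges of
-- the "data" objects with attoparsec and splice the raw bytes via a Builder.
mergeData :: BL.ByteString -> BL.ByteString -> Maybe Value
mergeData r1 r2 = do
  Object o1 <- decode r1
  Object o2 <- decode r2
  Object d1 <- HM.lookup "data" o1
  Object d2 <- HM.lookup "data" o2
  pure (Object (HM.singleton "data" (Object (d1 `HM.union` d2))))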

Research and probably reserve top level `extensions` field from remote

This came up here: hasura#2432

https://graphql.github.io/graphql-spec/June2018/#sec-Response-Format

The response map may also contain an entry with key extensions. This entry, if set, must have a map as its value. This entry is reserved for implementors to extend the protocol however they see fit, and hence there are no additional restrictions on its contents.

It seems good to just union all of the extensions fields from the remotes and (if any) from hasura. The obvious trouble is when names overlap, or when the client needs to disambiguate extension responses from different remotes used in the same query.

We could return our own extensions object that puts each remote's extensions under a separate namespace, but that might entail rewriting clients, which might defeat the purpose...
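A minimal sketch of the two options, assuming aeson < 2 (the function names are illustrative):

import Data.Aeson (Value (..))
import qualified Data.HashMap.Strict as HM
import Data.Text (Text)

-- Option 1: plain union. Simple, but overlapping keys from different
-- remotes silently clobber each other.
unionExtensions :: [Value] -> Value
unionExtensions vs = Object (HM.unions [o | Object o <- vs])

-- Option 2: nest each remote's extensions under its remote-schema name.
-- No collisions, but clients must know the namespace to look under.
namespaceExtensions :: [(Text, Value)] -> Value
namespaceExtensions = Object . HM.fromList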

Support nested remote fields for remote relationships

We should allow joining on nested remote fields, e.g.:

query {
  stripe(account_id: $account_id) {
    subscriptions(sub_id: $sub_id) {
      leaf1
      leaf2
    }
  }
}

This will involve changing the metadata API to something like:

type: create_remote_relationship
args:
  name: message
  table: profiles
  hasura_fields:
    - stripe_account_id
    - stripe_sub_id
  remote_schema: my-remote-schema
  remote_field:
    stripe:
      arguments:
        account_id: "$stripe_account_id"  # specified in the UI
        group: "$g"                       # specified in the UI
      field:
        subscriptions:
          arguments:
            sub_id: "$stripe_sub_id"      # specified in the UI
          # desired result fields: sub_timestamp, sub_order

how to solve for query plan cache

Currently, the query plan cache caches the entire request. Breaking the execution plan into sub-plans breaks that design, so query plan caching is commented out in the PR for now.

From console, creating a remote relationship with `_and` in `where` fails

The console generates the following query

{
  "type": "bulk",
  "args": [
    {
      "type": "create_remote_relationship",
      "args": {
        "name": "foo",
        .
        .
        "remote_field": {
          "hge_remote_addresses": {
            "arguments": {
              "where": {
                "_and": {
                  "id": {
                    "_eq": "$id"
                  },
                  "floor": {
                    "_eq": "$location_id"
                  }
                  .
                  .
}

The `_and` is generated as an object, but it should be an array. This gives the following error:

{
  "path": "$.args[0].args",
  "error": "ExpectedTypeButGot (TypeNamed (Nullability {unNullability = True}) (NamedType {unNamedType = Name {unName = \"String\"}})) (TypeNamed (Nullability {unNullability = True}) (NamedType {unNamedType = Name {unName = \"Int\"}})) :| []",
  "code": "remote-schema-error"
}
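For reference, a sketch of the corrected payload, with `_and` generated as an array of conditions (field names taken from the example above):

"where": {
  "_and": [
    { "id": { "_eq": "$id" } },
    { "floor": { "_eq": "$location_id" } }
  ]
}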

Handle order of fields

Change the hashmap implementation to one that preserves the order of the selection set.
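One candidate (an assumption, not necessarily what the fix will use) is InsOrdHashMap from the insert-ordered-containers package, whose toList returns entries in insertion order:

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.HashMap.Strict.InsOrd as OMap
import Data.Text (Text)

-- Fields inserted left-to-right come back out in the same order, unlike
-- Data.HashMap.Strict, which makes no ordering guarantee.
selectionOrder :: [(Text, Int)]
selectionOrder = OMap.toList (OMap.insert "b" 2 (OMap.insert "a" 1 OMap.empty))
-- == [("a", 1), ("b", 2)]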

Handle merge for websocket

Currently, the websocket code doesn't combine results when there is more than one top-level field. This causes a few test failures.

While defining a remote relationship, using `_and` in `where` does not work

For example, defining a remote relationship like this fails:

remote_field:
  hge_remote_baseball_action_plays:
    arguments:
      where:
        _and:
        - baseball_event_state_id:
            _eq: $id
        - _not:
            baseball_event_state_id:
              _neq: $id

with error:

  "path": "$.args.remote_relationships[0]",
  "error": "UnsupportedMultipleElementLists :| []",
  "code": "remote-schema-error"

Remote argument validation not supported for non-scalar types

Remote argument validation works for scalar fields but not for complex fields like List, Object, etc.

E.g.

type: create_remote_relationship
args:
  name: message
  table: profiles
  hasura_fields:
    - id
  remote_schema: my-remote-schema
  remote_field: messages
  remote_arguments:
    where:
      id:
        _eq: "$id"

Remote joins: remove `hasura_fields` from `create_remote_relationship`

Instead, we can make this implicit and extract the referenced fields from the remote_field section. This simplifies the API and removes one more thing to validate.

It also removes the awkward question of whether including extraneous hasura fields that go unused should be a validation error.
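A hypothetical sketch of the simplified call, based on the example above: there is no hasura_fields key, because the "$id" reference alone identifies the needed column.

type: create_remote_relationship
args:
  name: message
  table: profiles
  remote_schema: my-remote-schema
  remote_field: messages
  remote_arguments:
    where:
      id:
        _eq: "$id"  # "$id" implies the hasura column dependency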

remote execution fails when there are no hasura columns in the remote relationship

For example, if we create a relationship that uses only static values, then no remote query is generated at all. If we add a hasura column to the API call (but don't use it in the join), then it works.

This is because of the function peelRemoteKeys here:

(neededHasuraFields (rrRemoteField remoteRelField))

neededHasuraFields is actually an empty array, and it is used here to create a Batch:

(fmap peelRemoteKeys keyedRemotes)

concurrency

Currently, execution is sequential. We have a tree of plans and, per the GraphQL spec, sibling plans can be executed in parallel.
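A minimal sketch of the idea using mapConcurrently from the async package (the function name and type variables here are illustrative, not the engine's actual plan types):

import Control.Concurrent.Async (mapConcurrently)

-- Sibling sub-plans have no data dependencies on each other, so they can
-- be resolved concurrently; results come back in input order.
executeSiblings :: (plan -> IO result) -> [plan] -> IO [result]
executeSiblings execute plans = mapConcurrently execute plans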

Add schema dependencies

We need to add three dependencies for a remote relationship:

Table dependency: the RemoteRel depends on the table
Column dependencies: all of the table's columns are also dependencies of the RemoteRel
Remote schema dependency: the RemoteRel depends on the remote schema
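A hypothetical sketch of the three dependency edges (the type and constructor names are illustrative, not the engine's actual types):

import Data.Text (Text)

type TableName        = Text
type ColumnName       = Text
type RemoteSchemaName = Text

-- Each remote relationship registers one Table edge, one Column edge per
-- table column, and one RemoteSchema edge in the dependency graph.
data RemoteRelDependency
  = OnTable TableName
  | OnColumn TableName ColumnName
  | OnRemoteSchema RemoteSchemaName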

Order is not preserved when multiple remote fields are defined

This query

    {
      profiles {
        id
        user_output_1: usersNestedArgs(where: {provider: {_eq: "provider1"}}) {
          id
          provider
        }
        user_output_2: usersNestedArgs(where: {_or: [{id: {_eq: 1}}, {provider: {_eq: "provider2"}}]}) {
          id
          provider
        }
        user_output_3: usersNestedArgs(where: {id: {_neq: 1}}) {
          id
          provider
        }
      }
    }

gave the following output

  data:
    profiles:
    - id: 1
      user_output_1: &user1
      - id: 1
        provider: provider1
      user_output_3: []
      user_output_2: *user1
    - id: 2
      user_output_1: []
      user_output_3: &user2
      - id: 2
        provider: provider2
      user_output_2: *user2
    - id: 3
      user_output_1: &user3
      - id: 3
        provider: provider1
      user_output_3: *user3
      user_output_2: []

rethink insertBatchResults function

Users are facing issues with the join function:

"I'm having issues with remote joins. Specifically: no remote objects left for joining at. I can confirm the request is being sent to my remote schema correctly, so it seems the issue is in hasura's handling of the response."

"When using remote joins, I get this error: "message": "no remote objects left for joining at {"descriptionRich":"# F&C"}""
