graphql-python / graphql-core-legacy
GraphQL base implementation for Python (legacy version – see graphql-core for the current one)
License: MIT License
Consider the following test case:

from graphql.language.parser import parse
from graphql.utils.build_ast_schema import build_ast_schema
from graphql.validation import validate

def test_list_literal_validation():
    schema_text = '''
    schema {
        query: RootSchemaQuery
    }
    directive @foo(list: [String!]!) on FIELD | INLINE_FRAGMENT
    type Bar {
        name: String
    }
    type RootSchemaQuery {
        Bar: Bar
    }
    '''
    schema = build_ast_schema(parse(schema_text))
    query = '''
    query Baz {
        Bar {
            name @foo(list: "not_a_list")
        }
    }
    '''
    query_ast = parse(query)
    errors = validate(schema, query_ast)
    assert errors
Note that the directive @foo requires a list argument, but is passed a string argument instead. Despite this problem, the library claims that the GraphQL query validates with no errors, failing the test. I believe this to be a bug.
As far as I can tell there's no documentation anywhere about what its fields mean and how to use it, except for maybe one example of "how to get requested field names" buried in another issue. As a fairly core part of the graphql library, it needs documentation.
The current graphql.execution.execute.execute implementation forces AsyncioExecutor to use loop.run_until_complete in wait_until_finished. This is a bit ugly.
It would be better to let the executor decide what is returned from the execute method. It can be given a function to call after the data is ready as an argument.
With this, AsyncioExecutor could return an awaitable and execute could be used as one would expect:

result = await execute(schema, ast, executor=AsyncioExecutor())

It could even take an argument to opt into the new behavior, keeping compatibility with existing uses.
This would also make TwistedExecutor much easier to implement.
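A minimal, purely illustrative sketch of the proposed shape (every name except asyncio's is made up here, not the library's actual API): the executor decides what execute() returns, so an asyncio-based executor can hand back an awaitable instead of blocking in loop.run_until_complete.

```python
import asyncio

# Toy model of the proposal: execute() returns whatever the executor
# produced, so an asyncio executor can return an awaitable instead of
# blocking the event loop.
class AwaitableExecutor:
    def execute(self, fn, *args):
        # Wrap the coroutine in a Task and hand it back un-awaited.
        return asyncio.ensure_future(fn(*args))

async def resolve_data():
    # Stand-in for the actual field resolution work.
    await asyncio.sleep(0)
    return {'hello': 'world'}

def execute(executor):
    # The executor, not execute(), decides the return type.
    return executor.execute(resolve_data)

async def main():
    # The caller can now await the result, as one would expect.
    return await execute(AwaitableExecutor())

result = asyncio.run(main())
print(result)  # {'hello': 'world'}
```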
This issue is just a check-list of outstanding tests I need to implement.
If you plan to help, please comment before starting work on any of these tests so we don't do the same work at the same time. If there's a name by the checkbox, that person is working on implementing those tests.
In graphql-js, the schema parser was merged into the regular parser. We have yet to implement the schema parser, though.
I'm probably going to put this in v0.2 milestone.
Related to #18, sometimes users might want their results to be in the same order as the fields in the query. In this case, I am thinking it should be an optional argument (default False) to the Executor. By default, it will use regular dict objects to store results. But if, say, enforce_strict_ordering=True is passed to the Executor, then we will use OrderedDict instead.
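The idea above can be sketched in a few lines (the Executor class and flag name here are taken from the discussion, but the implementation is hypothetical):

```python
from collections import OrderedDict

# Hypothetical sketch: the executor picks its result container once,
# based on an enforce_strict_ordering flag, so every nested result map
# inherits the same ordering behavior.
class Executor:
    def __init__(self, enforce_strict_ordering=False):
        self.map_type = OrderedDict if enforce_strict_ordering else dict

    def new_result(self):
        # Called wherever the executor builds a result map.
        return self.map_type()

assert isinstance(Executor(enforce_strict_ordering=True).new_result(), OrderedDict)
```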
Looks like we're missing some tests: https://github.com/graphql/graphql-js/tree/master/src/execution/__tests__
And some stuff is not-to-spec. Going to look through this and finish implementation of all the tests that I can.
To have it logged here as well, it would be great to have deprecations supported as mentioned here:
graphql-python/graphene#93
Errors in @property def schema (and probably any other @property in that class) do not yield useful tracebacks; instead, Python "helpfully" falls back to __getattr__ and then fails in there, which is extremely unhelpful. This appears to be a specific case of encode/django-rest-framework#2108. Would it be possible to ditch the __getattr__ use, or alternatively the @propertys?
Traceback of what I'm hitting:
<ipython-input-2-8b3585dcab10> in <module>()
----> 1 res2 = schema.execute("query { subject_user {user_id} }", request_context=dict(_subject_user_id=1))
/usr/local/encap/python-2.7.7/lib/python2.7/site-packages/graphene/core/schema.pyc in execute(self, request, root, args, **kwargs)
120
121 def execute(self, request='', root=None, args=None, **kwargs):
--> 122 kwargs = dict(kwargs, request=request, root=root, args=args, schema=self.schema)
123 with self.plugins.context_execution(**kwargs) as execute_kwargs:
124 return self.executor.execute(**execute_kwargs)
/usr/local/encap/python-2.7.7/lib/python2.7/site-packages/graphene/core/schema.pyc in __getattr__(self, name)
47 if name in self.plugins:
48 return getattr(self.plugins, name)
---> 49 return super(Schema, self).__getattr__(name)
50
51 def T(self, _type):
AttributeError: 'super' object has no attribute '__getattr__'
Not sure how much time I have the next few weeks, but wanted to get the ball rolling on v0.6.0+. I think it is probably best to target the sub-release rather than going straight to 0.7.0, breaking the work into easier chunks.
In any case, here is the list of commits: graphql/graphql-js@v0.6.0...v0.6.1
As soon as I find some time I'll get started on it.
Version 0.5.0, compatible with the GraphQL Apr 2016 spec.
Related Pull Requests: #54 #59

- GraphQLError
- Move graphql.core.* to graphql.*
- Split the promise package into its own package on PyPI (pypromise)
- Improve the executor to be closer to the reference implementation: use Promise instead of Deferred, as it simplifies the implementation; use parallel resolvers automatically as default behavior when available. Related PR: #59
- Implement the commits from graphql-js: for each utility in https://github.com/graphql/graphql-js/tree/master/src/utilities, add graphql/core/utils/(util_name).py and tests/core_utils/test_(util_name).py
(Using macOS, Python 3.5.) There seems to be trouble with using promises and pickle together. Decorated methods also seem to have trouble, which means resolvers are often a problem. Adding a comma to args fixes the "Queue is not iterable" error, but the second error is much more intricate.
ERROR|2016-10-01 01:12:57,673|graphql.execution.executor >> An error occurred while resolving field Query.viewer
Traceback (most recent call last):
File "/Users/ben/.virtualenvs/current/lib/python3.5/site-packages/graphql/execution/executor.py", line 190, in resolve_or_error
return executor.execute(resolve_fn, source, args, context, info)
File "/Users/ben/.virtualenvs/current/lib/python3.5/site-packages/graphql/execution/executors/process.py", line 29, in execute
_process = Process(target=queue_process, args=(self.q))
File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/process.py", line 80, in __init__
self._args = tuple(args)
TypeError: 'Queue' object is not iterable
[01/Oct/2016 01:12:57] "POST /graphql? HTTP/1.1" 200 116
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/queues.py", line 241, in _feed
obj = ForkingPickler.dumps(obj)
File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects
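The "Queue is not iterable" half of this is the classic one-element tuple pitfall; a small sketch (queue_process here is a stand-in worker function, not the library's code):

```python
from multiprocessing import Process, Queue

def queue_process(q):
    # Stand-in for the executor's worker: push a result onto the queue.
    q.put('done')

if __name__ == '__main__':
    q = Queue()
    # args=(q) is just q wrapped in parentheses, not a tuple, so
    # Process(...) tries tuple(q) and raises "TypeError: 'Queue' object
    # is not iterable". The trailing comma makes a one-element tuple:
    p = Process(target=queue_process, args=(q,))
    p.start()
    print(q.get())  # done
    p.join()
```

The second traceback (can't pickle _thread.lock objects) is separate: whatever is being sent through the queue holds an unpicklable lock, which a trailing comma cannot fix.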
Hi!
Not sure if this is the right place to report, but the latest version on PyPI is not clean. It still has
print(field_def.args.items())
at line 36 of known_argument_names.py (and probably more alike), which brings a lot of garbage in stdout.
Latest version here does not have this issue.
We have GraphQLFloat and GraphQLInt; since Python also has a native Decimal implementation, it would be great if we could add a GraphQLDecimal type.
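The coercion half of such a scalar could look roughly like this. The helper names are illustrative, and wiring them into a scalar type definition (serialize / parse_value / parse_literal, as in the other built-in scalars) is the assumed integration point:

```python
from decimal import Decimal, InvalidOperation

def serialize_decimal(value):
    # Serialize to a string so precision is not lost in JSON output
    # (coercing through float would silently round).
    return str(Decimal(value))

def parse_decimal(value):
    # Coerce a client-supplied value to Decimal; None signals an
    # invalid value, mirroring how other scalars reject bad input.
    try:
        return Decimal(value)
    except (InvalidOperation, TypeError, ValueError):
        return None
```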
The Scala implementation of GraphQL (Sangria) provides a way for being protected against malicious queries.
More info: http://sangria-graphql.org/learn/#protection-against-malicious-queries
Could be very useful if the executor could support the following:
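One of Sangria's protections, a maximum query depth, can be sketched in a few lines. This is only a rough illustration: a real implementation should walk the parsed AST rather than scan braces, and the function names are made up here.

```python
def query_depth(query):
    # Rough depth estimate: track selection-set nesting via braces.
    # Ignores braces inside string literals for simplicity.
    depth = max_depth = 0
    for ch in query:
        if ch == '{':
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == '}':
            depth -= 1
    return max_depth

def reject_if_too_deep(query, limit=10):
    # An executor hook could run this before execution begins.
    if query_depth(query) > limit:
        raise ValueError('Query exceeds maximum depth of %d' % limit)
```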
Particularly this code:
Since graphql-core doesn't bubble up exceptions for most problems with the schema - here included - it's entirely non-obvious what types are involved and, more importantly, what resolve_* function is the cause of this error. That makes it really hard to debug, turning a 5-minute problem into a "hack your own debugging code in" ~30-minute problem... if you have the mind to try that. Resolving a bug like this shouldn't take that kind of surgery.
Various validation optimizations have landed in graphql-js
. I need to implement these.
https://github.com/graphql-python/graphql-core/blob/master/graphql/core/execution/values.py#L131
This line will throw a KeyError if an input value is not provided. I believe that to match the reference implementation, it would have to be:

values.get(field_name)
@woodb set up sphinx docs in dittos/graphqllib#15 however not much has been written yet.
If anyone wants to help write docs, that'd be really awesome. Most of the API mirrors that of graphql-js, however, with some pretty important differences in how the Executor is run and how Types are defined.
@syrusakbary, I saw you were working on gh-pages for graphene. Are you working on a specific style you want graphql-python's pages to be presented in? Are you also doing sphinx docs?
Apologies if this is a little convoluted - it's hard to explain!
graphql/graphql-spec#123 allows implementations of interfaces to have fields that are themselves implementations of interfaces. Currently assert_object_implements_interface (graphql/core/type/schema.py) only allows the fields to match types exactly. An example:
class InterfaceA(graphene.Interface):
    ...

class InterfaceB(graphene.Interface):
    my_field = graphene.List(InterfaceA)
    ...

class ImplementationA(InterfaceA):
    ...

class ImplementationB(InterfaceB):
    my_field = graphene.List(ImplementationA)
    ...
To test something out I made assert_object_implements_interface a no-op, then hit further issues with graphql/core/validation/rules/overlapping_fields_can_be_merged.py. A simplified part of a query (in practice the query was built up from fragments coming from different places) returning an object of type ImplementationB:

my_field
... on InterfaceB {
    my_field
}

In theory it should be able to merge these, but in practice the first my_field is known to be ImplementationA and the second only InterfaceA.
I think the first part of this issue is worth fixing and I think I could probably manage that and open a PR if you'd like.
The second part I'm less sure about and I would have a harder time fixing - let me know what you think though. It's possible to work around the second problem by making sure queries are wrapped with ... on InterfaceB
when merging might happen.
Initial discussion started in #31 - graphql-core's exception handling is lacking. I'd like to implement a way to forward resolver errors to some place where they can be appropriately captured with their traceback intact.
It should handle sync and async resolver errors just the same.
Having a schema like this:
class Mutation(graphene.ObjectType):
    pass

class Customer(graphene.ObjectType):
    test = graphene.Int()

schema = graphene.Schema(
    query=Customer,
    mutation=Mutation,
)
In 0.x, on running the introspection query, I get an error message:
File "third_party/python/graphene/core/schema.py", line 133, in introspect
return graphql(self.schema, introspection_query).data
File "third_party/python/graphene/core/schema.py", line 81, in schema
subscription=self.T(self.subscription))
File "third_party/python/graphene/core/schema.py", line 19, in __init__
super(GraphQLSchema, self).__init__(*args, **kwargs)
File "third_party/python/graphql/type/schema.py", line 58, in __init__
self._type_map = self._build_type_map(types)
File "third_party/python/graphql/type/schema.py", line 107, in _build_type_map
type_map = reduce(type_map_reducer, types, OrderedDict())
File "third_party/python/graphql/type/schema.py", line 151, in type_map_reducer
field_map = type.get_fields()
File "third_party/python/graphql/type/definition.py", line 193, in get_fields
self._field_map = define_field_map(self, self._fields)
File "third_party/python/graphql/type/definition.py", line 211, in define_field_map
).format(type)
AssertionError: Mutation fields must be a mapping (dict / OrderedDict) with field names as keys or a function which returns such a mapping
Whereas in 1.0 I do not, and it creates a schema.json that Relay cannot handle. Not super high priority, but I thought I'd log it. I can hopefully provide a failing test and maybe even a fix.
See: dittos/graphqllib#8
Finish up what was started there so we can have type validation working.
Having big mutations and queries, it would be really useful to have a new version of the library that includes graphql-python/graphql-core@cf10db6
I actually spent some time implementing it myself, only to now realise it has already been implemented without a release. So yeah, that would be a quite nice free win to have :) Thanks!
In wsgi_graphql at https://github.com/faassen/wsgi_graphql I have failing tests in the test suite when I update to graphql-core. These are caused by the handling of default_value. If you omit default_value (which I do in test_wsgi_graphql.TestSchema) it is set to None. This now causes error messages in graphql-core which weren't there in graphqllib before, and it doesn't seem to be right in general: the express-graphql tests also omit the default value, and this works fine.
You can make the tests pass by putting in a default_value of False explicitly:
args={
    'who': GraphQLArgument(
        type=GraphQLString,
        default_value=False
    )
},
I'm working on this.
@syrusakbary - I'm trying to test the generation of graphql queries by using graphql.parse
and experiencing some weird behavior:
import graphql
print(graphql.parse("""
query {
user(id: 1) { foo {
bar(arg:"2") {
id
}
}
}
}
""") == graphql.parse("""
query {
user(id: 1) { foo { bar(arg:"2") {
id
}
}
}
}
"""))
# prints False
The only difference between these two queries is the position of the bar node, but the individual results look identical:
print(graphql.parse("""
query {
user(id: 1) { foo {
bar(arg:"2") {
id
}
}
}
}
"""))
# prints Document(definitions=[OperationDefinition(operation='query', name=None, variable_definitions=[], directives=[], selection_set=SelectionSet(selections=[Field(alias=None, name=Name(value='user'), arguments=[Argument(name=Name(value='id'), value=IntValue(value='1'))], directives=[], selection_set=SelectionSet(selections=[Field(alias=None, name=Name(value='foo'), arguments=[], directives=[], selection_set=SelectionSet(selections=[Field(alias=None, name=Name(value='bar'), arguments=[Argument(name=Name(value='arg'), value=StringValue(value='2'))], directives=[], selection_set=SelectionSet(selections=[Field(alias=None, name=Name(value='id'), arguments=[], directives=[], selection_set=None)]))]))]))]))])
print(graphql.parse("""
query {
user(id: 1) { foo { bar(arg:"2") {
id
}
}
}
}
"""))
# prints Document(definitions=[OperationDefinition(operation='query', name=None, variable_definitions=[], directives=[], selection_set=SelectionSet(selections=[Field(alias=None, name=Name(value='user'), arguments=[Argument(name=Name(value='id'), value=IntValue(value='1'))], directives=[], selection_set=SelectionSet(selections=[Field(alias=None, name=Name(value='foo'), arguments=[], directives=[], selection_set=SelectionSet(selections=[Field(alias=None, name=Name(value='bar'), arguments=[Argument(name=Name(value='arg'), value=StringValue(value='2'))], directives=[], selection_set=SelectionSet(selections=[Field(alias=None, name=Name(value='id'), arguments=[], directives=[], selection_set=None)]))]))]))]))])
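A plausible explanation, modeled with a stand-in class: AST nodes record their source location, and node equality compares it too, so two layouts of the same query compare unequal even though their printed forms (which omit loc) look identical. Whether the parser here offers a way to drop locations (graphql-js has a no_location option) would need checking; the Node class below is only a model, not the library's.

```python
# Stand-in model of an AST node whose equality includes the source
# location (start/end offsets), as the parser's nodes appear to do.
class Node:
    def __init__(self, value, loc):
        self.value = value
        self.loc = loc

    def __eq__(self, other):
        return (isinstance(other, Node)
                and self.value == other.value
                and self.loc == other.loc)

# Same value at the same position -> equal; same value at different
# positions -> not equal, even though a repr that omits loc looks identical.
assert Node('bar', (10, 13)) == Node('bar', (10, 13))
assert Node('bar', (10, 13)) != Node('bar', (20, 23))
```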
When querying a GraphQL schema, the GraphQL result doesn't preserve the query's field order.
Based on this Gist:
https://gist.github.com/syrusakbary/d116212d3a1f688519c2
As of #12 we have six installed now. Let's use it.
In scalars.py, this line:

def coerce_string(value):
    if isinstance(value, bool):
        return 'true' if value else 'false'
    return unicode(value)

If the input value of a mutation is a unicode string, I get a decode error. A simple fix, unicode(value), handles this for me.
Since we use slotted classes for ASTs, we need a way to convert an AST Node to a dict. I.e. Name(value='test') should get converted to {"kind": "Name", "value": "test"}. It should convert recursively.
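A recursive sketch of such a converter. It assumes each slotted node class exposes a _fields tuple naming its attributes (true in the graphql-js port, but treat it as an assumption here); the Name class below is a stand-in, not the library's:

```python
# Sketch of a recursive AST-to-dict converter for slotted node classes.
def ast_to_dict(node):
    if isinstance(node, list):
        return [ast_to_dict(item) for item in node]
    if hasattr(node, '_fields'):
        # Assumed convention: each node class lists its attributes
        # in a _fields tuple alongside __slots__.
        result = {'kind': type(node).__name__}
        for field in node._fields:
            result[field] = ast_to_dict(getattr(node, field))
        return result
    return node  # plain scalars (str, int, None) pass through

# Stand-in node class mirroring the slotted AST shape:
class Name(object):
    __slots__ = ('value',)
    _fields = ('value',)
    def __init__(self, value):
        self.value = value

assert ast_to_dict(Name('test')) == {'kind': 'Name', 'value': 'test'}
```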
I'm running into a bug using the async/await keywords and the AsyncioExecutor. Here is my test:
from graphql.execution.executors.asyncio import AsyncioExecutor
import graphene

class Query(graphene.ObjectType):
    recipes = graphene.String()

    async def resolve_recipes(self, *_):
        return 'hello'

schema = graphene.Schema(
    query=Query,
    executor=AsyncioExecutor()
)

executed = schema.execute("""
query {
    recipes
}
""")

print(executed.data)
print(executed.errors)
which results in the following messages logged in my terminal:
/Users/alec/bin/python/graphql/execution/executor.py:110: RuntimeWarning: coroutine 'Query.resolve_recipes' was never awaited
result = resolve_field(exe_context, parent_type, source_value, field_asts)
OrderedDict([('recipes', '<coroutine object Query.resolve_recipes at 0x10eb0f360>')])
[]
Test against the latest pypy.
I've run into a bug with the AsyncioExecutor when trying to run a query from an application where the event loop is already running. I think it's because the executor's implementation of wait_until_finished uses run_until_complete, which starts up the event loop and waits. Since the event loop was already running, it threw an exception when I tried to start it a second time.
Do you have a good solution to hook in some sort of error reporting? I'm thinking maybe some way to hook into GraphQLLocatedError when it is generated, but I don't know how to do that without forking the repo... Python n00b.
Also: for questions like this, is there a better place to ask? I don't see a Slack channel or any other place to put this type of question :(
I'd like to be able to resolve a graphql query with async def resolvers from inside a coroutine.
This used to work with the old AsyncioExecutionMiddleware (I was using version 0.4.12.1), but the new AsyncioExecutor seems like it's meant to be run from outside a coroutine and internally calls loop.run_until_complete itself. My request handler is already a coroutine, so I'd like to just call:

result = await execute(...)
I'm working on this.
Schema should sort these if they are not ordered dicts. Python doesn't guarantee a consistent ordering for keys in a dict, so if we are provided a dict that doesn't guarantee ordering, we will sort the values when inserting them into the OrderedDict.