hpyproject / hpy
HPy: a better API for Python
Home Page: https://hpyproject.org
License: MIT License
Right now hpy doesn't have an API to acquire or release the GIL.
It would be useful for C extensions to have one which acquires/releases the GIL on Python implementations that have a GIL, and which is a no-op on implementations that do not.
This is an attempt to start a discussion on how to implement function calls and function definitions. In the following, "functions" and "methods" are used interchangeably.
First, we need to identify who are the interested players and use cases:
Authors of C extensions who write C functions. Let's call them C-writers.
Authors of C extensions who call Python functions. Let's call them C-callers.
Generators of C extensions. Let's call them "cython" but it applies also to others.
Python implementations with a JIT. Let's call them "pypy", but it applies also to others.
The basic idea is that we want to allow C-writers to write functions with a C-level signature, and generate the logic to do argument parsing automatically. E.g.:
long my_add_impl(long x, long y) { return x+y; };
AUTOMAGICALLY_GENERATE_PYTHON_FUNCTION("my_add", my_add_impl);
The precise details of how to implement AUTOMAGICALLY_GENERATE_PYTHON_FUNCTION are the subject of this discussion.
Writing functions in this way is optional. It will always be possible to write functions with the usual calling conventions such as HPyFunc_VARARGS and to do the argument parsing manually.
There must be a way to quickly check whether a Python callable supports a given C-level signature and to get the underlying function pointer. "pypy" can use this to generate code which completely bypasses the Python argument parsing logic, and "cython" could use it to emit a fast path when it statically knows the C types of the arguments.
Ideally, this should be integrated with the C API to call functions. E.g., if a C-caller calls HPy_Call(my_add, "ll", 4, 5), an implementation should be able to bypass argument parsing and call my_add_impl directly.
Bonus point if we find a way to implement goal 3 also on CPython.
It should be possible to do things manually: i.e., a C-writer could write their own argument parsing code and still be able to declare a C-level signature. In that case, they need to ensure that their argument parsing code does the "correct things". In particular, when a call is made from Python like my_add(4, 5), the implementation should be free to decide whether to call the generic version or the C-specialized overload.
To be discussed: we need to decide whether we want to support "overloads" or not. I.e., a given Python function could in principle support many different C-level signatures. I think that all the following options are reasonable:
Support at most one C-level signature. Functions which can't be encoded this way need to be written in the "old style" and parse arguments manually.
Support at most N C-level signatures, where N is a small number like 4. If you require more than N, you have to write parsing manually.
Support a potentially unlimited number of C-level signatures.
CPython already has something similar: it is called Argument Clinic and AFAIK it's used only internally. You have to write special comments in the C code to specify the signature of your function, then you run a Python script which edits the C files and adds the relevant autogenerated code. In the following, when I talk about "Argument Clinic" I don't necessarily mean the very same clinic.py as CPython's. We will probably have to write our own version, but the concept is the same.
On the other hand, HPy so far has relied on macros to generate pieces of code. Consider the following example:
HPyDef_METH(double_obj, "double", double_obj_impl, HPyFunc_O)
static HPy double_obj_impl(HPyContext ctx, HPy self, HPy obj)
{
return HPy_Add(ctx, obj, obj);
}
Here, HPyDef_METH is a macro which, among other things, generates a small trampoline to convert from the CPython calling convention to the HPy calling convention. It generates something similar to this (in the CPython-ABI case):
static PyObject *
double_obj(PyObject *self, PyObject *arg)
{
return _h2py(double_obj_impl(_HPyGetContext(), _py2h(self), _py2h(arg)));
}
So one option is to extend this functionality to generate also the argument parsing logic (more on that later).
Both options have pros and cons, IMHO:
Clinic PRO: it is already known by CPython devs and it seems to work well.
Clinic PRO (very futuristic): it will be easier for CPython itself to migrate to HPy.
Clinic CON: C-writers might dislike the fact that an external script modifies their C source, potentially cluttering them with lines and lines of obscure code. It might be possible to put all the generated code into a separate file though.
Macros PRO: they just work with any C compiler. The C code which you have to write is probably more compact and/or nicer to read.
Macros CON: probably we will be more limited in the complexity of logic we can generate (maybe it's a PRO :)).
Macros CON: compiler errors are potentially more obscure, although so far in HPy we have managed to get very good compiler errors in response to common mistakes, at least with gcc.
Macros CON: we need to put an upper bound to the number of arguments, because we need to autogen a file which contains the macros for all possible C signatures. We also need to check whether this impacts compilation time negatively.
There are at least two ways to encode/specify C signatures:
use a C string: this is more or less equivalent to what you can pass to HPyArg_Parse, although we need to extend the notation to specify the return type. E.g., "d ll" could mean double _(long, long).
extend the enum HPyFunc_Signature to support many more signatures. So, in addition to HPyFunc_VARARGS, HPyFunc_NOARGS etc., you have e.g. HPyFunc_d_ll which corresponds to double _(long, long). In this scenario each signature is represented by a single int64_t value, and there are at least a couple of variations for how to encode it:
if we decide that we are happy to support only long, double, HPy and void, we can encode a single type in 2 bits. So, we can specify signatures of up to 31 arguments (2 bits are reserved for the return type), maybe a bit less if we want to save some bits to encode other interesting features (e.g. whether the function supports varargs or keywords).
Use 8 bits for each param: with this we can support many more types, but we are limited to signatures up to ~7 arguments, maybe 6 if we want to reserve some bits for other features. 6-7 arguments are enough to cover the vast majority of functions though: if a function wants to use more args, it has to do argument parsing by itself.
Pros/cons of each approach:
C string PRO: easy to understand, very flexible.
C string CON: it's impossible to do any compile-time type check.
C string CON: I think it's impossible to implement it with macros. The current approach of using HPyFunc_* works because we can write "specialized" versions of HPyFunc_TRAMPOLINE for each possible value of HPyFunc_Signature, but I have no clue how to do it with a generic C string using macros. So, if we choose this, we are automatically choosing Argument Clinic.
HPyFunc PRO: checking whether a callable supports a given signature is very quick, since you just compare two ints. Doing the same check with strings is probably slower because you need a strcmp.
HPyFunc PRO: works with the macros approach.
HPyFunc PRO: we can probably write something which does compile-time checks of the argument types.
HPyFunc CON: much less flexible than C strings. The C syntax for calling is probably also less nice, e.g. HPy_Call(HPyFunc_d_ll, 4, 5) vs HPy_Call("ll", 4, 5).
HPyFunc CON: if we use the macros approach, we probably need to generate a huge header with all the macros definitions, which might impact the build time.
We can also adopt a hybrid approach: the user-facing API takes and receives C strings for signatures, but internally we represent them as an encoded int64_t. This should make runtime signature checks faster (but it's probably a good idea to do some benchmarks).
Another open question is what to do with return types. Consider the example above in which I have the function my_add whose signature is "d ll" (i.e., double _(long, long)):
HPy res = HPy_Call("ll", my_add, 4, 5);
in this case, the return type is HPy. But what if I want to call the C function directly and get a double back, without having to box it? Cython surely needs an API to do that. So maybe something like this:
double result;
double (*fnptr)(long, long) = (double (*)(long, long))HPyFunc_Try("d ll", my_add);
if (fnptr)
result = fnptr(4, 5);
else
result = HPyFloat_AsDouble(HPy_Call("ll", my_add, 4, 5));
Suggestions for a better name instead of HPyFunc_Try are welcome.
Why do we need fast runtime signature checks? I can think of at least two use cases:
HPy_Call: you can add a fast path: if the callable supports the given signature, you call it directly, bypassing the boxing/unboxing. But it is unclear whether this is doable, since HPy_Call knows only the types of the arguments, not the type of the result.
HPyFunc_Try: see above, this is needed by Cython.
Note that this doesn't apply to "pypy": assuming that the callee is known, the JIT can do the signature check at compile time, so it doesn't have to be particularly efficient.
HPyFunc_*
Currently the enum HPyFunc_Signature defines ~30 signatures which are used by methods and slots. We need to understand whether they represent the same thing as the C-level function signatures or whether they are completely different beasts. E.g., HPyFunc_O is basically equivalent to "O O", HPyFunc_BINARYFUNC to "O OO", HPyFunc_INQUIRY to "i O", etc.
Let's try to turn this into something more concrete. At the moment, I am not happy with either of those, though. The following is a sketch proposal which uses the "Argument Clinic" and "C string" approaches described above:
/*[hpy-clinic input]
my_add
return: "d"
a: "l"
b: "l"
add two numbers together
[hpy-clinic start generated code]*/
... code generated by hpy-clinic ...
/*[hpy-clinic end generated code]*/
static double my_add_impl(HPyContext ctx, long a, long b)
{
return (double)(a+b);
}
What I don't like too much about this approach is that it's completely different from the HPyDef_METH that you use for non-Argument-Clinic methods. Maybe something like this, in which we put the generated code BEFORE the call to HPyDef_METH_CLINIC? But note that in this way you lose the names of the arguments:
/*[hpy-clinic start generated code]*/
...
/*[hpy-clinic end generated code]*/
HPyDef_METH_CLINIC(my_add, "my_add", my_add_impl, "d ll")
static double my_add_impl(HPyContext ctx, long a, long b)
{
return (double)(a+b);
}
The following integrates very well with the existing API, with all the pros&cons described in the sections above.
HPyDef_METH(my_add, "my_add", my_add_impl, HPyFunc_d_ll)
static double my_add_impl(HPyContext ctx, long a, long b)
{
return (double)(a+b);
}
EDIT: s/4 bits/2 bits in the "How to encode C signatures" section
Sorry to bother with such a silly thing.
In setup.py (https://github.com/hpyproject/hpy/blob/master/setup.py#L18), there is
gitrev = subprocess.check_output('git describe --abbrev=7 --dirty '
'--always --tags --long'.split(), encoding='utf-8')
so that hpy cannot be installed from source without the Git repository.
This seems quite uncommon: I'm able to use hg-git for many Python-related projects on GitHub.
I modified setup.py to support also hg-git (master...paugier:do-not-assume-git-repo). Do you think something like that could be included in master (yes, it is a bit specific 🙂)? Or could there be another solution to install hpy from source without the Git repository?
Is the Git revision used for anything important in HPy?
PyBytes_FromStringAndSize creates an uninitialized PyBytes object when called as PyBytes_FromStringAndSize(NULL, n). Since the underlying bytes can't be initialized later from the HPy API, we should return HPy_NULL and set an exception instead.
This is already done in the PyPy implementation -- see https://foss.heptapod.net/pypy/pypy/-/merge_requests/778.
We will probably need to discuss tons of details and design decisions. Which tools should we use to keep track of them? I am not very familiar with "modern" tools, as in PyPy we are still using mostly IRC and mailing list 😅
mailing list: should we make a new one or reuse capi-sig?
should we use an IRC channel or something more modern like slack? Other ideas?
should we discuss design ideas on the ML or e.g. in github issues?
It would be great if the API could (at least as an alternative) stick to "normal" functions, i.e. no varargs, no macros and no global variables.
In particular, there should at least be an alternative to HPyArg_Parse with a signature like
int HPyArg_Parse2(HPyContext ctx, HPy *args, HPy_ssize_t nargs,
const char *fmt, void** targets);
Also, there could be a set of functions to get the constants:
HPy HPyConst_None();
Lastly, there doesn't seem to be a way to define modules and methods "programmatically" right now. Even if this is less efficient, it would nevertheless help a lot.
The reason I'm asking for these things is my involvement with pythonnet, which currently has to translate the C macros manually to C#, with the obvious possibility of breakage down the line and an annoying version-dependence. Also, .NET's P/Invoke mechanism (like, I guess, many FFI implementations) only really supports access to functions, and only to those without varargs. Varargs have the additional annoying property that they are ABI-dependent, so we'd have to model that per platform as well.
This is related to #149 but I prefer to open a new issue because it's about API design rather than "just" a documentation issue.
The original idea of HPyContext was to be fully opaque, and as such we chose a type name which doesn't reveal that it is a pointer, because it was not important for the user to know (a bit like Windows' HANDLE). But later we started to add ctx->h_None & co., meaning that the user is expected to know it's a pointer.
So, for clarity I propose to rename all HPyContext to HPyContext * (and tweak the typedef appropriately, of course). This obviously breaks all existing code, but fortunately we don't have that much around, and a simple find&replace should cover 99.9% of the cases anyway.
I would like to port to HPy a tiny extension providing a (very very limited) Numpy-like array class: https://github.com/paugier/piconumpy
The project is explained in the README, which also contains a list of the functions of the CPython C-API used in PicoNumpy.
Do you think HPy will soon be mature enough for HPy ports of such tiny extensions?
The "cast" in HPy_Cast and HPy_CastLegacy suggests that these functions are merely type casts, when in fact they map an HPy handle to the custom type data associated with it.
It would be good to have a more accurate name. The leading contender is currently HPy_AsStruct.
When we rename these API methods, we should also rename the macro HPy_CUSTOM_CAST.
Victor Stinner pointed me to this project.
If this is ever accepted, I think it's important for CPython to use this API itself for implementing external modules (e.g. functools or json). I'm regularly annoyed that the designers of the C API in CPython are often not users of that API, so they make bad choices and don't see some problems with the API (I'm talking about problems as users of the API, not as implementers of that API).
When using the universal ABI, extension modules inside packages get their __name__ directly from the value set in moduledef.m_name. However, according to CPython, __name__ should look like package.extension_name in such a case, even if the C code only has .m_name = "extension_name",. IOW, applying the following diff, the test should pass (and does indeed pass with the cpython ABI).
diff --git a/proof-of-concept/pofpackage/foo.c b/proof-of-concept/pofpackage/foo.c
index 1480bc7..eea2134 100644
--- a/proof-of-concept/pofpackage/foo.c
+++ b/proof-of-concept/pofpackage/foo.c
@@ -13,7 +13,7 @@ static HPyDef *module_defines[] = {
};
static HPyModuleDef moduledef = {
HPyModuleDef_HEAD_INIT,
- .m_name = "pofpackage.foo",
+ .m_name = "foo",
.m_doc = "HPy Proof of Concept",
.m_size = -1,
.defines = module_defines
diff --git a/proof-of-concept/test_pof.py b/proof-of-concept/test_pof.py
index e0f0c53..8fe3307 100644
--- a/proof-of-concept/test_pof.py
+++ b/proof-of-concept/test_pof.py
@@ -19,4 +19,5 @@ def test_point():
assert repr(p) == 'Point(?, ?)' # fixme when we have HPyFloat_FromDouble
def test_pofpackage():
+ assert pofpackage.foo.__name__ == 'pofpackage.foo'
assert pofpackage.foo.hello() == 'hello from pofpackage.foo'
We need to think about custom object types with a structure declared in C. We don't want to repeat CPython's mistake about explicit destructors everywhere containing Py_DECREF() and a hack on top of that for reference cycles. There must be a better solution, which would also allow an implementation like PyPy's which doesn't need to walk most dead objects at all.
I would suggest to declare local variables and arguments of type "POD" (Python Object Descriptor, or handles), and use a different "PORef" type for fields in objects (Python Object Reference).
PORef_Store(PORef *pref, POD o); /* '*pref = o;' with conversion */
PORef_Clear(PORef *pref); /* '*pref = NULL;' */
PORef_Load(PORef *pref); /* returns a new POD, which must be closed */
Then it would be mandatory to write an equivalent of tp_traverse to list all PORefs in an object. The main difference between POD and PORef is that a POD must be closed when no longer in use, but not a PORef. A PORef is alive if and only if it is stored in an object, the tp_traverse of this object is called, and that function lists the PORef.
Of course, the implementation of both POD and PORef is PyObject * under the hood on CPython. On PyPy a POD could be an integer index in some global list, but a PORef could be a real pointer which might be internally changed during tp_traverse.
The CPython API uses void * for function pointers. However, in strict standard C, these are not the same thing. It would be nice to resolve this issue in HPy.
Practical considerations:
See encukou/abi3#14 for the same issue on CPython.
See #23 for an initial attempt at making this change in HPy.
It's going to be hard to ensure that all functions are available on all Python versions of all Python implementations. Dummy example: some Python 3 functions of the C API are not available in the Python 2 API, and some functions introduced in Python 3.8 are not available in Python 3.6.
The first use case would be to compile a C extension to target Python version X.Y or newer. Newer functions would not be available.
We might have experimental APIs. For example, should we expose an API to access CPython 3.8's vectorcall?
Does someone know a similar concept in mature popular APIs like the Windows API, Qt, glib or something else?
Right now, anyone with write privileges can merge a PR even if it isn't ready.
We should block that to avoid any mistakes.
I wanted to label a PR and almost merged it by accident.
We need to pick some APIs to provide for calling Python functions from HPy.
The existing C API has a large zoo of methods for making functions calls. The zoo is documented at https://docs.python.org/3/c-api/call.html#object-calling-api.
Under the hood there are two main calling conventions -- tp_call and vectorcall. I propose that we largely ignore these and instead focus on what fits well with the Python language syntax and what is convenient to call from C.
Of the zoo, PyObject_Call most clearly matches a generic f(*args, **kw) in Python, and I propose that we adapt it as follows for HPy:
/* Call a callable Python object, with arguments given by the tuple args, and named arguments
given by the dictionary kwargs.
ctx: The HPy execution context.
args: May be HPy_NULL if no positional arguments need to be passed. Otherwise it should be
a handle to a tuple.
kwargs: May be HPy_NULL if no keyword arguments need to be passed. Otherwise it should be
a handle to a dictionary of keyword arguments.
Return the result of the call on success, or raise an exception and return HPy_NULL on failure.
This is the equivalent of the Python expression: callable(*args, **kwargs).
*/
HPy HPy_Call(HPyContext ctx, HPy callable, HPy args, HPy kwargs);
Note that this differs from PyObject_Call in that args may be NULL. I'm not sure why this was a requirement in the existing API.
At some point in the future we might want to implement Py_BuildValue to allow tuples of C values to be constructed easily (and maybe even something similar for easily constructing dictionaries from C values).
PyObject_VectorcallDict closely matches the signature we chose for HPyFunc_KEYWORDS methods, but while this is convenient for receiving arguments from Python, I'm not convinced it's a great fit for calling Python functions from C, because one has to construct a dictionary of keyword arguments.
PyObject_Vectorcall takes only the names of the keyword arguments (as a tuple), which seems slightly more convenient.
All of the vectorcall methods have the strange behaviour that nargs also functions as a flag selecting whether args[-1] (or args[0] for PyObject_VectorcallMethod) may temporarily be overwritten by the called function. I propose that we ensure we do NOT copy this behaviour.
The best suggestion I have at the moment for a convenient C-like calling convention is:
/* Call a callable Python object with positional and keyword arguments given by C arrays.
ctx: The HPy execution context.
args: An array of positional arguments. May be NULL if there are no positional arguments.
nargs: The number of positional arguments.
kwnames: An array of keyword argument names given as UTF8 strings. May be NULL if there
are no keyword arguments.
kwargs: An array of keyword argument values. May be NULL if there are no keyword arguments.
nkwargs: The number of keyword arguments.
Return the result of the call on success, or raise an exception and return HPy_NULL on failure.
This is the equivalent of the Python expression: callable(*args, **kwargs).
*/
HPy HPy_CallArray(HPyContext ctx, HPy callable, HPy *args, HPy_ssize_t nargs, const char **kwnames, HPy *kwargs, HPy_ssize_t nkwargs);
I'm uncertain whether this is too big a departure from existing APIs or whether there are better options.
Open question: should these calls check HPyCallable_Check at the same time? We'll also need HPy_CallMethod and HPy_CallMethodArray, but I propose that we decide on these first and then ensure those match later.

As the title says:
$ cd proof-of-concept
$ rm build -rf
$ python3 setup.py --hpy-abi=universal build_ext --inplace
...
writing hpy universal stub loader for pofpackage.foo to build/lib
copying build/lib/pof.hpy.so ->
writing hpy universal stub loader for pof to build/lib
error: build/lib/pof.py already exists! Please delete.
cc @hodgestar
This is an idea which just came to my mind, let me write it down not to forget.
To create a type, we currently need to define an HPyType_Spec and then manually call HPyType_FromSpec in the module init, as you do on CPython.
However, for the vast majority of cases this is unnecessarily complicated. Let's support something like this:
HPyType_Spec PointType_spec = { ... };
HPyDef_TYPE(Point_def, "Point", PointType_spec);
static HPyDef *module_defines[] = {
&Point_def,
NULL
};
static HPyModuleDef moduledef = {
HPyModuleDef_HEAD_INIT,
.m_name = "foo",
.defines = module_defines
};
The semantics would be to call HPyType_FromSpec and put the result inside the module dictionary, of course.
ATM, there is no way to properly install a module that uses the universal ABI.
At first glance, python setup.py --hpy-abi=universal install looks like it might work. But while it does create a valid pof.hpy.so and sticks it in site-packages/, the pof.py shim that setuptools creates isn't HPy-aware.
In any case, that solution uses the deprecated .egg format, and we should rather find something that works with pip.
The current C-API provides lots of functions which are duplicate, the only difference being whether they do a typecheck of the argument or not.
E.g., PyTuple_GET_SIZE vs PyTuple_Size, PyTuple_GET_ITEM vs PyTuple_GetItem, etc.
The following idea tries to:
The idea is to have special C types of handles which represent specialized objects and/or protocols, e.g. HPyTuple. You can cast an HPy to HPyTuple with or without typechecks. Functions like HPyTuple_Size take an HPyTuple argument, so it is impossible to call them with an HPy. For example:
typedef struct { long _i; } HPy;
typedef struct { long _i; } HPyTuple;
#define HPyTuple_CAST(x) ((HPyTuple){x._i})
void print_tuple(HPyContext ctx, HPy obj)
{
    // no typecheck, we assume that the user knows that obj is a tuple
    //HPyTuple t = HPyTuple_CAST(obj);
    // typecheck. "obj" is automatically closed
    HPyTuple t = HPyTuple_Check(ctx, obj); // or maybe HPyTuple_TryCast?
    if (HPy_IsNull(t))
        return;
    // the HPyTuple_* functions DO NOT do any form of typechecking
    long n = HPyTuple_Size(ctx, t);
    for (long i = 0; i < n; i++) {
        HPy item = HPyTuple_GetItem(ctx, t, i);
        HPy_Print(ctx, item);
        HPy_Close(ctx, item);
    }
    HPy_Close(ctx, t);
}
CPython 3.9.0 has been out for 4 months, we should support it properly.
Although the official CPython interpreter does not have GC and GIL restrictions, it is still an official product. If hpy can replace the CPython interpreter, will it go into the official domain?
I called it pyhandle for now, but we might want a better name. We surely need a reasonable naming convention: I suggest that the new API calls should not start with Py, to make it easier to distinguish the new ones from the old ones.
Let's take PyObject_GetAttr as an example; some random ideas for the new name:
PyHandle_GetAttr: this starts with Py, probably not a good idea
PHandle_GetAttr: no more Py :)
NPyObject_GetAttr: NPy stands for "New Python", but might be confused with numpy
NPy_GetAttr: if we want to kill the distinction between the object protocol, number protocol etc., we might want this
BPyObject_GetAttr: "Better Python"; hard to pronounce
YpObject_GetAttr: I like the joke of being a "reversed Py", not sure I like the pronunciation though :)
I'd like to hear new ideas
#127 and #179 made it official that we can't create "half-ready" strings which are supposed to be filled after creation.
However, there are cases in which this is a legitimate use case and currently it is not supported by HPy. We should:
add a StringBuilder, similar to the existing TupleBuilder and ListBuilder
improve the error message of HPyBytes_FromStringAndSize to point the user to the right solution, something like ... please use StringBuilder instead.

Looking forward to it.
What would be a good function / series of functions for a prototype?
I get the idea of this project, but nothing is more convincing than an actual example of what this would look like.
Suggestion of an example:
// My cool C code
#include <HPy.h>
HPy* MyCoolFunction(HPy* obj1, HPy* obj2) {
double a = HPy_get_float(obj1);
// TODO: delete this object? Who is the owner of this pointer?
double b = HPy_get_float(obj2);
// TODO: error handling of float retrieval??
HPy* res = Hpy_new_integer(2 + a + b);
return res;
}
Is there a reason why there is no hpy/__init__.py file, turning hpy into a "namespace" package? Apparently that triggers a bug in "setup.py install" that seems not to handle this properly. I'd vote for avoiding namespace packages if setuptools doesn't reliably support them.
Destroy functions have signature void (*tp_destroy_fun_t)(void *obj), where obj is the pointer to the native memory associated with the object to destroy.
Notably, the destroy function also does not receive an HPy context.
The (implicit) restrictions given by the signature actually suggest that the runtime would be able to execute destroy functions concurrently (without the need of acquiring the GIL).
However, as far as I know, HPy does not currently specify anything concerning concurrency, or about destroy functions in general. As we learned in the past, users will rely on non-specified but runtime-specific behavior.
E.g., users could rely on the GIL being held when the destroy function runs and thus not do any further synchronization.
I propose that we should specify two important restrictions for destroy functions:
I am trying to build piconumpy with the flag --hpy-abi=universal. I use python setup.py --hpy-abi=universal build_ext -if and I get the following error:
running build_ext
building 'piconumpy._piconumpy_cpython_capi' extension
C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC
creating build
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/piconumpy
compile options: '-I/home/pierre/.pyenv/versions/3.8.2/include/python3.8 -c'
extra options: '-Wfatal-errors -Werror'
gcc: piconumpy/_piconumpy_cpython_capi.c
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/piconumpy
gcc -pthread -shared -L/home/pierre/.pyenv/versions/3.8.2/lib -L/home/pierre/.pyenv/versions/3.8.2/lib build/temp.linux-x86_64-3.8/piconumpy/_piconumpy_cpython_capi.o -o build/lib.linux-x86_64-3.8/piconumpy/_piconumpy_cpython_capi.cpython-38-x86_64-linux-gnu.so
building 'piconumpy._piconumpy_hpy' extension
C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC
creating build/temp.linux-x86_64-3.8/home
creating build/temp.linux-x86_64-3.8/home/pierre
creating build/temp.linux-x86_64-3.8/home/pierre/Dev
creating build/temp.linux-x86_64-3.8/home/pierre/Dev/hpy
creating build/temp.linux-x86_64-3.8/home/pierre/Dev/hpy/hpy
creating build/temp.linux-x86_64-3.8/home/pierre/Dev/hpy/hpy/devel
creating build/temp.linux-x86_64-3.8/home/pierre/Dev/hpy/hpy/devel/src
creating build/temp.linux-x86_64-3.8/home/pierre/Dev/hpy/hpy/devel/src/runtime
compile options: '-DHPY_UNIVERSAL_ABI -I/home/pierre/Dev/hpy/hpy/devel/include -I/home/pierre/.pyenv/versions/3.8.2/include/python3.8 -c'
extra options: '-Wfatal-errors -Werror'
gcc: piconumpy/_piconumpy_hpy.c
gcc: /home/pierre/Dev/hpy/hpy/devel/src/runtime/argparse.c
gcc -pthread -shared -L/home/pierre/.pyenv/versions/3.8.2/lib -L/home/pierre/.pyenv/versions/3.8.2/lib build/temp.linux-x86_64-3.8/piconumpy/_piconumpy_hpy.o build/temp.linux-x86_64-3.8/home/pierre/Dev/hpy/hpy/devel/src/runtime/argparse.o -o build/lib.linux-x86_64-3.8/piconumpy/_piconumpy_hpy.cpython-38-x86_64-linux-gnu.so
copying build/lib.linux-x86_64-3.8/piconumpy/_piconumpy_cpython_capi.cpython-38-x86_64-linux-gnu.so -> piconumpy
error: can't copy 'build/lib.linux-x86_64-3.8/piconumpy/_piconumpy_hpy.hpy.so': doesn't exist or not a regular file
Instead of creating build/lib.linux-x86_64-3.8/piconumpy/_piconumpy_hpy.hpy.so, it creates build/lib.linux-x86_64-3.8/piconumpy/_piconumpy_hpy.cpython-38-x86_64-linux-gnu.so.
Am I doing something wrong?
I got this error when trying to install the antocuni/update-to-latest-hpy branch of ultrajson-hpy (hpy version fdc6047):
$ python setup.py --hpy-abi=universal install
[... build goes smoothly ...]
creating stub loader for ujson_hpy.hpy.so
stub file already created for ujson_hpy.hpy.so
error: file '/home/antocuni/pypy/misc/ultrajson-hpy/build/bdist.linux-x86_64/egg/ujson_hpy.py' does not exist
The same command succeeds for proof-of-concept, so I think it's something inside ujson's setup.py.
I think it might be related to cmdclass, but I didn't investigate:
setup(
name = 'ujson-hpy',
...
cmdclass = {'build_ext': build_ext, 'build_clib': build_clib_without_warnings},
)
/cc @hodgestar as he's our setuptools expert by now :)
We're only testing proof-of-concept/ in native CPython mode, but we should also make sure that the universal mode is actually usable (which it isn't right now, see #85).
One of our goals is to get numpy to use HPy. We expect this to become the "killer feature" of HPy from the user point of view: a version of numpy which can run fast on alternative implementations, with PyPy being the primary example. This is useful on its own and enables the HPy-ification of the rest of the scientific ecosystem. Moreover, it will help us to drive the design of HPy itself, by using it in a real-world scenario.
Ultimately, when HPy is mature, we hope to convince the numpy maintainers to use it, but for now this would be developed in a fork.
List of subtasks:
Hi, I don't see anything about sub-interpreters in the README. I know there's no C API for them yet and PEP 554 is still a draft: https://www.python.org/dev/peps/pep-0554/
But how would sub-interpreters integrate with the project, and is it planned to integrate anything equivalent from the start?
I can think of:
Pyston is a new Python implementation which should be compatible with CPython.
If we want to allow interop with any implementation, we should test this against their new interpreter as well.
After my email to hpy-dev, @arigo and I discussed a possible solution on IRC.
Here is a summary of the solution we found:
To talk more concretely, let's start from existing Python/C code like this:
typedef struct {
    PyObject_HEAD
    int x;
    int y;
} PyPointObject;

// this is a method of Point
PyObject* Point_foo(PyObject *self, PyObject *arg)
{
    PyPointObject *p = (PyPointObject *)self;
    // ...
}

// this is an unrelated function which happens to cast a PyObject* into a
// PyPointObject*
void bar(PyObject *point)
{
    PyPointObject *p = (PyPointObject *)point;
    // ...
}
The idea is that an HPy type (as described by HPyType_Spec) can be:

Pure HPy:
- struct PyPointObject does NOT contain any header.
- all methods and slots are written in HPy, and .legacy_slots == NULL.
- if we obtain a PyObject * (e.g. via HPy_AsPyObject()), it is NOT possible to cast it to PyPointObject *.

Legacy type:
- struct PyPointObject MUST start with PyObject_HEAD
- HPyType_Spec MUST contain .legacy_headersize = offsetof(PyPointObject, x) (where x is the first field), and it is allowed to use .legacy_slots
- It is possible to cast PyObject * to PyPointObject *
Properties of this solution:
- We pay the space penalty for PyObject_HEAD only if it's explicitly requested.
- When we kill PyObject_HEAD, .legacy_headersize automatically becomes 0, and HPyType_FromSpec will immediately complain if we are still using .legacy_slots.
- The C-casts inside legacy methods of the type (such as Point_foo above) are automatically valid, since if we kill PyObject_HEAD we can't have legacy methods (as explained above).
The remaining problem of this solution is how to "cast" an HPy handle into a PyPointObject* efficiently. Currently, we have _HPy_Cast(), but it is impossible to implement it efficiently, because we need to know whether the type is pure or legacy, and we don't want to pay the cost of looking up the type at runtime. So, we kill _HPy_Cast and introduce two new API functions: HPy_CastPure and HPy_CastLegacy. Both of them can be implemented efficiently at runtime, and as an additional bonus we can actively check that we are not calling them on the wrong type in debug mode.
In order to simplify daily usage, we will introduce/document the convention that the author of PyPointObject * will also define a macro:
#define PyPointObject_CAST(ctx, h) ((PyPointObject*)HPy_CastLegacy(ctx, h))
This way, the rest of the code can simply call PyPointObject_CAST without having to worry whether it's a pure or legacy type. Then, when the porting is completed, we can kill PyObject_HEAD and simply adapt the macro to use HPy_CastPure. If we forget and/or use the wrong HPy_Cast* call, the debug mode will catch the error very soon.
There is a remaining problem, though. The C-casts in unrelated functions (such as bar above) are problematic: there is no nice/automatic way to detect/avoid/emit a warning in case of a cast from PyObject * to PyPointObject *. Once we kill PyObject_HEAD, all those casts become invalid and will result in subtle bugs/segfaults. One easy way to mitigate the problem is to rename struct PyPointObject into e.g. struct HPyPointObject, and add typedef HPyPointObject PyPointObject: the new hpy-friendly code will use HPyPointObject, while the old code can continue to use PyPointObject. When we finally kill PyObject_HEAD we also kill the typedef: this way, if by chance there is any code around which still casts to PyPointObject, we will get a nice compilation error.
So to summarize, the hpy-ification of the original code will start like this:
// step 1 of the porting: start to hpy-ify the code
typedef struct {
    PyObject_HEAD   // will be killed
    int x;
    int y;
} HPyPointObject;

typedef HPyPointObject PyPointObject; // will be killed
#define HPyPointObject_CAST(ctx, h) ((HPyPointObject*)HPy_CastLegacy(ctx, h))

// this is a LEGACY method of Point
PyObject* Point_foo(PyObject *self, PyObject *arg)
{
    PyPointObject *p = (PyPointObject *)self; // still works
    // ...
}

void bar(PyObject *point)
{
    PyPointObject *p = (PyPointObject *)point; // still works
    // ...
}

HPyType_Spec PointType_Spec = {
    .name = "Point",
    .legacy_headersize = offsetof(HPyPointObject, x),
    .legacy_slots = { ... }
};
Then, we can hpy-ify some of the code:
// step 2
typedef struct {
    PyObject_HEAD   // still a legacy type for now
    int x;
    int y;
} HPyPointObject;
...
HPyDef_METH(...)
HPy Point_foo(HPyContext ctx, HPy self, HPy arg)
{
    HPyPointObject *p = HPyPointObject_CAST(ctx, self);
    // ...
}
...
Finally, we can turn the type into a pure-hpy type:
typedef struct {
    //PyObject_HEAD // KILLED!
    int x;
    int y;
} HPyPointObject;

// typedef HPyPointObject PyPointObject; // KILLED!

// note: this is calling HPy_CastPure now
#define HPyPointObject_CAST(ctx, h) ((HPyPointObject*)HPy_CastPure(ctx, h))
...
HPyType_Spec PointType_Spec = {
    .name = "Point",
    // .legacy_headersize = offsetof(HPyPointObject, x), // KILLED!
    // .legacy_slots = { ... } // KILLED!
    ...
};
At this point, bar will no longer compile because struct PyPointObject no longer exists. Assuming it's still part of legacy code which manages PyObject *, it will need to be manually adapted in this way:
// bar is legacy code, so it now needs a ctx to convert the PyObject*
void bar(HPyContext ctx, PyObject *point)
{
    HPy h_point = HPy_FromPyObject(ctx, point);
    HPyPointObject *p = HPyPointObject_CAST(ctx, h_point);
    // ...
    HPy_Close(ctx, h_point);  // don't leak the handle
}
Open question: should we find a better name for HPy_Cast{Pure,Legacy}? They don't really do a "cast" (the actual cast is done by the macro); rather, they return a pointer to the C struct. Maybe something like HPy_GetStructImplPtr is more appropriate, but I can't find any good name, so suggestions are welcome.
HPy supports two kinds of memory layouts in extension types:
When a legacy type inherits from another legacy type, or when a pure type inherits from another pure type, the situation is straight forward -- the inheriting type includes the full struct of the base type and optionally extends the struct with additional members.
Ideally HPy would simply forbid legacy types inheriting from pure types or vice versa -- having HPy magically add or remove PyObject_HEAD as needed by C slots or methods in the two types would be error prone and complex.
However, pure types may need to extend built-in types like PyLongObject, PyDictObject, or PyUnicodeObject. This is currently only partially supported: it works as long as HPy_AsStruct is not used and no deallocator (i.e. HPy_tp_destroy) is defined on the extending type.
Ideally pure types would treat other built-in types in the same way as PyObject -- i.e. all of their internal memory layout would be hidden from the C extension -- but the CPython implementation of HPy does not yet support this. Support for inheriting from other built-in types in the same way as from PyObject should be added.
It might be possible to allow pure types to extend any legacy type in the same way (by making the struct of the existing type inaccessible to the slots and methods of the pure type) but this is a stretch goal.
API.md says:
The ctx is an opaque "context" argument that stands for the current interpreter.
But it seems this is not true, because some extension code accesses substructures inside the ctx object.
From my experience with writing numerical codes in pure Python style, we lack a container for homogeneous objects. I wrote a description (https://github.com/paugier/nbabel/blob/master/py/vector.md) of a possible API of an extension that would fix this issue.
It seems to me that it would be a great achievement for HPy to show that such a project can be implemented with very good performance (first with PyPy). We could apply the extension to the N-Body problem (see https://github.com/paugier/nbabel) and show that numerical pure Python codes could be very efficient. It would be a great demonstration for HPy and moreover the extension could be useful in real life codes.
It seems to me that implementing a first limited version of the core of this extension in HPy would require much less work than porting enough of Numpy to HPy. Good results with Vector would be an awesome argument for HPy, so Vector could be a good first serious target for HPy. I would be very interested to get your point of view on these ideas.
When running pip install hpy.devel, the following error is raised:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ihpy/devel/include -Ic:\users\jani\appdata\local\programs\python\python38-32\include -Ic:\users\jani\appdata\local\programs\python\python38-32\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\cppwinrt" /Tchpy/universal/src/hpymodule.c /Fobuild\temp.win32-3.8\Release\hpy/universal/src/hpymodule.obj
hpymodule.c
hpy/universal/src/hpymodule.c(3): fatal error C1083: Cannot open include file: 'dlfcn.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.26.28801\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\jani\.virtualenvs\pyrect-fkbriomz\scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\jani\\AppData\\Local\\Temp\\pip-install-vl902zqp\\hpy-devel\\setup.py'"'"'; __file__='"'"'C:\\Users\\jani\\AppData\\Local\\Temp\\pip-install-vl902zqp\\hpy-devel\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\jani\AppData\Local\Temp\pip-record-0r3hgfqs\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\jani\.virtualenvs\pyrect-fkbriomz\include\site\python3.8\hpy.devel' Check the logs for full command output.
Using Win 10 and Python 3.8.3 32 bit.
At the last community call we talked about changing the group name to hpy. It seems that org name is taken. Any other thoughts? hpy-dev ?
Currently, it is not possible to specify a docstring if you use the HPyDef_METH macro. We need to think of a way to do it (but possibly keep it optional in case you don't want one).
The goals of HPy_New are:
Current signature and example of usage (see test/test_hpytype.py:test_HPy_New):
HPy HPy_New(HPyContext ctx, HPy h_type, void **data);
...
PointObject *point;
HPy h_point = HPy_New(ctx, cls, &point);
point->x = 7;
point->y = 3;
This works, but we get a C warning, since there is an implicit cast from PointObject** to void**.
I would like to avoid that warning. There is also a subtler hazard: if you declare PointObject point (without the star) and pass &point, the code still compiles but it's horribly wrong.
One idea which I had is to rename it as _HPy_New, and turn HPy_New into a macro:
HPy _HPy_New(HPyContext ctx, HPy cls, void **data);
#define HPy_New(ctx, cls, data) (_HPy_New((ctx), (cls), (sizeof(**data), (void**)data)))
The trick is to pass (sizeof(**data), (void**)data) as the 3rd argument: thanks to the comma operator, the result of sizeof() is ignored, but we get an error if we pass something which is not a pointer to a pointer. The error is not very clear, though:
point.h:6:65: error: invalid type argument of unary ‘*’ (have ‘PointObject’ {aka ‘struct <anonymous>’})
#define HPy_New(ctx, cls, data) (_HPy_New((ctx), (cls), (sizeof(**data), (void**)data)))
point.c:30:5: note: in expansion of macro ‘HPy_New’
HPy_New(ctx, cls, &p);
Any better idea how to handle this?
The title mentions two different problems but I am grouping them in the same issue since I think they can be solved together.
I am already working on a fix in PR #142 but I thought it was useful to report my findings here, for the future.
First problem: consider ctx_CallRealFunctionFromTrampoline:
hpy/hpy/universal/src/ctx_meth.c, lines 6 to 15 in fdc6047
In CPython-ABI mode this is not a problem because _py2h and _h2py are just no-op casts, but in universal mode _py2h allocates new handles which are never freed:
hpy/hpy/universal/src/handles.c, lines 137 to 152 in fdc6047
The proper solution is something like this:
switch (sig) {
    case HPyFunc_NOARGS: {
        HPyFunc_noargs f = (HPyFunc_noargs)func;
        _HPyFunc_args_NOARGS *a = (_HPyFunc_args_NOARGS*)args;
        HPy h0 = _py2h(a->self);
        a->result = _h2py(f(ctx, h0));
        _hclose_nodecref(h0);
        return;
    }
    // ... other cases ...
}
However, there is a better solution: the second problem is that the current implementation of handles is unnecessarily slow. I tried the ujson and piconumpy benchmarks:
In PR #142 I am experimenting with a different approach for hpy.universal. In particular, _py2h and _h2py are implemented like this:
// The main reason for +1/-1 is to make sure that if people cast HPy to
// PyObject* directly, things explode.
static inline HPy _py2h(PyObject *obj) {
    if (obj == NULL)
        return HPy_NULL;
    return (HPy){(HPy_ssize_t)obj + 1};
}
static inline PyObject *_h2py(HPy h) {
    if (HPy_IsNull(h))
        return NULL;
    return (PyObject *)(h._i - 1);
}
So, they are basically no-op casts again, and the benchmarks are much faster:
Historical note: why do we represent hpy.universal handles as indexes in a list? The original idea was to support the debug mode, so that we could easily store extra debugging info for each handle.
The idea that I am trying in PR #142 is different, i.e. to wrap a generic universal ctx into a debug ctx: debug handles become wrappers around generic opaque universal handles, and the extra info can be attached directly to the wrappers. Moreover, by doing that we pay the overhead of "heavy" handles only for the modules for which the debug mode is enabled.
EDIT: fixed the PR number
If you don't specify tp_new, CPython behaves differently depending on whether the type is a heap type or not. Consider this example:
#include <Python.h>

typedef struct {
    PyObject_HEAD
} FooObject;

static PyTypeObject Foo_Type = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "foo.Foo",
    .tp_basicsize = sizeof(FooObject),
    .tp_flags = Py_TPFLAGS_DEFAULT,
    .tp_doc = "Foo type",
    //.tp_new = PyType_GenericNew,
};

static PyType_Slot Bar_slots[] = {
    {Py_tp_doc, "Bar type"},
    {0, 0},
};

static PyType_Spec Bar_spec = {
    "foo.Bar",
    sizeof(FooObject),
    0,
    Py_TPFLAGS_DEFAULT,
    Bar_slots
};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "foo",
    "Module Doc",
    -1,
};

PyMODINIT_FUNC
PyInit_foo(void)
{
    PyObject* m;
    if (PyType_Ready(&Foo_Type) < 0)
        return NULL;
    PyObject *Bar_type = PyType_FromSpec(&Bar_spec);
    if (Bar_type == NULL)
        return NULL;
    m = PyModule_Create(&moduledef);
    if (m == NULL)
        return NULL;
    Py_INCREF(&Foo_Type);
    PyModule_AddObject(m, "Foo", (PyObject *)&Foo_Type);
    PyModule_AddObject(m, "Bar", Bar_type);
    return m;
}
If you leave Foo_Type.tp_new commented out, you cannot instantiate Foo objects, but you can instantiate Bar objects:
>>> import foo
>>> foo.Foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot create 'foo.Foo' instances
>>> foo.Bar()
<foo.Bar object at 0x7fec65982f80>
This seems to be done on purpose by CPython, inside typeobject.c:inherit_special:
What should HPyType_FromSpec do?
1. Mimic PyType_Spec: this has obvious advantages.
Additionally: if we choose any solution other than (1), and we introduce a behavior which deviates from PyType_FromSpec, we could/should consider the possibility of renaming HPyType_FromSpec.
Currently, HPy_InPlacePower is just a proxy to PyNumber_InPlacePower; however, its semantics are a bit weird. The official CPython docs say:
PyObject* PyNumber_Power(PyObject *o1, PyObject *o2, PyObject *o3)
Return value: New reference.
See the built-in function pow(). Returns NULL on failure.
This is the equivalent of the Python expression pow(o1, o2, o3),
where o3 is optional. If o3 is to be ignored, pass Py_None in its
place (passing NULL for o3 would cause an illegal memory access).
However, the actual semantics is different: if an object implements __ipow__
, the 3rd arg is always ignored. See e.g. commit 79077d1 and the relevant PyPy commit.
I see two alternatives; one is to change HPy_InPlacePower to accept only 2 arguments (because __ipow__ accepts only one, after all). But in this case, should we keep this name or change it?

As a first step towards #137, convert ndarray to an HPy type, defined using HPyType_FromSpec.
Note that this doesn't require converting any of its methods and slots, as they can be used unchanged through HPy's legacy slots feature.
Line 41 in 00b4107:
def get_ctx_sources(self):
    """ Extra sources needed only in Universal mode.
    """
    return list(map(str, self.src_dir.glob('ctx_*.c')))
This function seems to be called when I run proof-of-concept/test_pof.sh wheel cpython rather than proof-of-concept/test_pof.sh wheel universal, so the sources seem to be needed only in CPython mode. I open this as an issue to avoid forgetting.
We should add a test which checks that if we run make autogen, we get the same output as the files which are committed in the repo. This ensures:
- that we didn't forget to call it after changing public_api.h and/or autogen.py itself
- that we didn't modify the autogenerated code by hand
This is the signature of PyObject_TypeCheck:
int PyObject_TypeCheck(PyObject *o, PyTypeObject *type)
// Return true if the object o is of type type or a subtype of type. Both parameters must be non-NULL.
The obvious HPy equivalent would be:
int HPy_TypeCheck(HPy o, HPy type)
But note that type is now a generic HPy instead of a specific PyTypeObject *. However, this poses some design issues:
- What happens if the object passed as type is not actually a type object?
- A naive implementation might happily return true, with all kinds of funny results.
which has the "right" signature but has a much more complex logic.
Possible solutions:
1. Document that it is the caller's duty to pass a real type, similar to how they already need to ensure that you don't pass NULL. We can introduce the extra checks in debug mode, though.
2. Check at runtime and call HPy_FatalError instead.
3. Use a name which signals that the check can fail: HPy_CheckType? HPy_TryTypeCheck? HPy_TypeCheckMaybeFail?
I think my preferred solution is (1), especially considering the typical usage of the API. I suspect that in the vast majority of cases it is used to define PyXXX_Check for custom types. For example, numpy does the following:
#define PyArray_Check(op) PyObject_TypeCheck(op, &PyArray_Type)
So from this POV, the probability of passing something which is not a type is very low, and a fatal error in debug mode could be enough.
Moreover, there is also an additional alternative which is a much bigger redesign, linked to this comment on #83: when you define a custom type, you almost always need helper functions such as XXX_Check, XXX_Cast, XXX_TryCast, etc. If we go in that direction, we can introduce a macro which automatically defines all the XXX_* helpers: XXX_Check will use _HPy_TypeCheck, which can be made semi-private because it will not be needed anywhere else.