Comments (13)
Thanks!
FYI, if you move `import mxnet as mx` into `foo()`, the bug can disappear. But this is generally not doable, because mxnet is usually imported in the main process. It may be related to how mxnet works with subprocesses.
from mobulaop.
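The suggestion above can be sketched with the standard library only: defer the heavy import into the worker so any import-time, process-global initialization happens in the child rather than being inherited from a forked parent. Here `json` merely stands in for `mxnet` (an assumption for the sketch), and a Unix-like platform with the `fork` start method is assumed. Note the follow-up below reports this did not resolve the bug in this particular case.

```python
from concurrent import futures
import multiprocessing as mp

def foo():
    # Deferred import: initialization that would normally run when the
    # parent imports the module happens inside the child process instead.
    # `json` stands in for `mxnet` so this sketch is self-contained.
    import json
    return json.dumps([1, 2, 3])

def run():
    ctx = mp.get_context('fork')  # fork: the scenario discussed in this thread
    with futures.ProcessPoolExecutor(1, mp_context=ctx) as ex:
        return ex.submit(foo).result()
```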
I tried that, but it does not work. Example code:
```python
from concurrent import futures
import sys

import mxnet as mx
import mobula

# Import custom operator dynamically
mobula.op.load('./AdditionOP')

def foo():
    AdditionOP = mobula.op.AdditionOP

    a = mx.nd.array([1, 2, 3])
    b = mx.nd.array([4, 5, 6])

    a.attach_grad()
    b.attach_grad()
    with mx.autograd.record():
        c = AdditionOP(a, b)

    dc = mx.nd.array([7, 8, 9])
    c.backward(dc)

    assert ((a + b).asnumpy() == c.asnumpy()).all()
    assert (a.grad.asnumpy() == dc.asnumpy()).all()
    assert (b.grad.asnumpy() == dc.asnumpy()).all()

    print('Okay :-)')
    print('a + b = c\n{} + {} = {}'.format(a.asnumpy(), b.asnumpy(), c.asnumpy()))

def main():
    ex = futures.ProcessPoolExecutor(1)
    r = ex.submit(foo)
    r.result()

if __name__ == "__main__":
    main()
```
Thanks for your report!
I will check it.
Moving `import mobula` and `mobula.op.load('./AdditionOP')` outside `foo()` may work, since MobulaOP registers the operator into MXNet when `mobula.op.load('./AdditionOP')` is called.
I will add a check to avoid duplicated registration.
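The duplicate-registration check mentioned above could look something like the following. This is a hypothetical sketch, not MobulaOP's actual implementation; `_REGISTERED` and `load_op` are invented names for illustration.

```python
# Hypothetical sketch of a duplicate-registration guard (not MobulaOP code).
# The idea: remember which operator names have already been registered and
# silently skip repeated registrations of the same name.
_REGISTERED = set()

def load_op(name, register):
    """Register `name` via the `register` callback unless already loaded."""
    if name in _REGISTERED:
        return False          # already registered; skip the duplicate
    register(name)
    _REGISTERED.add(name)
    return True
```

With such a guard, calling `load_op('AdditionOP', ...)` a second time becomes a no-op instead of registering the operator twice.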
@YutingZhang
Hi! I found that the bug is not related to MobulaOP. It seems that MXNet itself triggers the bug.
```python
from concurrent import futures
import sys

import mxnet as mx

sys.path.append('../../')  # Add MobulaOP path
from mobula.testing import assert_almost_equal

class AdditionOP(mx.operator.CustomOp):
    def __init__(self):
        super(AdditionOP, self).__init__()

    def forward(self, is_train, req, in_data, out_data, aux):
        out_data[0][:] = in_data[0] + in_data[1]

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        in_grad[0][:] = out_grad[0]
        in_grad[1][:] = out_grad[0]

@mx.operator.register("AdditionOP")
class AdditionOPProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(AdditionOPProp, self).__init__()

    def list_arguments(self):
        return ['a', 'b']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        return in_shape, [in_shape[0]]

    def create_operator(self, ctx, shapes, dtypes):
        return AdditionOP()

def foo():
    a = mx.nd.array([1, 2, 3])
    b = mx.nd.array([4, 5, 6])

    a.attach_grad()
    b.attach_grad()

    print("REC")
    with mx.autograd.record():
        c = mx.nd.Custom(a, b, op_type='AdditionOP')

    dc = mx.nd.array([7, 8, 9])
    c.backward(dc)

    assert_almost_equal(a + b, c)
    assert_almost_equal(a.grad, dc)
    assert_almost_equal(b.grad, dc)

    print('Okay :-)')
    print('a + b = c\n{} + {} = {}'.format(a.asnumpy(), b.asnumpy(), c.asnumpy()))

def main():
    ex = futures.ProcessPoolExecutor(1)
    r = ex.submit(foo)
    r.result()

if __name__ == '__main__':
    main()
```
So `mx.nd.Custom` is the actual problem ... MXNet just has lots of bugs when running in subprocesses ...
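This failure mode belongs to a well-known class: MXNet starts background engine threads at import time, and `fork()` copies the parent's memory but not its threads, so a forked worker inherits engine state whose threads no longer exist. A stdlib-only sketch of the hazard, assuming a Unix-like platform with the `fork` start method (the daemon thread here merely stands in for a library's internal engine thread):

```python
import multiprocessing as mp
import threading

# A background thread started at import time, like a library's internal
# engine thread.
_stop = threading.Event()
threading.Thread(target=_stop.wait, daemon=True).start()

def count_threads():
    return threading.active_count()

def demo():
    ctx = mp.get_context('fork')      # fork copies memory, but not threads
    with ctx.Pool(1) as pool:
        in_child = pool.apply(count_threads)
    in_parent = count_threads()
    _stop.set()
    # The forked child sees only its own main thread; the parent still
    # sees the background thread as well.
    return in_parent, in_child
```

This is why frameworks with background threads are often safer under the `spawn` or `forkserver` start methods, where each worker gets a fresh interpreter.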
Yes.
@wkcn Sent you an email at your live.cn address :)
Mail received. Thank you! : )
Hi @YutingZhang, the two test cases you gave now pass with the latest MXNet and MobulaOP : )
@wkcn Thanks a lot! Did you work around the problem in MobulaOP, or is it due to MXNet's update to CustomOp (which you also contributed to)?
@YutingZhang It is due to MXNet’s update, and other contributors fixed it.
Closing this since the problem has been addressed. : )