Comments (6)
Thank you so much for your prompt reply.
This is the bug-report directory:
I use the Render in this way:
from nnsmith.materialize import Render, BugReport, Model
from nnsmith.backends import BackendFactory

model_init = Model.init("torch", "cpu")
bug_report = BugReport.load(model_init, "./bug_example/", allow_partial=True)
render = Render()
render.emit_model(bug_report.testcase.model)
render.emit_input(bug_report.testcase.model)
render.emit_backend(BackendFactory.init("pt2"))
output = render.render()
with open("./output.py", "w+") as f:
    f.write(output)
and then the generated output.py looks like this:
import numpy as np
import torch
import pickle

# Model definition
class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.v5_0 = torch.nn.Parameter(torch.empty([1], dtype=torch.int16), requires_grad=False)

    def forward(self, *args):
        _args = args
        getitem = _args[0]; _args = None
        _tensor_constant0 = self._tensor_constant0
        mul = torch.mul(_tensor_constant0, getitem); _tensor_constant0 = None
        expand = mul.expand(1)
        expand_1 = mul.expand(1, 1, 1, 1); mul = None
        max_1 = torch.max(expand_1, getitem); expand_1 = getitem = None
        return (expand, max_1)

m = M()

# Initialize weight
# None

# Initialize input
inp = [np.zeros([], dtype='int16')]

# Compile the model
opt = torch.compile(m, fullgraph=True, backend='inductor', mode=None)

# Eager run
m_out = m(*[torch.from_numpy(v).to('cpu') for v in inp])
m_out = [v.cpu().detach() for v in m_out]  # torch2numpy
m_out = [v.resolve_conj().numpy() if v.is_conj() else v.numpy() for v in m_out]  # torch2numpy

# Compiled run
opt_out = opt(*[torch.from_numpy(v).to('cpu') for v in inp])
opt_out = [v.cpu().detach() for v in opt_out]  # torch2numpy
opt_out = [v.resolve_conj().numpy() if v.is_conj() else v.numpy() for v in opt_out]  # torch2numpy

# Differential testing
for i, (l, r) in enumerate(zip(m_out, opt_out)):
    np.testing.assert_allclose(l, r, rtol=1e-2, atol=1e-3, err_msg=f"Result mismatch @ index {i}")
But executing this raises an exception: AttributeError: 'M' object has no attribute '_tensor_constant0'.
from nnsmith.
Hi, I implemented the render in #107 a few months ago. I would not say it is strictly tested, but I have not encountered any major issues so far.
You are welcome to check out the examples in the unit tests: https://github.com/ise-uiuc/nnsmith/blob/main/tests/torch/test_render.py
Or please file a concrete bug so that I can help you diagnose. Thanks.
Could you let me know your PyTorch version? Thanks.
The version is 2.0.1+cu117.
Sorry for the late reply. I am looking into the bug right now. While I have partially fixed your issue in #122, the most critical problem here, namely the undefined variable name _tensor_constant0, is introduced by PyTorch's model-to-code translation when it refers to a parameter through a user-provided map.
I will either find another way to implement the symbolic tracing or report a bug to PyTorch to fix it.
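For anyone hitting the same error, the lifting behavior can be reproduced in isolation (a minimal sketch independent of nnsmith; the module and constant names here are made up): when torch.fx traces a forward that uses a tensor which is not an attribute of the module, the tracer stashes it on the resulting GraphModule under a generated name like _tensor_constant0, so the emitted code only runs against that particular GraphModule, not against a fresh instance of the original class.

```python
import torch
import torch.fx

# A free tensor used inside forward() but not stored on the module.
CONST = torch.ones(2)

class M(torch.nn.Module):
    def forward(self, x):
        return CONST * x  # fx must lift CONST somewhere to reference it

gm = torch.fx.symbolic_trace(M())

# The tracer registered CONST on the GraphModule under a generated name,
# and the emitted code refers to it via self._tensor_constant0 ...
print("_tensor_constant0" in gm.code)
print(hasattr(gm, "_tensor_constant0"))

# ... but a fresh M() has no such attribute, so re-running the emitted
# code against a plain instance of the class raises AttributeError.
print(hasattr(M(), "_tensor_constant0"))
```

This matches the failing line `_tensor_constant0 = self._tensor_constant0` in the rendered script: the attribute exists on the traced GraphModule but not on the re-instantiated M.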
OK, I got a workaround by referencing parameters as object attributes, which makes your example work. Feel free to retry after the PR gets merged.
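The effect of that workaround can be sketched as follows (a hypothetical minimal example, not the actual PR): when the tensor is a real attribute of the module, fx emits a named attribute access on it instead of inventing _tensor_constant0, so the generated code works on any fresh instance of the class.

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # The weight is registered as a module attribute up front, so the
        # traced code refers to it as self.v5_0 rather than a lifted
        # _tensor_constant0 that exists only on the GraphModule.
        self.v5_0 = torch.nn.Parameter(torch.ones([1]), requires_grad=False)

    def forward(self, x):
        return torch.mul(self.v5_0, x)

gm = torch.fx.symbolic_trace(M())
assert "_tensor_constant" not in gm.code  # no lifted constant this time
assert "self.v5_0" in gm.code             # resolved as a named attribute
```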