Comments (5)
I have a similar issue. I'm trying to run detection.ipynb on a Jetson Nano (JetPack 4.3, Python 3.6, TensorFlow 1.15), but when it reaches trt.create_inference_graph() it hangs for several minutes and then the kernel restarts. Memory usage is 3.3/3.9 GB and swap is almost empty. Last terminal output:
2020-06-05 23:51:45.473972: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:633] Number of TensorRT candidate segments: 2
2020-06-05 23:51:45.688493: F tensorflow/core/util/device_name_utils.cc:92] Check failed: IsJobName(job)
[I 23:55:25.776 NotebookApp] KernelRestarter: restarting kernel (1/5), keep random ports
WARNING:root:kernel bc86b93e-4a68-4470-a522-7bdfd2c6f95a restarted
Appreciate any help.
from tf_trt_models.
Hello, have you ever solved this problem? I'm encountering the same issue.
The kernel gets restarted in my case too.
I raised an issue with NVIDIA, but their solution didn't work for me.
My current setup:
TF 1.15.5
TensorRT 8.0.0
Ubuntu 18.04
It seems a lot of people hit this issue when trying to optimize a frozen graph with TensorRT.
Repository owners, please fix this bug.
Here is the solution to this issue. @dkatsios @roarjn @evil-potato
Add one new parameter, force_nms_cpu=False, to the code below; it is not present in this repository's version of the code. Also make sure you have the right TensorFlow and JetPack versions installed.
frozen_graph, input_names, output_names = build_detection_graph(
    config=config_path,
    checkpoint=checkpoint_path,
    force_nms_cpu=False,
    # score_threshold=0.3,
    batch_size=1
)
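For context, the step that then crashes in detection.ipynb is the TF 1.x TF-TRT conversion. Here is a minimal sketch of that call, assuming TF 1.15's trt_convert module; the parameter values (the small workspace size, FP16, the segment size) are my own suggestions for a memory-constrained 4 GB Nano, not values taken from the notebook:

```python
# Sketch of the TF 1.x TF-TRT conversion step. Parameter values below are
# assumptions chosen for a 4 GB Jetson Nano, not the notebook's defaults.
trt_params = {
    "max_batch_size": 1,
    "max_workspace_size_bytes": 1 << 25,  # ~32 MB; keep small on the Nano
    "precision_mode": "FP16",             # halves activation memory on Jetson
    "minimum_segment_size": 50,           # avoid building many tiny engines
    "is_dynamic_op": True,                # build TensorRT engines at runtime
}

def convert_graph(frozen_graph, output_names):
    # Import deferred so the sketch can be read without TF 1.15 installed.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt
    return trt.create_inference_graph(
        input_graph_def=frozen_graph,
        outputs=output_names,
        **trt_params,
    )

print(trt_params["precision_mode"])
```

Lowering max_workspace_size_bytes and using is_dynamic_op=True are common ways to keep the conversion from exhausting the Nano's 4 GB of shared CPU/GPU memory.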
When I looked closely at the Jupyter terminal, the error pointed to something like this:
Tensorflow TensorRT: Could not load dynamic library 'libnvinfer.so.5'
which led me to the links below:
tensorflow/tensorflow#34329
https://forums.developer.nvidia.com/t/tf-trt-error-on-jetson-nano/187611
https://forums.developer.nvidia.com/t/error-while-converting-object-detection-model-to-tensorrt/117127
tensorflow/tensorrt#197
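To confirm whether the TensorRT runtime library from that error message is actually visible to the dynamic loader, a quick check is the sketch below. The soname libnvinfer.so.5 comes from the error above and depends on your installed TensorRT major version, so adjust it to match your JetPack:

```python
import ctypes

# Try to load the TensorRT runtime library that TF-TRT complains about.
# libnvinfer.so.5 corresponds to TensorRT 5; change the suffix to match
# the TensorRT version shipped with your JetPack release.
try:
    ctypes.CDLL("libnvinfer.so.5")
    status = "found"
except OSError:
    status = "missing"

print("libnvinfer.so.5:", status)
```

If this prints "missing", TF-TRT cannot use TensorRT at conversion time, which is consistent with the crashes and kernel restarts described above.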