Comments (10)
Hi,
I got the same error as you. I solved it by using the Docker image nvcr.io/nvidia/tensorrt:22.06-py3.
To export, I used https://github.com/WongKinYiu.git:
python export.py --weights ./yolov7.pt --grid
Then :
./tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache
// Test engine
./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine
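As an aside, the `--minShapes`/`--optShapes`/`--maxShapes` flags above all use the same `name:DxDxDxD` spec; here is a small sketch of how such a spec decomposes (the parsing helper is my own illustration, not part of trtexec):

```python
def parse_shape_spec(spec: str):
    """Split a trtexec shape spec like 'images:8x3x640x640' into
    the tensor name and its dimensions (batch, channels, height, width)."""
    name, dims = spec.split(":")
    return name, tuple(int(d) for d in dims.split("x"))

# Only the batch dimension differs between min and max here, which is
# what makes the resulting engine accept batch sizes 1 through 8.
print(parse_shape_spec("images:1x3x640x640"))  # ('images', (1, 3, 640, 640))
print(parse_shape_spec("images:8x3x640x640"))  # ('images', (8, 3, 640, 640))
```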
Nevertheless, I ran into another issue afterwards and needed to make the following changes:
In yolov7.cpp add:
#include "NvInferPlugin.h"
and
initLibNvInferPlugins(&gLogger.getTRTLogger(), "");
just before IRuntime* runtime = createInferRuntime(gLogger);
In CMakeLists.txt replace:
target_link_libraries(yolov7 nvinfer)
by
target_link_libraries(yolov7 nvinfer nvinfer_plugin)
For now I'm still blocked on this issue:
void doInference(nvinfer1::IExecutionContext&, float*, float*, int, cv::Size): Assertion `engine.getNbBindings() == 2' failed
but as mentioned by @Linaom1214, it's an issue related to the ONNX export with NMS; I just have to find out how I can export without NMS.
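For context on that assertion: the demo's doInference expects exactly two bindings (one input, one output), while an engine exported with the EfficientNMS plugin exposes several output tensors, so the check fails. A rough sketch of the difference (the binding names reflect the yolov7 --end2end export as I understand it, so treat them as an assumption):

```python
# Plain export (--grid only): one input, one raw output tensor.
PLAIN_BINDINGS = ["images", "output"]

# End-to-end export (--end2end, EfficientNMS): one input, four NMS outputs.
END2END_BINDINGS = ["images", "num_dets", "det_boxes", "det_scores", "det_classes"]

def passes_demo_assertion(bindings):
    """Mirror the demo's check: engine.getNbBindings() == 2."""
    return len(bindings) == 2

print(passes_demo_assertion(PLAIN_BINDINGS))    # True
print(passes_demo_assertion(END2END_BINDINGS))  # False: 5 bindings, assertion fires
```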
from tensorrt-for-yolo-series.
Don't you even test the code you open-source? The most basic build step is missing the NvInferPlugin library, which causes compile errors; second, after serializing the model, engine.create_execution_context fails. How many people have run into similar problems? The official deployment guide points to this project; please stop sending people down detours!!
I replied to you very clearly: the current C++ code does not support the NMS plugin. I have answered every one of the issues you raised!!!!
Hi,
I got the same error as you. I followed the beginning of https://github.com/WongKinYiu/yolov7/tree/main/deploy/triton-inference-server and now I don't get this issue anymore:
// PyTorch YOLOv7 -> ONNX with grid, EfficientNMS plugin and dynamic batch size
python export.py --weights ./yolov7.pt --grid --end2end --dynamic-batch --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
// ONNX -> TensorRT with trtexec and Docker
docker run -it --rm --gpus=all nvcr.io/nvidia/tensorrt:22.06-py3
// Copy the ONNX model into the container
docker cp yolov7.onnx <container_id>:/workspace/
// Export with FP16 precision, min batch 1, opt batch 8 and max batch 8
./tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache
// Test engine
./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine
// Copy the engine back to the host
docker cp <container_id>:/workspace/yolov7-fp16-1x8x8.engine .
Nevertheless, I ran into another issue afterwards and needed to make the following changes:
In yolov7.cpp add:
#include "NvInferPlugin.h"
and
initLibNvInferPlugins(&gLogger.getTRTLogger(), "");
just before IRuntime* runtime = createInferRuntime(gLogger);
In CMakeLists.txt replace:
target_link_libraries(yolov7 nvinfer)
by
target_link_libraries(yolov7 nvinfer nvinfer_plugin)
Please refer to #18. The real reason is that the yolov7 repo did not realize that the model supported by this repo's code does not include the end-to-end (NMS) part.
Following the steps in the instructions, I got this error. Is it caused by a version mismatch of something?
./yolov7 ../yolov7.engine -i ../../../../assets/dog.jpg
[08/01/2022-19:43:48] [E] [TRT] 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 205, Serialized Engine Version: 213)
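About that serialization error: a serialized engine records the serialization version of the TensorRT build that produced it, and deserialization asserts an exact match with the running library. The following sketch mirrors that check (it is an illustration of the logic, not the TensorRT API):

```python
def engine_matches_runtime(runtime_tag: int, engine_tag: int) -> bool:
    """Exact-match check mirroring 'stdVersionRead == serializationVersion'.
    TensorRT engines are not portable across library versions."""
    return runtime_tag == engine_tag

# 'Current Version: 205, Serialized Engine Version: 213' means the engine
# was built with a different (newer) TensorRT than the one loading it.
print(engine_matches_runtime(205, 213))  # False: the load is rejected
```

The fix is to rebuild the engine with a trtexec from the same TensorRT version (e.g. the same Docker image) as the one linked into the C++ demo.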
I don't know why someone listed this project's v7 C++ code under end2end. I have already sent a PR to the v7 repo and believe they will fix it soon.
I agree, sorry; I typed it too fast yesterday. I've updated my answer. Thanks for your work.
If the code from that yolov5-rt project works, thanks for your feedback!
The code from the yolov5-rt-stack project does work. Alternatively, under torch 1.12.0 you can directly use export.py from the official yolov7 repo to export an ONNX file with NMS, and then convert it to a .engine with the trtexec bundled with TensorRT 8.2 or later.
OK, thanks for the feedback. The PR I sent them earlier was never merged. I test all the related code rigorously; the v7 repo inexplicably put my v7 C++ demo up there, sorry for the confusion it caused you! I'm not sure how much demand there is for end-to-end support; if there is, I'll consider making a dedicated end-to-end branch.
I have now added C++ support:
https://github.com/Linaom1214/TensorRT-For-YOLO-Series/blob/main/cpp/README.MD