
Comments (10)

CloudRider-pixel commented on June 15, 2024

Hi,

I got the same error as you. I solved it by using docker image nvcr.io/nvidia/tensorrt:22.06-py3

To export I used the yolov7 repo (https://github.com/WongKinYiu/yolov7):
python export.py --weights ./yolov7.pt --grid
Then:
./tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache
// Test engine
./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine
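
Since the engine above is built with a dynamic batch axis (min 1, opt 8, max 8), the C++ side also has to fix a concrete input shape on the execution context before inference. A minimal sketch of that step, assuming the TensorRT 8.x binding API and the input binding name images from the trtexec flags above (everything else is illustrative, not code from this repo):

#include <NvInfer.h>
#include <cassert>

// Resolve the dynamic batch dimension of the engine before enqueueing inference.
void setBatchSize(const nvinfer1::ICudaEngine& engine,
                  nvinfer1::IExecutionContext& context,
                  int batch)
{
    const int inputIndex = engine.getBindingIndex("images"); // input name used by export.py
    assert(inputIndex >= 0);

    // The chosen shape must lie between the --minShapes and --maxShapes used at build time.
    const bool ok = context.setBindingDimensions(inputIndex, nvinfer1::Dims4{batch, 3, 640, 640});
    assert(ok && "batch size outside the optimization profile");

    // All dynamic dimensions must be specified before calling enqueueV2().
    assert(context.allInputDimensionsSpecified());
}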

Nevertheless I ran into some other issues afterwards and needed to change the following:

In yolov7.cpp add:

#include "NvInferPlugin.h"

and

initLibNvInferPlugins(&gLogger.getTRTLogger(), "");

just before IRuntime* runtime = createInferRuntime(gLogger);
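
Put together, the patched deserialization path looks roughly like the sketch below. Only the plugin include, the initLibNvInferPlugins call and createInferRuntime come from the changes described above; the helper itself and the file handling are illustrative:

#include <NvInfer.h>
#include "NvInferPlugin.h"
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Deserialize a trtexec-built engine. The plugin registry must be populated
// before deserialization, otherwise TensorRT cannot find EfficientNMS_TRT.
nvinfer1::ICudaEngine* loadEngine(const std::string& path, nvinfer1::ILogger& logger)
{
    // Register all built-in TensorRT plugin creators with the global registry.
    initLibNvInferPlugins(&logger, "");

    std::ifstream file(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    return runtime->deserializeCudaEngine(blob.data(), blob.size());
}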

In CMakeLists.txt replace:

target_link_libraries(yolov7 nvinfer)
by
target_link_libraries(yolov7 nvinfer nvinfer_plugin)

For now I'm still blocked with this issue:
void doInference(nvinfer1::IExecutionContext&, float*, float*, int, cv::Size): Assertion `engine.getNbBindings() == 2' failed
but as mentioned by @Linaom1214 it's an issue related to the ONNX export with NMS; I just have to find out how to export without NMS.
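
The assertion fires because doInference assumes exactly one input and one output binding, while an engine exported with the NMS plugin exposes several extra output bindings. A small diagnostic that shows what the engine actually contains (a sketch against the TensorRT 8.x binding API; none of this is code from the repo):

#include <NvInfer.h>
#include <iostream>

// List every binding of a deserialized engine, to see why getNbBindings() != 2.
void dumpBindings(const nvinfer1::ICudaEngine& engine)
{
    std::cout << "bindings: " << engine.getNbBindings() << "\n";
    for (int i = 0; i < engine.getNbBindings(); ++i)
    {
        const nvinfer1::Dims dims = engine.getBindingDimensions(i);
        std::cout << (engine.bindingIsInput(i) ? "input  " : "output ")
                  << engine.getBindingName(i) << " [";
        for (int d = 0; d < dims.nbDims; ++d)
            std::cout << dims.d[d] << (d + 1 < dims.nbDims ? "x" : "");
        std::cout << "]\n";
    }
}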


jia0511 commented on June 15, 2024

Do you not even test the code you open-source? The most basic build step is missing the NvInferPlugin plugin, which causes compilation errors; second, after the model is serialized, engine.create_execution_context fails. How many people have run into the same problems? The official deployment docs point to this project, so please stop sending people down detours!!


Linaom1214 commented on June 15, 2024

Do you not even test the code you open-source? The most basic build step is missing the NvInferPlugin plugin, which causes compilation errors; second, after the model is serialized, engine.create_execution_context fails. How many people have run into the same problems? The official deployment docs point to this project, so please stop sending people down detours!!

I replied to you very clearly: the current C++ code does not support the NMS plugin. I have already answered every one of the issues you opened, one by one!!!!


Linaom1214 commented on June 15, 2024

Hi,

I got the same error as you. I followed the steps at the beginning of https://github.com/WongKinYiu/yolov7/tree/main/deploy/triton-inference-server and now I don't get this issue anymore:

// Pytorch Yolov7 -> ONNX with grid, EfficientNMS plugin and dynamic batch size
python export.py --weights ./yolov7.pt --grid --end2end --dynamic-batch --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
// ONNX -> TensorRT with trtexec and docker
docker run -it --rm --gpus=all nvcr.io/nvidia/tensorrt:22.06-py3
// Copy onnx -> container
docker cp yolov7.onnx :/workspace/
// Export with FP16 precision, min batch 1, opt batch 8 and max batch 8
./tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache
// Test engine
./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine
// Copy engine -> host
docker cp :/workspace/yolov7-fp16-1x8x8.engine .

Nevertheless I ran into some other issues afterwards and needed to change the following:

In yolov7.cpp add:

#include "NvInferPlugin.h"

and

initLibNvInferPlugins(&gLogger.getTRTLogger(), "");

just before IRuntime* runtime = createInferRuntime(gLogger);

In CMakeLists.txt replace:

target_link_libraries(yolov7 nvinfer) with target_link_libraries(yolov7 nvinfer nvinfer_plugin)

Then please refer to #18. The real reason is that the yolov7 repo did not understand that the models supported by the code in this repo do not include end-to-end (NMS) models.
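
For context, an --end2end engine cannot be used with the plain decoding path at all, because the EfficientNMS plugin already outputs final detections. A sketch of how those outputs could be consumed on the host side, assuming the output names and layouts of the usual yolov7 --end2end export (num_dets, det_boxes as x1 y1 x2 y2, det_scores, det_classes); these names are not defined by this repo:

#include <cstdint>
#include <iostream>

// Print the detections for a single image from host-side copies of the
// end-to-end engine outputs (assumed layout, see note above).
void printDetections(const int32_t* num_dets,    // [1]
                     const float* det_boxes,     // [topk * 4]
                     const float* det_scores,    // [topk]
                     const int32_t* det_classes) // [topk]
{
    for (int i = 0; i < num_dets[0]; ++i)
    {
        const float* box = det_boxes + 4 * i;
        std::cout << "class " << det_classes[i] << " score " << det_scores[i]
                  << " box [" << box[0] << ", " << box[1] << ", "
                  << box[2] << ", " << box[3] << "]\n";
    }
}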


Linaom1214 commented on June 15, 2024

Following the instructions I got this error. Is it caused by a version mismatch of something?
./yolov7 ../yolov7.engine -i ../../../../assets/dog.jpg
[08/01/2022-19:43:48] [E] [TRT] 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed. Version tag does not match. Note: Current Version: 205, Serialized Engine Version: 213)

I don't know why someone listed this project's v7 C++ code under end2end. I have already opened a PR to the v7 repo and I believe they will fix it soon.
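
The serialization error quoted above means the engine was built with a different TensorRT version than the one the demo is linked against; serialized engines are not portable across TensorRT versions, so the .engine has to be rebuilt with the matching trtexec. A quick way to check what the binary is actually linked against (a sketch; getInferLibVersion and the NV_TENSORRT_* macros come from the standard TensorRT headers):

#include <NvInferRuntime.h>
#include <NvInferVersion.h>
#include <iostream>

// Print the TensorRT version at compile time and the one loaded at run time.
// If either differs from the version that built the engine, deserialization
// fails with the version-tag mismatch shown above.
int main()
{
    std::cout << "compiled against TensorRT "
              << NV_TENSORRT_MAJOR << "." << NV_TENSORRT_MINOR << "."
              << NV_TENSORRT_PATCH << "\n";
    std::cout << "runtime library version " << getInferLibVersion() << "\n";
    return 0;
}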


CloudRider-pixel commented on June 15, 2024

Hi,
I got the same error as you. I followed the steps at the beginning of https://github.com/WongKinYiu/yolov7/tree/main/deploy/triton-inference-server and now I don't get this issue anymore:
// Pytorch Yolov7 -> ONNX with grid, EfficientNMS plugin and dynamic batch size
python export.py --weights ./yolov7.pt --grid --end2end --dynamic-batch --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
// ONNX -> TensorRT with trtexec and docker
docker run -it --rm --gpus=all nvcr.io/nvidia/tensorrt:22.06-py3
// Copy onnx -> container
docker cp yolov7.onnx :/workspace/
// Export with FP16 precision, min batch 1, opt batch 8 and max batch 8
./tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache
// Test engine
./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine
// Copy engine -> host
docker cp :/workspace/yolov7-fp16-1x8x8.engine .
Nevertheless I ran into some other issues afterwards and needed to change the following:
In yolov7.cpp add:
#include "NvInferPlugin.h"
and
initLibNvInferPlugins(&gLogger.getTRTLogger(), "");
just before IRuntime* runtime = createInferRuntime(gLogger);
In CMakeLists.txt replace:
target_link_libraries(yolov7 nvinfer) with target_link_libraries(yolov7 nvinfer nvinfer_plugin)

Then please refer to #18. The real reason is that the yolov7 repo did not understand that the models supported by the code in this repo do not include end-to-end (NMS) models.

I agree, sorry, I typed it too quickly yesterday. I have updated my answer. Thanks for your work.


Linaom1214 commented on June 15, 2024

Hi,
I got the same error as you. I followed the steps at the beginning of https://github.com/WongKinYiu/yolov7/tree/main/deploy/triton-inference-server and now I don't get this issue anymore:
// Pytorch Yolov7 -> ONNX with grid, EfficientNMS plugin and dynamic batch size
python export.py --weights ./yolov7.pt --grid --end2end --dynamic-batch --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
// ONNX -> TensorRT with trtexec and docker
docker run -it --rm --gpus=all nvcr.io/nvidia/tensorrt:22.06-py3
// Copy onnx -> container
docker cp yolov7.onnx :/workspace/
// Export with FP16 precision, min batch 1, opt batch 8 and max batch 8
./tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache
// Test engine
./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine
// Copy engine -> host
docker cp :/workspace/yolov7-fp16-1x8x8.engine .
Nevertheless I ran into some other issues afterwards and needed to change the following:
In yolov7.cpp add:
#include "NvInferPlugin.h"
and
initLibNvInferPlugins(&gLogger.getTRTLogger(), "");
just before IRuntime* runtime = createInferRuntime(gLogger);
In CMakeLists.txt replace:
target_link_libraries(yolov7 nvinfer) with target_link_libraries(yolov7 nvinfer nvinfer_plugin)

Then please refer to #18. The real reason is that the yolov7 repo did not understand that the models supported by the code in this repo do not include end-to-end (NMS) models.

I agree, sorry, I typed it too quickly yesterday. I have updated my answer. Thanks for your work.

Following the instructions I got this error. Is it caused by a version mismatch of something?
./yolov7 ../yolov7.engine -i ../../../../assets/dog.jpg
[08/01/2022-19:43:48] [E] [TRT] 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed. Version tag does not match. Note: Current Version: 205, Serialized Engine Version: 213)

If the code from that yolov5-rt project works, thanks for your feedback!


Cuzny commented on June 15, 2024

The code from the yolov5-rt-stack project works. Alternatively, with torch 1.12.0 you can directly use export.py from the official yolov7 repo to export an ONNX file with NMS, and then convert it to a .engine with the trtexec shipped with TensorRT 8.2 or later.


Linaom1214 commented on June 15, 2024

The code from the yolov5-rt-stack project works. Alternatively, with torch 1.12.0 you can directly use export.py from the official yolov7 repo to export an ONNX file with NMS, and then convert it to a .engine with the trtexec shipped with TensorRT 8.2 or later.

OK, thanks for the feedback. The PR I submitted to them earlier was not merged. I have tested all the related code rigorously; the v7 repo put my v7 C++ demo up there for no obvious reason, sorry for the trouble! I'm not sure how much demand there is for end-to-end; if people need it, I will consider making a dedicated end-to-end branch.


Linaom1214 commented on June 15, 2024

The code from the yolov5-rt-stack project works. Alternatively, with torch 1.12.0 you can directly use export.py from the official yolov7 repo to export an ONNX file with NMS, and then convert it to a .engine with the trtexec shipped with TensorRT 8.2 or later.

OK, thanks for the feedback. The PR I submitted to them earlier was not merged. I have tested all the related code rigorously; the v7 repo put my v7 C++ demo up there for no obvious reason, sorry for the trouble! I'm not sure how much demand there is for end-to-end; if people need it, I will consider making a dedicated end-to-end branch.

I have now added the C++ support:
https://github.com/Linaom1214/TensorRT-For-YOLO-Series/blob/main/cpp/README.MD

