
DeepStream-Yolo

NVIDIA DeepStream SDK 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration for YOLO models


Important: export the ONNX model with the new export file, regenerate the TensorRT engine with the updated files, and use the new config_infer_primary file that matches your model


Future updates

  • DeepStream tutorials
  • Updated INT8 calibration
  • Support for segmentation models
  • Support for classification models

Improvements on this repository

  • Support for INT8 calibration
  • Support for non-square models
  • Model benchmarks
  • Support for Darknet models (YOLOv4, etc.) using cfg and weights conversion with GPU post-processing
  • Support for YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, YOLOX, YOLOR, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing
  • GPU bbox parser (it is slightly slower than CPU bbox parser on V100 GPU tests)
  • Support for DeepStream 5.1
  • Custom ONNX model parser (NvDsInferYoloCudaEngineGet)
  • Dynamic batch-size for Darknet and ONNX exported models
  • INT8 calibration (PTQ) for Darknet and ONNX exported models
  • New output structure (fixes wrong output on DeepStream < 6.2) - this requires exporting the ONNX model with the new export file, regenerating the TensorRT engine with the updated files, and using the new config_infer_primary file that matches your model

Getting started

Requirements

DeepStream 6.2 on x86 platform

DeepStream 6.1.1 on x86 platform

DeepStream 6.1 on x86 platform

DeepStream 6.0.1 / 6.0 on x86 platform

DeepStream 5.1 on x86 platform

DeepStream 6.2 on Jetson platform

DeepStream 6.1.1 on Jetson platform

DeepStream 6.1 on Jetson platform

DeepStream 6.0.1 / 6.0 on Jetson platform

DeepStream 5.1 on Jetson platform

Supported models

Basic usage

1. Download the repo

git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo

2. Download the cfg and weights files from the Darknet repo to the DeepStream-Yolo folder

3. Compile the lib

  • DeepStream 6.2 on x86 platform

    CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.1.1 on x86 platform

    CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.1 on x86 platform

    CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.0.1 / 6.0 on x86 platform

    CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 5.1 on x86 platform

    CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

    CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform

    CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
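
The release-to-CUDA_VER mapping above can be scripted; a minimal sketch for the x86 values (ds_ver is set by hand here purely for illustration):

```shell
#!/bin/sh
# Map an x86 DeepStream release to the CUDA_VER values listed above.
ds_ver=6.2
case "$ds_ver" in
  6.2)        CUDA_VER=11.8 ;;
  6.1.1)      CUDA_VER=11.7 ;;
  6.1)        CUDA_VER=11.6 ;;
  6.0.1|6.0)  CUDA_VER=11.4 ;;
  5.1)        CUDA_VER=11.1 ;;
  *) echo "unknown DeepStream version: $ds_ver" >&2; exit 1 ;;
esac
echo "CUDA_VER=$CUDA_VER"
# Then compile with: CUDA_VER=$CUDA_VER make -C nvdsinfer_custom_impl_Yolo
```

On Jetson, the CUDA version is tied to the JetPack release instead, so use the Jetson values from the list above.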
    

4. Edit the config_infer_primary.txt file according to your model (example for YOLOv4)

[property]
...
custom-network-config=yolov4.cfg
model-file=yolov4.weights
...

NOTE: By default, the dynamic batch-size is set. To use implicit batch-size, uncomment the line

...
force-implicit-batch-dim=1
...

5. Run

deepstream-app -c deepstream_app_config.txt

NOTE: The TensorRT engine file may take a very long time to generate (sometimes more than 10 minutes).
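
To avoid rebuilding the engine on every run, you can point the config_infer file at the engine DeepStream generated. The file name below is only an example (the actual name varies with batch size, GPU index, and precision), so adjust it to the file produced in your folder:

```
[property]
...
model-engine-file=model_b1_gpu0_fp32.engine
...
```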

NOTE: If you want to use YOLOv2 or YOLOv2-Tiny models, change the deepstream_app_config.txt file before running it

...
[primary-gie]
...
config-file=config_infer_primary_yoloV2.txt
...

Docker usage

  • x86 platform

    nvcr.io/nvidia/deepstream:6.2-devel
    nvcr.io/nvidia/deepstream:6.2-triton
    
  • Jetson platform

    nvcr.io/nvidia/deepstream-l4t:6.2-samples
    nvcr.io/nvidia/deepstream-l4t:6.2-triton
    

    NOTE: To compile the nvdsinfer_custom_impl_Yolo, you need to install g++ inside the container

    apt-get install build-essential
    

    NOTE: With DeepStream 6.2, the docker containers do not package libraries necessary for certain multimedia operations like audio data parsing, CPU decode, and CPU encode. This change could affect processing certain video streams/files like mp4 that include audio track. Please run the below script inside the docker images to install additional packages that might be necessary to use all of the DeepStreamSDK features:

    /opt/nvidia/deepstream/deepstream/user_additional_install.sh
    

NMS Configuration

To change the nms-iou-threshold, pre-cluster-threshold and topk values, modify the config_infer file

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

NOTE: Make sure to set cluster-mode=2 in the config_infer file.
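
To illustrate what these three values control, here is a minimal Python sketch of greedy NMS with the same parameters (a simplified stand-in for illustration, not the library's GPU implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, pre_cluster_threshold=0.25, nms_iou_threshold=0.45, topk=300):
    """detections: list of (box, score) pairs; returns the kept detections."""
    # pre-cluster-threshold drops low-confidence boxes before clustering
    dets = [d for d in detections if d[1] >= pre_cluster_threshold]
    # topk caps the number of candidates, highest score first
    dets.sort(key=lambda d: d[1], reverse=True)
    dets = dets[:topk]
    kept = []
    for box, score in dets:
        # nms-iou-threshold suppresses boxes overlapping an already-kept box
        if all(iou(box, kbox) < nms_iou_threshold for kbox, _ in kept):
            kept.append((box, score))
    return kept
```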

Extract metadata

You can extract metadata from DeepStream using Python or C/C++. For C/C++, you can edit the deepstream-app or deepstream-test sources. For Python, you can install and edit deepstream_python_apps.

Basically, you need to manipulate the NvDsObjectMeta (Python / C/C++) and NvDsFrameMeta (Python / C/C++) structures to get the label, position, etc. of the bboxes.
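
As a sketch of the Python side (assuming the pyds bindings from deepstream_python_apps are installed), a pad probe attached to the OSD sink pad can walk NvDsFrameMeta and NvDsObjectMeta like this; it is not runnable outside a DeepStream pipeline:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    """Print label, confidence, and bbox of every detected object."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # The batch meta hangs off the Gst buffer; the pyds samples use hash() here.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params
            print(obj_meta.obj_label, obj_meta.confidence,
                  rect.left, rect.top, rect.width, rect.height)
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

Attach the probe to the sink pad of the OSD element (as in the deepstream_python_apps samples) and it will run once per buffer.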

My projects: https://www.youtube.com/MarcosLucianoTV
