deepstream_lpr_app's Issues

Error: no input dimensions given

While running

./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/ch_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_ch_onnx_b16.engine

I get

Error: no input dimensions given

I am using a Jetson Nano (2 GB) and my JetPack version is 4.4.1.
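"Error: no input dimensions given" is what tlt-converter prints when it parses neither -d nor -p, which on Jetson usually means the binary is an older build that does not understand the -p (optimization profile) flag. A quick check, assuming the converter prints its usage with -h:

./tlt-converter -h
# if -p does not appear in the usage text, download the tlt-converter build that matches
# JetPack 4.4.x (TensorRT 7.1) from NVIDIA's TLT/TAO converter download page and retry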

Multi GPU deployment

I set the GPU ID to 1 in all configuration files, but GPU 0 is still used at runtime. Why?
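Besides setting gpu-id=1 in every [property] group (PGIE, both SGIEs, tracker, streammux, sinks), a blunt but reliable check is to hide GPU 0 from the process entirely; a minimal sketch (input filename illustrative):

CUDA_VISIBLE_DEVICES=1 ./deepstream-lpr-app 1 2 0 us_car_test.mp4 out.h264
# with GPU 0 hidden, the remaining GPU is enumerated as device 0 inside the process,
# so the config files should then use gpu-id=0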

deepstream-lpr-app: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed. Aborted (core dumped)

This app runs fine in the DeepStream devel container [nvcr.io/nvidia/deepstream:6.0-devel], but when I run the same app in the DeepStream base container I hit the issue below.

root@373e8a951251:/app/deepstream_lpr_app/deepstream-lpr-app# ./deepstream-lpr-app 1 2 0 /app/metro_Trim.mp4 out.h264
Request sink_0 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Now playing: 1
ERROR: [TRT]: 1: [graphContext.h::MyelinGraphContext::24] Error Code 1: Myelin (cuBLASLt error 1 querying major version.)
ERROR: [TRT]: 1: [graphContext.h::MyelinGraphContext::24] Error Code 1: Myelin (cuBLASLt error 1 querying major version.)
ERROR: nvdsinfer_backend.cpp:394 Failed to setOptimizationProfile with idx:0 
ERROR: nvdsinfer_backend.cpp:228 Failed to initialize TRT backend, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:02.993528390 15515 0x558d7dc0fe30 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1896> [UID = 3]: create backend context from engine from file :/app/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed
0:00:02.994778147 15515 0x558d7dc0fe30 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 3]: deserialize backend context from engine from file :/app/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed, try rebuild
0:00:02.994800887 15515 0x558d7dc0fe30 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: ShapedWeights.cpp:173: Weights td_dense/kernel:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
deepstream-lpr-app: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed.
Aborted (core dumped)
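The assertion is consistent with the base image lacking the CUDA math libraries (cuBLAS/cuBLASLt) that TensorRT's Myelin backend needs at engine-build time; the devel image ships them. A hedged workaround: serialize the engines once in the devel container and reuse the .engine files in the base container (mount layout illustrative):

docker run --gpus all --rm -v /path/to/app:/app nvcr.io/nvidia/deepstream:6.0-devel \
/app/deepstream_lpr_app/deepstream-lpr-app/deepstream-lpr-app 1 2 0 /app/metro_Trim.mp4 out.h264
# the models/*.engine files written by this run can then be used unchanged in the base container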

improve the recall and accuracy

I captured a video in a park with many cars bearing Chinese license plates to test the app. It turns out that the recall of the pruned and quantized LPD model is too low, and even when the LPD model does detect a plate, the recognition result is wrong most of the time. Any advice to improve the performance? Thanks in advance.
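Short of retraining on footage like yours, one knob worth trying is the LPD confidence threshold in its nvinfer config; the demo defaults are not tuned for dense parking scenes. A sketch with standard nvinfer keys (the value is a starting point to experiment with, not a recommendation):

[class-attrs-all]
pre-cluster-threshold=0.2

Comparing the pruned/quantized LPD against the unpruned variant would also isolate how much of the recall loss comes from compression.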

DeepStream make error

I'm using a Jetson Nano.
Hi, I'm having a problem: I ran "./tlt-converter -k nvidia_tlt -p image_input ..." and it worked fine, but when I run "make" this error appears:

security@security-desktop:~/ladob/deepstream_lpr_app$ make
make[1]: Entering directory '/home/security/ladob/deepstream_lpr_app/nvinfer_custom_lpr_parser'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/home/security/ladob/deepstream_lpr_app/nvinfer_custom_lpr_parser'
make[1]: Entering directory '/home/security/ladob/deepstream_lpr_app/deepstream-lpr-app'
cc -o deepstream-lpr-app deepstream_lpr_app.o deepstream_nvdsanalytics_meta.o `pkg-config --libs gstreamer-1.0` -L/opt/nvidia/deepstream/deepstream/lib/ -lnvdsgst_meta -lnvds_meta -lm -lstdc++ -Wl,-rpath,/opt/nvidia/deepstream/deepstream/lib/
/usr/bin/ld: skipping incompatible /opt/nvidia/deepstream/deepstream/lib//libnvdsgst_meta.so when searching for -lnvdsgst_meta
/usr/bin/ld: cannot find -lnvdsgst_meta
/usr/bin/ld: skipping incompatible /opt/nvidia/deepstream/deepstream/lib//libnvds_meta.so when searching for -lnvds_meta
/usr/bin/ld: cannot find -lnvds_meta
collect2: error: ld returned 1 exit status
Makefile:67: recipe for target 'deepstream-lpr-app' failed
make[1]: *** [deepstream-lpr-app] Error 1
make[1]: Leaving directory '/home/security/ladob/deepstream_lpr_app/deepstream-lpr-app'
Makefile:2: recipe for target 'all' failed
make: *** [all] Error 2
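"skipping incompatible ... when searching for -lnvdsgst_meta" means the linker found the library but it was built for a different CPU architecture, which typically happens when an x86_64 DeepStream package ends up on an aarch64 Jetson. A quick check:

file /opt/nvidia/deepstream/deepstream/lib/libnvdsgst_meta.so
uname -m
# on a Jetson Nano both should indicate aarch64; if the .so reports x86-64,
# reinstall the arm64 (Jetson) DeepStream package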

Jetson Nano 2 GB

When I follow the steps, I just get "Killed" when I run inference.
The model first crashed at tlt-converter with the output "Killed", which I fixed by adding -w 1000000.

Is this model able to run on a Jetson Nano 2 GB?
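A bare "Killed" with no other output is usually the kernel OOM killer, and 2 GB is extremely tight for both engine building and this three-model pipeline. Adding swap before converting may get it through (a generic sketch; the -w workspace cap you already used helps too):

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# then rerun tlt-converter; swap will be slow but can avoid the OOM kill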

Can't find the output file

Hello! How are you? Thanks for contributing to this project.
I ran this project using DeepStream 6.0.1 on Jetson (JetPack 4.6.1).

The detected and recognized info is printed to the terminal (stdout), but I can NOT find the output video file.
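One thing to check: the second CLI argument selects the sink, and per the app's usage string 2 means fakesink, which produces no file at all. With sink type 1 the last argument names the raw H.264 output, e.g. (input filename illustrative):

./deepstream-lpr-app 1 1 0 us_car_test_video.mp4 out.h264
ffmpeg -i out.h264 -c copy out.mp4   # optional: wrap the raw H.264 stream so players handle it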

Element Could not be created. Exiting.

I get this message after running the execution script. I've reinstalled DeepStream twice and confirmed all the include files are available when executing with sudo. I even downloaded a sample video and renamed it to the expected name, since a sample video is not included with the package. I've followed the instructions verbatim twice. I'm running on an AGX Orin with DeepStream 6.1.

Trying various parameters:
vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4 output.264
[sudo] password for vetted:
One element could not be created. Exiting.
vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 3 0 us_car_test2.mp4 us_car_test2.mp4 output.264
One element could not be created. Exiting.
vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 3 0 us_car_test2.mp4 us_car_test2.mp4
One element could not be created. Exiting.
vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4
One element could not be created. Exiting.
vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 1 0 us_car_test2.mp4 us_car_test2.mp4 output.264
One element could not be created. Exiting.
vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 1 1 us_car_test2.mp4 us_car_test2.mp4 output.264
One element could not be created. Exiting.
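To find which element fails, check each required plugin with gst-inspect-1.0; after reinstalling DeepStream, a stale GStreamer registry cache is also a common cause of spurious "could not be created" failures. A sketch:

gst-inspect-1.0 nvstreammux
gst-inspect-1.0 nvinfer
gst-inspect-1.0 nvv4l2h264enc
rm -rf ~/.cache/gstreamer-1.0   # clear the registry cache (also under /root when using sudo) and retry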

Where is the sample mp4 file for testing LPR (US)?

The README (https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app/blob/master/README.md) refers to us_car_test2.mp4, but I can't find it to test with the ./deepstream-lpr-app command.

Please help. Thank you.
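The repository does not ship a test video; any H.264 MP4 will do as a smoke test of the pipeline. For example, the stock DeepStream sample stream can stand in (it contains traffic but no readable US plates, so expect few or no recognitions):

cp /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 us_car_test2.mp4
./deepstream-lpr-app 1 2 0 us_car_test2.mp4 out.h264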

Get label

Hi

How can I get the label or text output for a detected plate?

Thanks
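The app already prints the plate strings to stdout; programmatically, the recognized text travels as classifier metadata attached to each license-plate object. A minimal C sketch of reading it inside a buffer-probe callback (standard NvDsMeta API; the probe wiring and frame loop are omitted):

/* walk object -> classifier -> label metadata for one frame */
for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
    for (NvDsMetaList *l_cls = obj->classifier_meta_list; l_cls; l_cls = l_cls->next) {
        NvDsClassifierMeta *cls = (NvDsClassifierMeta *) l_cls->data;
        for (NvDsMetaList *l_lbl = cls->label_info_list; l_lbl; l_lbl = l_lbl->next) {
            NvDsLabelInfo *label = (NvDsLabelInfo *) l_lbl->data;
            g_print ("plate text: %s\n", label->result_label);  /* the recognized string */
        }
    }
}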

TLT CONVERTER

Hi,
I get an error while running tlt-converter. Although I changed "-p" to "-d":

./tlt-converter -k nvidia_tlt -d image_input, 1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us_onnine_b
terminate called after throwing an instance of 'std::invalid_argument'
  what():  stoi
Aborted (core dumped)

Can you help?
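The stoi exception comes from the space after "image_input," — the converter splits the dimension string and then tries to convert a non-numeric token to an integer. The profile string belongs to -p (not -d) and must contain no spaces; a sketch (output engine name illustrative):

./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us.engine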

Are spaces detectable?

The sample image in the README is misleading: in fact, that plate would be detected as NV12345. Note the missing “spaces”. This might not be an issue for US plates, but it is for EU plates.

I tried the model on an EU (German) video feed. I was surprised that it detects EU number plates perfectly, but there is a big problem: the loss of the “spaces” causes ambiguities.

For instance, a Berlin number plate can look like “B NO 1234”, while a Bonn number plate could look like “BN O 1234”. With the spaces lost, both are reported as “BNO1234”, which is ambiguous.

Is there anything which can be done to tackle this?

tao-converter

Hello everyone. The following error occurs when I use tao-converter for conversion:
[ERROR] ../builder/myelin/myelinBuilder.cpp (418) - Myelin Error in operator(): 1 (myelinVersionMismatch : Compiled assuming that device 0 was SM 75, but device 0 is SM 0.)
My environment is:
deepstream-app version 5.1.0
DeepStreamSDK 5.1.0
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.1
TensorRT Version: 7.2
cuDNN Version: 8.1
libNVWarp360 Version: 2.0.1d3
The GPU is a Tesla T4.
What should I do now?
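"device 0 is SM 0" suggests the converter failed to query the GPU at all, which points at the CUDA driver/runtime mismatch visible in this environment (driver 11.4 vs runtime 11.1) rather than a genuinely different device. It is worth confirming the conversion runs on the T4 itself, with a tao-converter build matching the installed TensorRT 7.2:

nvidia-smi                    # confirm the T4 is visible and the driver version
dpkg -l | grep -i tensorrt    # confirm the installed TensorRT matches the converter build you downloaded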

Deploying this project on a Jetson Orin Nano 4 GB

I followed the steps in the GitHub README for deployment, but because my Jetson Orin Nano is the 4 GB version and has insufficient memory, TensorRT couldn't convert the model to an engine file. So I copied the entire modified DeepStream LPR app project, including the engine file, from a Jetson AGX Orin to the Jetson Orin Nano and ran it directly:

./deepstream-lpr-app 2 2 0 infer ~/889_1699793313.mp4 output.264

And it worked!
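A caveat on why this works: serialized TensorRT engines only load on the same GPU architecture and TensorRT version they were built with. AGX Orin and Orin Nano share the SM 8.7 GPU architecture, so the copy is valid as long as both devices run the same JetPack/TensorRT; worth verifying before copying:

python3 -c "import tensorrt as trt; print(trt.__version__)"   # run on both devices; versions must match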

Config for latest LPDNet, pruned_v2.2

Hello, I am trying to use the latest LPDNet, pruned_v2.2.
However, when I run the model with my config file, I get this error:
Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:726> [UID = 10]: Failed to parse bboxes using custom parse function Mismatch in the number of output buffers.Expected 4 output buffers, detected in the network :2

My config is the same as the example:
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
Which custom lib matches the latest model?
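The error hints that the downloaded pruned_v2.2 file is the DetectNet_v2 variant of LPDNet (two output tensors, coverage and bbox), while NvDsInferParseCustomBatchedNMSTLT expects the four BatchedNMS outputs of the YOLOv4-tiny variant. A hedged sketch of the DetectNet_v2-style config (output names as in the original LPD model; drop the custom parser):

output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
# remove parse-bbox-func-name and custom-lib-path so nvinfer's built-in DetectNet_v2 parser is used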

Cannot find binding of given name: output_cov/Sigmoid

Hi,
Following the instructions in the README, I still encounter this error after converting the model with tlt-converter. How can I solve it?
Opening in BLOCKING MODE

Using winsys: x11 
0:00:04.895485739  4539     0x31cbb860 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 2]: deserialized trt engine from :/home/nx/code/yolov3_tlt/./models/LPR/lpr_ch_onnx_b16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 4x3x48x96       Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:04.895800298  4539     0x31cbb860 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 2]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:04.895898249  4539     0x31cbb860 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 2]: Could not find output layer 'output_cov/Sigmoid' in engine
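Note which engine was actually deserialized: lpr_ch_onnx_b16.engine is the recognizer, yet the config is looking for the detector outputs output_bbox/BiasAdd and output_cov/Sigmoid. That pattern suggests the LPD config group points at the LPR engine file. Worth checking that each GIE references its own engine (paths illustrative):

# LPD (secondary detector) config:
model-engine-file=../models/LP/LPD/ch_lpd_pruned.etlt_b16_gpu0_fp16.engine
# LPR (secondary classifier) config:
model-engine-file=../models/LPR/lpr_ch_onnx_b16.engine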

wrong value "0:ROI enable" in ./deepstream-lpr-app -h

./deepstream-lpr-app -h
Usage: ./deepstream-lpr-app [1:us model|2: ch_model] [1:file sink|2:fakesink|3:display sink] [0:ROI disable|0:ROI enable] <In mp4 filename> <in mp4 filename> ... <out H264 filename>

It should be "1:ROI enable".

How to use an RTSP server as input

I tried to use RTSP as input in lpr_app_infer_us_config.yml:

source-list:
  use-nvmultiurisrcbin: 1
  list: rtsp://192.168.11.244:8554/media.smp

source-attr-all:
  enable: 1
  type: 3
  num-sources: 1
  gpu-id: 0
  cudadec-memtype: 0
  latency: 100
  rtsp-reconnect-interval-sec: 0
.........................

When I run the command ./deepstream-lpr-app lpr_app_infer_us_config.yml, it shows an error like this:

ERROR from element file_src_0: Resource not found.
Error details: gstfilesrc.c(532): gst_file_src_start (): /GstPipeline:pipeline/GstFileSrc:file_src_0:
No such file "rtsp://192.168.11.244:8554/media.smp"
Returned, stopping playback
Average fps 0.000233
Totally 0 plates are inferred
Deleting pipeline

Please suggest how to use an RTSP server as the input of deepstream_lpr_app. Thank you.

Note: I have tested this RTSP server separately; it works properly.
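The error shows a GstFileSrc was created for the RTSP URI, i.e. this app version builds its file-source pipeline from the list entries rather than a network source, so the YAML alone may not be enough. Before patching the source bin, it helps to confirm the stream decodes in plain GStreamer (uridecodebin handles rtsp:// URIs):

gst-launch-1.0 uridecodebin uri=rtsp://192.168.11.244:8554/media.smp ! fakesink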


make error

make all
make[1]: Entering directory '/home/abdo/Downloads/deepstream_lpr_app test/ddd/deepstream_lpr_app-master/nvinfer_custom_lpr_parser'
g++ -o libnvdsinfer_custom_impl_lpr.so nvinfer_custom_lpr_parser.cpp -Wall -Werror -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -Wl,--start-group -lnvinfer -lnvparsers -Wl,--end-group
collect2: fatal error: ld terminated with signal 11 [Segmentation fault], core dumped
compilation terminated.
Makefile:37: recipe for target 'libnvdsinfer_custom_impl_lpr.so' failed
make[1]: *** [libnvdsinfer_custom_impl_lpr.so] Error 1
make[1]: Leaving directory '/home/abdo/Downloads/deepstream_lpr_app test/ddd/deepstream_lpr_app-master/nvinfer_custom_lpr_parser'
Makefile:2: recipe for target 'all' failed
make: *** [all] Error 2

===========================

DeepStream 6.0
JetPack 4.6.1
Jetson Nano
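ld dying with signal 11 during the final link is often memory pressure on a Nano, and occasionally stale object files; two cheap checks before digging deeper:

make clean && make   # rule out corrupted intermediate objects
free -h              # check headroom during the link; add swap if it is near zero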

Create LPR engine file

Hi, I am working on a license plate recognition problem. When I run the DeepStream app I face the following issue:

Starting pipeline

0:00:00.209260524 226 0x1a87b20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: ShapedWeights.cpp:173: Weights td_dense/kernel:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
python3: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed.
Aborted (core dumped)

I am using DS 6.0. Can anyone please help me solve this issue?
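This is the same Myelin/cuBLASLt assertion reported above and tends to mean the cuBLASLt library TensorRT expects is missing or mismatched in the environment. A quick check that the loader can see it:

ldconfig -p | grep libcublasLt
# if absent, install the CUDA math libraries or build the engine in the -devel DeepStream image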

Hello

Can I replace the car detection model, the LPD model, and the LPR model with my own models? I want to replace the car detection model with a high-speed rail model that I trained with YOLO.
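In general yes: the primary detector is just an nvinfer config, so it can point at your own YOLO engine as long as you also supply the matching bbox parser library. A hedged sketch of the keys involved (paths and parser name depend on how your model was exported; NvDsInferParseCustomYoloV3 is the parser from DeepStream's objectDetector_Yolo sample):

model-engine-file=../models/your_yolo/yolo.engine
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=/path/to/libnvdsinfer_custom_impl_Yolo.so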

cannot find -lnvds_yml_parser

cc -o deepstream-lpr-app deepstream_lpr_app.o deepstream_nvdsanalytics_meta.o ds_yml_parse.o `pkg-config --libs gstreamer-1.0` -L/opt/nvidia/deepstream/deepstream/lib/ -lnvdsgst_meta -lnvds_meta -lm -lstdc++ -lnvds_yml_parser -lyaml-cpp -lgstrtspserver-1.0 -Wl,-rpath,/opt/nvidia/deepstream/deepstream/lib/
/usr/bin/ld: cannot find -lnvds_yml_parser
collect2: error: ld returned 1 exit status
Makefile:74: recipe for target 'deepstream-lpr-app' failed
make[1]: *** [deepstream-lpr-app] Error 1
make[1]: Leaving directory '/media/csitc/M2/projects/deepstream_lpr_app/deepstream-lpr-app'
Makefile:2: recipe for target 'all' failed
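libnvds_yml_parser.so only ships with newer DeepStream releases, so building the master branch against an older install fails exactly like this. Check whether the library exists; if not, either upgrade DeepStream or build the repo tag that matches your DS version:

ls /opt/nvidia/deepstream/deepstream/lib/ | grep yml
# no output => your DeepStream predates the YAML parser library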

How to save a separate output file for each source

According to lpr_app_infer_us_config.yml, please advise how to save a separate output file for each source in the source list. Thank you.

Note: I also checked the deepstream_lpr_app command line; it seems to save the output to only one file.

Usage: ./deepstream-lpr-app [1:us model|2: ch_model] [1:file sink|2:fakesink|3:display sink] [0:ROI disable|0:ROI enable] [infer|triton|tritongrpc] <In mp4 filename> <in mp4 filename> ... <out H264 filename>

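The shipped pipeline muxes all sources into one batch and encodes a single stream, hence the single file. Per-source files require demuxing the batch with nvstreamdemux and giving each branch its own encoder and sink. An untested gst-launch sketch of the shape (filenames illustrative; the app itself would need the same restructuring in code):

gst-launch-1.0 \
filesrc location=a.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \
filesrc location=b.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_1 \
nvstreammux name=m batch-size=2 width=1280 height=720 ! nvstreamdemux name=d \
d.src_0 ! queue ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! filesink location=out_0.h264 \
d.src_1 ! queue ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! filesink location=out_1.h264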

Is this sample supposed to run under DS7.0?

I am running it in a Triton multiarch Docker container.

It builds the engine, but then...

0:00:12.523514275 96792 0x55b79b26d960 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /root/deepstream_lpr_app/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b4_gpu0_int8.engine
0:00:12.525469904 96792 0x55b79b26d960 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:trafficamnet_config.txt sucessfully
Running...
qtdemux pad video/x-h264
qtdemux pad video/x-h264
h264parser already linked. Ignoring.
h264parser already linked. Ignoring.
Frame Number = 0 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
Segmentation fault (core dumped)

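Since the crash happens on the first frame after inference starts, a backtrace would show whether it is the custom LPR parser, the tracker, or the app itself; running under gdb is the quickest way to get one (arguments illustrative):

gdb --args ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 out.h264
(gdb) run
(gdb) bt   # after the SIGSEGV, print the backtrace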
