
deepstream-yolo-pose's People

Contributors

yunghuihsu


deepstream-yolo-pose's Issues

yolov8-pose tritonserver infer

Hello! Thank you for sharing this project.

I am running yolov8-pose with Triton Inference Server. In the
pose_src_pad_buffer_probe function, I confirmed that an error occurs at this line:
out[..., :4] = map_to_zero_one(out[..., :4]). Is there any solution?
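For reference, a minimal sketch of the kind of check that helps narrow this down. The map_to_zero_one body below is reconstructed from its name and usage, not copied from the repository, and the tensor layout is an assumption; if the Triton output arrives with an unexpected shape or dtype, the in-place assignment on that line will raise.

    import numpy as np

    def map_to_zero_one(x, in_min=0.0, in_max=640.0):
        # Hypothetical helper: rescale pixel-space box coordinates into [0, 1].
        return (x - in_min) / (in_max - in_min)

    # Assumed layout: (num_detections, 56) for YOLOv8-pose (4 box + 1 conf + 17*3 keypoints).
    out = np.random.rand(10, 56).astype(np.float32) * 640.0
    print(out.shape, out.dtype)  # inspect what Triton actually returned before assigning
    out[..., :4] = map_to_zero_one(out[..., :4])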

No output

Hi, I tried running the Python app and it works; at least I can see the **PERF** status and it doesn't crash. I want to see the results on my display, but I don't see them. Normally, when using the C++ apps, we can set the sink and tiled display directly in the config file, but how do I do the same here?

Thank you in advance for your help.
Ayan
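In the Python apps there is no config-file [sink] group; the display elements are created in code. Below is a minimal sketch of what a local display branch typically looks like, assuming standard DeepStream plugins and illustrative property values (none of this is taken from this repository):

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.Pipeline.new("display-pipeline")

    # Tiled display, on-screen drawing, and an EGL sink to render locally.
    tiler = Gst.ElementFactory.make("nvmultistreamtiler", "tiler")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    sink = Gst.ElementFactory.make("nveglglessink", "display-sink")

    tiler.set_property("rows", 1)
    tiler.set_property("columns", 1)
    tiler.set_property("width", 1280)
    tiler.set_property("height", 720)
    sink.set_property("sync", 0)

    for element in (tiler, nvosd, sink):
        pipeline.add(element)
    # ...link the upstream elements (pgie -> tracker) -> tiler -> nvosd -> sink

On Jetson devices, nv3dsink may be a useful alternative sink if nveglglessink is not available in your setup.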

cannot export the onnx file

Hello Yung-Hui,
I tried to use the same code to export the yolov8_pose.onnx file.

yolo export model=yolov8s-pose.pt format=onnx device=0 \
          imgsz=640 \
          half=true \
          dynamic=true \
          simplify=true

But I get the error shown below:
Ultralytics YOLOv8.0.131 🚀 Python-3.8.12 torch-1.12.0a0+2c916ef CUDA:0 (NVIDIA GeForce RTX 3070, 8191MiB)
YOLOv8s-pose summary (fused): 187 layers, 11615724 parameters, 0 gradients, 30.2 GFLOPs

PyTorch: starting from yolov8s-pose.pt with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 56, 8400) (22.4 MB)

ONNX: starting export with onnx 1.10.1 opset 14...
ONNX: export failure ❌ 0.6s: "slow_conv2d_cpu" not implemented for 'Half'
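For what it's worth, that message usually means a convolution was traced on the CPU while the weights were already in FP16 ('Half' has no CPU conv kernel), so a common workaround is to export in FP32 (drop half=true) and let TensorRT build the FP16 engine later. A hedged sketch using the Ultralytics Python API, equivalent to the CLI call above (not verified against this exact version):

    from ultralytics import YOLO

    # Export in FP32; TensorRT can still build an FP16 engine from an FP32 ONNX file.
    model = YOLO("yolov8s-pose.pt")
    model.export(format="onnx", imgsz=640, dynamic=True, simplify=True, device=0)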

Custom model?

Is this framework able to run a custom model with more than one class? It was trained with YOLOv8-pose, so the structure won't change much besides the number of classes.

Unable to run on Orin Nano with deepstream 6.3

Thank you for the detailed instructions. I faced an issue while running the inference.

Ubuntu 20.04
DeepStream 6.3
Jetson Orin Nano

'''
  File "deepstream_YOLOv8-Pose_rtsp.py", line 563, in
    sys.exit(main(stream_path))
  File "deepstream_YOLOv8-Pose_rtsp.py", line 415, in main
    set_tracker_config("configs/config_tracker.txt", tracker)
  File "/home/user/deepstream-yolo-pose/utils/utils.py", line 161, in set_tracker_config
    tracker.set_property('enable_batch_process', tracker_enable_batch_process)
TypeError: object of type `GstNvTracker' does not have property `enable_batch_process'
'''
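The traceback shows that the nvtracker element shipped with DeepStream 6.3 no longer exposes the enable_batch_process property that the script sets. A defensive sketch, assuming the goal is simply to skip properties the installed plugin does not have (the helper name is made up, not part of the repository):

    def set_tracker_property_if_present(tracker, name, value):
        # Only set properties the installed nvtracker actually exposes;
        # e.g. 'enable_batch_process' is missing in newer DeepStream releases.
        try:
            tracker.set_property(name, value)
        except TypeError:
            print(f"nvtracker has no property '{name}'; skipping (DeepStream version difference)")

    # Inside set_tracker_config():
    # set_tracker_property_if_present(tracker, 'enable_batch_process', tracker_enable_batch_process)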

Error with gst-resource-error-quark

Hello Yunghui,
I got the error shown below:
Error: gst-resource-error-quark: Not found (3): gstrtspsrc.c(6536): gst_rtspsrc_send (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Not Found (404)
[NvMultiObjectTracker] De-initialized

I used this script with an mp4 file and it worked well, but it fails with the RTSP link. Could you give me some advice?
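The 404 is returned by the RTSP server itself (gst_rtspsrc_send reports Not Found), so the stream path in the URL is likely wrong rather than anything in the pipeline. A minimal, hedged way to probe the URL with the same GStreamer stack before running the full app; the URL below is a placeholder:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    # Placeholder URL: substitute the stream that returns 404 in the app.
    uri = "rtsp://192.168.1.10:554/stream1"
    pipeline = Gst.parse_launch(f"rtspsrc location={uri} ! fakesink")
    pipeline.set_state(Gst.State.PLAYING)
    msg = pipeline.get_bus().timed_pop_filtered(
        5 * Gst.SECOND,
        Gst.MessageType.ERROR | Gst.MessageType.ASYNC_DONE)
    print(msg.type if msg else "timed out")  # ERROR here means the URL/path is rejected
    pipeline.set_state(Gst.State.NULL)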

Question: Some confusion about your code.

Hello Yunghui:

    # Add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    # either nvosd.get_static_pad("sink") or pgie.get_static_pad("src") works
    pgiepad = pgie.get_static_pad("src")
    if not pgiepad:
        sys.stderr.write(" Unable to get pgiepad src pad of tracker \n")
    pgiepad.add_probe(Gst.PadProbeType.BUFFER, pose_src_pad_buffer_probe, 0)

    osdpad = nvosd.get_static_pad("sink")
    if not osdpad:
        sys.stderr.write(" Unable to get osdpad sink pad of tracker \n")
    osdpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    pgie_src_pad = pgie.get_static_pad("src")
    if not pgie_src_pad:
        sys.stderr.write(" Unable to get src pad \n")
    else:
        pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, pose_src_pad_buffer_probe, 0)

In your pipeline, pose_src_pad_buffer_probe is added twice (once via pgiepad and once via pgie_src_pad, both taken from the pgie "src" pad). Is that necessary?

Many many thanks!
PreddyDaddy

cannot build the engine with TensorRT 8.6.0.2 on Orin AGX

The model conversion fails with:
[E] Error[10]: Could not find any implementation for node /model.22/Range_2.

It does work on other devices, but of course the engine is not transferable.
This is on a device running JetPack 6.0
