
yolov7_openvino_cpp-python's Introduction

YOLOv7_OpenVINO

This repository demonstrates how to deploy an official YOLOv7 pre-trained model with the OpenVINO Runtime API.

1. Install requirements

1.1 Python

  $ pip install -r python/requirements.txt

1.2 C++ (Ubuntu)

Please follow the Guides to install OpenVINO and OpenCV

2. Prepare the model

Download the YOLOv7 pre-trained weights from the YOLOv7 repository.

3. Export the ONNX model and convert it to OpenVINO IR

  $ git clone git@github.com:WongKinYiu/yolov7.git
  $ cd yolov7
  $ pip install -r requirements.txt
  $ python export.py --weights yolov7.pt
  $ ovc yolov7.onnx
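
As an alternative to the ovc command line, the conversion can also be done from Python. A minimal sketch, assuming OpenVINO 2023.1 or newer (the file names match the commands above):

    import openvino as ov

    # Convert the exported ONNX model to an OpenVINO model in memory
    ov_model = ov.convert_model("yolov7.onnx")

    # Save it as IR (.xml + .bin); weights are compressed to FP16 by default
    ov.save_model(ov_model, "yolov7.xml")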

4. Run inference

The input image can be found in YOLOv7's repository

4.1 Python

 $ python python/image.py -m path_to/yolov7.xml -i data/horse.jpg -d "CPU"

You can also try running the code with the Preprocessing API enabled for performance optimization.

 $ python python/image.py -m path_to/yolov7.xml -i data/horse.jpg -d "CPU" -p
  • -i = path to an image or video source;
  • -m = path to the IR .xml or .onnx file;
  • -d = device name, e.g. "CPU";
  • -p = enable the Preprocessing API;
  • -bs = batch size;
  • -n = number of infer requests.
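
For reference, python/image.py is built on the OpenVINO Runtime Python API. Below is a stripped-down sketch of the same flow; it is an illustration only and skips the repo's letterbox resize, BGR-to-RGB conversion, and NMS post-processing:

    import cv2
    import numpy as np
    import openvino as ov

    core = ov.Core()
    model = core.read_model("path_to/yolov7.xml")
    compiled = core.compile_model(model, "CPU")

    img = cv2.imread("data/horse.jpg")
    blob = cv2.resize(img, (640, 640)).astype(np.float32) / 255.0  # plain resize; the repo uses letterbox
    blob = blob.transpose(2, 0, 1)[np.newaxis]                     # HWC -> NCHW, add batch dimension

    result = compiled([blob])[compiled.output(0)]                  # raw YOLOv7 predictions
    print(result.shape)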

4.2 C++ (Ubuntu)

Compile the source code

  $ cd cpp
  $ mkdir build && cd build
  $ source ~/intel/openvino_2023.2/setupvars.sh
  $ cmake ..
  $ make

You can also uncomment the relevant lines in CMakeLists.txt to enable the Preprocessing API for performance optimization.

Run inference

 $ ./yolov7 path_to/yolov7.xml ../../data/horses.jpg 'CPU'

5. Results

[Result image: horse_res]

6. Run with webcam

You can also run the sample with a webcam for real-time detection:

$ python python/webcam.py -m path_to/yolov7.xml -i 0

Tip: you can switch the device name to "GPU" to improve performance.
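
For example, assuming webcam.py accepts the same -d option listed for image.py above:

  $ python python/webcam.py -m path_to/yolov7.xml -i 0 -d "GPU"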

7. Further optimization

Try this notebook (yolov7-optimization) and quantize your YOLOv7 model to INT8.
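
As a rough sketch of what the INT8 post-training quantization step looks like with NNCF's Python API (this is not the notebook's exact code; my_dataloader and transform_fn are placeholders for your own calibration data):

    import nncf
    import openvino as ov

    def transform_fn(data_item):
        # return a preprocessed NCHW float32 image for one calibration sample
        return data_item

    core = ov.Core()
    model = core.read_model("yolov7.xml")

    calibration_dataset = nncf.Dataset(my_dataloader, transform_fn)  # my_dataloader: your iterable of samples
    quantized_model = nncf.quantize(model, calibration_dataset)

    ov.save_model(quantized_model, "yolov7_int8.xml")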


yolov7_openvino_cpp-python's Issues

output processing is slow

Why does the output processing (yolov7 tiny) take so long compared to SSD_mobilenet?
It makes using it impractical...

Why does it need to go over 4 nested loops?

Use docker openvino/ubuntu20_dev:latest

./yolov7 yolov7.onnx horses.jpg 'CPU'

terminate called after throwing an instance of 'ov::Exception'
what(): vector::_M_range_check: __n (which is 1) >= this->size() (which is 1)
Aborted (core dumped)

Inference with 1280 images

Hello, I followed this notebook to convert the yolov7-e6 model to IR, then quantized the model on my own dataset with the dataloader's image size set to 1280:
https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/226-yolov7-optimization/226-yolov7-optimization.ipynb
When I run inference with the model using main_preprocessing.cpp, I get weird results where the bounding boxes do not line up with the objects.
[Screenshot from 2023-03-09 14-54-51]
All I changed was int img_h = 1280 and int img_w = 1280; do I need to change anything else? This didn't happen when I converted the yolov7.pt model and quantized it on the same dataset with image size 640.

Downloading YOLOv7 model

Do these directions in the repo need to be updated?

I can't help but notice that export.py no longer seems to exist in WongKinYiu/yolov7.git.

  $ git clone git@github.com:WongKinYiu/yolov7.git
  $ cd yolov7/models
  $ python export.py --weights yolov7.pt

Also, what were the directions for converting yolov7.pt to ONNX? I can't remember how to do that at the moment. Not the tiny version but the regular larger size; I could swear you used to be able to do that right in the WongKinYiu repo code.

[Bug] The line `img.transpose(2, 0, 1)` should be `img = img.transpose(2, 0, 1)`. NumPy's transpose operation does not support in-place assignment.

Hi, thanks for providing this repo.
I found a bug in yolov7.py.
It causes incorrect outputs to be produced.

The code in line 228 should be img = img.transpose(2, 0, 1) rather than img.transpose(2, 0, 1).

NumPy's transpose does not modify the array in place; it returns a transposed view that has to be assigned back.
Reference: https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.transpose.html
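
A quick demonstration of the difference: ndarray.transpose returns a transposed view and leaves the original array untouched, so the result must be assigned back.

    import numpy as np

    img = np.zeros((640, 640, 3), dtype=np.float32)

    img.transpose(2, 0, 1)        # returns a view; img itself is unchanged
    print(img.shape)              # (640, 640, 3)

    img = img.transpose(2, 0, 1)  # rebind the name to the transposed view
    print(img.shape)              # (3, 640, 640)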

Yolov7-seg support

Hello @OpenVINO-dev-contest,
thank you for this awesome work.
Does this script work with yolov7-seg? I have trained and exported my yolov7-seg model to ONNX format, but I can't run it with this script; the following error appears when using the Python script:

Traceback (most recent call last):
  File "python/image.py", line 23, in <module>
    yolov7_detector.infer_image(args.input)
  File "/home/mad/YOLOv7_OpenVINO_cpp-python/python/yolov7.py", line 237, in infer_image
    self.infer_queue.wait_all()
  File "/home/mad/YOLOv7_OpenVINO_cpp-python/python/yolov7.py", line 186, in postprocess
    output.append(self.sigmoid(infer_request.get_output_tensor(0).data[batch_id].reshape(-1, self.size[0]*3, 5+self.class_num)))
ValueError: cannot reshape array of size 2948400 into shape (19200,85)

Any help or insight into this matter would be appreciated.

Process multiple video feeds async

Hey Ethan,

For the Python version: would it be really hard to modify the code to process multiple video feeds at once? I'm thinking about tinkering around with this.

If it seems doable, could you process and keep track of detections per feed in the pipeline, like holding some sort of memory for people detected in feed 1, feed 2, feed 3, feed 4, etc., as frames from all feeds are processed asynchronously?

Thanks!
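
For what it's worth, here is a rough sketch of how multiple feeds could share one AsyncInferQueue, with the userdata argument tagging each request with its feed id so detections stay grouped per feed. The preprocess() call and the video paths are placeholders; real code would reuse the repo's letterbox and post-processing:

    from collections import defaultdict
    import cv2
    from openvino.runtime import Core, AsyncInferQueue

    core = Core()
    compiled = core.compile_model(core.read_model("yolov7.xml"), "CPU")
    queue = AsyncInferQueue(compiled, 4)        # 4 infer requests shared by all feeds
    detections = defaultdict(list)              # per-feed detection history

    def on_done(request, feed_id):
        # userdata carries the feed id, so results stay associated with their source
        detections[feed_id].append(request.get_output_tensor(0).data.copy())

    queue.set_callback(on_done)

    caps = {i: cv2.VideoCapture(src) for i, src in enumerate(["feed0.mp4", "feed1.mp4"])}
    while caps:
        for feed_id, cap in list(caps.items()):
            ret, frame = cap.read()
            if not ret:
                cap.release()
                del caps[feed_id]
                continue
            blob = preprocess(frame)                # placeholder for the repo's letterbox/normalize step
            queue.start_async({0: blob}, feed_id)   # tag the request with its feed id
    queue.wait_all()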

Python Run Issue

Dump preprocessor: Input "images" (color BGR):
User's input tensor: [1,640,640,3], [N,H,W,C], f32
Model's expected tensor: [1,3,640,640], [N,C,H,W], f32
Pre-processing steps (2):
convert color (RGB): ([1,640,640,3], [N,H,W,C], f32, BGR) -> ([1,640,640,3], [N,H,W,C], f32, RGB)
scale (255,255,255): ([1,640,640,3], [N,H,W,C], f32, RGB) -> ([1,640,640,3], [N,H,W,C], f32, RGB)
Implicit pre-processing steps (1):
convert layout [N,C,H,W]: ([1,640,640,3], [N,H,W,C], f32, RGB) -> ([1,3,640,640], [N,C,H,W], f32, RGB)
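
For context, a preprocessing setup along these lines would produce a dump like the one above. This is a sketch of OpenVINO's PrePostProcessor usage, not necessarily the exact code in this repo:

    from openvino.runtime import Core, Layout, Type
    from openvino.preprocess import PrePostProcessor, ColorFormat

    core = Core()
    model = core.read_model("yolov7.xml")

    ppp = PrePostProcessor(model)
    # Declare what the user-supplied tensor looks like: NHWC, float32, BGR (as read by OpenCV)
    ppp.input().tensor() \
        .set_element_type(Type.f32) \
        .set_layout(Layout("NHWC")) \
        .set_color_format(ColorFormat.BGR)
    # Explicit steps: BGR -> RGB, then scale by 255
    ppp.input().preprocess() \
        .convert_color(ColorFormat.RGB) \
        .scale([255.0, 255.0, 255.0])
    # The model itself expects NCHW; the layout conversion is inserted implicitly
    ppp.input().model().set_layout(Layout("NCHW"))
    model = ppp.build()

    compiled = core.compile_model(model, "CPU")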

Yolov7 Tiny: setting confidence threshold

Hello,

I'm testing out tiny YOLOv7 in the IoT app I made, which runs much faster than the non-tiny YOLOv7 on CPU: ~4 FPS with non-tiny YOLOv7 and 8-10 FPS with tiny YOLOv7 and async inference.

The problem I have is that at some angles of people in the conference room I get poor object detection; it's very flaky. Would you have any suggestions to try?

[screenshot]

Even lowering the confidence to self.conf_thres = 0.1 still yields poor, flaky classification, where the person in the screenshot is circled in red. I adjusted the parameter in the YOLOV7_OPENVINO class in yolov7.py:

class YOLOV7_OPENVINO(object):
    def __init__(self, model_path, device, pre_api, batchsize, nireq, use_flask):
        # set the hyperparameters
        self.classes = [
        "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
        "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
        "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
        "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
        "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
        "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
        "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
        "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
        "hair drier", "toothbrush"
       ]
        self.batchsize = batchsize
        self.use_flask = use_flask
        self.routes = webapp_utils.WebAppUtils()
        self.img_size = (640, 640) 
        self.conf_thres = 0.1

I'm still interested in quantizing the YOLOv7 (non-tiny) model and comparing overall performance, but I'm having lots of issues getting the code to run. With the notebook you linked in #5, the kernel dies halfway when I run it on Ubuntu, and I can't get the code to run on Windows 10, but I'm still trying.

getting setup

Hi,

In the README, is there a choice to do either Python or C++ on Ubuntu? Or do I need to do both the Python and the C++ steps on Ubuntu?

[screenshot]

Cool repo, thanks for making this...

The FPS I'm getting varies too much

[screenshots]

Sometimes I'm getting 3 FPS, then sometimes it gives 10 FPS.

When I print it in cmd it shows ~33 FPS. I applied simple logic for getting FPS, because your FPS code is not working:

    import time

    fps = 0
    t1 = time.time()
    while True:
        # ... run inference on one frame here ...
        fps = (fps + (1. / (time.time() - t1))) / 2  # running average of instantaneous FPS
        t1 = time.time()
        # fps = (stop_time - start_time)
        print("fps", fps)

If possible, can you add FPS code and code for saving the resulting video? It would be really great for benchmarking FPS.
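
For reference, here is a simple sketch of average-FPS measurement plus saving the annotated video with OpenCV; the input path is a placeholder and the detection/drawing step is left as a comment:

    import time
    import cv2

    cap = cv2.VideoCapture("input.mp4")   # placeholder path
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter("result.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))

    frames, start = 0, time.time()
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # ... run detection and draw boxes on `frame` here ...
        out.write(frame)                  # save the annotated frame
        frames += 1

    print("average fps: {:.2f}".format(frames / (time.time() - start)))
    cap.release()
    out.release()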

fps code is not working

Once it finishes running, I'm getting this error.

I'm loading a video into it:

  File "python/webcam.py", line 25, in <module>
    yolov7_detector.infer_cam(args.input)
  File "D:\openVINO\YOLOv7_OpenVINO_grid\python\yolov7.py", line 295, in infer_cam
    img = self.letterbox(frame, self.img_size)
  File "D:\openVINO\YOLOv7_OpenVINO_grid\python\yolov7.py", line 60, in letterbox
    shape = img.shape[:2]  # current shape [height, width]
AttributeError: 'NoneType' object has no attribute 'shape'
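
This usually means the video ended: cv2.VideoCapture.read() returns (False, None) once there are no frames left, and that None then gets passed into letterbox(). A minimal sketch of the guard (the path is a placeholder; in the repo this check would sit inside infer_cam):

    import cv2

    cap = cv2.VideoCapture("input.mp4")            # placeholder path
    while True:
        ret, frame = cap.read()
        if not ret or frame is None:               # end of file (or a failed grab)
            break                                  # stop instead of passing None to letterbox()
        # img = self.letterbox(frame, self.img_size)   # the repo's preprocessing would run here
    cap.release()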

webcam.py

Hi, would you have any tips for getting up and running with webcam.py? I'm running this on Windows 10 with the PyPI version of OpenVINO installed...

[screenshot]

If I comment out the imshow in yolov7.py, the code seems to run:

    def draw(self, img, boxinfo):
        for xyxy, conf, cls in boxinfo:
            self.plot_one_box(xyxy, img, label=self.classes[int(cls)], color=self.colors[int(cls)], line_thickness=2)
        #cv2.imshow('YOLOv7 results', img) 
        return 0

But it doesn't appear to print the FPS when exiting (Ctrl-C in the terminal), which I was hoping to use to monitor frame rates with the code inside the infer_cam function:

            if c==27: 
                self.infer_queue.wait_all()
                break 
        cap.release() 
        cv2.destroyAllWindows() 
        end_time = time.time()
        # Calculate the average FPS
        fps = count / (end_time - start_time)
        print("throughput: {:.2f} fps".format(fps))
