
Detection model for pedestrian experiments

Brief introduction

This model is based on Keras-yolov3. To detect the red and blue caps worn in various pedestrian flow experiments, we trained the model on images annotated with LabelImg. The current model can help extract pedestrians' positions from experimental video.


Download the trained file

Since the trained weights file is too large (235 MB) to upload to GitHub, you can download it via this link: https://pan.baidu.com/s/179-sSklF45q5O9kGuWz_Ug

Then put it into the folder /logs/000.

Quick Start

It is very easy to use from the command line:

python yolo_video.py --image

which is exactly the same as in Keras-yolov3. You may see many warnings, but they do not matter. Then input the name of the image, and you will get two results:

The first is "_result.jpg", which shows the detection results of our model. Here is an example with 175 people in the experiment (on a circular road):

The second is "_result.csv", which saves the positions of the detected pedestrians. The first column gives the color (red or blue). The second column gives the probability, which is not drawn on the image. The remaining four columns are the bounding-box coordinates. To get the center position of a pedestrian, average columns 3 and 5, and columns 4 and 6.
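As a minimal sketch of that calculation (assuming the column layout described above: color, probability, then four bounding-box values; the function name is ours, not part of this repository):

```python
import csv

def centers_from_csv(path):
    """Read a *_result.csv file and return (color, x_center, y_center) tuples.

    Assumes the column order: color, probability, then four bounding-box
    values; the center is the average of columns 3 & 5 and columns 4 & 6.
    """
    rows = []
    with open(path, newline="") as f:
        for color, prob, c3, c4, c5, c6 in csv.reader(f):
            x = (float(c3) + float(c5)) / 2.0
            y = (float(c4) + float(c6)) / 2.0
            rows.append((color, x, y))
    return rows
```

This gives the head position of each detected pedestrian, which is usually what you want when the video is recorded from above (e.g. by UAV).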

To end the program, just input exit when you are asked for a file name.

PS: On Windows there is an even easier way to run it: just double-click the file named yolo_simple.py. The results are the same.

Some notes

  1. In this program, the labels (including the probabilities) are not drawn on the image. If they were drawn, the results under high-density conditions would be quite cluttered, as below:

Nevertheless, you can reproduce that situation by deleting the "#" in lines 166-170 of yolo.py; these lines draw the labels.

  2. The current thresholds are Score=0.5 and IOU=0.4. To change them, edit lines 27 and 28 of yolo.py (the default values of class YOLO()).
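For intuition, here is an illustrative sketch of what the two thresholds do (this is not the code in yolo.py): the score threshold discards low-confidence boxes, and the IOU threshold controls how much two boxes may overlap before the weaker one is suppressed. The (x1, y1, x2, y2) box format is an assumption for this sketch.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_detections(dets, score=0.5, iou_thresh=0.4):
    """Drop boxes below the score threshold, then greedily suppress
    any box whose IOU with an already-kept box exceeds iou_thresh."""
    kept = []
    for s, box in sorted((d for d in dets if d[0] >= score), reverse=True):
        if all(iou(box, k[1]) <= iou_thresh for k in kept):
            kept.append((s, box))
    return kept
```

Raising the score threshold removes uncertain detections (fewer false positives, more misses); lowering the IOU threshold suppresses overlapping boxes more aggressively, which matters under high density.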

  3. By default, at most 200 objects per type can be detected in one image. This is a hyperparameter of Yolo v3 (the default value in Keras-yolov3 is 20, which is not clearly mentioned in the documentation). We think 200 is enough for most pedestrian flow experiments. If you have more caps in one experiment, you should edit two files:
    (1) For training: "max_boxes" in line 36 of /yolo3/utils.py, the default value of get_random_data().
    (2) For testing: "max_boxes" in line 191 of /yolo3/model.py, the default value of yolo_eval().
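The effect of this cap can be illustrated in isolation (a sketch, not the Keras-yolov3 code): for each class, only the max_boxes highest-scoring detections survive, so any caps beyond that limit are silently dropped.

```python
from collections import defaultdict

def cap_per_class(detections, max_boxes=200):
    """Keep at most max_boxes highest-scoring detections per class.

    detections: list of (class_name, score, box) tuples.
    """
    by_class = defaultdict(list)
    for cls, score, box in detections:
        by_class[cls].append((score, box))
    kept = []
    for cls, items in by_class.items():
        items.sort(key=lambda t: t[0], reverse=True)
        kept.extend((cls, s, b) for s, b in items[:max_boxes])
    return kept
```

This is why a too-small max_boxes shows up as pedestrians simply missing from crowded frames rather than as an error message.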

  4. The current model only detects red and blue caps, since these two colors are frequently used in pedestrian flow experiments. If you want to detect pedestrians with caps of other colors, just train the Yolo v3 model yourself.

PS: if you can read Chinese, the following links may be very helpful for training the model:
https://blog.csdn.net/u012746060/article/details/81183006
https://blog.csdn.net/weixin_45488478/article/details/98397947

  5. The current model only detects the positions of pedestrians. In the future, we will try to track their trajectories and make this model more useful.

PS: We have published some papers about pedestrian flow experiments recorded by UAV; you may find them interesting and useful:

Jin et al., 2018. Microscopic events under high-density condition in uni-directional pedestrian flow experiment
https://www.sciencedirect.com/science/article/pii/S0378437118304667
Jin et al., 2019. Single-file pedestrian flow experiments under high-density conditions
https://www.sciencedirect.com/science/article/pii/S0378437119309744

In the future, more papers will be published, in particular one about this model.


