
towards-realtime-mot's Introduction

Towards-Realtime-MOT

NEWS:

  • [2021.08.19] A pure C++ re-implementation by samylee. Helpful if you want to deploy JDE in your own project!
  • [2021.06.01] A nice re-implementation (and document) by Baidu PaddlePaddle team.
  • [2020.07.14] Our paper is accepted to ECCV 2020!
  • [2020.01.29] More models uploaded! The fastest one runs at around 38 FPS!
  • [2019.10.11] Training and evaluation data uploaded! Please see DATASET_ZOO.md for details.
  • [2019.10.01] Demo code and pre-trained model released!

Introduction

This repo is the codebase of the Joint Detection and Embedding (JDE) model. JDE is a fast and high-performance multiple-object tracker that learns the object detection task and the appearance embedding task simultaneously in a shared neural network. Technical details are described in our ECCV 2020 paper. Using this repo, you can simply achieve MOTA 64%+ on the "private" protocol of the MOT-16 challenge, at a near real-time speed of 22~38 FPS (note that this speed is for the entire system, including the detection step!).
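
A minimal schematic sketch of this idea (illustration only, not the repo's actual model code; all names below are made up): a single shared backbone whose feature map feeds both a detection head and an appearance-embedding head.

import torch
import torch.nn as nn

# Toy illustration of joint detection and embedding:
# one shared backbone, two task heads reading the same feature map.
class ToyJDE(nn.Module):
    def __init__(self, channels=256, num_anchors=4, emb_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.det_head = nn.Conv2d(channels, num_anchors * 5, 1)  # 4 box coords + objectness per anchor
        self.emb_head = nn.Conv2d(channels, emb_dim, 1)          # per-location appearance vector

    def forward(self, x):
        feat = self.backbone(x)  # computed once, shared by both tasks
        return self.det_head(feat), self.emb_head(feat)

det, emb = ToyJDE()(torch.randn(1, 3, 608, 1088))
print(det.shape, emb.shape)  # both heads share the backbone cost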

We hope this repo will help researchers/engineers develop more practical MOT systems. For algorithm development, we provide training data, baseline models and evaluation methods to make a level playing field. For application usage, we also provide a small video demo that takes raw videos as input without any bells and whistles.

Requirements

  • Python 3.6
  • Pytorch >= 1.2.0
  • python-opencv
  • py-motmetrics (pip install motmetrics)
  • cython-bbox (pip install cython_bbox)
  • (Optional) ffmpeg (used in the video demo)
  • (Optional) syncbn (compile and place it under utils/syncbn, or simply replace it with nn.BatchNorm; see the sketch after this list)
  • maskrcnn-benchmark (their GPU NMS is used in this project)
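
If you take the nn.BatchNorm route instead of compiling syncbn, a minimal sketch (the alias name is illustrative; check models.py for the actual import site):

import torch.nn as nn

# Hypothetical stand-in for the optional SyncBN dependency: wherever the code
# constructs SyncBN(num_channels), plain BatchNorm2d works for single-GPU runs,
# at the cost of per-GPU rather than cross-GPU batch statistics.
SyncBN = nn.BatchNorm2d
bn = SyncBN(256)  # e.g. for a 256-channel feature map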

Video Demo

Usage:

python demo.py --input-video path/to/your/input/video --weights path/to/model/weights
               --output-format video --output-root path/to/output/root

Docker demo example

docker build -t towards-realtime-mot docker/

docker run --rm --gpus all -v $(pwd)/:/Towards-Realtime-MOT -ti towards-realtime-mot /bin/bash
cd /Towards-Realtime-MOT;
python demo.py --input-video path/to/your/input/video --weights path/to/model/weights
               --output-format video --output-root path/to/output/root

Dataset zoo

Please see DATASET_ZOO.md for a detailed description of the training/evaluation datasets.

Pretrained model and baseline models

Darknet-53 ImageNet pretrained model: [DarkNet Official]

Trained models with different input resolutions:

Model          MOTA  IDF1  IDS   FP    FN     FPS   Link
JDE-1088x608   73.1  68.9  1312  6593  21788  22.2  [Google] [Baidu]
JDE-864x480    70.8  65.8  1279  5653  25806  30.3  [Google] [Baidu]
JDE-576x320    63.7  63.3  1307  6657  32794  37.9  [Google] [Baidu]

The performance is tested on the MOT-16 training set, just for reference. Running speed is tested on an Nvidia Titan Xp GPU. For a more comprehensive comparison with other methods, you can test on the MOT-16 test set and submit a result to the MOT-16 benchmark. Note that the results should be submitted to the private detector track.

Test on MOT-16 Challenge

python track.py --cfg ./cfg/yolov3_1088x608.cfg --weights /path/to/model/weights

By default the script runs evaluation on the MOT-16 training set. If you want to evaluate on the test set, add --test-mot16 to the command line. Results are saved as text files in $DATASET_ROOT/results/*.txt. You can also add the --save-images or --save-videos flag to obtain visualized results, which are saved in $DATASET_ROOT/outputs/.
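
For example, to evaluate on the test set and also save rendered videos (paths as in the command above):

python track.py --cfg ./cfg/yolov3_1088x608.cfg --weights /path/to/model/weights --test-mot16 --save-videos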

Training instruction

  • Download the training datasets.
  • Edit cfg/ccmcpe.json to configure the training/validation combinations. A dataset is represented by an image list; see data/*.train for an example.
  • Run the training script:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py 

We use 8x Nvidia Titan Xp GPUs to train the model, with a batch size of 32. You can adjust the batch size (and the learning rate together with it) according to how many GPUs you have. You can also train with a smaller image size, which brings faster inference, but note that the image size should be a multiple of 32 (the down-sampling rate); see the sketch below.
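
As a rough sketch of both adjustments (the reference learning rate below is a placeholder, not a value read from this repo's config; linear LR scaling is a common heuristic rather than a rule from this codebase):

# Hedged sketch: scale the LR linearly with batch size (common heuristic).
reference_batch, reference_lr = 32, 1e-2   # reference_lr is hypothetical; check train.py
my_batch = 8                               # e.g. a 2-GPU setup
my_lr = reference_lr * my_batch / reference_batch

# The network downsamples by 32, so both sides must be multiples of 32.
width, height = 864, 480
assert width % 32 == 0 and height % 32 == 0, "image size must be a multiple of 32"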

Train with custom datasets

Adding custom datasets is quite simple: all you need to do is organize your annotation files in the same format as our training sets. Please refer to DATASET_ZOO.md for the dataset format; a sketch of the expected layout follows.
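
As a quick orientation (this is our hedged reading of the format; DATASET_ZOO.md is authoritative), each image gets a label file with one line per box, and a *.train file is just a list of image paths:

# Hedged sketch of one label line (verify the column order against DATASET_ZOO.md):
# class_id identity x_center y_center width height, coordinates normalized to [0, 1]
class_id, identity = 0, 17                 # single 'person' class; per-track identity
x, y, w, h = 0.427, 0.512, 0.060, 0.310    # made-up box, normalized by image size
print(f"{class_id} {identity} {x:.6f} {y:.6f} {w:.6f} {h:.6f}")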

Related Resources

  • FairMOT: An improved method based on the JDE framework, SOTA performance.
  • CSTrack: Better disentangled detection/embedding heads for JDE.
  • JDE-Paddle: A nice re-implementation (and document) by Baidu PaddlePaddle team.
  • JDE-CPP: A pure C++ re-implementation by samylee. Helpful if you want to deploy JDE in your own project!

Acknowledgement

A large portion of the code is borrowed from ultralytics/yolov3 and longcw/MOTDT; many thanks for their wonderful work!

Citation

If you find this repo useful in your project or research, please consider citing it:

@article{wang2019towards,
  title={Towards Real-Time Multi-Object Tracking},
  author={Wang, Zhongdao and Zheng, Liang and Liu, Yixuan and Wang, Shengjin},
  journal={The European Conference on Computer Vision (ECCV)},
  year={2020}
}

towards-realtime-mot's People

Contributors

cclauss · falaktheoptimist · lyxlynn · partheshsoni · penolove · zhongdao


towards-realtime-mot's Issues

Undefined name: 'cpu_soft_nms' in utils/utils.py

flake8 testing of https://github.com/Zhongdao/Towards-Realtime-MOT on Python 3.8.0

$ flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics

./utils/utils.py:421:12: F821 undefined name 'cpu_soft_nms'
    keep = cpu_soft_nms(np.ascontiguousarray(dets, dtype=np.float32),
           ^
1     F821 undefined name 'cpu_soft_nms'
1

E901, E999, F821, F822 and F823 are the "showstopper" flake8 issues that can halt the runtime with a SyntaxError, NameError, etc. These five are different from most other flake8 issues, which are merely "style violations" -- useful for readability, but they do not affect runtime safety.

  • F821: undefined name name
  • F822: undefined name name in __all__
  • F823: local variable name referenced before assignment
  • E901: SyntaxError or IndentationError
  • E999: SyntaxError -- failed to compile a file into an Abstract Syntax Tree

Issue in demo.py

When I run demo.py, I am facing this issue:
2019-10-30 14:06:51 [INFO]: start tracking...
Lenth of the video: 43190 frames
2019-10-30 14:06:58 [INFO]: Processing frame 0 (100000.00 fps)
Segmentation fault (core dumped)

Can anybody help me?

No module named 'maskrcnn_benchmark'

I'm sorry, this may be a very naive question, but I couldn't run the demo because utils.py couldn't properly import maskrcnn_benchmark.layers.nms as nms.

$ python demo.py --input-video path/to/your/input/video --weights path/to/model/weights
Traceback (most recent call last):
File "demo.py", line 8, in <module>
from tracker.multitracker import JDETracker
File "/home/hoge/work/fisheye/Towards-Realtime-MOT/tracker/multitracker.py", line 10, in <module>
from utils.utils import *
File "/home/hoge/work/fisheye/Towards-Realtime-MOT/utils/utils.py", line 14, in <module>
import maskrcnn_benchmark.layers.nms as nms
ModuleNotFoundError: No module named 'maskrcnn_benchmark'

I tried to install it by following the INSTALL.md instructions at
https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/INSTALL.md
and I managed to install it properly.
But how should I edit the import statement import maskrcnn_benchmark.layers.nms as nms?

My current working tree structure is like this:
dir(fisheye)/
├──Towards-Realtime-MOT (git cloned dir)
├── maskrcnn-benchmark (git cloned dir)

In that case, I tried to edit the utils.py path for maskrcnn myself, with beginner Python knowledge, like below:
from ..maskrcnn-benchmark import maskrcnn_benchmark.layers.nms as nms

However, I failed to import with this error message:
from ..maskrcnn-benchmark import maskrcnn_benchmark.layers.nms as nms
SyntaxError: invalid syntax

Can someone help me, please?
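
A hedged pointer rather than a verified fix: Python import statements cannot contain hyphens or cross-repository relative paths, so instead of editing utils.py, the usual route is to build maskrcnn-benchmark as an installable package (this is what the INSTALL.md linked above does), after which the original import resolves from any directory:

cd maskrcnn-benchmark
python setup.py build develop   # registers the maskrcnn_benchmark package on the Python path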

Error in demo.py

python demo.py --input-video test/MOT16-11.mp4 --weights weights/jde.uncertainty.pt --output-format text --output-root results/
Namespace(cfg='cfg/yolov3.cfg', conf_thres=0.5, img_size=(1088, 608), input_video='test/MOT16-11.mp4', iou_thres=0.5, min_box_area=200, nms_thres=0.4, output_format='text', output_root='results/', track_buffer=30, weights='weights/jde.uncertainty.pt')

2019-10-15 10:35:17 [INFO]: start tracking...
Lenth of the video: 900 frames
2019-10-15 10:35:21 [INFO]: Processing frame 0 (100000.00 fps)
2019-10-15 10:35:21 [INFO]: too many indices for array
No result was generated. Why is there "too many indices for array"?

The future of MOT

@Zhongdao Hi, thank you for sharing this. I have been confused by how little information there is on multi-object tracking. Could you recommend your blog, or some researchers or blogs about multi-object tracking? Thank you very much!

CUHK-SYSU dataset error

"CUHKSYSU/images/s6933.jpg" file path is not exist, however "CUHKSYSU/labels_with_ids/s6933.txt" is exist , casuse mismatch, can you slove the problem, please?

dataloader can't be looped

def eval_seq(opt, dataloader, data_type, result_filename, save_dir=None, show_image=True, frame_rate=30):
    if save_dir:
        mkdir_if_missing(save_dir)
    tracker = JDETracker(opt, frame_rate=frame_rate)
    timer = Timer()
    results = []
    frame_id = 0
    for path, img, img0 in dataloader:
        if frame_id % 20 == 0:
            logger.info('Processing frame {} ({:.2f} fps)'.format(frame_id, 1./max(1e-5, timer.average_time)))

The dataloader can't be looped.

demo with 864x408 image

Hi there, excellent work with real-time MOT!
How can we run demo.py using 864x408 images as input? Do we need another trained model, something like JDE-864x408-uncertainty?

demo error

Traceback (most recent call last):
File "demo.py", line 8, in <module>
from tracker.multitracker import JDETracker
File "/HDD/lq/data/Towards-Realtime-MOT/tracker/multitracker.py", line 10, in <module>
from utils.utils import *
File "/HDD/lq/data/Towards-Realtime-MOT/utils/utils.py", line 13, in <module>
import maskrcnn_benchmark.layers.nms as nms
File "/HDD/lq/data/Towards-Realtime-MOT/maskrcnn-benchmark/maskrcnn_benchmark/layers/__init__.py", line 10, in <module>
from .nms import nms
File "/HDD/lq/data/Towards-Realtime-MOT/maskrcnn-benchmark/maskrcnn_benchmark/layers/nms.py", line 5, in <module>
from apex import amp
File "/usr/local/lib/python3.6/dist-packages/apex/__init__.py", line 18, in <module>
from apex.interfaces import (ApexImplementation,
File "/usr/local/lib/python3.6/dist-packages/apex/interfaces.py", line 10, in <module>
class ApexImplementation(object):
File "/usr/local/lib/python3.6/dist-packages/apex/interfaces.py", line 14, in ApexImplementation
implements(IApex)
File "/usr/lib/python3/dist-packages/zope/interface/declarations.py", line 485, in implements
raise TypeError(_ADVICE_ERROR % 'implementer')
TypeError: Class advice impossible in Python3. Use the @implementer class decorator instead.
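
A hedged note rather than a confirmed diagnosis: this particular TypeError usually means the unrelated "apex" package from PyPI is shadowing NVIDIA's apex; removing it and installing NVIDIA apex from source typically resolves it:

pip uninstall apex              # remove the unrelated PyPI package of the same name
git clone https://github.com/NVIDIA/apex
cd apex && pip install -v --no-cache-dir .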

ImportError

Traceback (most recent call last):
  File "demo.py", line 8, in <module>
    from tracker.multitracker import JDETracker
  File "/data/share7/Towards-Realtime-MOT-master/tracker/multitracker.py", line 13, in <module>
    from models import *
  File "/data/share7/Towards-Realtime-MOT-master/models.py", line 8, in <module>
    from utils.syncbn import SyncBN
ImportError: cannot import name 'SyncBN'

When I run the demo, I get this error. I see that there is no such file under the directory.

Help everybody

Hi all,
I spent two weeks getting demo.py to run, and I now have a little experience. If you need help, please feel free to contact me via Skype: quangthanh1987.
Thanks and best regards,

ImportError undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

Traceback (most recent call last):
  File "demo.py", line 8, in <module>
    from tracker.multitracker import JDETracker
  File "/data/shareJ/YDS/Towards-Realtime-MOT-master/tracker/multitracker.py", line 10, in <module>
    from utils.utils import *
  File "/data/shareJ/YDS/Towards-Realtime-MOT-master/utils/utils.py", line 14, in <module>
    import maskrcnn_benchmark.layers.nms as nms
  File "/data/shareJ/YDS/Towards-Realtime-MOT-master/maskrcnn-benchmark-master/maskrcnn_benchmark/layers/__init__.py", line 10, in <module>
    from .nms import nms
  File "/data/shareJ/YDS/Towards-Realtime-MOT-master/maskrcnn-benchmark-master/maskrcnn_benchmark/layers/nms.py", line 3, in <module>
    from maskrcnn_benchmark import _C
ImportError: /data/shareJ/YDS/Towards-Realtime-MOT-master/maskrcnn-benchmark-master/maskrcnn_benchmark/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

I got this error at runtime...

yolov3-tiny

Thanks for your work. Do you have a plan to upload the cfg file and weights for yolov3-tiny?

demo

Hello! When I tried to run the demo, I found that there is no weights file. Is the weights file generated by training? Can I generate the weights file by running train.py and then run the demo?

demo.py Error

I ran the following command, but several errors happen.

python demo.py --input-video ./results/test.mp4 --weights ./jde.1088x608.uncertainty.pt --output-format video --output-root ./results/

Unable to init server: Could not connect: Connection refused
Unable to init server: Could not connect: Connection refused

(demo.py:14943): Gdk-CRITICAL **: 01:48:57.208: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
Namespace(cfg='./cfg/yolov3.cfg', conf_thres=0.5, img_size=(1088, 608), input_video='./results/test.mp4', iou_thres=0.5, min_box_area=200, nms_thres=0.4, output_format='video', output_root='./results/', track_buffer=30, weights='./jde.1088x608.uncertainty.pt')

2019-10-28 01:48:57 [INFO]: start tracking...
Lenth of the video: 1500 frames
2019-10-28 01:48:57 [INFO]: 'module' object is not callable
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
[image2 @ 0x55815dee38c0] Could find no file with path './results/frame/%05d.jpg' and index in the range 0-4
./results/frame/%05d.jpg: No such file or directory

Error in demo.py

Thanks for this code. I'm trying to run demo.py by following all the given instructions, but I got the following two errors. Can you please help me with this?

2019-10-09 11:56:11 [INFO]: start tracking...
Lenth of the video: 4706 frames
2019-10-09 11:56:11 [INFO]: [Errno 2] No such file or directory: 'weights/latest.pt'
ffmpeg version 4.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
configuration: --prefix=******* --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1566210161358/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --disable-openssl --enable-avresample --enable-gnutls --enable-gpl --enable-hardcoded-tables --enable-libfreetype --enable-libopenh264 --enable-libx264 --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[image2 @ 0x55b2a58bdcc0] Could find no file with path 'results/frame/%05d.jpg' and index in the range 0-4
results/frame/%05d.jpg: No such file or directory

CPU support

Since the project requires maskrcnn-benchmark, it only works on GPU. Will there be a CPU version?

Problem loading the model

Hello, I ran into the following problem when loading the model:
2019-10-11 13:38:53 [INFO]: "filename 'storages' not found"
ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
[image2 @ 0x9bf940] Could find no file with path '/media/xuemin/CE1E49B3007246A9/results/frame/%05d.jpg' and index in the range 0-4
/media/xuemin/CE1E49B3007246A9/results/frame/%05d.jpg: No such file or directory

Do you know where the problem is? Thanks!

FPS test results gradually increase?

Hi,
I really thank you for this wonderful work.
I tested on MOT16-01 and found that the fps gradually increases, like 7.66, 10.00, 11.49, 12.32, ...
Is it related to the fps computing method or some other reason? Do you have any ideas?

Thank you for any reply!

2019-11-11 14:20:52 [INFO]: Processing frame 20 (7.66 fps)
2019-11-11 14:20:54 [INFO]: Processing frame 40 (10.00 fps)
2019-11-11 14:20:57 [INFO]: Processing frame 60 (11.49 fps)
2019-11-11 14:20:59 [INFO]: Processing frame 80 (12.32 fps)
2019-11-11 14:21:01 [INFO]: Processing frame 100 (12.82 fps)
2019-11-11 14:21:04 [INFO]: Processing frame 120 (12.98 fps)
2019-11-11 14:21:06 [INFO]: Processing frame 140 (13.22 fps)
2019-11-11 14:21:09 [INFO]: Processing frame 160 (13.51 fps)
2019-11-11 14:21:11 [INFO]: Processing frame 180 (13.70 fps)
2019-11-11 14:21:13 [INFO]: Processing frame 200 (13.89 fps)
2019-11-11 14:21:16 [INFO]: Processing frame 220 (13.97 fps)
2019-11-11 14:21:18 [INFO]: Processing frame 240 (14.19 fps)
2019-11-11 14:21:20 [INFO]: Processing frame 260 (14.34 fps)
2019-11-11 14:21:22 [INFO]: Processing frame 280 (14.51 fps)
2019-11-11 14:21:24 [INFO]: Processing frame 300 (14.54 fps)
2019-11-11 14:21:27 [INFO]: Processing frame 320 (14.56 fps)
2019-11-11 14:21:29 [INFO]: Processing frame 340 (14.64 fps)
2019-11-11 14:21:31 [INFO]: Processing frame 360 (14.67 fps)
2019-11-11 14:21:34 [INFO]: Processing frame 380 (14.72 fps)
2019-11-11 14:21:36 [INFO]: Processing frame 400 (14.84 fps)
2019-11-11 14:21:38 [INFO]: Processing frame 420 (14.90 fps)
2019-11-11 14:21:40 [INFO]: Processing frame 440 (14.94 fps)

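
A hedged explanation consistent with the eval_seq snippet quoted in an earlier issue, which prints 1./max(1e-5, timer.average_time): the figure is a running average over all frames so far, so fixed startup costs (model loading, first CUDA calls) drag the early numbers down, and the average climbs toward the steady-state rate. A toy illustration of the arithmetic:

# Made-up timings: the first frame costs 1.0 s, every later frame 0.05 s.
times = [1.0] + [0.05] * 199
for n in (20, 100, 200):
    avg = sum(times[:n]) / n
    print(f"frame {n}: {1.0 / avg:.2f} fps")  # climbs toward 1 / 0.05 = 20 fps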

error in demo.py

Thank you for your work. I encountered some errors while running demo.py; can you help me?

`File "/usr/local/lib/python3.5/dist-packages/apex/interfaces.py", line 10, in <module>
    class ApexImplementation(object):

  File "/usr/local/lib/python3.5/dist-packages/apex/interfaces.py", line 14, in ApexImplementation
    implements(IApex)

  File "/usr/local/lib/python3.5/dist-packages/zope/interface/declarations.py", line 483, in implements
    raise TypeError(_ADVICE_ERROR % 'implementer')

TypeError: Class advice impossible in Python3.  Use the @implementer class decorator instead.`

demo issue

I met the following error:
x/$ python demo.py --input-video input/MOT16-11.mp4 --weights weights/jde.1088x608.uncertainty.pt --output-format video --output-root output/
/home/x/anaconda3/envs/Towards_MOT/lib/python3.6/site-packages/sklearn/utils/linear_assignment_.py:21: DeprecationWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
DeprecationWarning)

Namespace(cfg='cfg/yolov3.cfg', conf_thres=0.5, img_size=(1088, 608), input_video='input/MOT16-11.mp4', iou_thres=0.5, min_box_area=200, nms_thres=0.4, output_format='video', output_root='output/', track_buffer=30, weights='weights/jde.1088x608.uncertainty.pt')

2019-10-28 20:05:44 [INFO]: start tracking...
The value: vw=960, vh=540 dw=1088 dh=608
Lenth of the video: 900 frames
2019-10-28 20:05:46 [INFO]: Processing frame 0 (100000.00 fps)
2019-10-28 20:05:47 [INFO]: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
ffmpeg: error while loading shared libraries: libopencv_core.so.2.4: cannot open shared object file: No such file or directory

How can I solve this? Thank you!

Successfully reproduced demo.py

The author's pedestrian tracking algorithm is excellent, better than YOLOv3 + Deep SORT, but the detection rate does not seem to be high?

here is a simpler demo, and a question about the detection model

Hi, thanks for your great contribution. I wrote a simpler demo, which is good for beginners. I am also wondering: is there a Faster R-CNN detection model, or another model, that is better than YOLOv3?

import os.path as osp
import cv2
import logging
import argparse
import motmetrics as mm

from tracker.multitracker import JDETracker
from utils import visualization as vis
from utils.log import logger
from utils.timer import Timer
from utils.evaluation import Evaluator
import utils.datasets as datasets
import torch
from utils.utils import *
class opt_c(object):
    def __init__(self):
        self.img_size=(1088,608)
        self.cfg="cfg/yolov3.cfg"
        self.weights="/home/apptech/Towards-Realtime-MOT/jde.1088x608.uncertainty.pt"
        self.conf_thres=0.5
        self.track_buffer=30
        self.nms_thres=0.4
        self.min_box_area=200
opt=opt_c()

def letterbox(img, height=608, width=1088, color=(127.5, 127.5, 127.5)):  # resize a rectangular image to a padded rectangular 
    shape = img.shape[:2]  # shape = [height, width]
    ratio = min(float(height)/shape[0], float(width)/shape[1])
    new_shape = (round(shape[1] * ratio), round(shape[0] * ratio)) # new_shape = [width, height]
    dw = (width - new_shape[0]) / 2  # width padding
    dh = (height - new_shape[1]) / 2  # height padding
    top, bottom = round(dh - 0.1), round(dh + 0.1)
    left, right = round(dw - 0.1), round(dw + 0.1)
    img = cv2.resize(img, new_shape, interpolation=cv2.INTER_AREA)  # resized, no border
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # padded rectangular
    return img, ratio, dw, dh
def eval_seq(opt,save_dir=None, show_image=True, frame_rate=30):

    tracker = JDETracker(opt, frame_rate=frame_rate)
    results = []
    frame_id = 0
    cam=cv2.VideoCapture(0)

    while True:
        _,img0=cam.read()
        img, _, _, _ = letterbox(img0)
        # Normalize RGB
        img = img[:, :, ::-1].transpose(2, 0, 1)
        img = np.ascontiguousarray(img, dtype=np.float32)
        img /= 255.0
        # run tracking
        blob = torch.from_numpy(img).cuda().unsqueeze(0)
        online_targets = tracker.update(blob, img0)
        online_tlwhs = []
        online_ids = []
        for t in online_targets:
            tlwh = t.tlwh
            tid = t.track_id
            vertical = tlwh[2] / tlwh[3] > 1.6
            if tlwh[2] * tlwh[3] > opt.min_box_area and not vertical:
                online_tlwhs.append(tlwh)
                online_ids.append(tid)
        # save results
        results.append((frame_id + 1, online_tlwhs, online_ids))
        online_im = vis.plot_tracking(img0, online_tlwhs, online_ids, frame_id=frame_id)

        cv2.imshow('online_im', online_im)
        
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

        frame_id += 1


def main():

    # run tracking
    eval_seq(opt)
    
if __name__ == '__main__':
    main()



import error

import motmetrics as mm
ModuleNotFoundError: No module named 'motmetrics'

Hello! While running demo.py, the above error occurred, and I cannot find the motmetrics module. Why?

help,help,help

I want to test the model on my own video, so I ran demo.py with my video, but I see no effect.
Could anybody give me a tutorial on the order in which to run the code?

utils.cython_bbox

When I run demo.py, I got this:
from utils.cython_bbox import bbox_ious
ModuleNotFoundError: No module named 'utils.cython_bbox'
Is there anything I have to download or compile?
Hoping for your help.

[INFO]: invalid load key, '\x00'

I get the error line [INFO]: invalid load key, '\x00' when running demo.py,
in the JDETracker function when loading the pretrained weights:

tracker = JDETracker(opt, frame_rate=frame_rate)
File "tracker/multitracker.py", line 158, in init
self.model.load_state_dict(torch.load(opt.weights, map_location='cpu')['model'], strict=False)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 386, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 563, in _load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\x00'.

Some questions about the network architecture

Hi, this is a wonderful work on MOT.
I have some questions about the architecture.
The paper indicates that the network uses FPN as the base architecture, and in the Feature Pyramid Network paper (Lin et al. 2017) the backbone is a ResNet, but the implemented backbone seems to be YOLOv3.
Does "FPN as the base architecture" refer just to the concept, or to that specific network?
I see YOLOv3 has feature maps at three different scales; is this what is meant by FPN?
I can't find where feature maps at different scales are fused by skip connections; can somebody point to where that code is?
And another question: are the RPN anchors chosen from the YOLO layers at different scales?
Thanks
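
For orientation, a schematic (not this repo's code) of the FPN-style fusion inside YOLOv3: a deeper feature map is upsampled and concatenated with a shallower one before the next prediction layer, which in Darknet cfg files appears as an [upsample] block followed by a [route] over two layers:

import torch
import torch.nn.functional as F

# Schematic of YOLOv3's cross-scale skip fusion (illustrative shapes only):
deep = torch.randn(1, 256, 19, 34)     # coarse, semantically strong features
shallow = torch.randn(1, 128, 38, 68)  # finer features from an earlier layer
up = F.interpolate(deep, scale_factor=2, mode="nearest")  # the [upsample] step
fused = torch.cat([up, shallow], dim=1)                   # the two-layer [route]
print(fused.shape)  # torch.Size([1, 384, 38, 68])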

issue in demo file

When I run demo.py, I get an issue like this:
Traceback (most recent call last):
File "demo.py", line 8, in <module>
from tracker.multitracker import JDETracker
File "/home/thanhpham/PycharmProjects/HumanDetection01/venv/MOT/tracker/multitracker.py", line 13, in <module>
from models import *
File "/home/thanhpham/PycharmProjects/HumanDetection01/venv/MOT/models.py", line 8, in <module>
from utilss.syncbn import SyncBN
ImportError: cannot import name 'SyncBN'


On the advice of the paper's author, I changed the folder name utils to utilss in my code.
The imports in my demo.py file now read:

from tracker.multitracker import JDETracker
from utilss import visualization as vis
from utilss.utilss import *
from utilss.io import read_results
from utilss.log import logger
from utilss.timer import Timer
from utilss.evaluation import Evaluator
import utilss.datasets as datasets
import torch
from track import eval_seq


When I run the command line
bash compile.sh to install syncbn,
I face an issue like this:

Traceback (most recent call last):
File "setup.py", line 2, in <module>
from torch.utilss.cpp_extension import CUDAExtension, BuildExtension
ModuleNotFoundError: No module named 'torch'
~/PycharmProjects/HumanDetection01/venv/MOT/utilss/syncbn
Although I have already installed torch.


Please give me some advice.

the cython_bbox library does not match Python 3.5: cannot import name 'bbox_ious'

Thanks for this great job! When I run the demo, I meet an error:

python3 demo.py --input-video /home/lyp/Videos/deploy1-155175756,155175757.mp4 --weights /home/lyp/project/mot-project/towards-realtime-mot/Towards-Realtime-MOT/jde.uncertainty.pt --output-format video --output-root /home/lyp/project/mot-project/towards-realtime-mot/

/usr/local/lib/python3.5/dist-packages/sklearn/utils/linear_assignment_.py:21: DeprecationWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
Traceback (most recent call last):
File "demo.py", line 8, in <module>
from tracker.multitracker import JDETracker
File "/home/lyp/project/mot-project/towards-realtime-mot/Towards-Realtime-MOT/tracker/multitracker.py", line 14, in <module>
from tracker import matching
File "/home/lyp/project/mot-project/towards-realtime-mot/Towards-Realtime-MOT/tracker/matching.py", line 7, in <module>
from utils.cython_bbox import bbox_ious
ImportError: cannot import name 'bbox_ious'

I guess my Python version does not match cython_bbox.cpython-36m-x86_64-linux-gnu.so.
I found a Python 3.5 version of the file at https://github.com/microsoft/CNTK/tree/master/Examples/Image/Detection/utils/cython_modules, but that version does not have the bbox_ious function.
Can you tell me where cython_bbox.cpython-36m-x86_64-linux-gnu.so in this repo comes from? I would like a Python 3.5 build. Thanks!

error in demo.py

Traceback (most recent call last):
File "demo.py", line 8, in <module>
from tracker.multitracker import JDETracker
File "/home/fan60526/Towards-Realtime-MOT/tracker/multitracker.py", line 10, in <module>
from utils.utils import *
File "/home/fan60526/Towards-Realtime-MOT/utils/utils.py", line 13, in <module>
import maskrcnn_benchmark.layers.nms as nms
File "/home/fan60526/maskrcnn-benchmark/maskrcnn_benchmark/layers/__init__.py", line 10, in <module>
from .nms import nms
File "/home/fan60526/maskrcnn-benchmark/maskrcnn_benchmark/layers/nms.py", line 3, in <module>
from maskrcnn_benchmark import _C
ImportError: /home/fan60526/maskrcnn-benchmark/maskrcnn_benchmark/_C.cpython-37m-x86_64-linux-gnu.so: undefined symbol: THCudaFree

Hello, I encountered the above error when running demo.py. Please help me solve it. I am using Python 3.7 with CUDA 10.1.

training results

Hello, I would like to ask whether your training results can be reproduced with the training data you published. Thank you!

Import Error

Thanks for your work. In your project I see some import errors, such as SyncBN in models.py, _C in nms.py, and maskrcnn_benchmark in utils.py. Can you provide these files?

demo output is not correct

I can run demo.py successfully, but the result.mp4 is the same as the input video; there is no detection or tracking. Why does this happen?
