bot-sort's Introduction

BoT-SORT

BoT-SORT: Robust Associations Multi-Pedestrian Tracking

Nir Aharon, Roy Orfaig, Ben-Zion Bobrovsky

https://arxiv.org/abs/2206.14651

Highlights 🚀

  • YOLOX & YOLOv7 support
  • Multi-class support
  • Camera motion compensation
  • Re-identification

Coming Soon

  • Trained YOLOv7 models for MOTChallenge.
  • YOLOv7 detector (now available; see Highlights).
  • Multi-class support (now available; see Highlights).
  • Create OpenCV VideoStab GMC Python binding or write a Python version.
  • Deployment code.

Abstract

The goal of multi-object tracking (MOT) is detecting and tracking all the objects in a scene, while keeping a unique identifier for each object. In this paper, we present a new robust state-of-the-art tracker, which combines the advantages of motion and appearance information with camera-motion compensation and a more accurate Kalman filter state vector. Our new trackers BoT-SORT and BoT-SORT-ReID rank first on the MOTChallenge [29, 11] MOT17 and MOT20 test sets in terms of all the main MOT metrics: MOTA, IDF1, and HOTA. For MOT17: 80.5 MOTA, 80.2 IDF1, and 65.0 HOTA are achieved.

Visualization results on the MOTChallenge test set

MOT20-06.mp4
MOT17-14.mp4
MOT17-04.BOT-SORT-YOLOv7.COCO.mp4

Tracking performance

Results on MOT17 challenge test set

Tracker         MOTA   IDF1   HOTA
BoT-SORT        80.6   79.5   64.6
BoT-SORT-ReID   80.5   80.2   65.0

Results on MOT20 challenge test set

Tracker         MOTA   IDF1   HOTA
BoT-SORT        77.7   76.3   62.6
BoT-SORT-ReID   77.8   77.5   63.3

Installation

The code was tested on Ubuntu 20.04.

BoT-SORT code is based on ByteTrack and FastReID.
Visit their installation guides for more setup options.

Setup with Anaconda

Step 1. Create Conda environment and install pytorch.

conda create -n botsort_env python=3.7
conda activate botsort_env

Step 2. Install torch and the matching torchvision from pytorch.org.
The code was tested with torch 1.11.0+cu113 and torchvision 0.12.0.
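
For example, one way to install these versions (a sketch assuming a CUDA 11.3 setup; adjust the wheel index URL to your CUDA version):

pip3 install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113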

Step 3. Install BoT-SORT.

git clone https://github.com/NirAharon/BoT-SORT.git
cd BoT-SORT
pip3 install -r requirements.txt
python3 setup.py develop

Step 4. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

Step 5. Others

# Cython-bbox
pip3 install cython_bbox

# faiss (install either the CPU or the GPU build, matching your setup)
pip3 install faiss-cpu
pip3 install faiss-gpu

Data Preparation

Download MOT17 and MOT20 from the official website and place them in the following structure:

<datasets_dir>
      │
      ├── MOT17
      │      ├── train
      │      └── test    
      │
      └── MOT20
             ├── train
             └── test

For training the ReID module, detection patches must be generated as follows:

cd <BoT-SORT_dir>

# For MOT17
python3 fast_reid/datasets/generate_mot_patches.py --data_path <datasets_dir> --mot 17

# For MOT20
python3 fast_reid/datasets/generate_mot_patches.py --data_path <datasets_dir> --mot 20

Link the dataset to FastReID by setting

export FASTREID_DATASETS=<BoT-SORT_dir>/fast_reid/datasets

If left unset, the default is fast_reid/datasets.

Model Zoo

Download and store the trained models in the 'pretrained' folder as follows:

<BoT-SORT_dir>/pretrained
  • We used the publicly available ByteTrack model zoo (trained on MOT17, MOT20, and for the ablation study) for YOLOX object detection.

  • Our trained ReID models can be downloaded from MOT17-SBS-S50 and MOT20-SBS-S50.

  • For multi-class MOT use YOLOX or YOLOv7 trained on COCO (or any custom weights).

Training

Train the ReID Module

After generating the MOT ReID dataset as described in the 'Data Preparation' section, run:

cd <BoT-SORT_dir>

# For training MOT17 
python3 fast_reid/tools/train_net.py --config-file ./fast_reid/configs/MOT17/sbs_S50.yml MODEL.DEVICE "cuda:0"

# For training MOT20
python3 fast_reid/tools/train_net.py --config-file ./fast_reid/configs/MOT20/sbs_S50.yml MODEL.DEVICE "cuda:0"

Refer to the FastReID repository for additional explanations and options.

Tracking

Submitting the txt files produced in this part to the MOTChallenge website yields the same results as reported in the paper.
Tuning the tracking parameters carefully can lead to higher performance. In the paper we applied ByteTrack's calibration.

  • Test on MOT17
cd <BoT-SORT_dir>
python3 tools/track.py <datasets_dir>/MOT17 --default-parameters --with-reid --benchmark "MOT17" --eval "test" --fp16 --fuse
python3 tools/interpolation.py --txt_path <path_to_track_result>
  • Test on MOT20
cd <BoT-SORT_dir>
python3 tools/track.py <datasets_dir>/MOT20 --default-parameters --with-reid --benchmark "MOT20" --eval "test" --fp16 --fuse
python3 tools/interpolation.py --txt_path <path_to_track_result>
  • Evaluation on MOT17 validation set (the second half of the train set)
cd <BoT-SORT_dir>

# BoT-SORT
python3 tools/track.py <datasets_dir>/MOT17 --default-parameters --benchmark "MOT17" --eval "val" --fp16 --fuse

# BoT-SORT-ReID
python3 tools/track.py <datasets_dir>/MOT17 --default-parameters --with-reid --benchmark "MOT17" --eval "val" --fp16 --fuse
  • Other experiments

Other parameters can be used without passing the --default-parameters flag; an example follows below.
For evaluating the train and validation sets, we recommend using the official MOTChallenge evaluation code from TrackEval.

# For all the available tracking parameters, see:
python3 tools/track.py -h 
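
For instance, a run that overrides the main association thresholds could look like the following (a hedged example; the flag names mirror the tracker's argument list, so confirm them against the -h output):

python3 tools/track.py <datasets_dir>/MOT17 --benchmark "MOT17" --eval "val" --track_high_thresh 0.6 --new_track_thresh 0.7 --match_thresh 0.8 --fp16 --fuse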
  • Experiments with YOLOv7

Other parameters can be used without passing the --default-parameters flag.
For evaluating the train and validation sets, we recommend using the official MOTChallenge evaluation code from TrackEval.

# For all the available tracking parameters, see:
python3 tools/track_yolov7.py -h 

Demo

Demo with BoT-SORT(-ReID) based on YOLOX, with multi-class support.

cd <BoT-SORT_dir>

# Original example
python3 tools/demo.py video --path <path_to_video> -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --with-reid --fuse-score --fp16 --fuse --save_result

# Multi-class example
python3 tools/mc_demo.py video --path <path_to_video> -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --with-reid --fuse-score --fp16 --fuse --save_result

Demo with BoT-SORT(-ReID) based on YOLOv7, with multi-class support.

cd <BoT-SORT_dir>
python3 tools/mc_demo_yolov7.py --weights pretrained/yolov7-d6.pt --source <path_to_video/images> --fuse-score --agnostic-nms (--with-reid)

Note

Our camera motion compensation module is based on the OpenCV contrib C++ version of VideoStab Global Motion Estimation, which currently does not have a Python implementation.
Motion files can be generated using the C++ project called 'VideoCameraCorrection' in the GMC folder.
The generated files can then be used from the tracker.

In addition, Python-based motion estimation techniques are available and can be chosen by passing
--cmc-method <files | orb | ecc> to demo.py or track.py.
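
For example, to run the demo with the ORB-based Python motion estimator instead of precomputed motion files (a sketch reusing the demo command from the Demo section):

python3 tools/demo.py video --path <path_to_video> -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --cmc-method orb --fuse-score --fp16 --fuse --save_result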

Citation

@article{aharon2022bot,
  title={BoT-SORT: Robust Associations Multi-Pedestrian Tracking},
  author={Aharon, Nir and Orfaig, Roy and Bobrovsky, Ben-Zion},
  journal={arXiv preprint arXiv:2206.14651},
  year={2022}
}

Acknowledgement

A large part of the codes, ideas and results are borrowed from ByteTrack, StrongSORT, FastReID, YOLOX and YOLOv7. Thanks for their excellent work!

bot-sort's Issues

Testing it for full occlusion

Hi @NirAharon, thank you for the paper and repo; it is very impressive.
I just wanted to know whether you have tested it for full occlusion: if two people occlude with 100% IoU, will they regain their previous IDs?

where is trackers?

Hi, good work, but I cannot find the trackers in your code. Am I missing something?
[screenshot: 2022-11-21 22-16-03]

cv2.error

Thanks for your excellent work. I tested BoT-SORT on MOT17 and MOT20; it works and shows good results. But when I test it on my own video dataset, it throws an error.
My video is in mp4 format, 3.6 MiB, 1920 × 1080, 1.01 Mbit/s at 25 fps. Here is the error:

python tools/demo.py video -f yolox/exps/example/mot/yolox_x_mix_det.py -c yolox/pretrained/bytetrack_x_mot17.pth.tar --path ./videos/C06_1606.mp4 --with-reid --fp16 --fuse --save_result
2022-07-06 12:27:41.536 | INFO | main:main:305 - Args: Namespace(ablation=False, appearance_thresh=0.2, aspect_ratio_thresh=1.6, camid=0, ckpt='yolox/pretrained/bytetrack_x_mot17.pth.tar', cmc_method='orb', conf=None, demo='video', device=device(type='cuda'), exp_file='yolox/exps/example/mot/yolox_x_mix_det.py', experiment_name='yolox_x_mix_det', fast_reid_config='fast_reid/configs/MOT17/sbs_S50.yml', fast_reid_weights='fast_reid/pretrained/mot17_ablation_sbs_S50.pth', fp16=True, fps=30, fuse=True, fuse_score=False, match_thresh=0.8, min_box_area=10, mot20=True, name=None, new_track_thresh=0.7, nms=None, path='./videos/C06_1606.mp4', proximity_thresh=0.5, save_result=True, track_buffer=30, track_high_thresh=0.6, track_low_thresh=0.1, trt=False, tsize=None, with_reid=True)
/home/kerwin/miniconda3/envs/FairMOT/lib/python3.8/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
2022-07-06 12:27:47.778 | INFO | main:main:315 - Model Summary: Params: 99.00M, Gflops: 793.21
2022-07-06 12:27:47.780 | INFO | main:main:323 - loading checkpoint
2022-07-06 12:27:49.678 | INFO | main:main:327 - loaded checkpoint done.
2022-07-06 12:27:49.678 | INFO | main:main:330 - Fusing model...
/home/kerwin/miniconda3/envs/FairMOT/lib/python3.8/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /opt/conda/conda-bld/pytorch_1646755903507/work/build/aten/src/ATen/core/TensorBody.h:475.)
return self._grad
2022-07-06 12:27:50.370 | INFO | main:imageflow_demo:228 - video save_path is ./YOLOX_outputs/yolox_x_mix_det/track_vis/2022_07_06_12_27_50/C06_1606.mp4
Skip loading parameter 'heads.weight' to the model due to incompatible shapes: (487, 2048) in the checkpoint but (0, 2048) in the model! You might want to double check if this is expected.
2022-07-06 12:27:51.439 | INFO | main:imageflow_demo:238 - Processing frame 0 (100000.00 fps)
Traceback (most recent call last):
  File "tools/demo.py", line 366, in <module>
    main(exp, args)
  File "tools/demo.py", line 354, in main
    imageflow_demo(predictor, vis_folder, current_time, args)
  File "tools/demo.py", line 251, in imageflow_demo
    online_targets = tracker.update(detections, img_info)
  File "/home/kerwin/temp/BoT-SORT/tracker/bot_sort.py", line 286, in update
    warp = self.gmc.apply(img, dets)
  File "/home/kerwin/temp/BoT-SORT/tracker/gmc.py", line 62, in apply
    return self.applyFeaures(raw_frame, detections)
  File "/home/kerwin/temp/BoT-SORT/tracker/gmc.py", line 205, in applyFeaures
    H, inliesrs = cv2.estimateAffinePartial2D(prevPoints, currPoints, cv2.RANSAC)
cv2.error: OpenCV(4.6.0) /io/opencv/modules/calib3d/src/ptsetreg.cpp:1108: error: (-215:Assertion failed) count >= 0 && to.checkVector(2) == count in function 'estimateAffinePartial2D'
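
For reference, the assertion fires when the matched keypoint arrays handed to cv2.estimateAffinePartial2D are empty or mismatched (e.g., no keypoints survive matching on a frame). A minimal guard, as a hedged sketch rather than code from this repo, could fall back to the identity warp in that case:

import cv2
import numpy as np

def safe_estimate_affine(prev_points, curr_points):
    # Hypothetical helper: return the 2x3 identity warp when there are
    # not enough point correspondences for a partial affine estimate.
    identity = np.eye(2, 3, dtype=np.float32)
    if prev_points is None or curr_points is None:
        return identity
    prev_points = np.asarray(prev_points, dtype=np.float32)
    curr_points = np.asarray(curr_points, dtype=np.float32)
    if len(prev_points) < 2 or len(prev_points) != len(curr_points):
        return identity
    H, inliers = cv2.estimateAffinePartial2D(prev_points, curr_points, method=cv2.RANSAC)
    return H if H is not None else identity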

AttributeError: 'NoneType' object has no attribute 'groups'

File "/root/userfolder/BoT-SORT/fast_reid/fastreid/data/datasets/mot17.py", line 79, in process_dir
pid, camid = map(int, pattern.search(img_path).groups())
AttributeError: 'NoneType' object has no attribute 'groups'

Is anyone else facing this problem?

multi-class

Hi @NirAharon ,

I noticed that you implemented support for multi-class, which is great. I also saw that it looks like class is resolved with a vote, which is an interesting way to handle things. Another implementation I've seen has isolated classes from each other such that a given track can only have one class identity. I was curious if you'd thought of this other way. I'm honestly not sure whether it'd make a huge amount of difference but it does seem like isolating classes from each other might be a good idea.
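
For context, the isolation scheme described above could be sketched as follows (an illustrative sketch, not the repo's implementation, assuming each track and detection carries a cls attribute): partition tracks and detections by class and run the association separately per class, so a track can never be matched to a detection of another class.

from collections import defaultdict

def group_by_class(items):
    # Bucket tracks or detections by their class label.
    grouped = defaultdict(list)
    for item in items:
        grouped[item.cls].append(item)
    return grouped

def associate_per_class(tracks, detections, associate_fn):
    # associate_fn(tracks, dets) -> list of (track, detection) matches.
    track_groups = group_by_class(tracks)
    det_groups = group_by_class(detections)
    matches = []
    for cls, cls_tracks in track_groups.items():
        matches += associate_fn(cls_tracks, det_groups.get(cls, []))
    return matches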

track

Hello, how can I run track.py on my own dataset?

opencv version

Hi,
Thank you for providing this code.
I would like to know which OpenCV version the VideoCameraCorrection project uses. Thanks, best wishes.

Ablation study on the MOT17 validation set

Hello, in the ablation experiment part of the paper, your reproduced ByteTrack (Baseline) reaches 77.66 MOTA, while the official ByteTrack only reaches 76.6 on the same validation set. When I use the tracking threshold of 0.6, the new-track threshold of 0.7, and the first-association matching threshold of 0.8 pointed out in your paper, the official ByteTrack only reaches 76.2. Can you tell me how to obtain the ByteTrack (Baseline) results from the ablation part of your paper?

file declaration?

Thanks for your code. What is the difference between bot_sort.py and mc_bot_sort.py?
And what is the difference between mc_demoxxx.py and demo.py?

Looking forward to your reply

Not enough values to unpack for reid

python tools/demo.py video -f yolox/exps/example/mot/yolox_x_mix_det.py -c yolox/pretrained/bytetrack_x_mot17.pth.tar --path ./videos/palace.mp4 --with-reid --fp16 --fuse --save_result
2022-07-06 00:46:58.875 | INFO | main:main:305 - Args: Namespace(ablation=False, appearance_thresh=0.2, aspect_ratio_thresh=1.6, camid=0, ckpt='yolox/pretrained/bytetrack_x_mot17.pth.tar', cmc_method='orb', conf=None, demo='video', device=device(type='cuda'), exp_file='yolox/exps/example/mot/yolox_x_mix_det.py', experiment_name='yolox_x_mix_det', fast_reid_config='fast_reid/configs/MOT17/sbs_S50.yml', fast_reid_weights='fast_reid/pretrained/mot17_ablation_sbs_S50.pth', fp16=True, fps=30, fuse=True, fuse_score=False, match_thresh=0.8, min_box_area=10, mot20=True, name=None, new_track_thresh=0.7, nms=None, path='./videos/palace.mp4', proximity_thresh=0.5, save_result=True, track_buffer=30, track_high_thresh=0.6, track_low_thresh=0.1, trt=False, tsize=None, with_reid=True)
/home/kerwin/miniconda3/envs/FairMOT/lib/python3.8/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
2022-07-06 00:47:01.318 | INFO | main:main:315 - Model Summary: Params: 99.00M, Gflops: 793.21
2022-07-06 00:47:01.320 | INFO | main:main:323 - loading checkpoint
2022-07-06 00:47:01.808 | INFO | main:main:327 - loaded checkpoint done.
2022-07-06 00:47:01.808 | INFO | main:main:330 - Fusing model...
/home/kerwin/miniconda3/envs/FairMOT/lib/python3.8/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /opt/conda/conda-bld/pytorch_1646755903507/work/build/aten/src/ATen/core/TensorBody.h:475.)
return self._grad
2022-07-06 00:47:02.450 | INFO | main:imageflow_demo:228 - video save_path is ./YOLOX_outputs/yolox_x_mix_det/track_vis/2022_07_06_00_47_02/palace.mp4
Skip loading parameter 'heads.weight' to the model due to incompatible shapes: (487, 2048) in the checkpoint but (0, 2048) in the model! You might want to double check if this is expected.
2022-07-06 00:47:02.899 | INFO | main:imageflow_demo:238 - Processing frame 0 (100000.00 fps)
Traceback (most recent call last):
  File "tools/demo.py", line 366, in <module>
    main(exp, args)
  File "tools/demo.py", line 354, in main
    imageflow_demo(predictor, vis_folder, current_time, args)
  File "tools/demo.py", line 251, in imageflow_demo
    online_targets = tracker.update(detections, img_info)
  File "/home/kerwin/temp/BoT-SORT/tracker/bot_sort.py", line 256, in update
    features_keep = self.encoder.inference(img, dets)
  File "/home/kerwin/temp/BoT-SORT/fast_reid/fast_reid_interfece.py", line 75, in inference
    H, W, _ = np.shape(image)
ValueError: not enough values to unpack (expected 3, got 0)

FPS issue

Hi,

Weights: yolov7.pt
Faiss gpu
input video resolution 1920 × 1080
gpu 2060 super

I am getting extremely slow fps. The actual inference step is very fast.
This line online_targets = tracker.update(detections, im0) takes 1.2 seconds.
Is this expected?
My GPU utilization is about 25%
CPU utilization is also around 20%

Extreme differences in execution time from frame to frame

I get extreme differences in execution time with the default setup when running BoT-SORT with YOLOv5:

0: 480x640 1 person, Done. YOLO:(0.038s), StrongSORT:(0.073s)
0: 480x640 1 person, 1 tv, 1 keyboard, Done. YOLO:(0.038s), StrongSORT:(0.066s)
0: 480x640 1 person, 1 tv, 1 keyboard, Done. YOLO:(0.040s), StrongSORT:(0.044s)
0: 480x640 1 person, 1 keyboard, Done. YOLO:(0.039s), StrongSORT:(9.272s)  ################# !
0: 480x640 1 person, 1 tv, Done. YOLO:(0.039s), StrongSORT:(0.090s)
0: 480x640 1 person, 1 tv, 1 keyboard, Done. YOLO:(0.040s), StrongSORT:(8.932s) ############ !
0: 480x640 1 person, 1 tv, 1 keyboard, Done. YOLO:(0.039s), StrongSORT:(9.535s) ############ !
0: 480x640 1 person, 1 tv, Done. YOLO:(0.040s), StrongSORT:(0.037s)
0: 480x640 1 person, 1 tv, Done. YOLO:(0.041s), StrongSORT:(0.687s) ######################## !
0: 480x640 1 person, 1 tv, 1 keyboard, Done. YOLO:(0.039s), StrongSORT:(0.020s)

Is this BoT-SORT's normal behavior? If not, do you have any ideas why this is the case?

KITTI

Is KITTI dataset supported?

OpenCV

The OpenCV source available for download now has no 'videostab' folder. Which version of OpenCV do you use? Can you share the 'videostab' folder?

train

How can I train on my own dataset?

The use of "new_id" variable?

Hi, thanks for this great repo. I want to ask: in "tracker/bot_sort.py" there is a new_id variable in the "re_activate()" function. After searching through the code, I can't find any use for this variable, because the value passed is always False. Can you explain this?

AssertionError: Error: all query identities do not appear in gallery

Hi,
Thank you for providing this code.

I have a question:

In https://github.com/NirAharon/BoT-SORT/blob/main/fast_reid/fastreid/data/datasets/mot20.py:

Line 50: self.query_dir = osp.join(self.data_dir, 'query')
Line 52: self.extra_gallery_dir = osp.join(self.data_dir, 'images')
Line 65: query = lambda: self.process_dir(self.query_dir, is_train=False)
Line 66: gallery = lambda: self.process_dir(self.gallery_dir, is_train=False) + (self.process_dir(self.extra_gallery_dir, is_train=False) if self.extra_gallery else [])
Line 73: img_paths = glob.glob(osp.join(dir_path, '*.bmp'))

What are these query and images folders?

Hence, https://github.com/NirAharon/BoT-SORT/blob/main/fast_reid/fastreid/evaluation/rank.py always throws an assertion error at Line 90 and 154:
AssertionError: Error: all query identities do not appear in gallery

because num_valid_q is indeed zero.

BoTSORT vs StrongSORT

Hi! I systematically evaluate new real-time tracking modules for YOLOv5. I am using a YOLOv5 model to generate detections, which I then pass to both StrongSORT and BoT-SORT. I get the following results on MOT16:

Yolov5 StrongSORT

HOTA: StrongSORT                   HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA      RHOTA     HOTA(0)   LocA(0)   HOTALocA(0)
COMBINED                           54.087    51.797    56.978    56.54     75.637    62.799    77.756    82.107    56.675    69.878    77.185    53.935 

CLEAR: StrongSORT                  MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag          
COMBINED                           61.268    79.594    61.629    68.19     91.223    35.203    47.389    17.408    47.353    75287     35120     7244      399       182       245       90        2130   

Identity: StrongSORT               IDF1      IDR       IDP       IDTP      IDFN      IDFP       
COMBINED                           68.563    59.907    80.142    66142     44265     16389

FPS: ~20

YOLOv5 BoT-SORT (no camera motion compensation; the implemented methods, ECC and ORB, are too computationally expensive for real-time applications)

Used parameters for BoTSORT can be found here:

HOTA: BoTSORT                      HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA      RHOTA     HOTA(0)   LocA(0)   HOTALocA(0)
COMBINED                           52.943    51.653    54.766    56.387    75.51     60.55     76.832    82.008    55.485    68.441    77.037    52.725

CLEAR: BoTSORT                     MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag          
COMBINED                           60.993    79.48     61.52     68.097    91.192    34.816    47.195    17.988    47.019    75184     35223     7262      582       180       244       93        2111  

Identity: BoTSORT                  IDF1      IDR       IDP       IDTP      IDFN      IDFP       
COMBINED                           66.54     58.114    77.823    64162     46245     18284  

FPS: ~3

Am I missing something @NirAharon, @orfaig?

About MC-MOT

Thanks for your wonderful work! I have a question: does this code support multi-class MOT, e.g., the VisDrone dataset? Looking forward to your reply.

the problem of Kalman Filter

Hi @NirAharon, when I use the Kalman filter to predict the mean and covariance, I find that the value of mean[1] (center y) is sometimes negative, which seems unreasonable. Besides, the noise factors set in the paper are applicable to 30 FPS. How should I adjust them if I run at 25 FPS?

Is it possible to fine-tune the ReID model with images sampled from videos?

Hi, thanks for sharing the repo, which is really awesome!
I have a dataset of images sampled from several videos.
The original frame rate of the videos is 30 FPS, but the images are sampled every 10 frames, i.e., a 100-frame video yields 10 images, and the object in each image is labeled with an ID and a bbox.
I am wondering if it is possible to fine-tune the ReID model with such a dataset?

Thanks!

speed of demo?

Is the expectation that the demo runs at around 4.3 fps at 1280x720? I thought I saw a mention of 30 fps in the paper. I'm wondering if I have something configured wrong.

The code does not seem to be consistent with the formula given in 3.3

        if self.args.with_reid:
            # Compute the appearance (embedding) cost matrix
            emb_dists = matching.embedding_distance(strack_pool, detections) / 2.0
            raw_emb_dists = emb_dists.copy()
            emb_dists[emb_dists > self.appearance_thresh] = 1.0
            emb_dists[ious_dists_mask] = 1.0
            # Use the element-wise minimum as the final value of the cost matrix C
            dists = np.minimum(ious_dists, emb_dists)

(1) Why is the cosine distance multiplied by 0.5 in formula 12? Where does the 0.5 come from?
(2) The code also seems to differ from formula 12: it divides the cosine distance matrix by 2 before comparing it against the threshold.
(3) Finally, won't taking the minimum lose effective information?
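
For reference, reading the cost directly off the snippet above (with \theta_{emb} denoting appearance_thresh and the IoU mask acting as a proximity gate with threshold \theta_{prox}):

\hat{d}^{cos}_{i,j} =
\begin{cases}
\frac{1}{2}\, d^{cos}_{i,j} & \text{if } \frac{1}{2}\, d^{cos}_{i,j} \le \theta_{emb} \text{ and } d^{iou}_{i,j} \le \theta_{prox} \\
1 & \text{otherwise}
\end{cases}
\qquad
C_{i,j} = \min\left( d^{iou}_{i,j},\; \hat{d}^{cos}_{i,j} \right)

That is, the factor 0.5 is applied before the thresholding, which is exactly the discrepancy question (2) points out.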

train ReID logs

Hi @NirAharon
I am training the ReID model on MOT20, but the training results are abnormal. As shown in the figure below, loss_cls is unstable and the lr remains the same after 2000 iterations. Did you have the same problem when you trained this model?
[screenshot: reid]

How to set tracking classes in "mc_demo.py"?

Hi, this is nice work on the tracking task. I have a question about the scripts:
I want to perform multi-class tracking, but I don't know where to set the tracking classes in "mc_demo.py". What should I do?
And what is the difference between "mc_demo.py" and "mc_demo_yolov7.py"?

Problem with loading ReID model.

Loading checkpoint from pretrained/mot17_sbs_S50.pth
Skip loading parameter 'heads.weight' to the model due to incompatible shapes: (487, 2048) in the checkpoint but (0, 2048) in the model! You might want to double check if this is expected.
Some model parameters or buffers are not found in the checkpoint:
heads.weight

Can you help me with this? I would like to use ReID model during tracking.

cMOTA

Thanks for your amazing work. I'm interested in cMOTA and understand it theoretically, but I have no idea how to implement it efficiently. I don't know how to obtain it other than calculating MOTA T times (where T is the total number of frames).
Could you please share the script for the calculation of cMOTA? Thank you!
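
For what it's worth, a cumulative MOTA curve can be computed in a single pass by accumulating the per-frame error counts instead of re-running the metric T times (a hedged sketch, assuming per-frame FN/FP/IDSW/GT counts from a CLEAR-MOT matcher are already available):

def cumulative_mota(per_frame_stats):
    # per_frame_stats: iterable of (fn, fp, idsw, num_gt) tuples, one per frame.
    fn = fp = idsw = gt = 0
    curve = []
    for f_fn, f_fp, f_idsw, f_gt in per_frame_stats:
        fn += f_fn
        fp += f_fp
        idsw += f_idsw
        gt += f_gt
        # CLEAR-MOT: MOTA = 1 - (FN + FP + IDSW) / GT, here over frames 0..t
        curve.append((1.0 - (fn + fp + idsw) / gt) if gt > 0 else 0.0)
    return curve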

Question about FPS

Hi there, thanks for your great work! I used BoT-SORT with a yolov7-e6e backbone on my custom datasets, which have about 100 ReID identities across different classes. I get nearly 5 FPS with the all-default config except for a 1920x1920 image size. I used the time module to find the reason, and found that

warp = self.gmc.apply(img, dets)
STrack.multi_gmc(strack_pool, warp)
STrack.multi_gmc(unconfirmed, warp)

costs about 0.2 seconds. Is that normal?

Error in tracker/gmc.py", line 205, in applyFeaures

Hi, when running the demo I keep getting the same error. Can anyone help me figure out why?

My command: python3 tools/demo.py video --path ./videos/cam1_ds.mp4 -f yolox/exps/example/mot/yolox_s_mix_det.py -c pretrained/bytetrack_s_mot17.pth.tar --with-reid --fuse-score --fp16 --fuse --save_result

CMC

Thank you for your wonderful work! I have a question: can the CMC module be used with other trackers, such as JDE? How would I use it? Looking forward to your reply, and thank you again.

Doubts about reid

Hello! When I test my own video with demo.py using

python3 tools/demo.py video --path <path_to_video> -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --with-reid --fuse-score --fp16 --fuse --save_result

does this command line invoke the ReID module?

MOT17 val metric

Hi, could you please provide the code to compute metrics on the MOT val set? The current code just writes the results to txt files.

REID

Why do I get this error when training the pedestrian re-identification model? The MOT17 dataset was generated according to your documentation.
File "rank_cy.pyx", line 20, in rank_cy.evaluate_cy
File "rank_cy.pyx", line 28, in rank_cy.evaluate_cy
File "rank_cy.pyx", line 240, in rank_cy.eval_market1501_cy
AssertionError: Error: all query identities do not appear in gallery

Trouble Public MOT20 Results

Thanks for sharing your work.
I modified your track.py script to run on the detection files from MOT20-train (public detection protocol). Your code ran fine, but the tracking results seem very poor (combined HOTA = 48.5, combined MOTA = 49.62 on the MOT20 train set, evaluated with TrackEval). In general I would like to use your code with a MOT-style det.txt file, but I am concerned that it is not currently working right.

added code :

detections = []
for _, li in enumerate(lines):
    # det.txt rows are: frame, id, x, y, w, h, ...
    new = [float(x) for x in li.split(",")][:6]
    new = new + [1] + [1] + [0]       # append placeholder score/class fields
    new[4] = new[2] + new[4]          # w -> x2
    new[5] = new[3] + new[5]          # h -> y2
    if int(new[0]) == frame_id:
        detections.append(new[2:])    # keep [x1, y1, x2, y2, score, cls, 0]
detections = np.array(detections)
trackerTimer.tic()
online_targets = tracker.update(detections, img_info["raw_img"])
trackerTimer.toc()

I really appreciate it if you answer these questions. Thanks very much.
