
open-mmlab / mmtracking


OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.

Home Page: https://mmtracking.readthedocs.io/en/latest/

License: Apache License 2.0

Languages: Python 99.23%, Shell 0.69%, Dockerfile 0.08%
Topics: single-object-tracking, video-object-detection, multi-object-tracking, video-instance-segmentation, tracking

mmtracking's Introduction

English | 简体中文

Introduction

MMTracking is an open source video perception toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.5+.

Major features

  • The First Unified Video Perception Platform

    We are the first open source toolbox that unifies versatile video perception tasks, including video object detection, multiple object tracking, single object tracking, and video instance segmentation.

  • Modular Design

    We decompose the video perception framework into different components, so one can easily construct a customized method by combining different modules.

  • Simple, Fast and Strong

    Simple: MMTracking interoperates with other OpenMMLab projects. It is built upon MMDetection, so any detector can be leveraged simply by modifying the configs (see the config sketch after this list).

    Fast: All operations run on GPUs. The training and inference speeds are faster than or comparable to other implementations.

    Strong: We reproduce state-of-the-art models and some of them even outperform the official implementations.
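
To illustrate the config-driven design, here is a minimal sketch of how a video perception config overrides the detector it wraps. It is modeled on the SELSA config snippet quoted in the issues further down this page; the _base_ paths and the num_classes value are illustrative assumptions, not an official config.

_base_ = [
    '../../_base_/models/faster_rcnn_r50_fpn.py',  # detector definition reused from a base config (illustrative path)
    '../../_base_/default_runtime.py',
]

model = dict(
    type='SELSA',              # the video-level model wraps a plain MMDetection detector
    detector=dict(             # override only the parts of the detector that need to change
        roi_head=dict(
            type='SelsaRoIHead',
            bbox_head=dict(
                type='SelsaBBoxHead',
                num_shared_fcs=2,
                num_classes=1,  # illustrative: number of classes in the target dataset
            ),
        ),
    ),
)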

What's New

We release MMTracking 1.0.0rc0, the first version of MMTracking 1.x.

Built upon the new training engine, MMTracking 1.x unifies the interfaces of datasets, models, evaluation, and visualization.

We also support more methods in MMTracking 1.x, such as StrongSORT for MOT, Mask2Former for VIS, and PrDiMP for SOT.

Please refer to the dev-1.x branch for the usage of MMTracking 1.x.

Installation

Please refer to install.md for installation instructions.

Getting Started

Please see dataset.md and quick_run.md for the basic usage of MMTracking.

A Colab tutorial is provided. You may preview the notebook here or directly run it on Colab.

There are also usage tutorials, such as:

  • learning about configs
  • a detailed description of the vid config
  • a detailed description of the mot config
  • a detailed description of the sot config
  • customizing the dataset
  • customizing the data pipeline
  • customizing the vid model
  • customizing the mot model
  • customizing the sot model
  • customizing runtime settings
  • useful tools
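
For a quick flavor of the Python API, a minimal MOT inference sketch in the spirit of demo/demo_mot.py (whose command-line usage also appears in several issues below) might look like the following. It assumes the 0.x interface init_model / inference_mot from mmtrack.apis; the show_result arguments and the behavior when no checkpoint is passed are assumptions and may differ by version.

from mmtrack.apis import inference_mot, init_model  # API used by demo/demo_mot.py
import mmcv

config = 'configs/mot/deepsort/sort_faster-rcnn_fpn_4e_mot17-private.py'
# With checkpoint=None, the weights referenced inside the config are used (assumption).
model = init_model(config, None, device='cuda:0')

video = mmcv.VideoReader('demo/demo.mp4')
for frame_id, img in enumerate(video):
    # Run the tracker on one frame; the result holds per-frame detection and track boxes.
    result = inference_mot(model, img, frame_id=frame_id)
    # Visualization; the exact keyword arguments are an assumption.
    model.show_result(img, result, out_file=f'out/{frame_id:06d}.jpg')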

Benchmark and model zoo

Results and models are available in the model zoo.

Video Object Detection

Supported Methods

Supported Datasets

Single Object Tracking

Supported Methods

Supported Datasets

Multi-Object Tracking

Supported Methods

Supported Datasets

Video Instance Segmentation

Supported Methods

Supported Datasets

Contributing

We appreciate all contributions to improve MMTracking. Please refer to CONTRIBUTING.md for the contributing guidelines and this discussion for the development roadmap.

Acknowledgement

MMTracking is an open source project that welcomes any contribution and feedback. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible and standardized toolkit to reimplement existing methods and develop new video perception methods.

Citation

If you find this project useful in your research, please consider citing:

@misc{mmtrack2020,
    title={{MMTracking: OpenMMLab} video perception toolbox and benchmark},
    author={MMTracking Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmtracking}},
    year={2020}
}

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM installs OpenMMLab packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning Toolbox and Benchmark.
  • MMRazor: OpenMMLab Model Compression Toolbox and Benchmark.
  • MMFewShot: OpenMMLab FewShot Learning Toolbox and Benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMGeneration: OpenMMLab Generative Model toolbox and benchmark.
  • MMDeploy: OpenMMLab deep learning model deployment toolset.

mmtracking's People

Contributors

2448845600, akiozihao, amanikiruga, basetrade, ceykmc, dyhbupt, fcakyon, gt9505, hellock, irvingzhang0512, islinxu, jingweizhang12, jrcyyzb, luomaoling, memz99, noahcao, oceanpang, omidsa75, pixeli99, qingrenn, quincylin1, seerkfang, shliang0603, songtianhui, toumakazusa3, vidsgr, vvsssssk, xiliu8006, yulv-git, yuzhms


mmtracking's Issues

Difference between Tracktor in mmtrack and the original Tracktor implementation.

Hi, as reported in the model zoo, the Tracktor here outperforms the original Tracktor by a large margin.
I'm wondering what differences made the improvements.

I also tried removing the ReID module in Tracktor, and the MOTA metric does not change at all even though the ID switches increase. I do not know why; do you have any ideas?

KeyError: 'detections'

Hello, I ran into an error when testing demo.mp4. Thank you for helping me solve it.

(mmdet) yxy@yxy:/media/yxy/4TB/ayxy/mmtracking$ python demo/demo_mot.py configs/mot/deepsort/sort_faster-rcnn_fpn_4e_mot17-public-half.py --input demo/demo.mp4 --output mot.mp4
2021-01-13 21:36:14,270 - mmtrack - INFO - load detector from: https://download.openmmlab.com/mmtracking/mot/faster_rcnn/faster-rcnn_r50_fpn_4e_mot17-half-64ee2ed4.pth
[ ] 0/8, elapsed: 0s, ETA:Traceback (most recent call last):
  File "demo/demo_mot.py", line 88, in <module>
    main()
  File "demo/demo_mot.py", line 64, in main
    result = inference_mot(model, img, frame_id=i)
  File "/media/yxy/4TB/ayxy/mmtracking/mmtrack/apis/inference.py", line 78, in inference_mot
    data = test_pipeline(data)
  File "/media/yxy/4TB/ayxy/mmdetection/mmdet/datasets/pipelines/compose.py", line 40, in __call__
    data = t(data)
  File "/media/yxy/4TB/ayxy/mmtracking/mmtrack/datasets/pipelines/loading.py", line 100, in __call__
    detections = results['detections']

KeyError: 'detections'

KeyError in reid checkpoint

The reid checkpoint is missing the keyword "meta", so when I use a model with the reid checkpoint, it fails. For example:

$ python demo/demo_mot.py configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-private-half.py --input demo/demo.mp4 --output mot.mp4
2021-01-06 19:52:33,660 - mmtrack - INFO - load detector from: checkpoints/fasterrcnn_r50_fpn_4e_mot17-half-64ee2ed4.pth
2021-01-06 19:52:33,861 - mmtrack - INFO - load reid from:checkpoints/reid/tracktor_reid_r50_iter25245-a452f51f.pth
Traceback (most recent call last):
  File "demo/demo_mot.py", line 88, in <module>
    main()
  File "demo/demo_mot.py", line 57, in main
    model = init_model(args.config, args.checkpoint, device=args.device)
  File "/home/smile/mmtracking/mmtrack/apis/inference.py", line 34, in init_model
    model = build_model(config.model)
  File "/home/smile/mmtracking/mmtrack/models/builder.py", line 69, in build_model
    return build(cfg, MODELS)
  File "/home/smile/mmtracking/mmtrack/models/builder.py", line 34, in build
    return build_from_cfg(cfg, registry, default_args)
  File "/home/smile/tools/mmcv/mmcv/utils/registry.py", line 171, in build_from_cfg
    return obj_cls(**args)
  File "/home/smile/mmtracking/mmtrack/models/mot/deep_sort.py", line 35, in __init__
    self.init_weights(pretrains)
  File "/home/smile/mmtracking/mmtrack/models/mot/deep_sort.py", line 49, in init_weights
    self.init_module('reid', pretrain['reid'])
  File "/home/smile/mmtracking/mmtrack/models/mot/base.py", line 35, in init_module
    if 'CLASSES' in checkpoint['meta']:
KeyError: 'meta'

And:

>>> model = torch.load("checkpoints/reid/tracktor_reid_r50_iter25245-a452f51f.pth")
>>> model["meta"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'meta'

So I think the reid checkpoint you released is broken.

Error when using dist_train/dist_test

Hello! Sorry for disturbing you again, but I have a new problem and it confuses me a lot.
It appears that dist_test/dist_train cannot work.
When I run dist_test.sh using the command:
bash ./tools/dist_test.sh configs/mot/tracktor/tracktor_faster-rcnn_r50_fpn_4e_mot17-public-half.py 2 --eval track
I got the error below:
TypeError: can't pickle _thread.RLock objects
return Popen(process_obj)
File "/usr/local/miniconda3/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/usr/local/miniconda3/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/usr/local/miniconda3/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/usr/local/miniconda3/lib/python3.6/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects
Traceback (most recent call last):
File "/usr/local/miniconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/miniconda3/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/miniconda3/lib/python3.6/site-packages/torch/distributed/launch.py", line 253, in <module>
main()
File "/usr/local/miniconda3/lib/python3.6/site-packages/torch/distributed/launch.py", line 249, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', '-u', './tools/test.py', '--local_rank=1', 'configs/mot/tracktor/tracktor_faster-rcnn_r50_fpn_4e_mot17-public-half.py', '--launcher', 'pytorch', '--eval', 'track']' returned non-zero exit status 1.

But when I test the model using the single-GPU command:
python ./tools/test.py configs/mot/tracktor/tracktor_faster-rcnn_r50_fpn_4e_mot17-public-half.py --eval track
it works successfully. Could you please help me solve the problem? Thanks a lot!

RuntimeError: Sizes of tensors must match except in dimension 1. Got 4 and 5 (The offending index is 0)

When I run demo_mot.py using the command:

python demo/demo_mot.py configs/mot/tracktor/tracktor_faster-rcnn_r50_fpn_4e_mot17-private-half.py --input demo/demo.mp4 --output mot.mp4

I got errors below:

File "demo/demo_mot.py", line 94, in <module>
main()
File "demo/demo_mot.py", line 69, in main
result = inference_mot(model, img, frame_id=i)
File "/data_nas/liyuanqian/mmtracking/code/mmtrack/apis/inference.py", line 92, in inference_mot
result = model(return_loss=False, rescale=True, **data)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 84, in new_func
return old_func(*args, **kwargs)
File "/data_nas/liyuanqian/mmtracking/code/mmtrack/models/mot/base.py", line 154, in forward
return self.forward_test(img, img_metas, **kwargs)
File "/data_nas/liyuanqian/mmtracking/code/mmtrack/models/mot/base.py", line 131, in forward_test
return self.simple_test(imgs[0], img_metas[0], **kwargs)
File "/data_nas/liyuanqian/mmtracking/code/mmtrack/models/mot/tracktor.py", line 144, in simple_test
**kwargs)
File "/data_nas/liyuanqian/mmtracking/code/mmtrack/models/mot/trackers/tracktor_tracker.py", line 146, in track
feats, img_metas, model.detector, frame_id, rescale)
File "/data_nas/liyuanqian/mmtracking/code/mmtrack/models/mot/trackers/tracktor_tracker.py", line 66, in regress_tracks
x, img_metas, [bboxes], None, rescale=rescale)
File "/opt/mmdet/mmdet/models/roi_heads/test_mixins.py", line 94, in simple_test_bboxes
proposals[i] = torch.cat((supplement, proposal), dim=0)
RuntimeError: Sizes of tensors must match except in dimension 1. Got 4 and 5 (The offending index is 0)

Can anybody help me? Thanks.

when I run demo/

When I use demo/demo.mp4 as input, demo/demo_mot.py works well and gives the expected result. But when I use a new mp4 video as input, the program raises an error, displayed as follows:
python demo/demo_mot.py configs/mot/deepsort/sort_faster-rcnn_fpn_4e_mot17-private.py --input demo/0dbdc35ab8fc73539b569ce5d8338cfd.mp4 --output afn.mp4
2021-01-08 16:39:14,745 - mmtrack - INFO - load detector from: https://download.openmmlab.com/mmtracking/mot/faster_rcnn/faster-rcnn_r50_fpn_4e_mot17-ffa52ae7.pth
[ ] 0/150, elapsed: 0s, ETA:Traceback (most recent call last):
File "demo/demo_mot.py", line 88, in <module>
main()
File "demo/demo_mot.py", line 73, in main
model.show_result(
File "/mnt/data1/mmtracking/mmtrack/models/mot/base.py", line 267, in show_result
img = imshow_tracks(
File "/mnt/data1/mmtracking/mmtrack/core/utils/visualization.py", line 23, in imshow_tracks
return _cv2_show_tracks(*args, **kwargs)
File "/mnt/data1/mmtracking/mmtrack/core/utils/visualization.py", line 44, in _cv2_show_tracks
assert bboxes.shape[1] == 5
AssertionError

Could anyone kindly tell me how to solve this problem? Thanks very much.

How to obtain the Reid feature using mmtracking?

Hi, thanks for the neat work!

I noticed that some of the tracking models include a ReID model (e.g. deepsort_faster-rcnn_fpn_4e_mot17-private-half.py). How can I obtain the ReID feature for the person being tracked?

What is the difference between load_from and pretrain?

Hello! Thanks a lot for your awesome work; I appreciate your effort! However, I have a problem that I hope you can help me solve.
When I use the default config at configs/det/faster-rcnn_r50_fpn_4e_mot17-half.py to train a Faster R-CNN detector with MMTracking, I get NaN losses. But when I move the downloaded state dict, a Faster R-CNN pretrained on the COCO dataset, from the 'load_from' entry to the detector's 'pretrain' entry, the NaN losses disappear. I wonder how this happens. What is the difference between 'load_from' and 'pretrain', since neither of them seems to strictly load parameters?
Thanks a lot again!

I checked again and found that the 'pretrain' entry for the detector does NOT load the pretrained dict as I expected, and instead trains from randomly initialized parameters. So how should I use the pretrained Faster R-CNN state dict?
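
For context, a minimal sketch of the two kinds of entries being compared, assuming the 0.x conventions visible elsewhere on this page (a top-level load_from consumed by the runner, and a per-module pretrains dict consumed by the model's init_weights, as in the deep_sort.py traceback in the earlier KeyError issue). The exact key names and paths below are illustrative assumptions and depend on the config and MMTracking version.

# Option A: runner-level loading of a full checkpoint (illustrative path)
load_from = 'checkpoints/faster-rcnn_r50_fpn_4e_mot17-half.pth'

# Option B: per-module pretrained weights passed to the model and loaded in its init_weights
model = dict(
    pretrains=dict(
        detector='checkpoints/faster-rcnn_r50_fpn_4e_mot17-half.pth',  # illustrative path
    ),
)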

More results

Hi !
Are there any plans to present more MOT results, such as on MOT15 and MOT20?
Thanks

About the code

Hi!
Thanks for your time.
I have a question about the code.
In mmtrack/models/mot/trackers/tracktor_tracker.py, line 150:
valid_inds = (ious < self.regression['match_iou_thr']).all(dim=1)

This line seems to invalidate most of the regressed target boxes, since the regressed box of each target generally overlaps heavily with its un-regressed box (given a high video frame rate).

Is that true?
Best
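
To make the tensor semantics of that line concrete, here is a tiny standalone example with illustrative IoU values. It only demonstrates what .all(dim=1) does to a row whose IoU with any reference box exceeds the threshold; it does not settle which boxes the real code actually compares.

import torch

# Illustrative 2x2 IoU matrix between two regressed track boxes (rows) and two
# reference boxes (columns); the diagonal is high because each regressed box
# overlaps its own un-regressed box.
ious = torch.tensor([[0.9, 0.1],
                     [0.2, 0.8]])
match_iou_thr = 0.5

# A row is kept only if ALL of its IoUs are below the threshold.
valid_inds = (ious < match_iou_thr).all(dim=1)
print(valid_inds)  # tensor([False, False]) -> both regressed boxes would be invalidated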

about data augmentation and OHEM

Thanks for your contribution to the tracking field. Could you provide the code for the data augmentation and OHEM versions? Thanks a lot!

Can't download model from model zoo.

Can't download models from the page.
Error messages:

<Error>
  <Code>AccessDenied</Code>
  <Message>You have no right to access this object because of bucket acl.</Message>
  <RequestId>5FF404C49C2407EF3DDB78D7</RequestId>
  <HostId>download.openmmlab.com</HostId>
</Error>

when I run demo_mot

I take a video as input, and it starts reporting errors when it reaches frame 3527:
python demo/demo_mot.py configs/mot/tracktor/tracktor_faster-rcnn_r50_fpn_4e_mot17-private.py --input /media/moens/Moens/Dataset/school-vedio/01.avi --checkpoint checkpoint/faster-rcnn_r50_fpn_4e_mot17-ffa52ae7\ (1).pth --fps 30 --output mot1.mp4 --show

cv2.error: OpenCV(4.5.1) /tmp/pip-req-build-7m_g9lbm/opencv/modules/video/src/ecc.cpp:572: error: (-7:Iterations do not converge) The algorithm stopped before its convergence. The correlation is going to be minimized. Images may be uncorrelated or non-overlapped in function 'findTransformECC'

Could anyone kindly tell me how to solve this problem? I don't know if it is a problem with my video.
Thanks very much.

Roadmap of MMTracking

We keep this issue open to collect feature requests from users and hear your voice.

You can either:

  1. Suggest a new feature by leaving a comment.
  2. Vote for a feature request with 👍 or vote against it with 👎. (Remember that developers are busy and cannot respond to all feature requests, so vote for the one you want most!)
  3. Tell us that you would like to help implement one of the features in the list or review the PRs. (This is the greatest thing to hear!)

Recently, we have not had enough bandwidth/developers to support new methods. If you are interested in joining us as an intern or a full-time researcher/engineer, feel free to let us know. You can directly drop an email to [email protected], [email protected], or [email protected].

Besides the developments from OpenMMLab, we also welcome all contributions from the community. You can make a PR of your work to this repository following CONTRIBUTING.md.

selsa vid fp16 training error

I only added the fp16 settings below to the original SELSA Faster R-CNN R50 training config:

# fp16 settings
fp16 = dict(loss_scale=512.)

When I try to train the SELSA VID model in fp16 mode, I get this error:

Traceback (most recent call last):
  File "tools/train.py", line 168, in <module>
    main()
  File "tools/train.py", line 157, in main
    train_model(
  File "mmtracking\mmtrack\apis\train.py", line 135, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 125, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 29, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\mmcv\parallel\data_parallel.py", line 67, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "mmtracking\mmtrack\models\vid\base.py", line 215, in train_step
    losses = self(**data)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\mmcv\runner\fp16_utils.py", line 84, in new_func
    return old_func(*args, **kwargs)
  File "mmtracking\mmtrack\models\vid\base.py", line 149, in forward
    return self.forward_train(img, img_metas, **kwargs)
  File "mmtracking\mmtrack\models\vid\selsa.py", line 137, in forward_train
    all_x = self.detector.extract_feat(all_imgs)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\mmdet\models\detectors\two_stage.py", line 82, in extract_feat       
    x = self.backbone(img)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\mmdet\models\backbones\resnet.py", line 627, in forward
    x = self.conv1(x)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\torch\nn\modules\conv.py", line 423, in forward
    return self._conv_forward(input, self.weight)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\torch\nn\modules\conv.py", line 419, in _conv_forward
    return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same

Error in documentation: demo command needs a fix

There is an error in the demo command in the installation documentation.
When I run the demo using the command

python demo/demo_mot.py configs/mot/deepsort/sort_faster-rcnn_fpn_4e_mot17-private.py -i demo/demo.mp4 -o mot.mp4
I get the error below:

usage: demo_mot.py [-h] [--input INPUT] [--output OUTPUT] [--checkpoint CHECKPOINT] [--device DEVICE] [--show] [--backend {cv2,plt}] [--fps FPS] config
demo_mot.py: error: unrecognized arguments: -i demo/demo.mp4 -o mot.mp4

This is fixed by changing the input slightly:
python demo/demo_mot.py configs/mot/deepsort/sort_faster-rcnn_fpn_4e_mot17-private.py --input demo/demo.mp4 --output mot.mp4

cannot train selsa with different detector

Thanks for the great library.

I want to train SELSA with a different detector (RetinaNet, for instance), so I changed the detector base config to RetinaNet in the given SELSA config:

_base_ = [
    '../../_base_/models/retinanet_r50_fpn.py',
    '../../_base_/datasets/custom_cocovid_dataset.py',
    '../../_base_/default_runtime.py'
]

model = dict(
    type='SELSA',
    pretrains=None,
    detector=dict(
        roi_head=dict(
            type='SelsaRoIHead',
            bbox_head=dict(
                type='SelsaBBoxHead',
                num_shared_fcs=2,
                aggregator=dict(
                    type='SelsaAggregator',
                    in_channels=1024,
                    num_attention_blocks=16
                ),
                num_classes=1,
            )
        )
    )
)

But it gives an error:

  File "tools/train.py", line 168, in <module>
    main()
  File "tools/train.py", line 141, in main
    model = build_model(cfg.model)
  File "mmtracking\mmtrack\models\builder.py", line 69, in build_model
    return build(cfg, MODELS)
  File "mmtracking\mmtrack\models\builder.py", line 34, in build
    return build_from_cfg(cfg, registry, default_args)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\mmcv\utils\registry.py", line 171, in build_from_cfg
    return obj_cls(**args)
  File "mmtracking\mmtrack\models\vid\selsa.py", line 23, in __init__
    self.detector = build_detector(detector)
  File "mmtracking\mmtrack\models\builder.py", line 60, in build_detector
    return build(cfg, DETECTORS)
  File "mmtracking\mmtrack\models\builder.py", line 34, in build
    return build_from_cfg(cfg, registry, default_args)
  File "C:\Users\FCA\Miniconda3\envs\mmtracking\lib\site-packages\mmcv\utils\registry.py", line 171, in build_from_cfg
    return obj_cls(**args)
TypeError: __init__() got an unexpected keyword argument 'roi_head'

What changes should I make to train SELSA with other detectors?

MOT Deepsort configs and models are incompatible

I downloaded the project, built the docker image, installed the requirements with pip, and fixed the qt xcb problem with "pip install napari pyside2==5.14".

Then I tried to run the demo with the config/checkpoint combinations from https://github.com/open-mmlab/mmtracking/blob/master/configs/mot/deepsort/README.md :

# python demo/demo_mot.py  configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-public-half.py --input demo/demo.mp4 --output output.mkv  --checkpoint https://download.openmmlab.com/mmtracking/mot/faster_rcnn/faster-rcnn_r50_fpn_4e_mot17-half-64ee2ed4.pth  --device cpu 
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
2021-04-24 18:49:27,740 - mmtrack - INFO - load detector from: https://download.openmmlab.com/mmtracking/mot/faster_rcnn/faster-rcnn_r50_fpn_4e_mot17-half-64ee2ed4.pth
2021-04-24 18:49:27,740 - mmtrack - INFO - Use load_from_http loader
2021-04-24 18:49:27,917 - mmtrack - INFO - load reid from: https://download.openmmlab.com/mmtracking/mot/reid/tracktor_reid_r50_iter25245-a452f51f.pth
2021-04-24 18:49:27,925 - mmtrack - INFO - Use load_from_http loader
Use load_from_http loader
The model and loaded state dict do not match exactly

unexpected key in source state_dict: backbone.conv1.weight, backbone.bn1.weight, backbone.bn1.bias, backbone.bn1.running_mean, backbone.bn1.running_var, backbone.bn1.num_batches_tracked, backbone.layer1.0.conv1.weight, backbone.layer1.0.bn1.weight, backbone.layer1.0.bn1.bias, backbone.layer1.0.bn1.running_mean, backbone.layer1.0.bn1.running_var, backbone.layer1.0.bn1.num_batches_tracked, backbone.layer1.0.conv2.weight, backbone.layer1.0.bn2.weight, backbone.layer1.0.bn2.bias, backbone.layer1.0.bn2.running_mean, backbone.layer1.0.bn2.running_var, backbone.layer1.0.bn2.num_batches_tracked, backbone.layer1.0.conv3.weight, backbone.layer1.0.bn3.weight, backbone.layer1.0.bn3.bias, backbone.layer1.0.bn3.running_mean, backbone.layer1.0.bn3.running_var, backbone.layer1.0.bn3.num_batches_tracked, backbone.layer1.0.downsample.0.weight, backbone.layer1.0.downsample.1.weight, backbone.layer1.0.downsample.1.bias, backbone.layer1.0.downsample.1.running_mean, backbone.layer1.0.downsample.1.running_var, backbone.layer1.0.downsample.1.num_batches_tracked, backbone.layer1.1.conv1.weight, backbone.layer1.1.bn1.weight, backbone.layer1.1.bn1.bias, backbone.layer1.1.bn1.running_mean, backbone.layer1.1.bn1.running_var, backbone.layer1.1.bn1.num_batches_tracked, backbone.layer1.1.conv2.weight, backbone.layer1.1.bn2.weight, backbone.layer1.1.bn2.bias, backbone.layer1.1.bn2.running_mean, backbone.layer1.1.bn2.running_var, backbone.layer1.1.bn2.num_batches_tracked, backbone.layer1.1.conv3.weight, backbone.layer1.1.bn3.weight, backbone.layer1.1.bn3.bias, backbone.layer1.1.bn3.running_mean, backbone.layer1.1.bn3.running_var, backbone.layer1.1.bn3.num_batches_tracked, backbone.layer1.2.conv1.weight, backbone.layer1.2.bn1.weight, backbone.layer1.2.bn1.bias, backbone.layer1.2.bn1.running_mean, backbone.layer1.2.bn1.running_var, backbone.layer1.2.bn1.num_batches_tracked, backbone.layer1.2.conv2.weight, backbone.layer1.2.bn2.weight, backbone.layer1.2.bn2.bias, backbone.layer1.2.bn2.running_mean, backbone.layer1.2.bn2.running_var, backbone.layer1.2.bn2.num_batches_tracked, backbone.layer1.2.conv3.weight, backbone.layer1.2.bn3.weight, backbone.layer1.2.bn3.bias, backbone.layer1.2.bn3.running_mean, backbone.layer1.2.bn3.running_var, backbone.layer1.2.bn3.num_batches_tracked, backbone.layer2.0.conv1.weight, backbone.layer2.0.bn1.weight, backbone.layer2.0.bn1.bias, backbone.layer2.0.bn1.running_mean, backbone.layer2.0.bn1.running_var, backbone.layer2.0.bn1.num_batches_tracked, backbone.layer2.0.conv2.weight, backbone.layer2.0.bn2.weight, backbone.layer2.0.bn2.bias, backbone.layer2.0.bn2.running_mean, backbone.layer2.0.bn2.running_var, backbone.layer2.0.bn2.num_batches_tracked, backbone.layer2.0.conv3.weight, backbone.layer2.0.bn3.weight, backbone.layer2.0.bn3.bias, backbone.layer2.0.bn3.running_mean, backbone.layer2.0.bn3.running_var, backbone.layer2.0.bn3.num_batches_tracked, backbone.layer2.0.downsample.0.weight, backbone.layer2.0.downsample.1.weight, backbone.layer2.0.downsample.1.bias, backbone.layer2.0.downsample.1.running_mean, backbone.layer2.0.downsample.1.running_var, backbone.layer2.0.downsample.1.num_batches_tracked, backbone.layer2.1.conv1.weight, backbone.layer2.1.bn1.weight, backbone.layer2.1.bn1.bias, backbone.layer2.1.bn1.running_mean, backbone.layer2.1.bn1.running_var, backbone.layer2.1.bn1.num_batches_tracked, backbone.layer2.1.conv2.weight, backbone.layer2.1.bn2.weight, backbone.layer2.1.bn2.bias, backbone.layer2.1.bn2.running_mean, backbone.layer2.1.bn2.running_var, backbone.layer2.1.bn2.num_batches_tracked, 
backbone.layer2.1.conv3.weight, backbone.layer2.1.bn3.weight, backbone.layer2.1.bn3.bias, backbone.layer2.1.bn3.running_mean, backbone.layer2.1.bn3.running_var, backbone.layer2.1.bn3.num_batches_tracked, backbone.layer2.2.conv1.weight, backbone.layer2.2.bn1.weight, backbone.layer2.2.bn1.bias, backbone.layer2.2.bn1.running_mean, backbone.layer2.2.bn1.running_var, backbone.layer2.2.bn1.num_batches_tracked, backbone.layer2.2.conv2.weight, backbone.layer2.2.bn2.weight, backbone.layer2.2.bn2.bias, backbone.layer2.2.bn2.running_mean, backbone.layer2.2.bn2.running_var, backbone.layer2.2.bn2.num_batches_tracked, backbone.layer2.2.conv3.weight, backbone.layer2.2.bn3.weight, backbone.layer2.2.bn3.bias, backbone.layer2.2.bn3.running_mean, backbone.layer2.2.bn3.running_var, backbone.layer2.2.bn3.num_batches_tracked, backbone.layer2.3.conv1.weight, backbone.layer2.3.bn1.weight, backbone.layer2.3.bn1.bias, backbone.layer2.3.bn1.running_mean, backbone.layer2.3.bn1.running_var, backbone.layer2.3.bn1.num_batches_tracked, backbone.layer2.3.conv2.weight, backbone.layer2.3.bn2.weight, backbone.layer2.3.bn2.bias, backbone.layer2.3.bn2.running_mean, backbone.layer2.3.bn2.running_var, backbone.layer2.3.bn2.num_batches_tracked, backbone.layer2.3.conv3.weight, backbone.layer2.3.bn3.weight, backbone.layer2.3.bn3.bias, backbone.layer2.3.bn3.running_mean, backbone.layer2.3.bn3.running_var, backbone.layer2.3.bn3.num_batches_tracked, backbone.layer3.0.conv1.weight, backbone.layer3.0.bn1.weight, backbone.layer3.0.bn1.bias, backbone.layer3.0.bn1.running_mean, backbone.layer3.0.bn1.running_var, backbone.layer3.0.bn1.num_batches_tracked, backbone.layer3.0.conv2.weight, backbone.layer3.0.bn2.weight, backbone.layer3.0.bn2.bias, backbone.layer3.0.bn2.running_mean, backbone.layer3.0.bn2.running_var, backbone.layer3.0.bn2.num_batches_tracked, backbone.layer3.0.conv3.weight, backbone.layer3.0.bn3.weight, backbone.layer3.0.bn3.bias, backbone.layer3.0.bn3.running_mean, backbone.layer3.0.bn3.running_var, backbone.layer3.0.bn3.num_batches_tracked, backbone.layer3.0.downsample.0.weight, backbone.layer3.0.downsample.1.weight, backbone.layer3.0.downsample.1.bias, backbone.layer3.0.downsample.1.running_mean, backbone.layer3.0.downsample.1.running_var, backbone.layer3.0.downsample.1.num_batches_tracked, backbone.layer3.1.conv1.weight, backbone.layer3.1.bn1.weight, backbone.layer3.1.bn1.bias, backbone.layer3.1.bn1.running_mean, backbone.layer3.1.bn1.running_var, backbone.layer3.1.bn1.num_batches_tracked, backbone.layer3.1.conv2.weight, backbone.layer3.1.bn2.weight, backbone.layer3.1.bn2.bias, backbone.layer3.1.bn2.running_mean, backbone.layer3.1.bn2.running_var, backbone.layer3.1.bn2.num_batches_tracked, backbone.layer3.1.conv3.weight, backbone.layer3.1.bn3.weight, backbone.layer3.1.bn3.bias, backbone.layer3.1.bn3.running_mean, backbone.layer3.1.bn3.running_var, backbone.layer3.1.bn3.num_batches_tracked, backbone.layer3.2.conv1.weight, backbone.layer3.2.bn1.weight, backbone.layer3.2.bn1.bias, backbone.layer3.2.bn1.running_mean, backbone.layer3.2.bn1.running_var, backbone.layer3.2.bn1.num_batches_tracked, backbone.layer3.2.conv2.weight, backbone.layer3.2.bn2.weight, backbone.layer3.2.bn2.bias, backbone.layer3.2.bn2.running_mean, backbone.layer3.2.bn2.running_var, backbone.layer3.2.bn2.num_batches_tracked, backbone.layer3.2.conv3.weight, backbone.layer3.2.bn3.weight, backbone.layer3.2.bn3.bias, backbone.layer3.2.bn3.running_mean, backbone.layer3.2.bn3.running_var, backbone.layer3.2.bn3.num_batches_tracked, backbone.layer3.3.conv1.weight, 
backbone.layer3.3.bn1.weight, backbone.layer3.3.bn1.bias, backbone.layer3.3.bn1.running_mean, backbone.layer3.3.bn1.running_var, backbone.layer3.3.bn1.num_batches_tracked, backbone.layer3.3.conv2.weight, backbone.layer3.3.bn2.weight, backbone.layer3.3.bn2.bias, backbone.layer3.3.bn2.running_mean, backbone.layer3.3.bn2.running_var, backbone.layer3.3.bn2.num_batches_tracked, backbone.layer3.3.conv3.weight, backbone.layer3.3.bn3.weight, backbone.layer3.3.bn3.bias, backbone.layer3.3.bn3.running_mean, backbone.layer3.3.bn3.running_var, backbone.layer3.3.bn3.num_batches_tracked, backbone.layer3.4.conv1.weight, backbone.layer3.4.bn1.weight, backbone.layer3.4.bn1.bias, backbone.layer3.4.bn1.running_mean, backbone.layer3.4.bn1.running_var, backbone.layer3.4.bn1.num_batches_tracked, backbone.layer3.4.conv2.weight, backbone.layer3.4.bn2.weight, backbone.layer3.4.bn2.bias, backbone.layer3.4.bn2.running_mean, backbone.layer3.4.bn2.running_var, backbone.layer3.4.bn2.num_batches_tracked, backbone.layer3.4.conv3.weight, backbone.layer3.4.bn3.weight, backbone.layer3.4.bn3.bias, backbone.layer3.4.bn3.running_mean, backbone.layer3.4.bn3.running_var, backbone.layer3.4.bn3.num_batches_tracked, backbone.layer3.5.conv1.weight, backbone.layer3.5.bn1.weight, backbone.layer3.5.bn1.bias, backbone.layer3.5.bn1.running_mean, backbone.layer3.5.bn1.running_var, backbone.layer3.5.bn1.num_batches_tracked, backbone.layer3.5.conv2.weight, backbone.layer3.5.bn2.weight, backbone.layer3.5.bn2.bias, backbone.layer3.5.bn2.running_mean, backbone.layer3.5.bn2.running_var, backbone.layer3.5.bn2.num_batches_tracked, backbone.layer3.5.conv3.weight, backbone.layer3.5.bn3.weight, backbone.layer3.5.bn3.bias, backbone.layer3.5.bn3.running_mean, backbone.layer3.5.bn3.running_var, backbone.layer3.5.bn3.num_batches_tracked, backbone.layer4.0.conv1.weight, backbone.layer4.0.bn1.weight, backbone.layer4.0.bn1.bias, backbone.layer4.0.bn1.running_mean, backbone.layer4.0.bn1.running_var, backbone.layer4.0.bn1.num_batches_tracked, backbone.layer4.0.conv2.weight, backbone.layer4.0.bn2.weight, backbone.layer4.0.bn2.bias, backbone.layer4.0.bn2.running_mean, backbone.layer4.0.bn2.running_var, backbone.layer4.0.bn2.num_batches_tracked, backbone.layer4.0.conv3.weight, backbone.layer4.0.bn3.weight, backbone.layer4.0.bn3.bias, backbone.layer4.0.bn3.running_mean, backbone.layer4.0.bn3.running_var, backbone.layer4.0.bn3.num_batches_tracked, backbone.layer4.0.downsample.0.weight, backbone.layer4.0.downsample.1.weight, backbone.layer4.0.downsample.1.bias, backbone.layer4.0.downsample.1.running_mean, backbone.layer4.0.downsample.1.running_var, backbone.layer4.0.downsample.1.num_batches_tracked, backbone.layer4.1.conv1.weight, backbone.layer4.1.bn1.weight, backbone.layer4.1.bn1.bias, backbone.layer4.1.bn1.running_mean, backbone.layer4.1.bn1.running_var, backbone.layer4.1.bn1.num_batches_tracked, backbone.layer4.1.conv2.weight, backbone.layer4.1.bn2.weight, backbone.layer4.1.bn2.bias, backbone.layer4.1.bn2.running_mean, backbone.layer4.1.bn2.running_var, backbone.layer4.1.bn2.num_batches_tracked, backbone.layer4.1.conv3.weight, backbone.layer4.1.bn3.weight, backbone.layer4.1.bn3.bias, backbone.layer4.1.bn3.running_mean, backbone.layer4.1.bn3.running_var, backbone.layer4.1.bn3.num_batches_tracked, backbone.layer4.2.conv1.weight, backbone.layer4.2.bn1.weight, backbone.layer4.2.bn1.bias, backbone.layer4.2.bn1.running_mean, backbone.layer4.2.bn1.running_var, backbone.layer4.2.bn1.num_batches_tracked, backbone.layer4.2.conv2.weight, backbone.layer4.2.bn2.weight, 
backbone.layer4.2.bn2.bias, backbone.layer4.2.bn2.running_mean, backbone.layer4.2.bn2.running_var, backbone.layer4.2.bn2.num_batches_tracked, backbone.layer4.2.conv3.weight, backbone.layer4.2.bn3.weight, backbone.layer4.2.bn3.bias, backbone.layer4.2.bn3.running_mean, backbone.layer4.2.bn3.running_var, backbone.layer4.2.bn3.num_batches_tracked, neck.lateral_convs.0.conv.weight, neck.lateral_convs.0.conv.bias, neck.lateral_convs.1.conv.weight, neck.lateral_convs.1.conv.bias, neck.lateral_convs.2.conv.weight, neck.lateral_convs.2.conv.bias, neck.lateral_convs.3.conv.weight, neck.lateral_convs.3.conv.bias, neck.fpn_convs.0.conv.weight, neck.fpn_convs.0.conv.bias, neck.fpn_convs.1.conv.weight, neck.fpn_convs.1.conv.bias, neck.fpn_convs.2.conv.weight, neck.fpn_convs.2.conv.bias, neck.fpn_convs.3.conv.weight, neck.fpn_convs.3.conv.bias, rpn_head.rpn_conv.weight, rpn_head.rpn_conv.bias, rpn_head.rpn_cls.weight, rpn_head.rpn_cls.bias, rpn_head.rpn_reg.weight, rpn_head.rpn_reg.bias, roi_head.bbox_head.fc_cls.weight, roi_head.bbox_head.fc_cls.bias, roi_head.bbox_head.fc_reg.weight, roi_head.bbox_head.fc_reg.bias, roi_head.bbox_head.shared_fcs.0.weight, roi_head.bbox_head.shared_fcs.0.bias, roi_head.bbox_head.shared_fcs.1.weight, roi_head.bbox_head.shared_fcs.1.bias

missing keys in source state_dict: detector.backbone.conv1.weight, detector.backbone.bn1.weight, detector.backbone.bn1.bias, detector.backbone.bn1.running_mean, detector.backbone.bn1.running_var, detector.backbone.layer1.0.conv1.weight, detector.backbone.layer1.0.bn1.weight, detector.backbone.layer1.0.bn1.bias, detector.backbone.layer1.0.bn1.running_mean, detector.backbone.layer1.0.bn1.running_var, detector.backbone.layer1.0.conv2.weight, detector.backbone.layer1.0.bn2.weight, detector.backbone.layer1.0.bn2.bias, detector.backbone.layer1.0.bn2.running_mean, detector.backbone.layer1.0.bn2.running_var, detector.backbone.layer1.0.conv3.weight, detector.backbone.layer1.0.bn3.weight, detector.backbone.layer1.0.bn3.bias, detector.backbone.layer1.0.bn3.running_mean, detector.backbone.layer1.0.bn3.running_var, detector.backbone.layer1.0.downsample.0.weight, detector.backbone.layer1.0.downsample.1.weight, detector.backbone.layer1.0.downsample.1.bias, detector.backbone.layer1.0.downsample.1.running_mean, detector.backbone.layer1.0.downsample.1.running_var, detector.backbone.layer1.1.conv1.weight, detector.backbone.layer1.1.bn1.weight, detector.backbone.layer1.1.bn1.bias, detector.backbone.layer1.1.bn1.running_mean, detector.backbone.layer1.1.bn1.running_var, detector.backbone.layer1.1.conv2.weight, detector.backbone.layer1.1.bn2.weight, detector.backbone.layer1.1.bn2.bias, detector.backbone.layer1.1.bn2.running_mean, detector.backbone.layer1.1.bn2.running_var, detector.backbone.layer1.1.conv3.weight, detector.backbone.layer1.1.bn3.weight, detector.backbone.layer1.1.bn3.bias, detector.backbone.layer1.1.bn3.running_mean, detector.backbone.layer1.1.bn3.running_var, detector.backbone.layer1.2.conv1.weight, detector.backbone.layer1.2.bn1.weight, detector.backbone.layer1.2.bn1.bias, detector.backbone.layer1.2.bn1.running_mean, detector.backbone.layer1.2.bn1.running_var, detector.backbone.layer1.2.conv2.weight, detector.backbone.layer1.2.bn2.weight, detector.backbone.layer1.2.bn2.bias, detector.backbone.layer1.2.bn2.running_mean, detector.backbone.layer1.2.bn2.running_var, detector.backbone.layer1.2.conv3.weight, detector.backbone.layer1.2.bn3.weight, detector.backbone.layer1.2.bn3.bias, detector.backbone.layer1.2.bn3.running_mean, detector.backbone.layer1.2.bn3.running_var, detector.backbone.layer2.0.conv1.weight, detector.backbone.layer2.0.bn1.weight, detector.backbone.layer2.0.bn1.bias, detector.backbone.layer2.0.bn1.running_mean, detector.backbone.layer2.0.bn1.running_var, detector.backbone.layer2.0.conv2.weight, detector.backbone.layer2.0.bn2.weight, detector.backbone.layer2.0.bn2.bias, detector.backbone.layer2.0.bn2.running_mean, detector.backbone.layer2.0.bn2.running_var, detector.backbone.layer2.0.conv3.weight, detector.backbone.layer2.0.bn3.weight, detector.backbone.layer2.0.bn3.bias, detector.backbone.layer2.0.bn3.running_mean, detector.backbone.layer2.0.bn3.running_var, detector.backbone.layer2.0.downsample.0.weight, detector.backbone.layer2.0.downsample.1.weight, detector.backbone.layer2.0.downsample.1.bias, detector.backbone.layer2.0.downsample.1.running_mean, detector.backbone.layer2.0.downsample.1.running_var, detector.backbone.layer2.1.conv1.weight, detector.backbone.layer2.1.bn1.weight, detector.backbone.layer2.1.bn1.bias, detector.backbone.layer2.1.bn1.running_mean, detector.backbone.layer2.1.bn1.running_var, detector.backbone.layer2.1.conv2.weight, detector.backbone.layer2.1.bn2.weight, detector.backbone.layer2.1.bn2.bias, detector.backbone.layer2.1.bn2.running_mean, 
detector.backbone.layer2.1.bn2.running_var, detector.backbone.layer2.1.conv3.weight, detector.backbone.layer2.1.bn3.weight, detector.backbone.layer2.1.bn3.bias, detector.backbone.layer2.1.bn3.running_mean, detector.backbone.layer2.1.bn3.running_var, detector.backbone.layer2.2.conv1.weight, detector.backbone.layer2.2.bn1.weight, detector.backbone.layer2.2.bn1.bias, detector.backbone.layer2.2.bn1.running_mean, detector.backbone.layer2.2.bn1.running_var, detector.backbone.layer2.2.conv2.weight, detector.backbone.layer2.2.bn2.weight, detector.backbone.layer2.2.bn2.bias, detector.backbone.layer2.2.bn2.running_mean, detector.backbone.layer2.2.bn2.running_var, detector.backbone.layer2.2.conv3.weight, detector.backbone.layer2.2.bn3.weight, detector.backbone.layer2.2.bn3.bias, detector.backbone.layer2.2.bn3.running_mean, detector.backbone.layer2.2.bn3.running_var, detector.backbone.layer2.3.conv1.weight, detector.backbone.layer2.3.bn1.weight, detector.backbone.layer2.3.bn1.bias, detector.backbone.layer2.3.bn1.running_mean, detector.backbone.layer2.3.bn1.running_var, detector.backbone.layer2.3.conv2.weight, detector.backbone.layer2.3.bn2.weight, detector.backbone.layer2.3.bn2.bias, detector.backbone.layer2.3.bn2.running_mean, detector.backbone.layer2.3.bn2.running_var, detector.backbone.layer2.3.conv3.weight, detector.backbone.layer2.3.bn3.weight, detector.backbone.layer2.3.bn3.bias, detector.backbone.layer2.3.bn3.running_mean, detector.backbone.layer2.3.bn3.running_var, detector.backbone.layer3.0.conv1.weight, detector.backbone.layer3.0.bn1.weight, detector.backbone.layer3.0.bn1.bias, detector.backbone.layer3.0.bn1.running_mean, detector.backbone.layer3.0.bn1.running_var, detector.backbone.layer3.0.conv2.weight, detector.backbone.layer3.0.bn2.weight, detector.backbone.layer3.0.bn2.bias, detector.backbone.layer3.0.bn2.running_mean, detector.backbone.layer3.0.bn2.running_var, detector.backbone.layer3.0.conv3.weight, detector.backbone.layer3.0.bn3.weight, detector.backbone.layer3.0.bn3.bias, detector.backbone.layer3.0.bn3.running_mean, detector.backbone.layer3.0.bn3.running_var, detector.backbone.layer3.0.downsample.0.weight, detector.backbone.layer3.0.downsample.1.weight, detector.backbone.layer3.0.downsample.1.bias, detector.backbone.layer3.0.downsample.1.running_mean, detector.backbone.layer3.0.downsample.1.running_var, detector.backbone.layer3.1.conv1.weight, detector.backbone.layer3.1.bn1.weight, detector.backbone.layer3.1.bn1.bias, detector.backbone.layer3.1.bn1.running_mean, detector.backbone.layer3.1.bn1.running_var, detector.backbone.layer3.1.conv2.weight, detector.backbone.layer3.1.bn2.weight, detector.backbone.layer3.1.bn2.bias, detector.backbone.layer3.1.bn2.running_mean, detector.backbone.layer3.1.bn2.running_var, detector.backbone.layer3.1.conv3.weight, detector.backbone.layer3.1.bn3.weight, detector.backbone.layer3.1.bn3.bias, detector.backbone.layer3.1.bn3.running_mean, detector.backbone.layer3.1.bn3.running_var, detector.backbone.layer3.2.conv1.weight, detector.backbone.layer3.2.bn1.weight, detector.backbone.layer3.2.bn1.bias, detector.backbone.layer3.2.bn1.running_mean, detector.backbone.layer3.2.bn1.running_var, detector.backbone.layer3.2.conv2.weight, detector.backbone.layer3.2.bn2.weight, detector.backbone.layer3.2.bn2.bias, detector.backbone.layer3.2.bn2.running_mean, detector.backbone.layer3.2.bn2.running_var, detector.backbone.layer3.2.conv3.weight, detector.backbone.layer3.2.bn3.weight, detector.backbone.layer3.2.bn3.bias, detector.backbone.layer3.2.bn3.running_mean, 
detector.backbone.layer3.2.bn3.running_var, detector.backbone.layer3.3.conv1.weight, detector.backbone.layer3.3.bn1.weight, detector.backbone.layer3.3.bn1.bias, detector.backbone.layer3.3.bn1.running_mean, detector.backbone.layer3.3.bn1.running_var, detector.backbone.layer3.3.conv2.weight, detector.backbone.layer3.3.bn2.weight, detector.backbone.layer3.3.bn2.bias, detector.backbone.layer3.3.bn2.running_mean, detector.backbone.layer3.3.bn2.running_var, detector.backbone.layer3.3.conv3.weight, detector.backbone.layer3.3.bn3.weight, detector.backbone.layer3.3.bn3.bias, detector.backbone.layer3.3.bn3.running_mean, detector.backbone.layer3.3.bn3.running_var, detector.backbone.layer3.4.conv1.weight, detector.backbone.layer3.4.bn1.weight, detector.backbone.layer3.4.bn1.bias, detector.backbone.layer3.4.bn1.running_mean, detector.backbone.layer3.4.bn1.running_var, detector.backbone.layer3.4.conv2.weight, detector.backbone.layer3.4.bn2.weight, detector.backbone.layer3.4.bn2.bias, detector.backbone.layer3.4.bn2.running_mean, detector.backbone.layer3.4.bn2.running_var, detector.backbone.layer3.4.conv3.weight, detector.backbone.layer3.4.bn3.weight, detector.backbone.layer3.4.bn3.bias, detector.backbone.layer3.4.bn3.running_mean, detector.backbone.layer3.4.bn3.running_var, detector.backbone.layer3.5.conv1.weight, detector.backbone.layer3.5.bn1.weight, detector.backbone.layer3.5.bn1.bias, detector.backbone.layer3.5.bn1.running_mean, detector.backbone.layer3.5.bn1.running_var, detector.backbone.layer3.5.conv2.weight, detector.backbone.layer3.5.bn2.weight, detector.backbone.layer3.5.bn2.bias, detector.backbone.layer3.5.bn2.running_mean, detector.backbone.layer3.5.bn2.running_var, detector.backbone.layer3.5.conv3.weight, detector.backbone.layer3.5.bn3.weight, detector.backbone.layer3.5.bn3.bias, detector.backbone.layer3.5.bn3.running_mean, detector.backbone.layer3.5.bn3.running_var, detector.backbone.layer4.0.conv1.weight, detector.backbone.layer4.0.bn1.weight, detector.backbone.layer4.0.bn1.bias, detector.backbone.layer4.0.bn1.running_mean, detector.backbone.layer4.0.bn1.running_var, detector.backbone.layer4.0.conv2.weight, detector.backbone.layer4.0.bn2.weight, detector.backbone.layer4.0.bn2.bias, detector.backbone.layer4.0.bn2.running_mean, detector.backbone.layer4.0.bn2.running_var, detector.backbone.layer4.0.conv3.weight, detector.backbone.layer4.0.bn3.weight, detector.backbone.layer4.0.bn3.bias, detector.backbone.layer4.0.bn3.running_mean, detector.backbone.layer4.0.bn3.running_var, detector.backbone.layer4.0.downsample.0.weight, detector.backbone.layer4.0.downsample.1.weight, detector.backbone.layer4.0.downsample.1.bias, detector.backbone.layer4.0.downsample.1.running_mean, detector.backbone.layer4.0.downsample.1.running_var, detector.backbone.layer4.1.conv1.weight, detector.backbone.layer4.1.bn1.weight, detector.backbone.layer4.1.bn1.bias, detector.backbone.layer4.1.bn1.running_mean, detector.backbone.layer4.1.bn1.running_var, detector.backbone.layer4.1.conv2.weight, detector.backbone.layer4.1.bn2.weight, detector.backbone.layer4.1.bn2.bias, detector.backbone.layer4.1.bn2.running_mean, detector.backbone.layer4.1.bn2.running_var, detector.backbone.layer4.1.conv3.weight, detector.backbone.layer4.1.bn3.weight, detector.backbone.layer4.1.bn3.bias, detector.backbone.layer4.1.bn3.running_mean, detector.backbone.layer4.1.bn3.running_var, detector.backbone.layer4.2.conv1.weight, detector.backbone.layer4.2.bn1.weight, detector.backbone.layer4.2.bn1.bias, detector.backbone.layer4.2.bn1.running_mean, 
detector.backbone.layer4.2.bn1.running_var, detector.backbone.layer4.2.conv2.weight, detector.backbone.layer4.2.bn2.weight, detector.backbone.layer4.2.bn2.bias, detector.backbone.layer4.2.bn2.running_mean, detector.backbone.layer4.2.bn2.running_var, detector.backbone.layer4.2.conv3.weight, detector.backbone.layer4.2.bn3.weight, detector.backbone.layer4.2.bn3.bias, detector.backbone.layer4.2.bn3.running_mean, detector.backbone.layer4.2.bn3.running_var, detector.neck.lateral_convs.0.conv.weight, detector.neck.lateral_convs.0.conv.bias, detector.neck.lateral_convs.1.conv.weight, detector.neck.lateral_convs.1.conv.bias, detector.neck.lateral_convs.2.conv.weight, detector.neck.lateral_convs.2.conv.bias, detector.neck.lateral_convs.3.conv.weight, detector.neck.lateral_convs.3.conv.bias, detector.neck.fpn_convs.0.conv.weight, detector.neck.fpn_convs.0.conv.bias, detector.neck.fpn_convs.1.conv.weight, detector.neck.fpn_convs.1.conv.bias, detector.neck.fpn_convs.2.conv.weight, detector.neck.fpn_convs.2.conv.bias, detector.neck.fpn_convs.3.conv.weight, detector.neck.fpn_convs.3.conv.bias, detector.rpn_head.rpn_conv.weight, detector.rpn_head.rpn_conv.bias, detector.rpn_head.rpn_cls.weight, detector.rpn_head.rpn_cls.bias, detector.rpn_head.rpn_reg.weight, detector.rpn_head.rpn_reg.bias, detector.roi_head.bbox_head.fc_cls.weight, detector.roi_head.bbox_head.fc_cls.bias, detector.roi_head.bbox_head.fc_reg.weight, detector.roi_head.bbox_head.fc_reg.bias, detector.roi_head.bbox_head.shared_fcs.0.weight, detector.roi_head.bbox_head.shared_fcs.0.bias, detector.roi_head.bbox_head.shared_fcs.1.weight, detector.roi_head.bbox_head.shared_fcs.1.bias, reid.backbone.conv1.weight, reid.backbone.bn1.weight, reid.backbone.bn1.bias, reid.backbone.bn1.running_mean, reid.backbone.bn1.running_var, reid.backbone.layer1.0.conv1.weight, reid.backbone.layer1.0.bn1.weight, reid.backbone.layer1.0.bn1.bias, reid.backbone.layer1.0.bn1.running_mean, reid.backbone.layer1.0.bn1.running_var, reid.backbone.layer1.0.conv2.weight, reid.backbone.layer1.0.bn2.weight, reid.backbone.layer1.0.bn2.bias, reid.backbone.layer1.0.bn2.running_mean, reid.backbone.layer1.0.bn2.running_var, reid.backbone.layer1.0.conv3.weight, reid.backbone.layer1.0.bn3.weight, reid.backbone.layer1.0.bn3.bias, reid.backbone.layer1.0.bn3.running_mean, reid.backbone.layer1.0.bn3.running_var, reid.backbone.layer1.0.downsample.0.weight, reid.backbone.layer1.0.downsample.1.weight, reid.backbone.layer1.0.downsample.1.bias, reid.backbone.layer1.0.downsample.1.running_mean, reid.backbone.layer1.0.downsample.1.running_var, reid.backbone.layer1.1.conv1.weight, reid.backbone.layer1.1.bn1.weight, reid.backbone.layer1.1.bn1.bias, reid.backbone.layer1.1.bn1.running_mean, reid.backbone.layer1.1.bn1.running_var, reid.backbone.layer1.1.conv2.weight, reid.backbone.layer1.1.bn2.weight, reid.backbone.layer1.1.bn2.bias, reid.backbone.layer1.1.bn2.running_mean, reid.backbone.layer1.1.bn2.running_var, reid.backbone.layer1.1.conv3.weight, reid.backbone.layer1.1.bn3.weight, reid.backbone.layer1.1.bn3.bias, reid.backbone.layer1.1.bn3.running_mean, reid.backbone.layer1.1.bn3.running_var, reid.backbone.layer1.2.conv1.weight, reid.backbone.layer1.2.bn1.weight, reid.backbone.layer1.2.bn1.bias, reid.backbone.layer1.2.bn1.running_mean, reid.backbone.layer1.2.bn1.running_var, reid.backbone.layer1.2.conv2.weight, reid.backbone.layer1.2.bn2.weight, reid.backbone.layer1.2.bn2.bias, reid.backbone.layer1.2.bn2.running_mean, reid.backbone.layer1.2.bn2.running_var, reid.backbone.layer1.2.conv3.weight, 
reid.backbone.layer1.2.bn3.weight, reid.backbone.layer1.2.bn3.bias, reid.backbone.layer1.2.bn3.running_mean, reid.backbone.layer1.2.bn3.running_var, reid.backbone.layer2.0.conv1.weight, reid.backbone.layer2.0.bn1.weight, reid.backbone.layer2.0.bn1.bias, reid.backbone.layer2.0.bn1.running_mean, reid.backbone.layer2.0.bn1.running_var, reid.backbone.layer2.0.conv2.weight, reid.backbone.layer2.0.bn2.weight, reid.backbone.layer2.0.bn2.bias, reid.backbone.layer2.0.bn2.running_mean, reid.backbone.layer2.0.bn2.running_var, reid.backbone.layer2.0.conv3.weight, reid.backbone.layer2.0.bn3.weight, reid.backbone.layer2.0.bn3.bias, reid.backbone.layer2.0.bn3.running_mean, reid.backbone.layer2.0.bn3.running_var, reid.backbone.layer2.0.downsample.0.weight, reid.backbone.layer2.0.downsample.1.weight, reid.backbone.layer2.0.downsample.1.bias, reid.backbone.layer2.0.downsample.1.running_mean, reid.backbone.layer2.0.downsample.1.running_var, reid.backbone.layer2.1.conv1.weight, reid.backbone.layer2.1.bn1.weight, reid.backbone.layer2.1.bn1.bias, reid.backbone.layer2.1.bn1.running_mean, reid.backbone.layer2.1.bn1.running_var, reid.backbone.layer2.1.conv2.weight, reid.backbone.layer2.1.bn2.weight, reid.backbone.layer2.1.bn2.bias, reid.backbone.layer2.1.bn2.running_mean, reid.backbone.layer2.1.bn2.running_var, reid.backbone.layer2.1.conv3.weight, reid.backbone.layer2.1.bn3.weight, reid.backbone.layer2.1.bn3.bias, reid.backbone.layer2.1.bn3.running_mean, reid.backbone.layer2.1.bn3.running_var, reid.backbone.layer2.2.conv1.weight, reid.backbone.layer2.2.bn1.weight, reid.backbone.layer2.2.bn1.bias, reid.backbone.layer2.2.bn1.running_mean, reid.backbone.layer2.2.bn1.running_var, reid.backbone.layer2.2.conv2.weight, reid.backbone.layer2.2.bn2.weight, reid.backbone.layer2.2.bn2.bias, reid.backbone.layer2.2.bn2.running_mean, reid.backbone.layer2.2.bn2.running_var, reid.backbone.layer2.2.conv3.weight, reid.backbone.layer2.2.bn3.weight, reid.backbone.layer2.2.bn3.bias, reid.backbone.layer2.2.bn3.running_mean, reid.backbone.layer2.2.bn3.running_var, reid.backbone.layer2.3.conv1.weight, reid.backbone.layer2.3.bn1.weight, reid.backbone.layer2.3.bn1.bias, reid.backbone.layer2.3.bn1.running_mean, reid.backbone.layer2.3.bn1.running_var, reid.backbone.layer2.3.conv2.weight, reid.backbone.layer2.3.bn2.weight, reid.backbone.layer2.3.bn2.bias, reid.backbone.layer2.3.bn2.running_mean, reid.backbone.layer2.3.bn2.running_var, reid.backbone.layer2.3.conv3.weight, reid.backbone.layer2.3.bn3.weight, reid.backbone.layer2.3.bn3.bias, reid.backbone.layer2.3.bn3.running_mean, reid.backbone.layer2.3.bn3.running_var, reid.backbone.layer3.0.conv1.weight, reid.backbone.layer3.0.bn1.weight, reid.backbone.layer3.0.bn1.bias, reid.backbone.layer3.0.bn1.running_mean, reid.backbone.layer3.0.bn1.running_var, reid.backbone.layer3.0.conv2.weight, reid.backbone.layer3.0.bn2.weight, reid.backbone.layer3.0.bn2.bias, reid.backbone.layer3.0.bn2.running_mean, reid.backbone.layer3.0.bn2.running_var, reid.backbone.layer3.0.conv3.weight, reid.backbone.layer3.0.bn3.weight, reid.backbone.layer3.0.bn3.bias, reid.backbone.layer3.0.bn3.running_mean, reid.backbone.layer3.0.bn3.running_var, reid.backbone.layer3.0.downsample.0.weight, reid.backbone.layer3.0.downsample.1.weight, reid.backbone.layer3.0.downsample.1.bias, reid.backbone.layer3.0.downsample.1.running_mean, reid.backbone.layer3.0.downsample.1.running_var, reid.backbone.layer3.1.conv1.weight, reid.backbone.layer3.1.bn1.weight, reid.backbone.layer3.1.bn1.bias, reid.backbone.layer3.1.bn1.running_mean, 
reid.backbone.layer3.1.bn1.running_var, reid.backbone.layer3.1.conv2.weight, reid.backbone.layer3.1.bn2.weight, reid.backbone.layer3.1.bn2.bias, reid.backbone.layer3.1.bn2.running_mean, reid.backbone.layer3.1.bn2.running_var, reid.backbone.layer3.1.conv3.weight, reid.backbone.layer3.1.bn3.weight, reid.backbone.layer3.1.bn3.bias, reid.backbone.layer3.1.bn3.running_mean, reid.backbone.layer3.1.bn3.running_var, reid.backbone.layer3.2.conv1.weight, reid.backbone.layer3.2.bn1.weight, reid.backbone.layer3.2.bn1.bias, reid.backbone.layer3.2.bn1.running_mean, reid.backbone.layer3.2.bn1.running_var, reid.backbone.layer3.2.conv2.weight, reid.backbone.layer3.2.bn2.weight, reid.backbone.layer3.2.bn2.bias, reid.backbone.layer3.2.bn2.running_mean, reid.backbone.layer3.2.bn2.running_var, reid.backbone.layer3.2.conv3.weight, reid.backbone.layer3.2.bn3.weight, reid.backbone.layer3.2.bn3.bias, reid.backbone.layer3.2.bn3.running_mean, reid.backbone.layer3.2.bn3.running_var, reid.backbone.layer3.3.conv1.weight, reid.backbone.layer3.3.bn1.weight, reid.backbone.layer3.3.bn1.bias, reid.backbone.layer3.3.bn1.running_mean, reid.backbone.layer3.3.bn1.running_var, reid.backbone.layer3.3.conv2.weight, reid.backbone.layer3.3.bn2.weight, reid.backbone.layer3.3.bn2.bias, reid.backbone.layer3.3.bn2.running_mean, reid.backbone.layer3.3.bn2.running_var, reid.backbone.layer3.3.conv3.weight, reid.backbone.layer3.3.bn3.weight, reid.backbone.layer3.3.bn3.bias, reid.backbone.layer3.3.bn3.running_mean, reid.backbone.layer3.3.bn3.running_var, reid.backbone.layer3.4.conv1.weight, reid.backbone.layer3.4.bn1.weight, reid.backbone.layer3.4.bn1.bias, reid.backbone.layer3.4.bn1.running_mean, reid.backbone.layer3.4.bn1.running_var, reid.backbone.layer3.4.conv2.weight, reid.backbone.layer3.4.bn2.weight, reid.backbone.layer3.4.bn2.bias, reid.backbone.layer3.4.bn2.running_mean, reid.backbone.layer3.4.bn2.running_var, reid.backbone.layer3.4.conv3.weight, reid.backbone.layer3.4.bn3.weight, reid.backbone.layer3.4.bn3.bias, reid.backbone.layer3.4.bn3.running_mean, reid.backbone.layer3.4.bn3.running_var, reid.backbone.layer3.5.conv1.weight, reid.backbone.layer3.5.bn1.weight, reid.backbone.layer3.5.bn1.bias, reid.backbone.layer3.5.bn1.running_mean, reid.backbone.layer3.5.bn1.running_var, reid.backbone.layer3.5.conv2.weight, reid.backbone.layer3.5.bn2.weight, reid.backbone.layer3.5.bn2.bias, reid.backbone.layer3.5.bn2.running_mean, reid.backbone.layer3.5.bn2.running_var, reid.backbone.layer3.5.conv3.weight, reid.backbone.layer3.5.bn3.weight, reid.backbone.layer3.5.bn3.bias, reid.backbone.layer3.5.bn3.running_mean, reid.backbone.layer3.5.bn3.running_var, reid.backbone.layer4.0.conv1.weight, reid.backbone.layer4.0.bn1.weight, reid.backbone.layer4.0.bn1.bias, reid.backbone.layer4.0.bn1.running_mean, reid.backbone.layer4.0.bn1.running_var, reid.backbone.layer4.0.conv2.weight, reid.backbone.layer4.0.bn2.weight, reid.backbone.layer4.0.bn2.bias, reid.backbone.layer4.0.bn2.running_mean, reid.backbone.layer4.0.bn2.running_var, reid.backbone.layer4.0.conv3.weight, reid.backbone.layer4.0.bn3.weight, reid.backbone.layer4.0.bn3.bias, reid.backbone.layer4.0.bn3.running_mean, reid.backbone.layer4.0.bn3.running_var, reid.backbone.layer4.0.downsample.0.weight, reid.backbone.layer4.0.downsample.1.weight, reid.backbone.layer4.0.downsample.1.bias, reid.backbone.layer4.0.downsample.1.running_mean, reid.backbone.layer4.0.downsample.1.running_var, reid.backbone.layer4.1.conv1.weight, reid.backbone.layer4.1.bn1.weight, reid.backbone.layer4.1.bn1.bias, 
reid.backbone.layer4.1.bn1.running_mean, reid.backbone.layer4.1.bn1.running_var, reid.backbone.layer4.1.conv2.weight, reid.backbone.layer4.1.bn2.weight, reid.backbone.layer4.1.bn2.bias, reid.backbone.layer4.1.bn2.running_mean, reid.backbone.layer4.1.bn2.running_var, reid.backbone.layer4.1.conv3.weight, reid.backbone.layer4.1.bn3.weight, reid.backbone.layer4.1.bn3.bias, reid.backbone.layer4.1.bn3.running_mean, reid.backbone.layer4.1.bn3.running_var, reid.backbone.layer4.2.conv1.weight, reid.backbone.layer4.2.bn1.weight, reid.backbone.layer4.2.bn1.bias, reid.backbone.layer4.2.bn1.running_mean, reid.backbone.layer4.2.bn1.running_var, reid.backbone.layer4.2.conv2.weight, reid.backbone.layer4.2.bn2.weight, reid.backbone.layer4.2.bn2.bias, reid.backbone.layer4.2.bn2.running_mean, reid.backbone.layer4.2.bn2.running_var, reid.backbone.layer4.2.conv3.weight, reid.backbone.layer4.2.bn3.weight, reid.backbone.layer4.2.bn3.bias, reid.backbone.layer4.2.bn3.running_mean, reid.backbone.layer4.2.bn3.running_var, reid.head.fcs.0.fc.weight, reid.head.fcs.0.fc.bias, reid.head.fcs.0.bn.weight, reid.head.fcs.0.bn.bias, reid.head.fcs.0.bn.running_mean, reid.head.fcs.0.bn.running_var, reid.head.fc_out.weight, reid.head.fc_out.bias

[                                                  ] 0/8, elapsed: 0s, ETA:Traceback (most recent call last):
  File "demo/demo_mot.py", line 94, in <module>
    main()
  File "demo/demo_mot.py", line 69, in main
    result = inference_mot(model, img, frame_id=i)
  File "/mmtracking/mmtrack/apis/inference.py", line 78, in inference_mot
    data = test_pipeline(data)
  File "/opt/conda/lib/python3.7/site-packages/mmdet/datasets/pipelines/compose.py", line 40, in __call__
    data = t(data)
  File "/mmtracking/mmtrack/datasets/pipelines/loading.py", line 100, in __call__
    detections = results['detections']
KeyError: 'detections'

The same problem occurs with other DeepSORT config and model combinations.
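For context, the -public configs read pre-computed public detections through a LoadDetections step in the test pipeline, which expects a 'detections' entry that the demo script does not supply, while the -private configs run the built-in detector themselves. A minimal sketch of the demo flow against a private config (the config name below is an assumption based on the config family referenced in this issue):

import mmcv
from mmtrack.apis import inference_mot, init_model

# Assumption: this is the private counterpart of the public config above.
# Private configs run the built-in detector, so the test pipeline does not
# look for a pre-computed 'detections' entry.
config = 'configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-private-half.py'
model = init_model(config, device='cuda:0')

for frame_id, frame in enumerate(mmcv.VideoReader('demo/demo.mp4')):
    result = inference_mot(model, frame, frame_id=frame_id)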

Build requirements error

I have installed the required packages, but when building with python setup.py develop an error occurred: error: The 'flake8' distribution was not found and is required by motmetrics

(mmtrack) shl@zhihui-mint:~/shl_res/1_project/mmtracking$ python
Python 3.7.9 (default, Aug 31 2020, 12:42:55) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mmcv
>>> import mmdet
>>> import torch
>>> mmcv.__version__
'1.2.5'
>>> mmdet.__version__
'2.8.0'
>>> torch.__version__
'1.6.0'
>>> e
(mmtrack) shl@zhihui-mint:~/shl_res/1_project/mmtracking$ python setup.py develop
running develop
running egg_info
writing mmtrack.egg-info/PKG-INFO
writing dependency_links to mmtrack.egg-info/dependency_links.txt
writing requirements to mmtrack.egg-info/requires.txt
writing top-level names to mmtrack.egg-info/top_level.txt
reading manifest file 'mmtrack.egg-info/SOURCES.txt'
writing manifest file 'mmtrack.egg-info/SOURCES.txt'
running build_ext
Creating /home/shl/anaconda3/envs/mmtrack/lib/python3.7/site-packages/mmtrack.egg-link (link to .)
mmtrack 0.5.0 is already the active version in easy-install.pth

Installed /home/shl/shl_res/1_project/mmtracking
Processing dependencies for mmtrack==0.5.0
Searching for pytest
Reading https://pypi.org/simple/pytest/
Downloading https://files.pythonhosted.org/packages/d7/15/5ef931cbd22585865aad0ea025162545b53af9319cf38542e0b7981d5b34/pytest-6.2.1-py3-none-any.whl#sha256=1969f797a1a0dbd8ccf0fecc80262312729afea9c17f1d70ebf85c5e76c6f7c8
Best match: pytest 6.2.1
Processing pytest-6.2.1-py3-none-any.whl
Installing pytest-6.2.1-py3-none-any.whl to /home/shl/anaconda3/envs/mmtrack/lib/python3.7/site-packages
Adding pytest 6.2.1 to easy-install.pth file
Installing py.test script to /home/shl/anaconda3/envs/mmtrack/bin
Installing pytest script to /home/shl/anaconda3/envs/mmtrack/bin

Installed /home/shl/anaconda3/envs/mmtrack/lib/python3.7/site-packages/pytest-6.2.1-py3.7.egg
Searching for flake8-import-order
Reading https://pypi.org/simple/flake8-import-order/
Downloading https://files.pythonhosted.org/packages/ab/52/cf2d6e2c505644ca06de2f6f3546f1e4f2b7be34246c9e0757c6048868f9/flake8_import_order-0.18.1-py2.py3-none-any.whl#sha256=90a80e46886259b9c396b578d75c749801a41ee969a235e163cfe1be7afd2543
Best match: flake8-import-order 0.18.1
Processing flake8_import_order-0.18.1-py2.py3-none-any.whl
Installing flake8_import_order-0.18.1-py2.py3-none-any.whl to /home/shl/anaconda3/envs/mmtrack/lib/python3.7/site-packages
Adding flake8-import-order 0.18.1 to easy-install.pth file

Installed /home/shl/anaconda3/envs/mmtrack/lib/python3.7/site-packages/flake8_import_order-0.18.1-py3.7.egg
Searching for flake8
Downloading https://files.pythonhosted.org/packages/81/47/5f2cea0164e77dd40726d83b4c865c2a701f60b73cb6af7b539cd42aafb4/flake8-import-order-0.18.1.tar.gz#sha256=a28dc39545ea4606c1ac3c24e9d05c849c6e5444a50fb7e9cdd430fc94de6e92
Best match: flake8 import-order-0.18.1
Processing flake8-import-order-0.18.1.tar.gz
Writing /tmp/easy_install-wx87docg/flake8-import-order-0.18.1/setup.cfg
Running flake8-import-order-0.18.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-wx87docg/flake8-import-order-0.18.1/egg-dist-tmp-efiwyyrt
file flake8_import_order.py (for module flake8_import_order) not found
warning: no files found matching 'README.md'
warning: no previously-included files matching '*.py[co]' found under directory 'tests'
file flake8_import_order.py (for module flake8_import_order) not found
file flake8_import_order.py (for module flake8_import_order) not found
removing '/home/shl/anaconda3/envs/mmtrack/lib/python3.7/site-packages/flake8_import_order-0.18.1-py3.7.egg' (and everything under it)
creating /home/shl/anaconda3/envs/mmtrack/lib/python3.7/site-packages/flake8_import_order-0.18.1-py3.7.egg
Extracting flake8_import_order-0.18.1-py3.7.egg to /home/shl/anaconda3/envs/mmtrack/lib/python3.7/site-packages
flake8-import-order 0.18.1 is already the active version in easy-install.pth

Installed /home/shl/anaconda3/envs/mmtrack/lib/python3.7/site-packages/flake8_import_order-0.18.1-py3.7.egg
error: The 'flake8' distribution was not found and is required by motmetrics
>>>

About pose tracking

Hello! I wonder whether pose tracking algorithms will be supported in the toolkit?

Some questions about SOT

Hello, when I use SOT I have some questions:
1. When using SOT, the target I want to track is selected with an initial box, but the bbox cannot follow it well. I want to know whether the problem is with my usage or whether the model itself has limitations.
2. demo_sot cannot save each frame to the folder specified by --output the way demo_mot can.
3. If the bbox of an object is only known in the middle of the video, can SOT be used to obtain the bbox of that target across the entire video?
Thanks a lot.

wrap_fp16

ImportError: cannot import name 'wrap_fp16_model' from 'mmdet.core'

What can I do to solve this problem?
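Not an authoritative fix, but in recent MMCV versions wrap_fp16_model is available from mmcv.runner rather than being re-exported from mmdet.core, so a version-tolerant import along these lines is one possible workaround:

# Hedged workaround: newer MMCV exposes wrap_fp16_model directly, while older
# MMDetection versions re-exported it from mmdet.core. Try both locations.
try:
    from mmcv.runner import wrap_fp16_model
except ImportError:
    from mmdet.core import wrap_fp16_model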

batch support

When are you planning to add support for performing training and inference in batches?

Motivation
It would increase the training and inference speeds a lot.

Can I use intermediate results

Describe the feature
Can I use intermediate results? For example, I want to save the detection results for visualization.
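For what it's worth, the per-frame MOT inference result is a plain dict, so the detection part can usually be pulled out and saved directly. A minimal sketch, assuming the result dict exposes detections under a key such as 'det_bboxes' (exact key names vary between MMTracking versions; config and file paths are placeholders):

import pickle

from mmtrack.apis import inference_mot, init_model

# Hypothetical config path for illustration; any MOT config would do.
model = init_model('configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-private-half.py')

result = inference_mot(model, 'frame_000001.jpg', frame_id=0)
# Assumption: the per-frame result dict carries the intermediate detection
# results (e.g. under 'det_bboxes') next to the tracking results.
detections = result.get('det_bboxes', result)
with open('detections_000001.pkl', 'wb') as f:
    pickle.dump(detections, f)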

How to track humans in video?

I am trying to run the VID demo with the following command:

# python demo/demo_vid.py configs/vid/dff/dff_faster_rcnn_r101_dc5_1x_imagenetvid.py --checkpoint https://download.openmmlab.com/mmtracking/vid/dff/dff_faster_rcnn_r101_dc5_1x_imagenetvid/dff_faster_rcnn_r101_dc5_1x_imagenetvid_20201218_172720-ad732e17.pth  --input demo/demo.mp4 --device cpu --output output.mp4
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
2021-04-24 19:32:39,179 - mmtrack - INFO - load motion from: https://download.openmmlab.com/mmtracking/pretrained_weights/flownet_simple.pth
2021-04-24 19:32:39,179 - mmtrack - INFO - Use load_from_http loader
Use load_from_http loader
/opt/conda/lib/python3.7/site-packages/mmdet/models/dense_heads/rpn_head.py:192: UserWarning: In rpn_proposal or test_cfg, nms_thr has been moved to a dict named nms as iou_threshold, max_num has been renamed as max_per_img, name of original arguments and the way to specify iou_threshold of NMS will be deprecated.
  'In rpn_proposal or test_cfg, '
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py:3000: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
  warnings.warn("The default behavior for interpolate/upsample with float scale_factor changed "

In the output video the car and bike get bounding boxes, but the people do not. How can I track only humans in the video?
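As a rough post-processing sketch (not the toolbox's built-in behavior), one could filter the detection result down to the person class before drawing, assuming the result follows the MMDetection convention of one (N, 5) array per class and that the person class index for the dataset at hand is known:

import numpy as np

def keep_only_class(det_result, keep_idx):
    # det_result is assumed to follow the MMDetection convention of one
    # (N, 5) array of [x1, y1, x2, y2, score] rows per class.
    return [
        boxes if idx == keep_idx else np.empty((0, 5), dtype=np.float32)
        for idx, boxes in enumerate(det_result)
    ]

# Hypothetical usage: person_idx depends on the classes the model was trained on.
# person_only = keep_only_class(result['det_bboxes'], keep_idx=person_idx)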

IndexError: index 3 is out of bounds for dimension 0 with size 3 when running Tracktor with Custom Dataset

I get this error when I run a copy of demo_mot.py with a config modified from configs/mot/tracktor/tracktor_faster-rcnn_r50_fpn_4e_mot17-private-half.py, changing the weights and num_classes from

    type='Tracktor',
    pretrains=dict(
        detector=  # noqa: E251
        'https://download.openmmlab.com/mmtracking/mot/faster_rcnn/faster-rcnn_r50_fpn_4e_mot17-half-64ee2ed4.pth', 
...
                clip_border=False), num_classes=1))),

To

    type='Tracktor',
    pretrains=dict(
        detector=  # noqa: E251
        '/home/palm/rcnn_1/epoch_48.pth', 
...
                clip_border=False), num_classes=3))),

Environment

sys.platform: linux
Python: 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.0_bu.TC445_37.28845127_0
GPU 0: GeForce GTX 1080
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.7.1+cu101
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.1
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75
  - CuDNN 7.6.3
  - Magma 2.5.2
  - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

TorchVision: 0.8.2+cu101
OpenCV: 4.4.0
MMCV: 1.2.6
mmtrack: 0.5.0

Error traceback

  File "/home/palm/PycharmProjects/mmtracking/demo.py", line 71, in <module>
    main()
  File "/home/palm/PycharmProjects/mmtracking/demo.py", line 57, in main
    result = inference_mot(model, img, frame_id=i//5)
  File "/home/palm/PycharmProjects/mmtracking/mmtrack/apis/inference.py", line 94, in inference_mot
    result = model(return_loss=False, rescale=True, **data)
  File "/home/palm/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/palm/miniconda3/lib/python3.6/site-packages/mmcv/runner/fp16_utils.py", line 84, in new_func
    return old_func(*args, **kwargs)
  File "/home/palm/PycharmProjects/mmtracking/mmtrack/models/mot/base.py", line 154, in forward
    return self.forward_test(img, img_metas, **kwargs)
  File "/home/palm/PycharmProjects/mmtracking/mmtrack/models/mot/base.py", line 131, in forward_test
    return self.simple_test(imgs[0], img_metas[0], **kwargs)
  File "/home/palm/PycharmProjects/mmtracking/mmtrack/models/mot/tracktor.py", line 148, in simple_test
    **kwargs)
  File "/home/palm/PycharmProjects/mmtracking/mmtrack/models/mot/trackers/tracktor_tracker.py", line 168, in track
    feats, img_metas, model.detector, frame_id, rescale)
  File "/home/palm/PycharmProjects/mmtracking/mmtrack/models/mot/trackers/tracktor_tracker.py", line 96, in regress_tracks
    ids = ids[valid_inds]
IndexError: index 3 is out of bounds for dimension 0 with size 3

IndexError: Caught IndexError in DataLoader worker process 0. When I train the model on LISA Traffic Sign Datasets.

Hi, I converted the LISA Traffic Sign Dataset to the CocoVideo format and then ran train.py. Everything seemed fine, but at around Epoch [1][600/6618] I got the error.

IndexError: Caught IndexError in DataLoader worker process 0. 

More information is below.

Traceback (most recent call last):
  File "tools/train.py", line 168, in <module>
    main()
  File "tools/train.py", line 164, in main
    meta=meta)
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmtrack/apis/train.py", line 136, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 125, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 47, in train
    for i, data_batch in enumerate(self.data_loader):
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmdet/datasets/custom.py", line 193, in __getitem__
    data = self.prepare_train_img(idx)
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmtrack/datasets/coco_video_dataset.py", line 280, in prepare_train_img
    return self.prepare_data(idx)
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmtrack/datasets/coco_video_dataset.py", line 268, in prepare_data
    return self.pipeline(results)
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmdet/datasets/pipelines/compose.py", line 40, in __call__
    data = t(data)
  File "/home/syo/opt/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmtrack/datasets/pipelines/formatting.py", line 187, in __call__

Help! Please! Thank you very much!

error: OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

Describe the bug
After installing mmtracking, I ran demo_mot.py and then got the message: OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

Reproduction

  1. What command or script did you run?
 python demo/demo_mot.py configs/mot/deepsort/sort_faster-rcnn_fpn_4e_mot17-private.py --input demo/demo.mp4 --output mot.mp4 --backend 'plt'

Environment

sys.platform: linux
Python: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: :/usr/local/cuda-11.1:/usr/local/cuda-11.1
GPU 0: GeForce RTX 3090
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.7.0
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 11.0
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_37,code=compute_37
  • CuDNN 8.0.3
  • Magma 2.5.2
  • Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.8.1
OpenCV: 4.5.1
MMCV: 1.2.5
mmtrack: 0.5.0

Error traceback
If applicable, paste the error traceback here.

2021-01-16 23:01:32,177 - mmtrack - INFO - load detector from: https://download.openmmlab.com/mmtracking/mot/faster_rcnn/faster-rcnn_r50_fpn_4e_mot17-ffa52ae7.pth
[                                                  ] 0/8, elapsed: 0s, ETA:/home/n504/hubw/code/mmtracking/mmtrack/core/utils/visualization.py:163: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
  plt.show()
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 8/8, 1.5 task/s, elapsed: 5s, ETA:     0smaking the output video at mot.mp4 with a FPS of 3
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 8/8, 35.4 task/s, elapsed: 0s, ETA:     0s

unexpected keyword argument 'return_inds'

Describe the bug

An error occurs when I run demos with Tracktor.

Reproduction

  1. What command or script did you run?

python demo/demo_mot.py configs/mot/tracktor/tracktor_faster-rcnn_r50_fpn_4e_mot17-private.py --input demo/demo.mp4 --output mot2.mp4 --device cpu

  1. Did you make any modifications on the code or config?
    No. I made no changes.

Environment

  1. Please run python mmdet/utils/collect_env.py to collect necessary environment information and paste it here.
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
sys.platform: darwin
Python: 3.7.4 (default, Aug 13 2019, 15:17:50) [Clang 4.0.1 (tags/RELEASE_401/final)]
CUDA available: False
GCC: Apple clang version 12.0.0 (clang-1200.0.32.28)
PyTorch: 1.4.0
PyTorch compiling details: PyTorch built with:
  - GCC 4.2
  - clang 9.0.0
  - Intel(R) Math Kernel Library Version 2019.0.5 Product Build 20190808 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
  - NNPACK is enabled
  - Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -faligned-new -fno-math-errno -fno-trapping-math -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=OFF, USE_STATIC_DISPATCH=OFF, 

TorchVision: 0.5.0
OpenCV: 4.2.0
MMCV: 1.2.1
mmtrack: 0.5.0

Error traceback

Downloading: "https://download.openmmlab.com/mmtracking/mot/reid/tracktor_reid_r50_iter25245-a452f51f.pth" to /Users/apple/.cache/torch/checkpoints/tracktor_reid_r50_iter25245-a452f51f.pth
100%|██████████| 98.4M/98.4M [00:11<00:00, 8.75MB/s]
[>>>>                               ] 1/8, 0.1 task/s, elapsed: 9s, ETA:    60sTraceback (most recent call last):
  File "/Users/apple/Desktop/github/mmtracking/demo/demo_mot.py", line 88, in <module>
    main()
  File "/Users/apple/Desktop/github/mmtracking/demo/demo_mot.py", line 64, in main
    result = inference_mot(model, img, frame_id=i)
  File "/Users/apple/Desktop/github/mmtracking/mmtrack/apis/inference.py", line 92, in inference_mot
    result = model(return_loss=False, rescale=True, **data)
  File "/Users/apple/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/apple/anaconda3/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 84, in new_func
    return old_func(*args, **kwargs)
  File "/Users/apple/Desktop/github/mmtracking/mmtrack/models/mot/base.py", line 154, in forward
    return self.forward_test(img, img_metas, **kwargs)
  File "/Users/apple/Desktop/github/mmtracking/mmtrack/models/mot/base.py", line 131, in forward_test
    return self.simple_test(imgs[0], img_metas[0], **kwargs)
  File "/Users/apple/Desktop/github/mmtracking/mmtrack/models/mot/tracktor.py", line 139, in simple_test
    **kwargs)
  File "/Users/apple/Desktop/github/mmtracking/mmtrack/models/mot/trackers/tracktor_tracker.py", line 146, in track
    feats, img_metas, model.detector, frame_id, rescale)
  File "/Users/apple/Desktop/github/mmtracking/mmtrack/models/mot/trackers/tracktor_tracker.py", line 72, in regress_tracks
    return_inds=True)
TypeError: multiclass_nms() got an unexpected keyword argument 'return_inds'

Details of Tracktor

Hi!
For the results of Tracktor, did you use interpolation on the results?
Thanks

SOT demo failed

I built the image with Docker, installed the requirements with pip, and tried to run the SOT demo:

# python demo/demo_sot.py configs/sot/siamese_rpn/siamese_rpn_r50_1x_lasot.py  --input demo/demo.mp4 --device cpu --output output.mp4
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
2021-04-24 19:26:56,652 - mmtrack - INFO - load backbone from: https://download.openmmlab.com/mmtracking/pretrained_weights/sot_resnet50.model
2021-04-24 19:26:56,652 - mmtrack - INFO - Use load_from_http loader
Warning: The model doesn't have classes
Select a ROI and then press SPACE or ENTER button!
Cancel the selection process by pressing c button!
qt.qpa.xcb: could not connect to display 
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/opt/conda/lib/python3.7/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

Aborted (core dumped)

About Tracktor

Hi !
Thanks for your sharing. Really admire your work.
Just would like to know, what is the main difference between the original Tracktor++V2 and this reproduction in mmtracking?
From the code, it seems that the tracking (data association) part is almost the same as the original, but the performance increases a lot in terms of MOTA and IDF1, which is really great.
Does the difference therefore lie in the detector?
Thanks.

ValueError: need at least one array to concatenate

I have followed all the instructions, and I'm getting this error when I try to run

python tools/train.py configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-public-half.py

Traceback (most recent call last):
  File "tools/train.py", line 168, in <module>
    main()
  File "tools/train.py", line 164, in main
    meta=meta)
  File "/home/maxwelr/mmtracking/mmtrack/apis/train.py", line 136, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/maxwelr/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 125, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/maxwelr/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 47, in train
    for i, data_batch in enumerate(self.data_loader):
  File "/home/maxwelr/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "/home/maxwelr/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 764, in __init__
    self._try_put_index()
  File "/home/maxwelr/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 994, in _try_put_index
    index = self._next_index()
  File "/home/maxwelr/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 357, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/home/maxwelr/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 208, in __iter__
    for idx in self.sampler:
  File "/home/maxwelr/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmdet/datasets/samplers/group_sampler.py", line 36, in __iter__
    indices = np.concatenate(indices)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
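For reference, this failure in mmdet's GroupSampler (np.concatenate over an empty list of indices) usually means the dataset resolved to zero samples, for example because ann_file or img_prefix points to the wrong place. A quick, hedged sanity check of the converted annotations might look like this (the file path is an assumption based on the default MOT17 layout used by the half-train configs):

from pycocotools.coco import COCO

# Hedged sanity check: verify the converted COCO-style annotation file is
# non-empty before training. The path below is an assumption.
coco = COCO('data/MOT17/annotations/half-train_cocoformat.json')
print(len(coco.getImgIds()), 'images,', len(coco.getAnnIds()), 'annotations')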

python setup.py develop

root@/mmtracking-master# python setup.py develop
running develop
running egg_info
writing mmtrack.egg-info/PKG-INFO
writing dependency_links to mmtrack.egg-info/dependency_links.txt
writing requirements to mmtrack.egg-info/requires.txt
writing top-level names to mmtrack.egg-info/top_level.txt
reading manifest file 'mmtrack.egg-info/SOURCES.txt'
writing manifest file 'mmtrack.egg-info/SOURCES.txt'
/opt/conda/lib/python3.6/site-packages/torch/utils/cpp_extension.py:339: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
running build_ext
Creating /opt/conda/lib/python3.6/site-packages/mmtrack.egg-link (link to .)
mmtrack 0.5.0 is already the active version in easy-install.pth

Installed /data/lyy/object-detection/mmtracking-master
Processing dependencies for mmtrack==0.5.0
Searching for dotty_dict
Reading https://pypi.org/simple/dotty_dict/
Downloading https://files.pythonhosted.org/packages/a7/da/fc25898c4edb9549b2aac0f7329fec027d654e94d4c4b89849d4c5fff0a4/dotty_dict-1.3.0.tar.gz#sha256=eb0035a3629ecd84397a68f1f42f1e94abd1c34577a19cd3eacad331ee7cbaf0
Best match: dotty-dict 1.3.0
Processing dotty_dict-1.3.0.tar.gz
Writing /tmp/easy_install-6tvk0j52/dotty_dict-1.3.0/setup.cfg
Running dotty_dict-1.3.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-6tvk0j52/dotty_dict-1.3.0/egg-dist-tmp-2m_gtven
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-6tvk0j52/dotty_dict-1.3.0/setup.py", line 16, in

File "/opt/conda/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 118: ordinal not in range(128)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "setup.py", line 159, in
zip_safe=False)
File "/opt/conda/lib/python3.6/site-packages/setuptools/init.py", line 165, in setup
return distutils.core.setup(**attrs)
File "/opt/conda/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/opt/conda/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/develop.py", line 38, in run
self.install_for_development()
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/develop.py", line 155, in install_for_development
self.process_distribution(None, self.dist, not self.no_deps)
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 759, in process_distribution
[requirement], self.local_index, self.easy_install
File "/opt/conda/lib/python3.6/site-packages/pkg_resources/init.py", line 781, in resolve
replace_conflicting=replace_conflicting
File "/opt/conda/lib/python3.6/site-packages/pkg_resources/init.py", line 1064, in best_match
return self.obtain(req, installer)
File "/opt/conda/lib/python3.6/site-packages/pkg_resources/init.py", line 1076, in obtain
return installer(requirement)
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 686, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 712, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 897, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 1167, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 1151, in run_setup
run_setup(setup_script, args)
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 253, in run_setup
raise
File "/opt/conda/lib/python3.6/contextlib.py", line 99, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/opt/conda/lib/python3.6/contextlib.py", line 99, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 166, in save_modules
saved_exc.resume()
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "/opt/conda/lib/python3.6/site-packages/setuptools/_vendor/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/opt/conda/lib/python3.6/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-6tvk0j52/dotty_dict-1.3.0/setup.py", line 16, in

File "/opt/conda/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 118: ordinal not in range(128)

The reason why imagenet_det_30plus1cls.json is needed

Hello.
Thank you very much for your nice work.

I wonder why the ImageNet DET dataset is needed when I want to train video object detection models such as FGFA.

I prepared the ImageNet VID dataset and converted it to CocoVID format, but I couldn't train my custom model properly since I don't have the "imagenet_det_30plus1cls.json" file.

Is there any way to train my custom VID model without the detection dataset?

In my opinion, the VID dataset already has bounding boxes for the objects, so the object detection dataset should not be necessary.

Thank you for reading; I hope this issue can help others.

What's the development plan

It's a great project. Can you share your development roadmap like mmdetection does, so that more people can join this project and contribute more modern algorithms?

demo where I run on my own bounding boxes

Is there a demo where I can run tracking on my own bounding boxes for the multi-object tracking case?

I have my own detector that returns bounding boxes and their confidences. I want to use mmtracking to do the tracking, but I'm not sure how to instantiate the tracker and run it on my own boxes. Is there a demo I can use for that?
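There is no official demo for this in the version discussed here, but as a library-agnostic illustration of the association step, a minimal greedy IoU matcher over externally supplied boxes could look like the sketch below. This is not MMTracking's tracker API, just a toy example of linking one's own boxes frame to frame:

import numpy as np

def iou_matrix(a, b):
    # Pairwise IoU between two (N, 4) / (M, 4) arrays of [x1, y1, x2, y2] boxes.
    x1 = np.maximum(a[:, None, 0], b[None, :, 0])
    y1 = np.maximum(a[:, None, 1], b[None, :, 1])
    x2 = np.minimum(a[:, None, 2], b[None, :, 2])
    y2 = np.minimum(a[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-6)

def greedy_match(prev_boxes, cur_boxes, iou_thr=0.3):
    # Greedily assign current boxes to previous boxes by descending IoU.
    matches = {}
    if len(prev_boxes) and len(cur_boxes):
        ious = iou_matrix(prev_boxes, cur_boxes)
        order = zip(*np.unravel_index(np.argsort(-ious, axis=None), ious.shape))
        for prev_idx, cur_idx in order:
            if ious[prev_idx, cur_idx] < iou_thr:
                break
            if prev_idx not in matches and cur_idx not in matches.values():
                matches[prev_idx] = cur_idx
    return matches  # {index in prev_boxes: index in cur_boxes}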

RuntimeError when running the Tracktor inference code

Describe the bug
A clear and concise description of what the bug is.

I get the runtime error when running the Tracktor inference code.
Seems like there are several empty boxes in public detections.
I used tracktor_faster-rcnn_r50_fpn_4e_mot17-public.py config file for inference.

Reproduction

  1. What command or script did you run?
tools/dist_test.sh configs/mot/tracktor/tracktor_faster-rcnn_r50_fpn_4e_mot17-public.py 8 --eval track
  1. Did you make any modifications on the code or config? Did you understand what you have modified?

Not at all.

  1. What dataset did you use and what task did you run?

Environment

sys.platform: linux
Python: 3.7.0 (default, Oct 9 2018, 10:31:47) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GPU 0,1,2,3,4,5,6,7: Quadro RTX 6000
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.7.1
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 10.2
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  • CuDNN 7.6.5
  • Magma 2.5.2
  • Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.8.2
OpenCV: 4.5.1
MMCV: 1.2.4
mmtrack: 0.5.0

Error traceback
If applicable, paste the error traceback here.

Traceback (most recent call last):
  File "tools/test.py", line 171, in <module>
Traceback (most recent call last):
  File "tools/test.py", line 171, in <module>
    main()
  File "tools/test.py", line 151, in main
    args.gpu_collect)
  File "/home/miruware/projects/mmtracking/mmtrack/apis/test.py", line 82, in multi_gpu_test
    result = model(return_loss=False, rescale=True, **data)
  File "/home/miruware/anaconda3/envs/prj-vod-mmdet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    main()
  File "tools/test.py", line 151, in main
    args.gpu_collect)
  File "/home/miruware/projects/mmtracking/mmtrack/apis/test.py", line 82, in multi_gpu_test
    result = self.forward(*input, **kwargs)
  File "/home/miruware/anaconda3/envs/prj-vod-mmdet/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
    result = model(return_loss=False, rescale=True, **data)    output = self.module(*inputs[0], **kwargs[0])

  File "/home/miruware/anaconda3/envs/prj-vod-mmdet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
  File "/home/miruware/anaconda3/envs/prj-vod-mmdet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/miruware/anaconda3/envs/prj-vod-mmdet/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 84, in new_func
        result = self.forward(*input, **kwargs)return old_func(*args, **kwargs)

  File "/home/miruware/projects/mmtracking/mmtrack/models/mot/base.py", line 154, in forward
  File "/home/miruware/anaconda3/envs/prj-vod-mmdet/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
    return self.forward_test(img, img_metas, **kwargs)
  File "/home/miruware/projects/mmtracking/mmtrack/models/mot/base.py", line 131, in forward_test
    return self.simple_test(imgs[0], img_metas[0], **kwargs)
  File "/home/miruware/projects/mmtracking/mmtrack/models/mot/tracktor.py", line 119, in simple_test
    rescale=rescale)
      File "/home/miruware/projects/mmdetection/mmdet/models/roi_heads/test_mixins.py", line 113, in simple_test_bboxes
output = self.module(*inputs[0], **kwargs[0])
  File "/home/miruware/anaconda3/envs/prj-vod-mmdet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    rois = rois.reshape(batch_size, num_proposals_per_img, -1)
RuntimeError: cannot reshape tensor of 0 elements into shape [1, 0, -1] because the unspecified dimension size -1 can be any value and is ambiguous
    result = self.forward(*input, **kwargs)
  File "/home/miruware/anaconda3/envs/prj-vod-mmdet/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 84, in new_func
    return old_func(*args, **kwargs)
  File "/home/miruware/projects/mmtracking/mmtrack/models/mot/base.py", line 154, in forward
    return self.forward_test(img, img_metas, **kwargs)
  File "/home/miruware/projects/mmtracking/mmtrack/models/mot/base.py", line 131, in forward_test
    return self.simple_test(imgs[0], img_metas[0], **kwargs)
  File "/home/miruware/projects/mmtracking/mmtrack/models/mot/tracktor.py", line 119, in simple_test
    rescale=rescale)
  File "/home/miruware/projects/mmdetection/mmdet/models/roi_heads/test_mixins.py", line 113, in simple_test_bboxes
    rois = rois.reshape(batch_size, num_proposals_per_img, -1)
RuntimeError: cannot reshape tensor of 0 elements into shape [1, 0, -1] because the unspecified dimension size -1 can be any value and is ambiguous

question about ref_img_sampler parameters (num_ref_imgs, frame_range, filter_key_img)

@OceanPang @GT9505 Thanks a lot for maintaining this library!

I read the related docs on these params but couldn't understand their meaning. Could you explain what they mean and what they change during training:

num_ref_imgs, frame_range, filter_key_img from this config.

What is the difference between these two settings:
num_ref_imgs=9, frame_range=9, filter_key_img=True
num_ref_imgs=2, frame_range=0, filter_key_img=True
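For readers with the same question, a hedged reading of these fields, based on how CocoVideoDataset configs are typically written in this repo (treat the comments as an interpretation rather than official documentation):

# Interpretation of the ref_img_sampler fields discussed above (hedged):
ref_img_sampler = dict(
    # how many reference frames are sampled for each key frame
    num_ref_imgs=2,
    # an int frame_range of R means references are drawn from
    # [key_frame - R, key_frame + R]; frame_range=0 restricts sampling
    # to the key frame itself
    frame_range=9,
    # when True, the key frame is excluded from its own reference candidates
    filter_key_img=True)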

Several questions on the MOT config settings

Hello,

I am new to MOT.
I have several questions below:

  1. If my understanding is correct, the public stands for the detection boxes provided by the challenge to ensure the comparability between different tracking methods (so we use the provided bounding boxes for every video), whereas the private stands for detection boxes from any user-trained object detector (so we use the generated bounding boxes). Is my understanding correct?

  2. I see the MOT17 'train' dataset is halved for train and test. Is this common in MOT?

  3. Why does the config file tracktor_faster-rcnn_r50_fpn_4e_mot17-private.py set the test_set argument to 'train' instead of 'test'?

  4. What is the purpose of the public-half_search config file? Is it to set the proper hyperparameters? If so, which hyperparameters are we especially targeting in the search (e.g., obj_score_thr)?

KeyError: 'ChannelMapper is not in the neck registry'

Hello, mmtracking is a nice piece of work, but when I train Siamese RPN++ I encounter this KeyError. Can you help me solve it? Thank you very much.

Traceback (most recent call last):
File "tools/train.py", line 168, in
main()
File "tools/train.py", line 141, in main
model = build_model(cfg.model)
File "/media/yxy/4TB/ayxy/mmtracking/mmtrack/models/builder.py", line 69, in build_model
return build(cfg, MODELS)
File "/media/yxy/4TB/ayxy/mmtracking/mmtrack/models/builder.py", line 34, in build
return build_from_cfg(cfg, registry, default_args)
File "/home/yxy/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmcv/utils/registry.py", line 171, in build_from_cfg
return obj_cls(**args)
File "/media/yxy/4TB/ayxy/mmtracking/mmtrack/models/sot/siamrpn.py", line 31, in init
self.neck = build_neck(neck)
File "/media/yxy/4TB/ayxy/mmdetection/mmdet/models/builder.py", line 42, in build_neck
return build(cfg, NECKS)
File "/media/yxy/4TB/ayxy/mmdetection/mmdet/models/builder.py", line 32, in build
return build_from_cfg(cfg, registry, default_args)
File "/home/yxy/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmcv/utils/registry.py", line 164, in build_from_cfg
f'{obj_type} is not in the {registry.name} registry')
KeyError: 'ChannelMapper is not in the neck registry'
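A hedged way to check whether this comes from an MMDetection build that simply predates the ChannelMapper neck (rather than from the config itself) is to query the neck registry directly:

import mmdet
from mmdet.models.builder import NECKS

# Hedged diagnostic: confirm whether ChannelMapper is registered in the
# installed MMDetection before changing the SiamRPN++ config.
print('mmdet version:', mmdet.__version__)
print('ChannelMapper registered:', 'ChannelMapper' in NECKS.module_dict)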

Question about initializing the tracker.

Can I start tracking from an arbitrary bounding box on a video? If so, should I understand that one-shot learning is used for tracking?
Any basic usage document or tutorial would be nice.

Best
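For the single-object case, a minimal sketch of initializing tracking from a user-chosen box with the 0.x API, assuming mmtrack.apis exposes init_model and inference_sot as used by demo_sot.py (config path and box values are placeholders; the box format is [x1, y1, x2, y2]):

import mmcv
from mmtrack.apis import inference_sot, init_model

# Hypothetical config; the SiameseRPN++ LaSOT config is referenced elsewhere
# in these issues. A trained checkpoint would normally be passed as well.
model = init_model('configs/sot/siamese_rpn/siamese_rpn_r50_1x_lasot.py', device='cuda:0')

init_bbox = [100, 150, 300, 400]  # user-chosen [x1, y1, x2, y2] on the first frame
for frame_id, frame in enumerate(mmcv.VideoReader('demo/demo.mp4')):
    result = inference_sot(model, frame, init_bbox, frame_id=frame_id)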

How to actually Train with mmtracking (e.g MOT)

I'm following the tutorial:
Training on a single GPU
python tools/train.py ${CONFIG_FILE} [optional arguments]

then I tried:
python tools/train.py configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-public-half.py
it gives me:
NotImplementedError: Please train detector and reid models first and inference with Tracktor.

Can you give me some advice? Thanks!
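For anyone hitting the same NotImplementedError: SORT/DeepSORT (and Tracktor) are not trained end-to-end in this toolbox; the detector and ReID model are trained with their own configs, and the tracking config then only points at those weights. A hedged sketch of the relevant part of such a config, mirroring the pretrains structure of the Tracktor excerpt quoted earlier on this page (the checkpoint paths are hypothetical placeholders):

# Hedged sketch of the tracking config after training detector and ReID separately.
model = dict(
    type='DeepSORT',
    pretrains=dict(
        # checkpoints produced by training the detector and the ReID model
        # on their own configs; the paths are hypothetical placeholders
        detector='work_dirs/faster_rcnn_mot17_half/latest.pth',
        reid='work_dirs/reid_mot17/latest.pth'))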
