
CenterTrack's Introduction

Tracking Objects as Points

Simultaneous object detection and tracking using center points:

Tracking Objects as Points,
Xingyi Zhou, Vladlen Koltun, Philipp Krähenbühl,
arXiv technical report (arXiv 2004.01177)

@article{zhou2020tracking,
  title={Tracking Objects as Points},
  author={Zhou, Xingyi and Koltun, Vladlen and Kr{\"a}henb{\"u}hl, Philipp},
  journal={ECCV},
  year={2020}
}

Contact: [email protected]. Any questions or discussion are welcome!

Abstract

Tracking has traditionally been the art of following interest points through space and time. This changed with the rise of powerful deep networks. Nowadays, tracking is dominated by pipelines that perform object detection followed by temporal association, also known as tracking-by-detection. In this paper, we present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art. Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame. Given this minimal input, CenterTrack localizes objects and predicts their associations with the previous frame. That's it. CenterTrack is simple, online (no peeking into the future), and real-time. It achieves 67.3% MOTA on the MOT17 challenge at 22 FPS and 89.4% MOTA on the KITTI tracking benchmark at 15 FPS, setting a new state of the art on both datasets. CenterTrack is easily extended to monocular 3D tracking by regressing additional 3D attributes. Using monocular video input, it achieves 28.3% AMOTA@0.2 on the newly released nuScenes 3D tracking benchmark, substantially outperforming the monocular baseline on this benchmark while running at 28 FPS.

Features at a glance

  • One-sentence method summary: our model takes the current frame, the previous frame, and a heatmap rendered from the previous tracking results as input, and predicts the current detection heatmap as well as offsets from the current centers to the corresponding centers in the previous frame (see the association sketch after this list).

  • The model can be trained on still image datasets if videos are not available.

  • Easily extends to monocular 3d object tracking, multi-category tracking, and pose tracking.

  • State-of-the-art performance on MOT17, KITTI, and nuScenes monocular tracking benchmarks.
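The snippet below is a minimal sketch of the offset-based association idea described in the summary above; it is not the repository's tracker (which also handles confidence thresholds, public detections, and track ages), and the function and variable names here are illustrative assumptions.

import numpy as np

def greedy_associate(curr_centers, tracking_offsets, prev_centers, max_dist=50.0):
    """Sketch: greedily match current detections to previous-frame tracks.

    curr_centers:     (N, 2) array of detected center points in the current frame
    tracking_offsets: (N, 2) array of predicted displacements to the previous frame
    prev_centers:     (M, 2) array of center points of existing tracks
    Returns a list of (current_index, previous_index or -1 for a new track).
    """
    matches, used = [], set()
    for i, (ct, off) in enumerate(zip(curr_centers, tracking_offsets)):
        projected = ct + off  # where this object is predicted to have been in the previous frame
        if len(prev_centers) == 0:
            matches.append((i, -1))
            continue
        dists = np.linalg.norm(prev_centers - projected, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist and j not in used:  # claim the closest unmatched track
            used.add(j)
            matches.append((i, j))
        else:
            matches.append((i, -1))  # otherwise start a new track
    return matches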

Main results

Pedestrian tracking on MOT17 test set

| Detection | MOTA | FPS |
|-----------|------|-----|
| Public    | 61.5 | 22  |
| Private   | 67.8 | 22  |

2D vehicle tracking on KITTI test set (with flip test)

| MOTA  | FPS |
|-------|-----|
| 89.44 | 15  |

3D tracking on nuScenes test set

| AMOTA @ 0.2 | AMOTA | FPS |
|-------------|-------|-----|
| 27.8        | 4.6   | 28  |

Besides benchmark evaluation, we also provide models for 80-category tracking and pose tracking trained on COCO. See the sample visual results below (Video files from openpose and YOLO).

All models and details are available in our Model zoo.

Installation

Please refer to INSTALL.md for installation instructions.

Use CenterTrack

We support demo for videos, webcam, and image folders.

First, download the models (by default, nuScenes_3Dtracking for monocular 3D tracking, coco_tracking for 80-category tracking, and coco_pose_tracking for pose tracking) from the Model zoo and put them in CenterTrack_ROOT/models/.

We provide a video clip from the nuScenes dataset in videos/nuscenes_mini.mp4. To test monocular 3D tracking on this video, run

python demo.py tracking,ddd --load_model ../models/nuScenes_3Dtracking.pth --dataset nuscenes --pre_hm --track_thresh 0.1 --demo ../videos/nuscenes_mini.mp4 --test_focal_length 633

You need to specify test_focal_length for the monocular 3D tracking demo to convert image coordinates back to 3D. The value 633 is half of the typical focal length (~1266) of the nuScenes dataset at its native resolution of 1600x900. The demo video has an input resolution of 800x448, so we use half the focal length. You don't need to set test_focal_length when testing on the original nuScenes data.
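As a quick check of that arithmetic (a standalone sketch, not code from the repo), the focal length scales linearly with the horizontal resolution of the input:

# nuScenes images are 1600x900 with a focal length of roughly 1266 pixels.
# The demo clip is 800x448, i.e. half the width, so the focal length halves as well.
orig_focal, orig_width = 1266.0, 1600
demo_width = 800
test_focal_length = orig_focal * demo_width / orig_width
print(test_focal_length)  # 633.0 -- the value passed via --test_focal_length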

If set up correctly, you will see an output video like:

Similarly, for 80-category tracking on images/video, run:

python demo.py tracking --load_model ../models/coco_tracking.pth --demo /path/to/image/or/folder/or/video 

If you want to test with person tracking models, you need to add --num_classes 1:

python demo.py tracking --load_model ../models/mot17_half.pth --num_classes 1 --demo /path/to/image/or/folder/or/video 

For webcam demo, run

python demo.py tracking --load_model ../models/coco_tracking.pth --demo webcam 

For monocular 3D tracking, run

python demo.py tracking,ddd --load_model ../models/nuScenes_3Dtracking.pth --demo /path/to/image/or/folder/or/video/or/webcam 

Similarly, for pose tracking, run:

python demo.py tracking,multi_pose --load_model ../models/coco_pose_tracking.pth --demo /path/to/image/or/folder/or/video/or/webcam 

The result for the example images should look like:

You can add --debug 2 to visualize the heatmap and offset predictions.

To use CenterTrack in your own project, you can:

import sys
CENTERTRACK_PATH = '/path/to/CenterTrack/src/lib/'
sys.path.insert(0, CENTERTRACK_PATH)

from detector import Detector
from opts import opts

MODEL_PATH = '/path/to/model'
TASK = 'tracking'  # or 'tracking,multi_pose' for pose tracking and 'tracking,ddd' for monocular 3D tracking
opt = opts().init('{} --load_model {}'.format(TASK, MODEL_PATH).split(' '))
detector = Detector(opt)

images = []  # images read with OpenCV, or frames from a video
for img in images:
  ret = detector.run(img)['results']

Each ret is a list of dicts: [{'bbox': [x1, y1, x2, y2], 'tracking_id': id, ...}]
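For example, to feed frames from a video file into the detector created above (a sketch assuming, as in the demo, that detector.run accepts BGR numpy arrays read by OpenCV; the video path is hypothetical):

import cv2

cap = cv2.VideoCapture('/path/to/video.mp4')
while True:
    ok, frame = cap.read()
    if not ok or frame is None:  # end of the video
        break
    results = detector.run(frame)['results']  # detector from the snippet above
    for item in results:
        x1, y1, x2, y2 = item['bbox']
        print(item['tracking_id'], (x1, y1, x2, y2))
cap.release()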

Training on custom dataset

If you want to train CenterTrack on your own dataset, you can use --dataset custom and manually specify the annotation file, image path, input resolution, and number of categories. You still need to create the annotation files in COCO format (see the many convert_X_to_coco.py examples in tools; a sketch of the expected structure follows the command below). For example, you can use the following command to train our MOT17 experiment without using the pre-defined mot dataset file:

python main.py tracking --exp_id mot17_half_sc --dataset custom --custom_dataset_ann_path ../data/mot17/annotations/train_half.json --custom_dataset_img_path ../data/mot17/train/ --input_h 544 --input_w 960 --num_classes 1 --pre_hm --ltrb_amodal --same_aug --hm_disturb 0.05 --lost_disturb 0.4 --fp_disturb 0.1 --gpus 0,1
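For reference, here is a minimal sketch of what such a COCO-format annotation file could look like. The tracking-specific fields (videos, video_id, frame_id, track_id) are assumptions based on the convert_X_to_coco.py tools; treat those scripts as the authoritative reference for the exact format.

import json

annotations = {
    'categories': [{'id': 1, 'name': 'pedestrian'}],
    'videos': [{'id': 1, 'file_name': 'seq_01'}],
    'images': [
        {'id': 1, 'file_name': 'seq_01/000001.jpg', 'video_id': 1, 'frame_id': 1,
         'height': 1080, 'width': 1920},
    ],
    'annotations': [
        {'id': 1, 'image_id': 1, 'category_id': 1, 'track_id': 1,
         'bbox': [100, 200, 50, 120]},  # COCO convention: [x, y, width, height]
    ],
}
with open('train_half.json', 'w') as f:  # write it wherever --custom_dataset_ann_path points
    json.dump(annotations, f)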

Benchmark Evaluation and Training

After installation, follow the instructions in DATA.md to setup the datasets. Then check GETTING_STARTED.md to reproduce the results in the paper. We provide scripts for all the experiments in the experiments folder.

License

CenterTrack is developed upon CenterNet. Both codebases are released under the MIT License. Some code in CenterNet comes from third parties with different licenses; please check the CenterNet repo for details. In addition, this repo uses py-motmetrics for MOT evaluation and nuscenes-devkit for nuScenes evaluation and preprocessing. See NOTICE for details. Please note the licenses of each dataset. Most of the datasets we used in this project are under non-commercial licenses.

CenterTrack's People

Contributors

nuri-benbarka, xingyizhou


CenterTrack's Issues

OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)' OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

Has anyone else been unable to output the video properly?

With the default settings, the error shows:
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)' OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

I also tried the following:

cv2.VideoWriter_fourcc(*'mp4v')

but it still can't save the video properly.
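One thing worth trying (an assumption about the cause, not a confirmed fix) is to make the fourcc match the container, e.g. mp4v for an .mp4 output file, or XVID paired with an .avi file name:

import cv2

width, height, fps = 800, 448, 30  # example values; use your actual frame size and rate
writer = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))
# or: cv2.VideoWriter('output.avi', cv2.VideoWriter_fourcc(*'XVID'), fps, (width, height))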

demo.py

When I use demo.py to run an .avi file, a problem happens at the last frame:
Traceback (most recent call last):
File "demo.py", line 126, in
demo(opt)
File "demo.py", line 65, in demo
ret = detector.run(img, input_meta)
File "/home/ppp/centertrack_new/src/lib/detector.py", line 66, in run
image = image_or_path_or_tensor['image'][0].numpy()
TypeError: 'NoneType' object is not subscriptable

What should I do?
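A likely cause (an assumption, not a confirmed diagnosis) is that cv2.VideoCapture.read() returns None once the video is exhausted, so the last iteration passes None to detector.run. A guard in the demo's read loop along these lines avoids it (variable names are hypothetical):

import cv2

cam = cv2.VideoCapture('input.avi')  # hypothetical input path
while True:
    ok, img = cam.read()
    if not ok or img is None:  # past the last frame
        break                  # stop instead of passing None to detector.run
    ret = detector.run(img)    # detector as constructed in demo.py
cam.release()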

MOT17 evaluation get error with no ground truth

When trying to evaluate the MOT17 dataset at commit d0d9561, this error occurs:

--gt_type _val_half
gt_type _val_half
gt_files []
11:48:20 INFO - Found 0 groundtruths and 7 test files.
11:48:20 INFO - Available LAP solvers ['lapsolver', 'lap', 'scipy', 'munkres']
11:48:20 INFO - Default LAP solver 'lapsolver'
11:48:20 INFO - Loading files.
11:48:20 WARNING - No ground truth for MOT17-13-FRCNN, skipping.
11:48:20 WARNING - No ground truth for MOT17-09-FRCNN, skipping.
11:48:20 WARNING - No ground truth for MOT17-05-FRCNN, skipping.
11:48:20 WARNING - No ground truth for MOT17-04-FRCNN, skipping.
11:48:20 WARNING - No ground truth for MOT17-02-FRCNN, skipping.
11:48:20 WARNING - No ground truth for MOT17-10-FRCNN, skipping.
11:48:20 WARNING - No ground truth for MOT17-11-FRCNN, skipping.
11:48:20 INFO - Running metrics
/home/dhk/.pyenv/versions/3.7.6/envs/.venv/lib/python3.7/site-packages/motmetrics/mot.py:243: FutureWarning: the 'labels' keyword is deprecated, use 'codes' instead
idx = pd.MultiIndex(levels=[[],[]], labels=[[],[]], names=['FrameId','Event'])
/home/dhk/.pyenv/versions/3.7.6/envs/.venv/lib/python3.7/site-packages/motmetrics/metrics.py:302: RuntimeWarning: invalid value encountered in long_scalars
return num_detections / num_objects
/home/dhk/.pyenv/versions/3.7.6/envs/.venv/lib/python3.7/site-packages/motmetrics/metrics.py:298: RuntimeWarning: invalid value encountered in long_scalars
return num_detections / (num_false_positives + num_detections)
/home/dhk/.pyenv/versions/3.7.6/envs/.venv/lib/python3.7/site-packages/motmetrics/metrics.py:294: RuntimeWarning: invalid value encountered in long_scalars
return 1. - (num_misses + num_switches + num_false_positives) / num_objects
/home/dhk/.pyenv/versions/3.7.6/envs/.venv/lib/python3.7/site-packages/motmetrics/metrics.py:290: RuntimeWarning: invalid value encountered in double_scalars
return df.noraw['D'].sum() / num_detections
Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP num_objects
OVERALL nan% nan% 0 nan% nan% nan% nan% nan% nan% nan% nan% nan 0
Traceback (most recent call last)
...
IndexError: arrays used as indices must be of integer (or boolean) type

This part is suspicious.

os.system('python tools/eval_motchallenge.py ' + \
'../data/mot{}/{}/ '.format(self.year, 'trainval') + \
'{}/results_mot{}/ '.format(save_dir, self.dataset_version) + \
gt_type_str + ' --eval_official')

There may be two solutions. The first is to edit mot.py to match the data folder structure, specifically changing trainval to train. The second is to change the data folder preparation script to match mot.py.
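A sketch of the first option (assuming the MOT17 ground truth actually lives under ../data/mot17/train/):

# in mot.py: point the evaluator at 'train' instead of the non-existent 'trainval' folder
os.system('python tools/eval_motchallenge.py ' + \
  '../data/mot{}/{}/ '.format(self.year, 'train') + \
  '{}/results_mot{}/ '.format(save_dir, self.dataset_version) + \
  gt_type_str + ' --eval_official')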

Offset not used?

Hello

Firstly, I appreciate your work

  1. Is the offset being used in tracking? I can't find it in the tracking.py file. Is the offset supposed to be 0 in the implemented code?

  2. When I use the previous heatmap (pre_hm), false predictions start accumulating after some frames (this does not happen when I use the same model without pre_hm, even though the model was trained with pre_hm). Is the pre_hm not robust? Did you face any similar issues like the ones in the images below? (I am trying this for vehicles; the model has been trained for 18 epochs so far.)


cannot convert to onnx

Use scipy.optimize.linear_sum_assignment instead.
FutureWarning)
Running tracking
Using tracking threshold for out threshold! 0.3
Fix size testing.
training chunk_sizes: [32]
input h w: 512 512
heads {'hm': 80, 'reg': 2, 'wh': 2, 'tracking': 2}
weights {'hm': 1, 'reg': 1, 'wh': 0.1, 'tracking': 1}
head conv {'hm': [256], 'reg': [256], 'wh': [256], 'tracking': [256]}
Namespace(K=100, add_05=False, amodel_offset_weight=1, arch='dla_34', aug_rot=0, backbone='dla34', batch_size=32, chunk_sizes=[32], data_dir='/home/unreal/rahul/Documents/pose_py/CenterTrack/src/lib/../../data', dataset='coco', dataset_version='', debug=0, debug_dir='/home/unreal/rahul/Documents/pose_py/CenterTrack/src/lib/../../exp/tracking/default/debug', debugger_theme='white', demo='', dense_reg=1, dep_weight=1, depth_scale=1, device=device(type='cuda'), dim_weight=1, dla_node='dcn', down_ratio=4, efficient_level=0, eval_val=False, exp_dir='/home/unreal/rahul/Documents/pose_py/CenterTrack/src/lib/../../exp/tracking', exp_id='default', fix_res=True, fix_short=-1, flip=0.5, flip_test=False, fp_disturb=0, gpus=[0], gpus_str='0', head_conv={'hm': [256], 'reg': [256], 'wh': [256], 'tracking': [256]}, head_kernel=3, heads={'hm': 80, 'reg': 2, 'wh': 2, 'tracking': 2}, hm_disturb=0, hm_hp_weight=1, hm_weight=1, hp_weight=1, hungarian=False, ignore_loaded_cats=[], input_h=512, input_res=512, input_w=512, keep_res=False, kitti_split='3dop', load_model='', load_results='', lost_disturb=0, lr=0.000125, lr_step=[60], ltrb=False, ltrb_amodal=False, ltrb_amodal_weight=0.1, ltrb_weight=0.1, map_argoverse_id=False, master_batch_size=32, max_age=-1, max_frame_dist=3, model_output_list=True, msra_outchannel=256, neck='dlaup', new_thresh=0.3, nms=False, no_color_aug=False, no_pause=False, no_pre_img=False, non_block_test=False, not_cuda_benchmark=False, not_idaup=False, not_prefetch_test=False, not_rand_crop=False, not_set_cuda_env=False, not_show_bbox=False, not_show_number=False, num_classes=80, num_epochs=70, num_head_conv=1, num_iters=-1, num_layers=101, num_stacks=1, num_workers=4, nuscenes_att=False, nuscenes_att_weight=1, off_weight=1, optim='adam', out_thresh=0.3, output_h=128, output_res=128, output_w=128, pad=31, pre_hm=False, pre_img=True, pre_thresh=0.3, print_iter=0, prior_bias=-4.6, public_det=False, qualitative=False, reg_loss='l1', reset_hm=False, resize_video=False, resume=False, reuse_hm=False, root_dir='/home/unreal/rahul/Documents/pose_py/CenterTrack/src/lib/../..', rot_weight=1, rotate=0, same_aug_pre=False, save_all=False, save_dir='/home/unreal/rahul/Documents/pose_py/CenterTrack/src/lib/../../exp/tracking/default', save_framerate=30, save_img_suffix='', save_imgs=[], save_point=[90], save_results=False, save_video=False, scale=0, seed=317, shift=0, show_track_color=False, skip_first=-1, tango_color=False, task='tracking', test=False, test_dataset='coco', test_scales=[1.0], track_thresh=0.3, tracking=True, tracking_weight=1, trainval=False, transpose_video=False, use_kpt_center=False, use_loaded_results=False, val_intervals=10000, velocity=False, velocity_weight=1, video_h=512, video_w=512, vis_gt_bev='', vis_thresh=0.3, weights={'hm': 1, 'reg': 1, 'wh': 0.1, 'tracking': 1}, wh_weight=0.1, zero_pre_hm=False, zero_tracking=False)
Using node type: (<class 'model.networks.dla.DeformConv'>, <class 'model.networks.dla.DeformConv'>)
/home/unreal/rahul/Documents/pose_py/CenterTrack/src/lib/model/networks/DCNv2/dcn_v2.py:31: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
ctx.deformable_groups)
Traceback (most recent call last):
File "convert_onnx.py", line 65, in
convert_onnx(opt)
File "convert_onnx.py", line 43, in convert_onnx
torch.onnx.export(model.module, dummy_input1, "model.onnx")
File "/home/unreal/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/onnx/init.py", line 148, in export
strip_doc_string, dynamic_axes, keep_initializers_as_inputs)
File "/home/unreal/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/onnx/utils.py", line 66, in export
dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)
File "/home/unreal/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/onnx/utils.py", line 428, in _export
operator_export_type, strip_doc_string, val_keep_init_as_ip)
RuntimeError: ONNX export failed: Couldn't export Python operator _DCNv2

evaluate_trackingtrain_half.seqmap is missing

When I tried to evaluate KITTI tracking, I got an error saying FileNotFoundError: [Errno 2] No such file or directory: './tools/eval_kitti_track/data/tracking/evaluate_trackingval_half.seqmap'.
In this folder I can find evaluate_trackingval_half.seqmap, so shouldn't there be an evaluate_trackingtrain_half.seqmap too?

Something missing in 'convert_mot_det_to_results.py'

Hi,
First, for seqs = [s for s in os.listdir(DET_PATH) if '_det' in s]: there is no folder with '_det' in '../../data/mot17/'.

Second, in the line if not IS_THIRD_PARTY:, there is no IS_THIRD_PARTY defined anywhere.

I just followed the commands in your description.
Is there anything wrong?
Thanks a lot!

import DCN failed in KITTI Tracking

Thanks for your work!

I'm running test.py as mentioned in GETTING_STARTED.md (KITTI Tracking).
While running, I get the error "import DCN failed".
I see that src/lib/model/networks/dla.py provides the following code:

try:
    from .DCNv2.dcn_v2 import DCN
except:
    print('import DCN failed')
    DCN = None

But there is no .DCNv2 lib in your repository, so DCN is always None.

Can anyone help me?

Can't save video

When I ran demo.py with the --save_video argument, I got the warning below and the video wasn't saved. I realized it doesn't even create the results folder, so I made it myself. Then a 'default_nuscenes_mini.mp4.mp4' file appeared in the results folder, but it was empty.
I played around with some combinations of codecs (MJPG, DIVX) and extensions (.avi, .mp4), but it still failed. Is it only my environment issue?

[CODE] python demo.py tracking,ddd --load_model ../models/nuScenes_3Dtracking.pth --dataset nuscenes --pre_hm --track_thresh 0.1 --demo ../videos/nuscenes_mini.mp4 --test_focal_length 633 --save_video
[ERROR] OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

Warning: No ImageNet pretrain!!

When I use python demo.py tracking --load_model ../models/coco_tracking.pth --demo /path/to/image/or/folder/or/video, I get the following error:
D:\ProgramData\Anaconda3\envs\pytorch13\lib\site-packages\sklearn\utils\linear_assignment_.py:22: FutureWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
  FutureWarning)
Running tracking
Using tracking threshold for out threshold! 0.3
Fix size testing.
training chunk_sizes: [32]
input h w: 512 512
heads {'hm': 80, 'reg': 2, 'wh': 2, 'tracking': 2}
weights {'hm': 1, 'reg': 1, 'wh': 0.1, 'tracking': 1}
head conv {'hm': [256], 'reg': [256], 'wh': [256], 'tracking': [256]}
Creating model...
Using node type: (<class 'model.networks.dla.DeformConv'>, <class 'model.networks.dla.DeformConv'>)
Warning: No ImageNet pretrain!!
Traceback (most recent call last):
  File "demo.py", line 118, in <module>
    demo(opt)
  File "demo.py", line 23, in demo
    detector = Detector(opt)
  File "E:\Track3\CenterTrack\src\lib\detector.py", line 33, in __init__
    opt.arch, opt.heads, opt.head_conv, opt=opt)
  File "E:\Track3\CenterTrack\src\lib\model\model.py", line 28, in create_model
    model = model_class(num_layers, heads=head, head_convs=head_conv, opt=opt)
  File "E:\Track3\CenterTrack\src\lib\model\networks\dla.py", line 611, in __init__
    node_type=self.node_type)
  File "E:\Track3\CenterTrack\src\lib\model\networks\dla.py", line 564, in __init__
    node_type=node_type))
  File "E:\Track3\CenterTrack\src\lib\model\networks\dla.py", line 526, in __init__
    proj = node_type[0](c, o)
  File "E:\Track3\CenterTrack\src\lib\model\networks\dla.py", line 513, in __init__
    self.conv = DCN(chi, cho, kernel_size=(3,3), stride=1, padding=1, dilation=1, deformable_groups=1)
TypeError: 'NoneType' object is not callable
Is there anything wrong?
Thanks a lot!

Error with multi gpu training

I can train the network on KITTI with a single GPU.
However, when I added "--gpus 2,3" for multi-GPU training, with the full command as follows:
python main.py tracking --exp_id kitti_fulltrain --dataset kitti_tracking --dataset_version train --pre_hm --same_aug --hm_disturb 0.05 --lost_disturb 0.2 --fp_disturb 0.1 --batch_size 4 --load_model ../models/nuScenes_3Ddetection_e140.pth --gpus 2,3
I got the following error:

error in modulated_deformable_im2col_cuda: no kernel image is available for execution on the device
Traceback (most recent call last):
File "main.py", line 101, in
main(opt)
File "main.py", line 70, in main
log_dict_train, _ = trainer.train(epoch, train_loader)
File "/home/kejie/CenterTrack/src/lib/trainer.py", line 317, in train
return self.run_epoch('train', epoch, data_loader)
File "/home/kejie/CenterTrack/src/lib/trainer.py", line 149, in run_epoch
output, loss, loss_stats = model_with_loss(batch)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/kejie/CenterTrack/src/lib/trainer.py", line 98, in forward
outputs = self.model(batch['image'], pre_img, pre_hm)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/kejie/CenterTrack/src/lib/model/networks/base_model.py", line 75, in forward
feats = self.imgpre2feats(x, pre_img, pre_hm)
File "/home/kejie/CenterTrack/src/lib/model/networks/dla.py", line 633, in imgpre2feats
x = self.dla_up(x)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/kejie/CenterTrack/src/lib/model/networks/dla.py", line 572, in forward
ida(layers, len(layers) -i - 2, len(layers))
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/kejie/CenterTrack/src/lib/model/networks/dla.py", line 543, in forward
layers[i] = upsample(project(layers[i]))
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 778, in forward
output_padding, self.groups, self.dilation)
RuntimeError: CUDA error: an illegal memory access was encountered

Any clues?

Pose tracking with Resnet

I'm trying to test out pose tracking using Resnet (to try implementing without DCN layer), but running into some trouble.

I am running demo.py with args tracking,multi_pose --pre_hm --arch res_18 --load_model ../models/coco_pose_tracking.pth --demo filename, and trying to create the PoseResNet model from model.py to check the structure.

This hits the following error:

File "C:\Work\Projects\CenterTrack\src\lib\model\model.py", line 28, in create_model
    model = model_class(num_layers, heads=head, head_convs=head_conv, opt=opt)
TypeError: __init__() got an unexpected keyword argument 'opt'

This seems to occur because PoseResNet(nn.Module): __init__(self, num_layers, heads, head_convs, _): doesn't expect opt, while also expecting a different positional argument. I tried a few different things to get past this, eventually changing the _ to opt in init(...), and commenting out the first super(...), which seems to create a model (probably incorrect).

But even after this, model.forward raises NotImplementedError, because there is no function to run the model. DLA34 seems to inherit this function from BaseModel, while GenericNetwork has its own implementation. Neither work with PoseResNet (BaseModel needs imgpre2feats(..), GenericNetwork needs self.backbone etc)

Should PoseResNet be changed to build from BaseModel or GenericNetwork, or simply with some different arguments? Any help to get this working is appreciated, thank you.

in cpu?

  • Hi,

  • When I run python3 demo.py tracking,ddd --load_model ../models/nuScenes_3Dtracking.pth --dataset nuscenes --pre_hm --track_thresh 0.1 --demo ../videos/nuscenes_mini.mp4, I get:

  • AssertionError:
    The NVIDIA driver on your system is too old (found version 9000).
    Please update your GPU driver by downloading and installing a new
    version from the URL: http://www.nvidia.com/Download/index.aspx
    Alternatively, go to: https://pytorch.org to install
    a PyTorch version that has been compiled with your version
    of the CUDA driver.

  • Can this code be run on a CPU?

Questions about testing on the MOT17 test dataset

Hi!
Nice work!!
I have some questions about MOT17 testing.

  1. When I train the model on the full MOT17 dataset following your command
    "python main.py tracking --exp_id mot17_fulltrain_sc --dataset mot --dataset_version 17trainval --pre_hm --ltrb_amodal --same_aug --hm_disturb 0.05 --lost_disturb 0.4 --fp_disturb 0.1 --gpus 0,1"

and then test the model on the MOT17 test set with
"python test.py tracking --exp_id mot17_fulltrain_public --dataset mot --dataset_version 17test --pre_hm --ltrb_amodal --track_thresh 0.4 --pre_thresh 0.5 --load_model ../exp/tracking/mot17_fulltrain_sc/model_last.pth --public_det --load_results ../data/mot17/results/test_det.json"
I get these lines:
Drop parameter base.fc.weight.
Drop parameter base.fc.bias.
However, when I simply use the model you provide, "mot17_fulltrain_sc.pth", there is no such output. Is there maybe something different?

  2. When I want to run the MOT17 test set following your command, I cannot find test_det.json in the /results folder.
    python test.py tracking --exp_id mot17_fulltrain_sc --dataset mot --dataset_version 17test --pre_hm --ltrb_amodal --track_thresh 0.4 --pre_thresh 0.5 --resume --public_det --load_results ../data/mot17/results/test_det.json
    Could you tell me how I can get test_det.json? I am trying to test my results on the MOT17 test set.
    Thanks a lot!

The 3d tracking results of nuscenes

Hi,
Thanks for your awesome work. I have installed the module and tried to run the demo code, but I found that the output results are not as good as the ones you put in the README.
Are there any hints or anything I need to do to fix the problem?

What about adding a ReID branch?

Awesome work!
What about adding a ReID branch to CenterTrack? With one ID feature for each center, the tracker could associate objects globally. What do you think?

cannot connect to X server

Hi, I'm trying to test the demo in Colab but I get the error "cannot connect to X server".

I can't find what I should comment out in the code to get it working directly from the terminal; any advice would be appreciated.

Here is full output:
/content/CenterTrack/src
/usr/local/lib/python3.6/dist-packages/sklearn/utils/linear_assignment_.py:22: FutureWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
  FutureWarning)
Running tracking
Using tracking threshold for out threshold! 0.1
Fix size testing.
training chunk_sizes: [32]
input h w: 448 800
heads {'hm': 10, 'reg': 2, 'wh': 2, 'tracking': 2, 'dep': 1, 'rot': 8, 'dim': 3, 'amodel_offset': 2}
weights {'hm': 1, 'reg': 1, 'wh': 0.1, 'tracking': 1, 'dep': 1, 'rot': 1, 'dim': 1, 'amodel_offset': 1}
head conv {'hm': [256], 'reg': [256], 'wh': [256], 'tracking': [256], 'dep': [256], 'rot': [256], 'dim': [256], 'amodel_offset': [256]}
Creating model...
Using node type: (<class 'model.networks.dla.DeformConv'>, <class 'model.networks.dla.DeformConv'>)
Warning: No ImageNet pretrain!!
loaded ../models/nuScenes_3Dtracking.pth, epoch 70
: cannot connect to X server

No detector_factory

Hi,

Thank you for sharing your hard work.
It seems like the repo is missing the folder containing detector_factory.
Thank you for your time.

How to use mot17_fulltrain.pth model to track video?

When I use the command below:
python demo.py tracking --load_model ../models/coco_tracking.pth --demo ../videos/test.avi
It works well.

But when I want to use the mot17_fulltrain.pth model with this command:
python demo.py tracking --load_model ../models/mot17_fulltrain.pth --demo ../videos/test.avi
the output images have no boxes at all!

About image augmentation for coco static image

Hi authors,
Thanks for your nice work and for sharing the code. I'm a big fan of your CenterNet architecture.
Recently I saw your new CenterTrack paper and found that you use COCO static images to simulate tracking frames purely by image augmentation, with a large accuracy improvement.
I want to try this in ordinary object detection tasks and see whether it still works, but when I looked into the code I couldn't find the relevant part. Can you give more details about the code for this? Thank you!

cannot connect to X server

Hi, when I try
python demo.py tracking,ddd --load_model ../models/nuScenes_3Dtracking.pth --dataset nuscenes --pre_hm --track_thresh 0.1 --demo ../videos/nuscenes_mini.mp4 --test_focal_length 633
I get:
/.conda/envs/env1/lib/python3.6/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Running tracking
Using tracking threshold for out threshold! 0.1
Fix size testing.
training chunk_sizes: [32]
input h w: 448 800
heads {'hm': 10, 'reg': 2, 'wh': 2, 'tracking': 2, 'dep': 1, 'rot': 8, 'dim': 3, 'amodel_offset': 2}
weights {'hm': 1, 'reg': 1, 'wh': 0.1, 'tracking': 1, 'dep': 1, 'rot': 1, 'dim': 1, 'amodel_offset': 1}
head conv {'hm': [256], 'reg': [256], 'wh': [256], 'tracking': [256], 'dep': [256], 'rot': [256], 'dim': [256], 'amodel_offset': [256]}
Creating model...
Using node type: (<class 'model.networks.dla.DeformConv'>, <class 'model.networks.dla.DeformConv'>)
Warning: No ImageNet pretrain!!
loaded ../models/nuScenes_3Dtracking.pth, epoch 70
: cannot connect to X server

Could you help me find the problem?

fatal error: cublas_v2.h

I'm building DCNv2.
While executing ./make.sh, this error occurred:

In file included from /usr/local/cuda/include/cuda_runtime.h:83,
from :
/usr/local/cuda/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported!
138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported!
| ^~~~~
In file included from /home/user/carla_sdc/CenterTrack/src/lib/model/networks/DCNv2/src/cuda/dcn_v2_im2col_cuda.cu:7:
/home/user/anaconda3/envs/envi/lib/python3.7/site-packages/torch/include/ATen/cuda/CUDAContext.h:7:10: fatal error: cublas_v2.h: No such file or directory
7 | #include <cublas_v2.h>
| ^~~~~~~~~~~~~
compilation terminated.
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1

How do I solve it? What may be the reason? Googling didn't help. Thank you in advance!

I'm running on
Ubuntu 19.10
CUDA 10.1
torch==1.4.0

Problem with evaluate_tracking.py

Hello!

While training with DCNv2 on KITTI, I got an error:

Traceback (most recent call last):
  File "tools/eval_kitti_track/evaluate_tracking.py", line 984, in <module>
    success = evaluate(result_sha,mail,split_version=split_version)
  File "tools/eval_kitti_track/evaluate_tracking.py", line 919, in evaluate
    t_sha=result_sha, mail=mail,cls=c,split_version=split_version)
  File "tools/eval_kitti_track/evaluate_tracking.py", line 103, in __init__
    with open(filename_test_mapping, "r") as fh:
FileNotFoundError: [Errno 2] No such file or directory: './tools/eval_kitti_track/data/tracking/evaluate_trackingDrive/CenterTrack/src/lib/../../exp/tracking/kitti_half/results_kitti_tracking/.seqmap'

So, I checked evaluate_tracking.py:
There is something weird in the filename evaluate_tracking{split_version}.seqmap. It seems that this script was called with the wrong parameters: split_version = sys.argv[2] if len(sys.argv) >= 3 else ''

But I don't know why, or which script calls this evaluate_tracking.py.

Can anyone help?

THCudaCheck Fail illegal memory access

Hi, I'm trying to run this on EC2, so I modified demo.py to remove the imshow and waitKey calls. When running python demo_no_output.py tracking,ddd --load_model ../models/nuScenes_3Dtracking.pth --dataset nuscenes --pre_hm --track_thresh 0.1 --demo ../videos/nuscenes_mini.mp4 --save_video

I get the following output/error:

Running tracking
Using tracking threshold for out threshold! 0.1
Fix size testing.
training chunk_sizes: [32]
input h w: 448 800
heads {'hm': 10, 'reg': 2, 'wh': 2, 'tracking': 2, 'dep': 1, 'rot': 8, 'dim': 3, 'amodel_offset': 2}
weights {'hm': 1, 'reg': 1, 'wh': 0.1, 'tracking': 1, 'dep': 1, 'rot': 1, 'dim': 1, 'amodel_offset': 1}
head conv {'hm': [256], 'reg': [256], 'wh': [256], 'tracking': [256], 'dep': [256], 'rot': [256], 'dim': [256], 'amodel_offset': [256]}
Creating model...
Using node type: (<class 'model.networks.dla.DeformConv'>, <class 'model.networks.dla.DeformConv'>)
Warning: No ImageNet pretrain!!
loaded ../models/nuScenes_3Dtracking.pth, epoch 70
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Skip imshow
Initialize tracking!
error in modulated_deformable_im2col_cuda: invalid device function
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1579022034529/work/aten/src/THC/THCCachingHostAllocator.cpp line=278 error=700 : an illegal memory access was encountered
Traceback (most recent call last):
  File "demo_no_output.py", line 119, in <module>
    demo(opt)
  File "demo_no_output.py", line 65, in demo
    ret = detector.run(img, input_meta)
  File "/home/ec2-user/centertrack/CenterTrack/src/lib/detector.py", line 102, in run
    images, self.pre_images, pre_hms, pre_inds, return_time=True)
  File "/home/ec2-user/centertrack/CenterTrack/src/lib/detector.py", line 301, in process
    output = self.model(images, pre_images, pre_hms)[-1]
  File "/home/ec2-user/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/centertrack/CenterTrack/src/lib/model/networks/base_model.py", line 75, in forward
    feats = self.imgpre2feats(x, pre_img, pre_hm)
  File "/home/ec2-user/centertrack/CenterTrack/src/lib/model/networks/dla.py", line 633, in imgpre2feats
    x = self.dla_up(x)
  File "/home/ec2-user/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/centertrack/CenterTrack/src/lib/model/networks/dla.py", line 572, in forward
    ida(layers, len(layers) -i - 2, len(layers))
  File "/home/ec2-user/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/centertrack/CenterTrack/src/lib/model/networks/dla.py", line 545, in forward
    layers[i] = node(layers[i] + layers[i - 1])
  File "/home/ec2-user/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/centertrack/CenterTrack/src/lib/model/networks/dla.py", line 516, in forward
    x = self.conv(x)
  File "/home/ec2-user/anaconda3/envs/CenterTrack/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/centertrack/CenterTrack/src/lib/model/networks/DCNv2/dcn_v2.py", line 121, in forward
    offset = torch.cat((o1, o2), dim=1)
RuntimeError: cuda runtime error (700) : an illegal memory access was encountered at /opt/conda/conda-bld/pytorch_1579022034529/work/aten/src/THC/THCCachingHostAllocator.cpp:278
[1]    18140 segmentation fault  python demo_no_output.py tracking,ddd --load_model  --dataset nuscenes   0.1

I'm using Python 3.6.10, followed the install directions, and built DCN with the make.sh file. I found the same error in the DCNv2 issues: CharlesShang/DCNv2#35. Any help greatly appreciated!

I tried this with two different EC2 instances and got the same error.

  • Amazon Linux 2 AMI with 4x V100's
  • Ubuntu 18.04 with 1 K80

About Dataset

Could you offer some notes about what the Dataset class returns? It would help me read your code. Thank you!

How to calculate ct with C++ inference for 3D detection

Hi, thanks for your great work.
Is it correct to compute ct as ct = [(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2] when calculating item['loc'] and item['rot_y'] for 3D detection in a C++ inference pipeline? I don't know how to compute dets['bboxes'][i][j] in C++ and would like to use this instead.

Computing item['loc'] and item['rot_y'] requires the ct value, and how ct is computed depends on whether the network output has an amodel_offset branch. For the 3D detection model, which has an amodel_offset branch, the if-branch of the code below is executed, where ct_output = dets['bboxes'][i][j].reshape(2, 2).mean(axis=0), and dets['bboxes'][i][j] is produced by a series of conversions after the heatmap top-k.
So if I want that value, do I have to re-implement the post-processing (top-k and the subsequent conversions) in C++ to obtain dets['bboxes'][i][j]?

if 'amodel_offset' in dets and len(dets['amodel_offset'][i]) > j:
    ct_output = dets['bboxes'][i][j].reshape(2, 2).mean(axis=0)
    amodel_ct_output = ct_output + dets['amodel_offset'][i][j]
    ct = transform_preds_with_trans(
        amodel_ct_output.reshape(1, 2), trans).reshape(2).tolist()
    # print(ct)
else:
    bbox = item['bbox']
    print(bbox)
    ct = [(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2]
item['ct'] = ct
item['loc'], item['rot_y'] = ddd2locrot(
    ct, item['alpha'], item['dim'], item['dep'], calibs[i])

Using meta['pre_dets'] before definition.

Hi, I ran python demo.py tracking --load_model ../models/coco_tracking.pth --demo /home/dl/img1/ and got the output below:
Initialize tracking!
Traceback (most recent call last):
File "demo.py", line 119, in
demo(opt)
File "demo.py", line 90, in demo
ret = detector.run(image_name)
File "/home/dl/project/CenterTrack/src/lib/detector.py", line 93, in run
self.tracker.init_track(meta['pre_dets'])
KeyError: 'pre_dets'

Obviously, it doesn't define meta['pre_dets'].

Few unknown parameters in opt

While running the code I came across the following opt attributes which were not defined anywhere:

  1. simple_radius (dataset_generic.py)
  2. self.opt.cont_fast_focal_loss (generic_dataset.py line 384)
  3. self._update_oracle(output, batch, opt) and oracle map in trainer.py

I commented these out, and the code runs fine, but I want to know their purpose.
Please help me with these.

demo.py

When I use python demo.py tracking --load_model ../models/coco_tracking.pth --demo ../aaa (aaa is my folder of pictures), there is no result, and the terminal says:
Running tracking
Using tracking threshold for out threshold! 0.3
Fix size testing.
training chunk_sizes: [32]
input h w: 512 512
heads {'hm': 80, 'reg': 2, 'wh': 2, 'tracking': 2}
weights {'hm': 1, 'reg': 1, 'wh': 0.1, 'tracking': 1}
head conv {'hm': [256], 'reg': [256], 'wh': [256], 'tracking': [256]}
Creating model...
Using node type: (<class 'model.networks.dla.DeformConv'>, <class 'model.networks.dla.DeformConv'>)
Warning: No ImageNet pretrain!!
loaded ../models/coco_tracking.pth, epoch 70
Drop parameter base.pre_hm_layer.0.weight.
Drop parameter base.pre_hm_layer.1.weight.
Drop parameter base.pre_hm_layer.1.bias.
Drop parameter base.pre_hm_layer.1.running_mean.
Drop parameter base.pre_hm_layer.1.running_var.
Drop parameter base.pre_hm_layer.1.num_batches_tracked.
Initialize tracking!
Traceback (most recent call last):
File "demo.py", line 118, in
demo(opt)
File "demo.py", line 90, in demo
ret = detector.run(image_name)
File "/home/lsw/CenterTrack/src/lib/detector.py", line 94, in run
self.tracker.init_track(meta['pre_dets'])
KeyError: 'pre_dets'
Segmentation fault (core dumped)

What is the problem?

KeyError: 'pre_dets'

When I use demo.py to test on images, it raises the error: KeyError: 'pre_dets'.

small typo in install.md

INSTALL.md says:
cd $CenterTrack_ROOT/src/lib/models/networks/
while the path is:
CenterTrack_ROOT/src/lib/model/networks/ (no "s")

Center Net vs Center Track

Hi,
I used your two projects, CenterNet and CenterTrack. Do you think CenterTrack will give a more stable output when tested on sequence data, with higher FPS? I'm using CenterTrack with max_frame_dist = 1 (pre_img = curr_img).

CenterNet is great at detection, but the location and rotation_y are less stable when tested on sequence data. Do you think CenterTrack will be better here?

Another question: you are using RegWeightedL1Loss for depth, but in that case the depth loss is always dominant. Do you think it is better than L1Loss?

rotation_y loss in 3d

Hi @xingyizhou

I trained CenterNet and CenterTrack for a long time, and the minimal rotation loss I can get is ~1.3. Is this good? I thought the perfect result should be near 0, right?

I investigated the code and made some manipulations; the minimal results are like this (losses.py).

In case all the tensors are zeros:
loss_bin1 0.6931
loss_bin2 0.6931
loss_res 0
sum 1.38

But in the normal experiments, I get ~1.32.

What should I get? Do you have different results?
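For what it's worth, 0.6931 is ln 2, the cross-entropy of an uninformative 50/50 two-way prediction, so the ~1.38 floor above is consistent with the sum of the two bin-classification terms alone (a quick arithmetic check, not an official answer):

import math

loss_bin = math.log(2)   # cross-entropy of a uniform two-way prediction
print(loss_bin)          # 0.6931...
print(2 * loss_bin)      # 1.3863..., matching the ~1.38 floor reported above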

Question about inference

Hi, I appreciate your work, but I cannot find the inference process in the paper. How do you obtain the center point for each frame: using the center prediction in this frame, or using the center of the previous frame plus the offset? Thank you.

An issue with test.py

I use python test.py tracking --exp_id mot17_half --dataset mot --dataset_version 17halfval --pre_hm --ltrb_amodal --track_thresh 0.4 --pre_thresh 0.5 --load_model ../models/mot17_half.pth

Bad key "text.kerning_factor" on line 4 in
/home/lsw/anaconda3/envs/CenterNet/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test_patch.mplstyle.
You probably need to get an updated matplotlibrc file from
https://github.com/matplotlib/matplotlib/blob/v3.1.3/matplotlibrc.template
or from the matplotlib source distribution
Running tracking
Using tracking threshold for out threshold! 0.4
Fix size testing.
training chunk_sizes: [32]
input h w: 544 960
heads {'hm': 1, 'reg': 2, 'wh': 2, 'tracking': 2, 'ltrb_amodal': 4}
weights {'hm': 1, 'reg': 1, 'wh': 0.1, 'tracking': 1, 'ltrb_amodal': 0.1}
head conv {'hm': [256], 'reg': [256], 'wh': [256], 'tracking': [256], 'ltrb_amodal': [256]}
Namespace(K=100, add_05=False, amodel_offset_weight=1, arch='dla_34', aug_rot=0, backbone='dla34', batch_size=32, chunk_sizes=[32], custom_dataset_ann_path='', custom_dataset_img_path='', data_dir='/home/lsw/centertrack/src/lib/../../data', dataset='mot', dataset_version='17halfval', debug=0, debug_dir='/home/lsw/centertrack/src/lib/../../exp/tracking/mot17_half/debug', debugger_theme='white', demo='', dense_reg=1, dep_weight=1, depth_scale=1, dim_weight=1, dla_node='dcn', down_ratio=4, efficient_level=0, eval_val=False, exp_dir='/home/lsw/centertrack/src/lib/../../exp/tracking', exp_id='mot17_half', fix_res=True, fix_short=-1, flip=0.5, flip_test=False, fp_disturb=0, gpus=[0], gpus_str='0', head_conv={'hm': [256], 'reg': [256], 'wh': [256], 'tracking': [256], 'ltrb_amodal': [256]}, head_kernel=3, heads={'hm': 1, 'reg': 2, 'wh': 2, 'tracking': 2, 'ltrb_amodal': 4}, hm_disturb=0, hm_hp_weight=1, hm_weight=1, hp_weight=1, hungarian=False, ignore_loaded_cats=[], input_h=544, input_res=960, input_w=960, keep_res=False, kitti_split='3dop', load_model='../models/mot17_half.pth', load_results='', lost_disturb=0, lr=0.000125, lr_step=[60], ltrb=False, ltrb_amodal=True, ltrb_amodal_weight=0.1, ltrb_weight=0.1, map_argoverse_id=False, master_batch_size=32, max_age=-1, max_frame_dist=3, model_output_list=False, msra_outchannel=256, neck='dlaup', new_thresh=0.4, nms=False, no_color_aug=False, no_pause=False, no_pre_img=False, non_block_test=False, not_cuda_benchmark=False, not_idaup=False, not_max_crop=False, not_prefetch_test=False, not_rand_crop=False, not_set_cuda_env=False, not_show_bbox=False, not_show_number=False, num_classes=1, num_epochs=70, num_head_conv=1, num_iters=-1, num_layers=101, num_stacks=1, num_workers=4, nuscenes_att=False, nuscenes_att_weight=1, off_weight=1, optim='adam', out_thresh=0.4, output_h=136, output_res=240, output_w=240, pad=31, pre_hm=True, pre_img=True, pre_thresh=0.5, print_iter=0, prior_bias=-4.6, public_det=False, qualitative=False, reg_loss='l1', reset_hm=False, resize_video=False, resume=False, reuse_hm=False, root_dir='/home/lsw/centertrack/src/lib/../..', rot_weight=1, rotate=0, same_aug_pre=False, save_all=False, save_dir='/home/lsw/centertrack/src/lib/../../exp/tracking/mot17_half', save_framerate=30, save_img_suffix='', save_imgs=[], save_point=[90], save_results=False, save_video=False, scale=0, seed=317, shift=0, show_track_color=False, skip_first=-1, tango_color=False, task='tracking', test=False, test_dataset='mot', test_focal_length=-1, test_scales=[1.0], track_thresh=0.4, tracking=True, tracking_weight=1, trainval=False, transpose_video=False, use_kpt_center=False, use_loaded_results=False, val_intervals=10000, velocity=False, velocity_weight=1, video_h=512, video_w=512, vis_gt_bev='', vis_thresh=0.3, weights={'hm': 1, 'reg': 1, 'wh': 0.1, 'tracking': 1, 'ltrb_amodal': 0.1}, wh_weight=0.1, zero_pre_hm=False, zero_tracking=False)
fatal: No names found, cannot describe anything.
Traceback (most recent call last):
File "test.py", line 195, in
prefetch_test(opt)
File "test.py", line 59, in prefetch_test
Logger(opt)
File "/home/lsw/centertrack/src/lib/logger.py", line 33, in init
subprocess.check_output(["git", "describe"])))
File "/home/lsw/anaconda3/envs/CenterTrack/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/home/lsw/anaconda3/envs/CenterTrack/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'describe']' returned non-zero exit status 128.

What should I do?
Should I use the MOT17 datasets, and where should I put MOT17?

Has anyone tested on real video with the mot17_fulltrain model?

I tried to test on the test sequences with mot17_fulltrain, but the results were not good enough. As soon as a person was covered by other people across two neighboring frames, his ID would change immediately.

The tracking result images were generated with the '--debug 4' option. I was wondering if there was something wrong with my setup.

'convert_mot_det_to_results.py' not working well?

Hi, thanks for your great work on MOT!

I have been trying to prepare the MOT dataset using your src/tools/get_mot_17.sh, but found that src/tools/convert_mot_det_to_results.py isn't working as I expected. On line 15, os.listdir('../../data/mot17/') is called to fetch all folders with '_det' in their names. However, in my ../../data/mot17/ there are only 4 folders (train, test, results, annotations), none of which has '_det' in its name, so the script ends without printing anything.

I wonder if there's anything missing in my data, or did convert_mot_to_coco.py not generate enough output files?
