MotionTrack

MotionTrack is a simple but effective multi-object tracker for unmanned surface vehicle videos.

Abstract

Multiple object tracking (MOT) in unmanned surface vehicle (USV) videos has many application scenarios in the military and civilian fields. State-of-the-art MOT methods first extract a set of detections from the video frames, then use the IoU distance to associate the detections of the current frame with the tracklets of the last frame, and finally adopt a linear Kalman filter to estimate the current position of the tracklets. However, several problems in USV videos seriously degrade tracking performance, such as low frame rate, wobble of the observation platform, nonlinear object motion, small objects, and ambiguous appearance. In this paper, we fully explore the motion cue in USV videos and propose a simple but effective tracker, named MotionTrack. Equipped with YOLOv7 as the object detector, the data association of MotionTrack is mainly composed of a Cascade Matching with Gaussian Distance module and an Observation-Centric Kalman Filter module. We validate its effectiveness with extensive experiments on the recent Jari-Maritime-Tracking-2022 dataset, achieving a new state-of-the-art 46.9 MOTA and 49.2 IDF1 at 35.2 FPS on a single 3090 GPU.
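
As a rough illustration of the motion-only association idea, the sketch below matches detections to predicted track positions with a Gaussian-kernel distance and the Hungarian algorithm. This is a minimal sketch, not the exact Cascade Matching with Gaussian Distance module from the paper; the box representation (centers only), the sigma value, and the function names are assumptions.

# Illustrative sketch only; parameter names and sigma are assumptions,
# not the paper's exact CMGD formulation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def gaussian_cost(track_centers, det_centers, sigma=50.0):
    """Cost in [0, 1]; track_centers (N, 2) and det_centers (M, 2) in pixels."""
    diff = track_centers[:, None, :] - det_centers[None, :, :]  # (N, M, 2)
    d2 = (diff ** 2).sum(-1)                                    # squared distances
    return 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))               # small cost = close

def associate(track_centers, det_centers, max_cost=0.9):
    cost = gaussian_cost(track_centers, det_centers)
    rows, cols = linear_sum_assignment(cost)                    # Hungarian matching
    # Reject matches whose Gaussian similarity is too low.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]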

Tracking performance

Results on JMT2022 test1 dataset

Methods      MOTA  IDF1  MT    ML    FP    FN     IDs   FM    FPS
SORT         34.4  29.7  21.2  33.0  7336  52610  3956  5170  35.5
ByteTrack    34.6  30.8  22.2  30.7  9385  50324  3984  5004  35.4
MotionTrack  46.9  49.2  35.3  25.2  6960  42983  1730  3400  35.2

Visualization results on JMT2022 test1 dataset

Installation

Installing on the host machine

Step1. Install MotionTrack.

git clone https://github.com/lzq11/MotionTrack.git
cd MotionTrack
pip3 install -r requirements.txt

Step2. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

Step3. Others

pip3 install cython_bbox

Data preparation

Download JMT2022 and organize it into the following structure:

jmt2022
   |——————images
   |        └——————train
   |                  └——————seq_folder
   |        └——————test1

Then, you need to convert the dataset to YOLO format. Remember to change the paths in jmt2yolo.py to your own first:

cd <MotionTrack_HOME>
python3 tools/jmt2yolo.py

You will then get a file structure like this:

JMT2022
   |——————images
   |        └——————train
   |                   └——————img_file(*.jpg)
   |        └——————test1
   |——————labels
   |        └——————train
   |                   └——————label_file(*.txt)
   |        └——————test1

You may need to manually create some folders in the JMT2022 folder.
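
For reference, each YOLO label file stores one object per line as "class cx cy w h", with the box center and size normalized by the image dimensions. Below is a minimal sketch of the conversion; the helper name and pixel-box input layout are assumptions, and tools/jmt2yolo.py may differ.

# Sketch of pixel-box -> YOLO label conversion; the actual
# tools/jmt2yolo.py may use a different input format and class mapping.
def to_yolo_line(cls_id, x, y, w, h, img_w, img_h):
    """(x, y) is the box's top-left corner in pixels."""
    cx = (x + w / 2.0) / img_w   # normalized box-center x
    cy = (y + h / 2.0) / img_h   # normalized box-center y
    return f"{cls_id} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# e.g. a 100x50 box at (200, 120) in a 1920x1080 frame:
print(to_yolo_line(0, 200, 120, 100, 50, 1920, 1080))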

Model zoo

JMT2022 test model

Trained on the JMT2022 train split, evaluated on the JMT2022 test1 split.

Model                                    MOTA  IDF1  S     Time (ms)
tiny-640   [google], [baidu(code:u0ds)]  21.7  30.2  22.4   1.7+1.3
tiny-960   [google], [baidu(code:m1pa)]  40.4  44.0  37.3   4.9+1.4
tiny-1920  [google], [baidu(code:ga2m)]  45.2  47.8  43.3  10.2+1.5
W6-1920    [google], [baidu(code:ecnv)]  46.9  49.2  44.4  26.9+1.5

Training

The COCO-pretrained YOLOv7-W6 and YOLOv7-tiny models can be downloaded from their model zoo. After downloading the pretrained models, put them under <MotionTrack_HOME>/pretrain. You may need to change the paths in jmt2022.yaml to point to your own (i.e. train, val, and test).
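
If you are unsure the paths are set correctly, a quick sanity check like the sketch below can save a failed run; it assumes the standard YOLO data-yaml keys (train, val, test), so adjust if your file differs.

# Sanity-check data/jmt2022.yaml; assumes the standard YOLO data-yaml
# keys (train/val/test paths) -- adjust if your file differs.
import os
import yaml

with open("data/jmt2022.yaml") as f:
    cfg = yaml.safe_load(f)

for split in ("train", "val", "test"):
    path = cfg.get(split)
    status = "ok" if path and os.path.exists(path) else "MISSING"
    print(f"{split:5s}: {path} [{status}]")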

  • Single GPU training
cd <MotionTrack_HOME>
# train W6 models
python3 tools/train_aux.py --workers 8 --device 0 --batch-size 5  --data data/jmt2022.yaml --cfg cfg/training/yolov7-w6.yaml --name yolov7-w6  --hyp data/hyp.jmt2022.p6.yaml --weights 'pretrain/yolov7-w6.pt'
# train tiny models
python3 tools/train.py --workers 8 --device 0 --batch-size 20  --data data/jmt2022.yaml --cfg cfg/training/yolov7-tiny.yaml --name yolov7-tiny  --hyp data/hyp.jmt2022.tiny.yaml --weights 'pretrain/yolov7-tiny.pt'
  • Multiple GPU training
cd <MotionTrack_HOME>
# train W6 models
python -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 tools/train_aux.py --workers 8 --device 0,1,2,3 --sync-bn --batch-size 20  --data data/jmt2022.yaml --cfg cfg/training/yolov7-w6.yaml --name yolov7-w6  --hyp data/hyp.jmt2022.p6.yaml --weights 'pretrain/yolov7-w6.pt'
# train tiny models
python -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 tools/train.py --workers 8 --device 0,1,2,3 --sync-bn --batch-size 100  --data data/jmt2022.yaml --cfg cfg/training/yolov7-tiny.yaml --name yolov7-tiny  --hyp data/hyp.jmt2022.tiny.yaml --weights 'pretrain/yolov7-tiny.pt'

Tracking

  • Test on JMT2022
cd <MotionTrack_HOME>
python3 tools/track.py --weights 'path/to/W6-1920.pt' --data_root "path/to/jmt2022/images/test1"
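
Assuming the tracker writes one result file per sequence in the standard MOTChallenge line format "frame,id,x,y,w,h,score,..." (the exact columns written by track.py are an assumption here), the results can be loaded for inspection like this:

# Load a per-sequence result file; assumes MOTChallenge-style lines
# "frame,id,x,y,w,h,score,..." -- verify against track.py's actual output.
import csv
from collections import defaultdict

def load_results(path):
    tracks = defaultdict(list)  # track id -> list of (frame, box)
    with open(path) as f:
        for row in csv.reader(f):
            frame, tid = int(row[0]), int(row[1])
            box = tuple(float(v) for v in row[2:6])  # x, y, w, h
            tracks[tid].append((frame, box))
    return tracks

tracks = load_results("results/seq01.txt")  # hypothetical output path
print(f"{len(tracks)} tracks loaded")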

Evaluation

  • Evaluation ignoring the category (Single class)
cd <MotionTrack_HOME>
# You may need to change the path of the script to point to your own (ie. data_root and result_root).
python3 tools/evalsc.py

You can get 46.9 MOTA and 49.2 IDF1 using the tracking results of the pretrained W6-1920 model, or you can simply use the results we provide in w6-1920-motion-140.
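
For reference, single-class metrics like these can be reproduced with the py-motmetrics package roughly as sketched below; tools/evalsc.py remains the authoritative script, and the frame data here is a hypothetical stand-in.

# Rough illustration of computing MOTA/IDF1 with py-motmetrics;
# boxes are (x, y, w, h). tools/evalsc.py is the authoritative script.
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# Hypothetical single frame: two GT objects, two tracker hypotheses.
gt_ids, gt_boxes = [1, 2], [[10, 10, 20, 20], [100, 100, 30, 30]]
trk_ids, trk_boxes = [7, 8], [[12, 11, 20, 20], [300, 300, 30, 30]]

# IoU-based distances; pairs with IoU below 0.5 stay unmatched.
dists = mm.distances.iou_matrix(gt_boxes, trk_boxes, max_iou=0.5)
acc.update(gt_ids, trk_ids, dists)  # call once per frame in practice

mh = mm.metrics.create()
print(mh.compute(acc, metrics=["mota", "idf1"], name="demo"))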

  • Evaluation considering the category (Multiple class)
cd <MotionTrack_HOME>
# You may need to change the path of the script to point to your own (ie. data_root and result_root).
python3 tools/evalmc.py

You can get 44.4 S using the tracking results of the pretrained W6-1920 model, or you can simply use the results we provide in w6-1920-motion-140.

Deploy

  • Export ONNX with NMS
cd <MotionTrack_HOME>
# export W6 models
python3 tools/export.py --weights 'path/to/W6-1920.pt' --grid --end2end --simplify \
        --topk-all 100 --iou-thres 0.65 --conf-thres 0.1
# export tiny models
python3 tools/export.py --weights 'path/to/tiny-960.pt' --grid --end2end --simplify \
        --topk-all 100 --iou-thres 0.65 --conf-thres 0.1
  • TensorRT in C++

Please refer to our other project: https://github.com/lzq11/MotionTrackCpp
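
Either way, a minimal ONNX Runtime smoke test can verify the exported model before deployment. This is a sketch: the file name, input resolution, and output layout are assumptions (with --end2end the exported graph already includes NMS).

# Minimal ONNX Runtime smoke test for an exported model; the input
# resolution (assumed 1920x1920 for W6-1920) is an assumption.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("W6-1920.onnx", providers=["CPUExecutionProvider"])
dummy = np.zeros((1, 3, 1920, 1920), dtype=np.float32)  # dummy image batch
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
for out, meta in zip(outputs, sess.get_outputs()):
    print(meta.name, out.shape)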

Acknowledgements

A large part of the code is borrowed from previous outstanding works. Many thanks for their wonderful work.
