mot_neural_solver's People

Contributors

guillembraso


mot_neural_solver's Issues

custom training of MOTMPNet

Hi,

I am trying to train the model with pre-computed detections and embeddings, because my detector is not a Faster R-CNN like Tracktor's and my CNN is not based on torchreid. Just checking whether this will work:

  1. Use the GT boxes, compute the embedding features, and store them in some folder in some format.
  2. Modify the MOTGraphDataset class so it doesn't compute them again.

Let me know your thoughts. Thank you!
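For reference, a minimal sketch of how step 1 could look (the function name, file layout and .pt format here are hypothetical, not part of this repo):

import os
import torch

def save_precomputed_embeddings(det_ids, embeddings, out_dir):
    # Store the (num_dets, emb_dim) embedding tensor together with the
    # matching detection ids, so a modified MOTGraphDataset could look
    # embeddings up by id instead of recomputing them with its CNN.
    os.makedirs(out_dir, exist_ok=True)
    torch.save({'det_ids': det_ids, 'embeddings': embeddings},
               os.path.join(out_dir, 'embeddings.pt'))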

Solving environment: failed

Solving environment: failed
ResolvePackageNotFound:

  • ld_impl_linux-64==2.34=h53a641e_4
  • libgfortran-ng==7.5.0=hdf63c60_6
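A common workaround for this error when creating the environment on a different platform is to strip the platform-specific build strings from the environment file; a minimal sketch, assuming the file is named environment.yml:

import re

# Drop conda build strings: "ld_impl_linux-64==2.34=h53a641e_4" becomes
# "ld_impl_linux-64==2.34", letting conda pick a build for this platform.
with open('environment.yml') as f:
    text = f.read()
with open('environment_nobuild.yml', 'w') as f:
    f.write(re.sub(r'(==[^=\s]+)=\S+', r'\1', text))

Creating the environment from environment_nobuild.yml usually resolves then, at the cost of exact build reproducibility.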

Why trained using ground truth annotations?

From the config file and the checkpoint that you have released, it seems that the model is trained directly with ground truth annotations. While most trackers train using the public detections along with the ground truth, is there any particular reason for training with ground truth directly?

suggest change env

Python 3.6 is excellent at being incompatible with most third-party packages. Hope someone will merge a 3.7 or 3.5 env yaml.

ReID training script

First, congratulations and thank you for the repo!
As stated in the readme, are there plans to release the ReID training script?
I would really appreciate that!

Thank you in advance!

Error while creating environment

Hi, I was trying to follow the tutorial step by step, but I always get this error.

Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • ld_impl_linux-64==2.34=h53a641e_4

May I ask if you have any idea on this? Many thanks

env problem

Hi, I've got two issues, could you help me?
1. Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • ld_impl_linux-64==2.34=h53a641e_4

2. pip install -e tracking_wo_bnw
ERROR: File "setup.py" not found. Directory cannot be installed in editable mode:

can't find config.yaml

Hi, I am trying to reproduce your results!
I've run into a problem. In cross_validation.py there is a hardcoded yaml path:
ex.add_config('/usr/stud/brasoand/mpn_tracking/configs/config.yaml').

Since the keyword run_id does not appear in any yaml in the configs folder, I expect there is another config file, config.yaml, still to be uploaded. Is that right?
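If the only problem is the hardcoded absolute path, a sketch of a workaround (assuming the yaml you want lives in the repo's configs folder) is to resolve it relative to the script itself:

import os.path as osp

# Replace the author's absolute /usr/stud/... path with one resolved
# relative to cross_validation.py; adjust the filename to whichever
# yaml in configs/ you actually use.
ex.add_config(osp.join(osp.dirname(osp.abspath(__file__)), '..', 'configs', 'config.yaml'))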

Computer is crashing and shutting down automatically when I start Training or Evaluating.

Hi, I set up the environment as per the instructions, but when I start training, my PC crashes and shuts down automatically.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.64 Driver Version: 430.64 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN Xp Off | 00000000:8D:00.0 On | N/A |
| 23% 26C P8 9W / 250W | 360MiB / 12194MiB | 3% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1976 G /usr/lib/xorg/Xorg 241MiB |
| 0 3208 G compiz 116MiB |
+-----------------------------------------------------------------------------+

Is there anything I can configure while training or evaluating? Do I need specific hardware to run this project? How much space is required in the root directory (I have 2 GB)? If you need more details, please ask. Please help me figure it out.

Thank you.

Why using a directed graph in the code?

Hello!

In the paper it is stated that the problem is modelled with an undirected graph, however, while checking the code I saw the graph is computed as follows:

self.graph_obj = Graph(x = node_feats, edge_attr = torch.cat((edge_feats, edge_feats), dim = 0),
                               edge_index = torch.cat((edge_ixs, torch.stack((edge_ixs[1], edge_ixs[0]))), dim=1))

As far as I understand, the edges' connections (edge_index) are computed by concatenating all the previously computed edges with the same edges in the opposite direction, i.e. creating a directed graph. Am I correct?

Could you please tell me the differences in performance and/or the advantages of using a directed graph?
Thank you in advance
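For context, storing every undirected edge as a pair of opposite directed edges is the standard undirected-graph representation for PyTorch Geometric-style edge_index tensors; a small self-contained illustration (not the repo's code):

import torch

# Two undirected edges, (0,1) and (1,2), stored column-wise:
# row 0 holds source nodes, row 1 holds target nodes.
edge_ixs = torch.tensor([[0, 1],
                         [1, 2]])
# Appending the flipped copy yields both directions for every edge, so
# message passing treats the connectivity as undirected.
undirected = torch.cat((edge_ixs, torch.stack((edge_ixs[1], edge_ixs[0]))), dim=1)
print(undirected)  # tensor([[0, 1, 1, 2], [1, 2, 0, 1]])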

GPU memory and multi-GPU mode

Hi,

what kind of GPU did you use to train the model? I cannot run the code (neither training nor evaluation) on a 2-GPU machine with 12 GB each without running into memory problems. Is multi-GPU support planned?

Thanks!

MOV_CAMERA_DICT meaning

I'm trying to implement this on a custom dataset.
What does the mapping in the MOV_CAMERA_DICT dict mean,
in data/seq_processing/****loader.py?
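Judging from the training config (which has target_fps_dict entries for 'moving' and 'static'), the dict most likely marks whether each sequence's camera moves; a hypothetical sketch of the idea, not the repo's actual contents:

# Hypothetical illustration: True = moving camera, False = static.
# Such a flag can then drive choices like the target FPS
# (cf. target_fps_dict: moving = 9, static = 6 in the train config).
MOV_CAMERA_DICT = {
    'MOT17-05': True,   # handheld sequence (assumed)
    'MOT17-02': False,  # static surveillance camera (assumed)
}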

multi-GPU

I want to know where the multi-GPU interface is.

a Question

Hi. Thanks for your code. I have a question about it. I think there is a small difference between your code and the paper. In the code below, when you calculate the flow-out messages, I think x[flow_out_col] refers to node feature j instead of node feature i in Equation 6 of your paper. Is that right?

flow_out_input = torch.cat([x[flow_out_col], edge_attr[flow_out_mask]], dim=1)

flow_in_input = torch.cat([x[flow_in_col], edge_attr[flow_in_mask]], dim=1)

Problems evaluating on the MOT test set

Hi,

Thank you for sharing this nice work.

I tried to run the evaluation part, but I was not able to, as the gt.txt files are missing for the test set.

FileNotFoundError: [Errno 2] No such file or directory: '/home/gaojiaxi/mot_neural_solver/data/MOT_eval_gt/TUD-Crossing/gt/gt.txt'

I searched the repository and the MOT website, but I couldn't find the gt files for the test set either.

Can you help in this case?

Thanks

OSError: /home/xyz/anaconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch_sparse/_version.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKSs

I followed the setup until step 5.

I ran into an error when I executed python scripts/evaluate.py. Could you help me?

Traceback (most recent call last):
File "scripts/evaluate.py", line 9, in <module>
from mot_neural_solver.pl_module.pl_module import MOTNeuralSolver
File "/home/rajkumar/mot_neural_solver/src/mot_neural_solver/pl_module/pl_module.py", line 6, in <module>
from torch_geometric.data import DataLoader
File "/home/rajkumar/anaconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch_geometric/__init__.py", line 2, in <module>
import torch_geometric.nn
File "/home/rajkumar/anaconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch_geometric/nn/__init__.py", line 2, in <module>
from .data_parallel import DataParallel
File "/home/rajkumar/anaconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch_geometric/nn/data_parallel.py", line 5, in <module>
from torch_geometric.data import Batch
File "/home/rajkumar/anaconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch_geometric/data/__init__.py", line 1, in <module>
from .data import Data
File "/home/rajkumar/anaconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch_geometric/data/data.py", line 7, in <module>
from torch_sparse import coalesce
File "/home/rajkumar/anaconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch_sparse/__init__.py", line 13, in <module>
library, [osp.dirname(__file__)]).origin)
File "/home/rajkumar/.local/lib/python3.6/site-packages/torch/_ops.py", line 104, in load_library
ctypes.CDLL(path)
File "/home/rajkumar/anaconda3/envs/mot_neural_solver/lib/python3.6/ctypes/__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /home/rajkumar/anaconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch_sparse/_version.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKSs
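An undefined c10 symbol at import time usually means torch_sparse was built against a different PyTorch than the one installed; a minimal diagnostic sketch before reinstalling the torch-scatter/torch-sparse family:

import torch

# torch_sparse wheels are built per (torch version, CUDA version) pair;
# this pair must match the one the installed wheel was compiled for.
print(torch.__version__)   # e.g. 1.5.0
print(torch.version.cuda)  # e.g. 10.1

Reinstalling torch-scatter/torch-sparse wheels built for exactly this pair typically resolves the error.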

how to use multiple GPUs to train and test?

There are too many detections in a single frame in my dataset, so when I test it raises a CUDA out-of-memory runtime error. May I ask what I should modify to use multiple GPUs?
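One knob visible in the released config (see the training dump further down this page, where eval_params.max_dets_per_graph_seq = 40000) caps how many detections go into a single graph; lowering it is a hedged first step before true multi-GPU support:

# Sketch: lower the per-graph detection cap before building the tracker,
# so long sequences are processed in smaller chunks; 10000 is arbitrary.
config['eval_params']['max_dets_per_graph_seq'] = 10000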

Couldn't get through the first step of setting up Conda due to conflict

OS: Ubuntu 18.04

Collecting package metadata (repodata.json): done
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package ncurses conflicts for:
numpy==1.18.1=py36h4f9e942_0 -> python[version='>=3.6,<3.7.0a0'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
pywavelets==1.1.1=py36h785e9b2_1 -> python[version='>=3.6,<3.7.0a0'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
toolz==0.10.0=py_0 -> python -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
python==3.6.10=h8356626_1011_cpython -> ncurses[version='>=6.1,<6.3.0a0']
setuptools==46.4.0=py36h9f0ad1d_0 -> python[version='>=3.6,<3.7.0a0'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
pip==20.1.1=py_1 -> python[version='>=3'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
cycler==0.10.0=py_2 -> python -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
python_abi==3.6=1_cp36m -> python=3.6 -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
torchvision==0.6.0=py36_cu101 -> python[version='>=3.6,<3.7.0a0'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
certifi==2020.4.5.1=py36h9f0ad1d_0 -> python[version='>=3.6,<3.7.0a0'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
scikit-image=0.17.2 -> pypy3.6[version='>=7.3.2'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
mkl_random==1.1.0=py36hb3f55d8_0 -> python[version='>=3.6,<3.7.0a0'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.3.0a0|>=6.2,<6.3.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*']
ncurses==6.1=hf484d3e_1002

an online tracker or an offline tracker?

Hi, thanks for your great work!

I am just wondering: is this an online tracker or an offline one? After reading the whole paper I feel it should be an offline tracker, but there is also an 'Hz' metric in the evaluation tables, so I got confused. If it is offline, how is the Hz calculated? And, as far as I understand, Tracktor is actually an online one, right?

Thanks!

Training on GT

Hi there,

After looking through the code, it seems like training is done directly on ground-truth detections. Could you please clarify whether this is the case?

train and val on mot16

Hello sir, nice work!

May I know what I need to do to train on MOT16? Thanks.

script for training reid network

We find your research wonderful! However, we want to further improve the performance of the ReID network. Could you provide us with the script for training it? Many thanks!

Can't reid on custom dataset

Hi, I have created a dataset that consists of one single image, repeated, with a person on a field.
The dataset has the same structure used in the MOT datasets, with det/det.txt, img1 and seqinfo.ini.
I have also run all the steps required to include a custom dataset.

Finally, I created a script which loads the configs, models and tracker, and runs MPNTracker.track over the dataset.
It goes like this:

solver = MOTNeuralSolver(hparams = dict(config))
model, cnn_model = solver.load_model()
dataset = solver.test_dataset()

tracker = MPNTracker(dataset=dataset,
                     graph_model=model,
                     use_gt=False,
                     eval_params=config['eval_params'],
                     dataset_params=config['dataset_params'])
tracker.track('custom_dataset', output_path='output/results/results.txt')

Tracking seems to run correctly, but the output doesn't make sense:
there is a new ID for the detection in each frame, as if it can't re-identify at all.

Does anybody have the slightest idea why this could be happening?


"Could not evaluate the given results" when I evaluate

When I evaluate the trained result I run python scripts/evaluate.py. At the beginning the program runs well, but then it reports "Could not evaluate the given results", as follows:

(mot_neural_solver) root@autodl-container-377e11abac-03f90540:~/mot_neural_solver# python scripts/evaluate.py
WARNING - evaluate - No observers have been added to this run
INFO - evaluate - Running command 'main'
INFO - evaluate - Started
Successfully loaded pretrained weights from "/root/mot_neural_solver/output/trained_models/reid/resnet50_market_cuhk_duke.tar-232"
** The following layers are discarded due to unmatched keys or layer size: ['classifier.weight', 'classifier.bias']
Loading processed dets for sequence TUD-Crossing from /root/mot_neural_solver/data/2DMOT2015/test/TUD-Crossing/processed_data/det/tracktor_prepr_det.pkl
[... analogous "Loading processed dets" lines follow for the remaining 2DMOT2015 and MOT17Labels test sequences ...]
Tracking TUD-Crossing
Tracking sequence  TUD-Crossing
Done!

[... the same "Tracking <sequence> / Tracking sequence <sequence> / Done!" block repeats for each remaining test sequence ...]

Could not evaluate the given results
INFO - evaluate - Completed after 0:05:46
(mot_neural_solver) root@autodl-container-377e11abac-03f90540:~/mot_neural_solver#

Thanks a lot, I look forward to your reply! It's very important to me!

How to implement with KITTI dataset?

We want to test the methodology on the KITTI dataset, but we could not make it work. Do you have any suggestions for modifying the code and the data?

CUDA error when training

Environment: CUDA 11.0, torch 1.5.0

When I start training with python scripts/train.py with data_splits.train=all_train train_params.save_every_epoch=True train_params.num_epochs=6

the terminal raises a CUDA error (full text below):

WARNING - root - Changed type of config entry "data_splits.train" from list to str
WARNING - train - No observers have been added to this run
INFO - train - Running command 'main'
INFO - train - Started
Configuration (modified, added, typechanged, doc):
  add_date = True
  ckpt_path = 'trained_models/graph_nets/mot_mpnet_epoch_006.ckpt'
  cross_val_split = None
  run_id = 'train_w_default_config'
  seed = 672080547                   # the random seed for this experiment
  data_splits:
    test = ['mot15_test', 'mot17_test']
    train = 'all_train'
    val = []
  dataset_params:
    GT_train_max_iou_containment_thresh = 0.85
    GT_train_max_iou_thresh = 0.75
    augment = True
    det_file_name = 'tracktor_prepr_det'
    edge_feats_to_use = ['secs_time_dists',
 'norm_feet_x_dists',
 'norm_feet_y_dists',
 'bb_height_dists',
 'bb_width_dists',
 'emb_dist']
    frames_per_graph = 15
    gt_assign_min_iou = 0.5
    gt_training_min_vis = 0.2
    img_batch_size = 5000
    img_size = [128, 64]
    max_detects = 500
    max_detects_to_drop_perc = 0.3
    max_frame_dist = 'max'
    max_ids_to_drop_perc = 0.15
    min_detects = 25
    min_detects_to_drop_perc = 0
    min_ids_to_drop_perc = 0
    min_iou_bb_wiggling = 0.8
    node_embeddings_dir = 'resnet50_conv'
    overwrite_processed_data = False
    p_change_fps_step = 0.5
    precomputed_embeddings = True
    reciprocal_k_nns = True
    reid_embeddings_dir = 'resnet50_w_fc256'
    top_k_nns = 50
    target_fps_dict:
      moving = 9
      static = 6
  eval_params:
    add_tracktor_detects = True
    best_method_criteria = 'idf1'
    check_val_every_n_epoch = 9999
    log_per_seq_metrics = False
    max_dets_per_graph_seq = 40000
    metrics_to_log = ['loss', 'precision', 'recall', 'constr_sr']
    min_track_len = 2
    mot_metrics_to_log = ['mota',
 'norm_mota',
 'idf1',
 'norm_idf1',
 'num_switches',
 'num_misses',
 'num_false_positives',
 'num_fragmentations',
 'constr_sr']
    mot_metrics_to_norm = ['mota', 'idf1']
    normalize_mot_metrics = True
    rounding_method = 'exact'
    set_pruned_edges_to_inactive = False
    solver_backend = 'pulp'
    tensorboard = False
    use_tracktor_start_ends = True
    val_percent_check = 0
  graph_model_params:
    node_agg_fn = 'sum'
    num_class_steps = 11
    num_enc_steps = 12
    reattach_initial_edges = True
    reattach_initial_nodes = False
    classifier_feats_dict:
      dropout_p = 0
      edge_fc_dims = [8]
      edge_in_dim = 16
      edge_out_dim = 1
      use_batchnorm = False
    cnn_params:
      arch = 'resnet50'
      model_weights_path:
        resnet50 = 'trained_models/reid/resnet50_market_cuhk_duke.tar-232'
    edge_model_feats_dict:
      dropout_p = 0
      fc_dims = [80, 16]
      use_batchnorm = False
    encoder_feats_dict:
      dropout_p = 0
      edge_fc_dims = [18, 18]
      edge_in_dim = 6
      edge_out_dim = 16
      node_fc_dims = [128]
      node_in_dim = 2048
      node_out_dim = 32
      use_batchnorm = False
    node_model_feats_dict:
      dropout_p = 0
      fc_dims = [56, 32]
      use_batchnorm = False
  train_params:
    batch_size = 8
    num_epochs = 6
    num_workers = 6
    save_epoch_start = 1
    save_every_epoch = True
    tensorboard = False
    lr_scheduler:
      type = None
      args:
        gamma = 0.5
        step_size = 7
    optimizer:
      type = 'Adam'
      args:
        lr = 0.001
        weight_decay = 0.0001
Successfully loaded pretrained weights from "/root/mot_neural_solver/output/trained_models/reid/resnet50_market_cuhk_duke.tar-232"
** The following layers are discarded due to unmatched keys or layer size: ['classifier.weight', 'classifier.bias']
GPU available: True, used: True
INFO - lightning - GPU available: True, used: True
No environment variable for node rank defined. Set as 0.
WARNING - lightning - No environment variable for node rank defined. Set as 0.
CUDA_VISIBLE_DEVICES: [0]
INFO - lightning - CUDA_VISIBLE_DEVICES: [0]
Detections for sequence MOT17-02-GT need to be processed. Starting processing
Finished processing detections for seq MOT17-02-GT. Result was stored at /root/mot_neural_solver/data/MOT17Labels/train/MOT17-02-GT/processed_data/det/gt.pkl
Found existing stored node embeddings. Deleting them and replacing them for new ones
Found existing stored reid embeddings. Deleting them and replacing them for new ones
Computing embeddings for 20130 detections
ERROR - train - Failed after 0:00:18!
Traceback (most recent calls WITHOUT Sacred internals):
  File "scripts/train.py", line 79, in main
    trainer.fit(model)
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in fit
    self.single_gpu_train(model)
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 503, in single_gpu_train
    self.run_pretrain_routine(model)
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1015, in run_pretrain_routine
    self.train()
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 308, in train
    self.reset_train_dataloader(model)
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 156, in reset_train_dataloader
    self.train_dataloader = self.request_dataloader(model.train_dataloader)
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 280, in request_dataloader
    dataloader = dataloader_fx()
  File "/root/mot_neural_solver/src/mot_neural_solver/pl_module/pl_module.py", line 73, in train_dataloader
    return self._get_data(mode = 'train')
  File "/root/mot_neural_solver/src/mot_neural_solver/pl_module/pl_module.py", line 57, in _get_data
    logger=None)
  File "/root/mot_neural_solver/src/mot_neural_solver/data/mot_graph_dataset.py", line 33, in __init__
    self.seq_det_dfs, self.seq_info_dicts, self.seq_names = self._load_seq_dfs(seqs_to_retrieve)
  File "/root/mot_neural_solver/src/mot_neural_solver/data/mot_graph_dataset.py", line 82, in _load_seq_dfs
    seq_det_df = seq_processor.load_or_process_detections()
  File "/root/mot_neural_solver/src/mot_neural_solver/data/seq_processing/seq_processor.py", line 381, in load_or_process_detections
    seq_det_df = self.process_detections()
  File "/root/mot_neural_solver/src/mot_neural_solver/data/seq_processing/seq_processor.py", line 347, in process_detections
    self._store_embeddings()
  File "/root/mot_neural_solver/src/mot_neural_solver/data/seq_processing/seq_processor.py", line 307, in _store_embeddings
    node_out, reid_out = self.cnn_model(bboxes.cuda())
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/mot_neural_solver/src/mot_neural_solver/models/resnet.py", line 272, in forward
    f = self.featuremaps(x)
  File "/root/mot_neural_solver/src/mot_neural_solver/models/resnet.py", line 263, in featuremaps
    x = self.relu(x)
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch/nn/modules/activation.py", line 94, in forward
    return F.relu(input, inplace=self.inplace)
  File "/root/miniconda3/envs/mot_neural_solver/lib/python3.6/site-packages/torch/nn/functional.py", line 1061, in relu
    result = torch.relu_(input)
RuntimeError: CUDA error: no kernel image is available for execution on the device
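"no kernel image is available" typically means the installed torch 1.5.0 binaries were not compiled for this GPU's architecture under CUDA 11.0; a minimal diagnostic sketch:

import torch

# If the GPU's compute capability is newer than any architecture the
# installed torch build ships kernels for, every CUDA op raises this error.
print(torch.__version__, torch.version.cuda)
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))  # e.g. (8, 0) for an Ampere GPU

Installing a torch build that matches the machine's CUDA setup and GPU architecture is the usual fix.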

Thanks a lot, I look forward to your favourable reply.

How can I extract the loss for each epoch?

Hi,

Thank you for the amazing work. I am using the model to train on cell trajectories, and I believe my trained result is overfitting. I am relatively new to deep learning algorithms. Is there any way to extract the training and validation loss for each epoch, so that I can adjust the parameters and the number of layers in the model? Thank you for your help.

Best,
Joanne
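The training config dumped elsewhere on this page exposes a train_params.tensorboard flag (False by default); independent of that flag, a generic PyTorch Lightning sketch, whose exact arguments depend on the PL version, attaches a TensorBoard logger so per-epoch training and validation losses get recorded:

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# Hedged sketch: metrics the module logs (e.g. loss) are written per
# epoch under output/tb_logs and can be inspected with TensorBoard.
logger = TensorBoardLogger('output/tb_logs', name='mot_neural_solver')
trainer = Trainer(logger=logger)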

Given detections

Hello,

Thank you for your provided detections. I have a few questions.
The first case is the one in which you run Tracktor and just discard the re-ID part; I understand this.
The second refers to section C.4 in your paper, where you talk about a Faster R-CNN trained on the MOT17 detection challenge.
Can you explain how that second case differs from Tracktor? And how is it provided for DPM and SDP, given that those are different detectors?
Finally, I want to ask: if I use the Tracktor-based detections, would that still be considered public detections for MOT?

Thanks in advance.

MOT20 detections

Hello,

Would it be possible to provide detections processed with Tracktor for MOT 20?

TypeError: __init__() missing 1 required positional argument: 'hparams'

Hi there!

I've been trying to run the code on Colab on the MOT17 dataset. I followed all the instructions and managed to get the network to train.

The python scripts/evaluate.py command seems to fail though. This is the error I get:

WARNING - evaluate - No observers have been added to this run
INFO - evaluate - Running command 'main'
INFO - evaluate - Started
ERROR - evaluate - Failed after 0:00:00!
Traceback (most recent calls WITHOUT Sacred internals):
File "scripts/evaluate.py", line 34, in main
model = MOTNeuralSolver.load_from_checkpoint(checkpoint_path=_config['ckpt_path'] if osp.exists(_config['ckpt_path']) else osp.join(OUTPUT_PATH, _config['ckpt_path']))
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/saving.py", line 156, in load_from_checkpoint
model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/saving.py", line 198, in _load_model_state
model = cls(**_cls_kwargs)
TypeError: __init__() missing 1 required positional argument: 'hparams'

Any ideas as to how to fix this? Thanks in advance
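In Lightning versions where load_from_checkpoint forwards extra keyword arguments to the module's __init__, passing hparams explicitly is one plausible workaround (a sketch, not a confirmed fix; config stands for whatever hparams dict the checkpoint was trained with):

# Hedged sketch: supply the constructor argument the loader says is missing.
model = MOTNeuralSolver.load_from_checkpoint(
    checkpoint_path=ckpt_path,
    hparams=dict(config),
)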
