
pf-track's People

Contributors

pvtokmakov, ziqipang


pf-track's Issues

Question about motion prediction

The code structure is really elegant!

I have a question about the motion prediction / reference point update:

The reference points are updated twice in tracker.py, once with the motion predictions and once with the ego movement. Does that mean the predicted motion is defined in the current frame's coordinate system?
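For concreteness, the two-step update being asked about can be sketched as follows. This is a minimal illustration, not the repo's actual code: the shapes, the order of the two steps, and the 4x4 ego-to-world pose convention are all assumptions.

```python
import numpy as np

def update_reference_points(ref_pts, motion, ego_prev, ego_cur):
    """Hypothetical two-step update: object motion, then ego compensation.

    ref_pts:  [N, 3] points in the previous frame's ego coordinates.
    motion:   [N, 3] predicted per-object displacement.
    ego_prev, ego_cur: [4, 4] ego-to-world poses of the two frames.
    """
    # Step 1: apply the predicted per-object motion.
    moved = ref_pts + motion
    # Step 2: compensate ego movement (previous ego -> world -> current ego).
    homo = np.concatenate([moved, np.ones((moved.shape[0], 1))], axis=1)
    world = homo @ ego_prev.T
    cur = world @ np.linalg.inv(ego_cur).T
    return cur[:, :3]
```

With identity poses only the motion term acts; with a translated current pose, the ego-compensation term shifts the points back into the new frame.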

about visualization

When I run the visualization code with the command “python tools/video_demo/cam_demo.py --data_infos_path ./data/nuscenes/tracking_forecasting-mini_infos_val.pkl --result results.json --show-dir ./work_dirs/visualizations/”, I encountered the following error:

Traceback (most recent call last):
File "tools/cam_demo.py", line 260, in
make_videos(show_dir, fig_names, 'video.mp4', show_dir)
File "tools/cam_demo.py", line 213, in make_videos
writer.append_data(cv2.resize(imageio.imread(im), (4000, 2800)))
File "/home/guohaotian/anaconda3/envs/PFTrack/lib/python3.8/site-packages/imageio/v2.py", line 226, in append_data
return self.instance.write(im, **self.write_args)
File "/home/guohaotian/anaconda3/envs/PFTrack/lib/python3.8/site-packages/imageio/plugins/tifffile_v3.py", line 244, in write
self._fh.write(image, **kwargs)
TypeError: write() got an unexpected keyword argument 'fps'
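The traceback suggests imageio dispatched the .mp4 request to an image plugin (tifffile), which rejects the fps argument. One hedged workaround sketch is to select the ffmpeg plugin explicitly when creating the writer; format and fps follow the imageio v2 get_writer API, and imageio-ffmpeg must be installed. This is an assumption about the cause, not a verified fix for this repo.

```python
def video_writer_kwargs(fps=10):
    # Force the FFMPEG plugin so the .mp4 request cannot fall through
    # to an image plugin (e.g. tifffile) that rejects 'fps'.
    return dict(format="FFMPEG", fps=fps)

# usage (requires the imageio and imageio-ffmpeg packages):
# import imageio
# writer = imageio.get_writer("video.mp4", **video_writer_kwargs(10))
```

Pinning imageio to an older 2.x release is another commonly reported workaround for this class of dispatch change.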

Why two step training

Hello, thanks for your excellent work!
I am new to end-to-end tracking, so I wonder why you separate the training into two steps (i.e., f1_q500_1600x640.py, then f3_q500_1600x640.py)?
What happens if I run the second training step directly, without the first?

Looking forward to your reply.

Didn't provide mmdet3d folder

Hi, I just found there is no mmdet3d folder when I run create_data.py. I copied it from the BEVFusion GitHub repository, but it reports errors. Could you provide your mmdet3d folder as well? Thank you so much!

Code release time

Hi, thanks for the great work! When will the code be released? Will it be open source in March?

detection_head

I want to plug the detection head of PETRv2 into it. During training, this problem occurred: TypeError: forward() takes 3 positional arguments but 6 were given. Does forward() in petrv2_head.py need its number of parameters changed?
Looking forward to your answer, thank you.

question about visualization

Hi @ziqipang ,

Thanks for releasing the code for your great work! I was wondering where I can find results.json file, for mini-set validations to be used for visualization demos. Thanks!

Gradient checkpointing with ddp

Dear authors,

I'm now working on my own multi-camera tracker with a PETR backbone. However, when I enabled gradient checkpointing (with_cp=True) for the PETRTransformerDecoderLayer class, I got this error:

RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 142 has been marked as ready twice. This means that multiple autograd engine  hooks have fired for this particular parameter during this iteration.

It seems like a conflict between ddp and gradient checkpointing as in many posts. However, setting find_unused_parameters=False doesn't help in my case, but with_cp=False helps.

I saw in the config files that you provided, you also have set the gradient checkpointing with_cp=False everywhere. So I want to know whether you got the same issue when training the tracker?

Best regards
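For what it's worth, the workaround named in the error message itself can be sketched as below. This is a hedged sketch: `static_graph=True` is the public form of `_set_static_graph()` in torch >= 1.11, and passing `use_reentrant=False` to `torch.utils.checkpoint.checkpoint` is another commonly cited fix for this conflict; neither is verified against this repo.

```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def make_static_ddp(module, **ddp_kwargs):
    # static_graph=True tells DDP the autograd graph is identical every
    # iteration, which is the documented workaround for the "marked as
    # ready twice" error that reentrant activation checkpointing
    # triggers by reusing parameters across backward passes.
    return DDP(module, static_graph=True, **ddp_kwargs)
```

Note that static_graph only applies when the set of used parameters really is fixed across iterations, which may not hold for a tracker whose number of active queries changes per frame.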

Docker file

Dear author,

Hello, thank you for your great work.
Along with my appreciation, I wonder where I can get the "Dockerfile-pftrack" that was linked in the environment-setup readme.
It seems to have been deleted for some reason.

Adjusting batch size

RuntimeError: CUDA out of memory occurred when I trained f3_q500_1600x640.py.
For this reason, I want to change the batch size to reduce memory usage, but I can't find where to change it. Where can I change the batch size?
Or are there other ways to reduce memory usage?
Looking forward to your answer, thank you.
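In mmdetection-style configs, the per-GPU batch size usually lives in the `data` dict. A hedged sketch of the relevant fields (the names follow the mmdet convention; check the actual f3_q500_1600x640.py config for the exact dict):

```python
# Hypothetical mmdet-style config fragment, not the repo's actual file.
data = dict(
    samples_per_gpu=1,   # per-GPU batch size: lower this to save GPU memory
    workers_per_gpu=2,   # dataloader workers (does not affect GPU memory)
)
```

Other common memory reducers, if applicable here: the lower-resolution 800x320 config variant, fewer queries, or mixed-precision training.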

Question about Track Extension

Hi @ziqipang, in Algorithm B of the Appendix there are different update rules for Query, Center, and Motion during "Track Extension", but in the corresponding part of the code, https://github.com/TRI-ML/PF-Track/blob/main/projects/tracking_plugin/models/trackers/runtime_tracker.py#L35-L43, there seems to be only the "living time" update $L_t$. Am I missing anything important? Or could you please specify where "Track Extension" is fully implemented in this repo? Thanks.

Question about reference points updating

@ziqipang Hi, I am curious about the way that reference points are updated. Specifically, based on the geometric meaning, I think they should be updated like the following, regarding https://github.com/TRI-ML/PF-Track/blob/main/projects/tracking_plugin/models/trackers/spatial_temporal_reason.py#L261-L266

reference_points[..., :2] = inverse_sigmoid(reference_points[..., :2])
reference_points[..., :2] += motions.clone().detach()
reference_points[..., :2] = reference_points[..., :2].sigmoid()

However, after I modified it in this way, the tracker suffers a significant performance drop. Could you please share some insights about that? Thanks in advance.
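One possible factor behind such a drop (an illustration of the math only, not a statement about the repo's design): adding an offset in inverse-sigmoid (logit) space is not equivalent to adding it in normalized coordinate space, because the logit map is non-linear. A motion head trained to predict offsets in one space is therefore mis-scaled when applied in the other:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def inverse_sigmoid(p):
    return math.log(p / (1.0 - p))

p, delta = 0.5, 0.1  # a normalized coordinate and a predicted offset
direct = p + delta                               # update in normalized space
in_logit = sigmoid(inverse_sigmoid(p) + delta)   # update in logit space
# The two results differ, and the gap grows as p moves away from 0.5.
```

So if the motions tensor was trained against ground truth expressed in one space, moving the addition into the other space changes the effective step size everywhere.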

About the tracking result

Hello, I am an undergraduate student. Recently I have been studying algorithms related to motion prediction, and I hope to experiment with my own algorithm on top of tracking results. Could you please provide your algorithm's validation results on nuScenes? Sincere thanks for any help.

Question about the pipeline inputs

Thanks for sharing the great work!

May I ask whether you have tried using 3D detection results as network inputs, perhaps with denoising techniques as well? Thanks.

for Validation

Why does evaluating with multiple GPUs only produce detection results? How can I change this? Thanks.

Motion prediction along the z axis

@ziqipang Hello, thanks for your great work.

I find that only the first two dimensions are included in the prediction loss calculation, and I noticed that you set find_unused_parameters=False. Where does the third dimension of the motion regression head receive its gradient? Am I missing anything important? Thanks.

nuscenes_dataset

Hello, I ran into a problem while running test_tracking with "python tools/test_tracking.py projects/configs/tracking/petr/f3_q500_800x320.py ./work_dir/f3_petr_800x320/final.pth --jsonfile_prefix ./work_dir/f3_petr_800x320/results --eval bbox". The problem is "RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 3, 3], but got 6-dimensional input of size [1, 1, 6, 3, 320, 800] instead". Looking forward to your answer, thank you.
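The error says the 2D backbone expects 4-D input [N, C, H, W] but received the tracker's 6-D tensor [B, T, N_cam, C, H, W], so somewhere the batch/time/camera dimensions need to be folded before the convolution. A minimal sketch of that fold (shapes taken from the error message; the variable names are assumptions, not the repo's code):

```python
import numpy as np

# Hypothetical multi-camera batch matching the error: [B, T, N_cam, C, H, W].
x = np.zeros((1, 1, 6, 3, 320, 800))
B, T, N_cam, C, H, W = x.shape
# Fold batch, time, and camera into one leading dimension so the
# conv backbone sees the 4-D input [B*T*N_cam, C, H, W] it expects.
x4d = x.reshape(B * T * N_cam, C, H, W)
```

If the fold is missing, the data pipeline and the config (e.g. single-frame vs. multi-frame settings) may disagree about the input layout.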
issue

question about distributed training

Hi @ziqipang ,

I have one more question about distributed training :)
I could run the code on single gpu, but when trying on multiple gpus the code seems to get stuck at some point ... I am using the following run command:
CUDA_VISIBLE_DEVICES=3,4 bash tools/dist_train.sh projects/configs/tracking/petr/f1_q500_800x320.py 2 --work-dir work_dirs/f1_pf_track/

Would you have a suggestion about the underlying reason, or how to approach it?

Train and infer on KITTI data

Hello, thank you for this great work! Could you give some pointers on how I could use KITTI data to train and infer?

About detection performance

Hi, sorry to bother you again since I have met another problem.
I use the 'forward_track' function to get the result and then evaluate the detection performance; however, the mAP is not good (around 0.0232).
When I use the same model but use the 'forward_test' function to get the result and evaluate the detection performance, the mAP is about 0.3191. (But forward_test doesn't output track_ids.)

Have you ever met this before? Is this within expectations?

about result.json

When I run the visualization code with the command “python tools/video_demo/bev.py ./projects/configs/tracking/petr/f3_q500_800x320.py --result results.json --show-dir ./work_dirs/visualizations/”, I encountered the following error:

Traceback (most recent call last):
File "tools/bev.py", line 199, in
main()
File "tools/bev.py", line 70, in main
raw_data = dataset[data_info_idx]
File "/home/guohaotian/PF-Track-main/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 357, in __getitem__
data = self.prepare_train_data(idx)
File "/home/guohaotian/PF-Track-main/projects/tracking_plugin/datasets/nuscenes_tracking_dataset.py", line 313, in prepare_train_data
example = self.pipeline(input_dict)
File "/home/guohaotian/anaconda3/envs/PFTrack/lib/python3.8/site-packages/mmdet/datasets/pipelines/compose.py", line 41, in __call__
data = t(data)
File "/home/guohaotian/PF-Track-main/projects/tracking_plugin/datasets/pipelines/pipeline.py", line 470, in __call__
results = self._load_forecasting(results)
File "/home/guohaotian/PF-Track-main/projects/tracking_plugin/datasets/pipelines/pipeline.py", line 373, in _load_forecasting
results['gt_forecasting_locs'] = results['ann_info']['gt_forecasting_locs']
KeyError: 'gt_forecasting_locs'

Looking forward to your reply.

Tracking by detection experiment config

Hi, thanks for your great work!
I notice that you use SimpleTrack as baseline in your 'tracking by detection' experiment. So I am wondering if you could provide the config.yaml of SimpleTrack under nuscenes to reproduce the same results in your paper (AMOTA: 0.402, AMOTP: 1.324, IDS: 2053).
Besides, I also want to know whether you use the f1_petr_800x320 or the f3_petr_800x320 detection results as input to SimpleTrack, and whether you run SimpleTrack at 2 Hz or 20 Hz.
Looking forward to your reply!

Get some Errors when test tracking method

I got some errors when I tested the tracking.

My environment is slightly different from the provided one: mmcv-full==1.6.0, mmdet==2.28.2, mmdet3d==1.0.0rc6, and torch==1.13.1.

I could train and validate. However, when I tried to test tracking, I got this bug.

KeyError: 'NuScenesTrackingDataset is not in the dataset registry'
