tri-ml / pf-track
Implementation of PF-Track
License: Other
The code structure is really elegant!
And I have a question about the motion prediction / reference point update:
the reference points are updated twice in tracker.py, once with the motion predictions and once with the ego movement. Does that mean the predicted motion is defined in the current frame's coordinate system?
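For what it's worth, the two-step update described above can be sketched as follows; the function and argument names are illustrative, not the repo's actual API:

```python
import torch

def update_reference_points(ref_points, motion, ego_transform):
    """Two-step reference point update as described in the question:
    (1) add the predicted per-object motion in the current frame's
    coordinate system, then (2) map the result into the next frame
    with the ego transform.

    ref_points:    (N, 3) object centers in the current ego frame
    motion:        (N, 3) predicted displacement in the current frame
    ego_transform: (4, 4) current-frame -> next-frame rigid transform
    """
    moved = ref_points + motion                       # step 1: object motion
    homo = torch.cat([moved, torch.ones_like(moved[:, :1])], dim=-1)
    return (homo @ ego_transform.T)[:, :3]            # step 2: ego compensation
```

If the motion were defined in a global frame instead, step 1 would need its own transform before the addition, so the order of the two updates is a reasonable clue about the convention.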
Hi @ziqipang, I noticed that for the full-resolution setting, the AMOTA on val is 0.479 but on test it is around 0.434. Do you have any idea why there is this notable drop?
Looking forward to your reply!
When I run the visualization code with the command “python tools/video_demo/cam_demo.py --data_infos_path ./data/nuscenes/tracking_forecasting-mini_infos_val.pkl --result results.json --show-dir ./work_dirs/visualizations/”, I encountered the following error:
Traceback (most recent call last):
File "tools/cam_demo.py", line 260, in <module>
make_videos(show_dir, fig_names, 'video.mp4', show_dir)
File "tools/cam_demo.py", line 213, in make_videos
writer.append_data(cv2.resize(imageio.imread(im), (4000, 2800)))
File "/home/guohaotian/anaconda3/envs/PFTrack/lib/python3.8/site-packages/imageio/v2.py", line 226, in append_data
return self.instance.write(im, **self.write_args)
File "/home/guohaotian/anaconda3/envs/PFTrack/lib/python3.8/site-packages/imageio/plugins/tifffile_v3.py", line 244, in write
self._fh.write(image, **kwargs)
TypeError: write() got an unexpected keyword argument 'fps'
Also, what is the results.json file that the visualization code expects?
python tools/video_demo/bev.py ./projects/configs/tracking/petr/f3_q500_800x320.py --result results.json --show-dir ./work_dirs/visualizations/
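The traceback shows imageio dispatching to its tifffile plugin, which does not accept an fps argument. A possible workaround (a sketch assuming the imageio v2 API, with the imageio-ffmpeg package installed) is to force the FFMPEG plugin when creating the writer:

```python
import imageio

def make_video(frames, out_path, fps=10):
    """Write a list of HxWx3 uint8 frames to an .mp4, explicitly
    selecting imageio's FFMPEG plugin so that 'fps' is understood
    (the tifffile plugin picked in the traceback rejects it).
    Workaround sketch, not the repo's own code."""
    writer = imageio.get_writer(out_path, format='FFMPEG', fps=fps)
    for frame in frames:
        writer.append_data(frame)
    writer.close()
```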
Hello, thanks for your excellent work!
I am new to end-to-end tracking, so I wonder why you separate the training into two steps (i.e., f1_q500_1600x640.py, then f3_q500_1600x640.py)?
What if I use the second training step directly without the first step?
Looking forward to your reply.
Hi, I just found there's no mmdet3d folder when I run create_data.py. I copied it from the BEVFusion GitHub repo, but it reports an error. Could you provide your mmdet3d folder as well? Thank you so much!
Hi, thanks for the great work! When will the code be released? Will it be open-sourced in March?
I want to plug the detection head of PETRv2 into it. During training, this problem occurred: TypeError: forward() takes 3 positional arguments but 6 were given. Does forward() in petrv2_head.py need a different number of parameters?
Looking forward to your answer, thank you.
Hi @ziqipang ,
Thanks for releasing the code for your great work! I was wondering where I can find the results.json
file for mini-set validation, to be used for the visualization demos. Thanks!
Dear authors,
I'm now working on my own multi-camera tracker with a PETR backbone. However, when I enabled gradient checkpointing (with_cp=True) for the PETRTransformerDecoderLayer class, I got the error:
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 142 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
It seems like a conflict between DDP and gradient checkpointing, as described in many posts. However, setting find_unused_parameters=False doesn't help in my case, while with_cp=False does.
I saw in the config files you provided that you also set with_cp=False everywhere. So I want to know whether you ran into the same issue when training the tracker?
Best regards
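A common way around the "marked ready twice" conflict between DDP and reentrant gradient checkpointing is the non-reentrant checkpoint mode added in PyTorch 1.11. The sketch below is illustrative only; PETRTransformerDecoderLayer wraps its forward differently:

```python
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedBlock(torch.nn.Module):
    """Toy module using non-reentrant gradient checkpointing.
    use_reentrant=False avoids the reentrant autograd engine that
    causes DDP's "Expected to mark a variable ready only once" error."""

    def __init__(self, dim=32):
        super().__init__()
        self.ff = torch.nn.Sequential(
            torch.nn.Linear(dim, dim),
            torch.nn.ReLU(),
            torch.nn.Linear(dim, dim),
        )

    def forward(self, x):
        # Activations of self.ff are recomputed in backward to save memory.
        return checkpoint(self.ff, x, use_reentrant=False)
```

With DDP, another workaround is constructing the model with DistributedDataParallel(..., static_graph=True), which corresponds to the _set_static_graph() hint in the error message.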
Dear author,
Hello, thank you for your great work.
Along with my appreciation, I wonder where I can get the "Dockerfile-pftrack" that was linked in the environment-setup README.
It seems to have been deleted for some reason.
RuntimeError: CUDA out of memory occurred when I trained f3_q500_1600x640.py.
For this reason, I want to reduce the batch size to save memory, but I can't find where to change it. Could you point me to the right place?
Or are there other ways to reduce memory usage?
Looking forward to your answer, thank you.
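In mmdet3d-style configs the per-GPU batch size usually lives in the data dict; whether this repo's configs use exactly these field names is an assumption worth verifying:

```python
# Config fragment (mmdet3d convention, assumed field names):
data = dict(
    samples_per_gpu=1,   # per-GPU batch size; lower this to save GPU memory
    workers_per_gpu=2,   # dataloader workers; affects CPU, not GPU memory
)
```

Other common ways to cut memory are using the lower-resolution 800x320 configs, enabling fp16 training, or gradient checkpointing where the model supports it.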
Hi @ziqipang, I find that in Algorithm B of the Appendix there are different update rules for Query, Center, and Motion during "Track Extension", but in the corresponding code at https://github.com/TRI-ML/PF-Track/blob/main/projects/tracking_plugin/models/trackers/runtime_tracker.py#L35-L43 there seems to be only the "living time" update.
@ziqipang Hi, I am curious about the way the reference points are updated. Specifically, based on the geometric meaning, I think they should be updated as follows, regarding https://github.com/TRI-ML/PF-Track/blob/main/projects/tracking_plugin/models/trackers/spatial_temporal_reason.py#L261-L266
reference_points[..., :2] = inverse_sigmoid(reference_points[..., :2])
reference_points[..., :2] += motions.clone().detach()
reference_points[..., :2] = reference_points[..., :2].sigmoid()
However, after I modified it this way, the tracker suffers a significant performance drop. Could you please share some insights about that? Thanks in advance.
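One thing worth checking (an illustration, not a claim about the repo's training setup): applying the same offset in logit space versus in normalized coordinate space produces different effective displacements, because the sigmoid warps step sizes depending on where the point sits. If the motion head was trained with one convention, evaluating with the other could plausibly cause such a drop:

```python
import torch

def inverse_sigmoid(x, eps=1e-5):
    """Standard logit with clamping, as used in DETR-style code."""
    x = x.clamp(eps, 1 - eps)
    return torch.log(x / (1 - x))

p = torch.tensor([0.9])      # a normalized reference coordinate
delta = torch.tensor([0.05]) # a predicted offset

# Same delta, two different update conventions:
in_normalized = p + delta                             # exactly 0.95
in_logit = torch.sigmoid(inverse_sigmoid(p) + delta)  # noticeably smaller step
```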
Hello, I am an undergraduate student. Recently I have been studying algorithms related to motion prediction, and I hope to experiment with my own algorithm using the tracking results. Could you please provide your algorithm's validation results on nuScenes? Thank you sincerely for any help.
@ziqipang Hi, thanks for your great work. May I know how to do the evaluation on the nuscenes test set? Thanks.
Thanks for sharing the great work!
May I ask whether you have tried using 3D detection results as network inputs, and possibly any denoising techniques as well? Thanks.
Why does evaluating with multiple GPUs only release detection results? How can I change that? Thanks.
@ziqipang Hello, thanks for your great work.
I find that only the first two dimensions are included in the prediction loss, and I noticed that you have set find_unused_parameters=False. Where does the third dimension of the motion regression head receive its gradient from? Am I missing something important? Thanks.
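A minimal, self-contained illustration (plain PyTorch, not the repo's actual motion head) of why DDP with find_unused_parameters=False does not complain here: the whole weight of the final linear layer is a single parameter that participates in the forward pass, so it is always "used", but the rows producing the untouched third dimension simply receive zero gradient.

```python
import torch

# 3-dim regression head; loss only touches the first two output dims.
head = torch.nn.Linear(8, 3, bias=False)
x = torch.randn(4, 8)
out = head(x)
loss = out[:, :2].sum()   # dim 2 never enters the loss
loss.backward()

grad = head.weight.grad   # shape (3, 8): rows 0-1 nonzero, row 2 all zeros
```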
Hello, I ran into a problem while running test_tracking with "python tools/test_tracking.py projects/configs/tracking/petr/f3_q500_800x320.py ./work_dir/f3_petr_800x320/final.pth --jsonfile_prefix ./work_dir/f3_petr_800x320/results --eval bbox". The problem is "RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 3, 3], but got 6-dimensional input of size [1, 1, 6, 3, 320, 800] instead". Looking forward to your answer, thank you.
Hi @ziqipang ,
I have one more question about distributed training :)
I can run the code on a single GPU, but when trying multiple GPUs the code seems to get stuck at some point. I am using the following run command:
CUDA_VISIBLE_DEVICES=3,4 bash tools/dist_train.sh projects/configs/tracking/petr/f1_q500_800x320.py 2 --work-dir work_dirs/f1_pf_track/
Would you have a suggestion for what the underlying reason could be, or how to approach it?
Hello, thank you for this great work! Could you give some pointers on how I could use KITTI data to train and infer?
Hi, sorry to bother you again, but I have run into another problem.
I use the forward_track function to get the result and then evaluate the detection performance; however, the mAP is poor (around 0.0232).
When I use the same model but the forward_test function to get the result and evaluate detection, the mAP is about 0.3191. (But forward_test doesn't output track_ids.)
Have you seen this before? Is it within expectations?
When I run the visualization code with the command “python tools/video_demo/bev.py ./projects/configs/tracking/petr/f3_q500_800x320.py --result results.json --show-dir ./work_dirs/visualizations/”, I encountered the following error:
Traceback (most recent call last):
File "tools/bev.py", line 199, in <module>
main()
File "tools/bev.py", line 70, in main
raw_data = dataset[data_info_idx]
File "/home/guohaotian/PF-Track-main/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 357, in __getitem__
data = self.prepare_train_data(idx)
File "/home/guohaotian/PF-Track-main/projects/tracking_plugin/datasets/nuscenes_tracking_dataset.py", line 313, in prepare_train_data
example = self.pipeline(input_dict)
File "/home/guohaotian/anaconda3/envs/PFTrack/lib/python3.8/site-packages/mmdet/datasets/pipelines/compose.py", line 41, in __call__
data = t(data)
File "/home/guohaotian/PF-Track-main/projects/tracking_plugin/datasets/pipelines/pipeline.py", line 470, in __call__
results = self._load_forecasting(results)
File "/home/guohaotian/PF-Track-main/projects/tracking_plugin/datasets/pipelines/pipeline.py", line 373, in _load_forecasting
results['gt_forecasting_locs'] = results['ann_info']['gt_forecasting_locs']
KeyError: 'gt_forecasting_locs'
Looking forward to your reply.
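The KeyError suggests the infos .pkl was generated without the forecasting annotations (e.g. with a plain nuScenes converter instead of this repo's forecasting one). A quick sanity check, assuming the usual mmdet3d info layout (a dict with an 'infos' list of per-sample dicts; the exact layout here is an assumption):

```python
import pickle

def has_forecasting_anns(info_path):
    """Return True if every sample in the infos .pkl carries the
    'gt_forecasting_locs' field that the visualization pipeline needs.
    The key layout assumed here mirrors common mmdet3d info files."""
    with open(info_path, 'rb') as f:
        data = pickle.load(f)
    infos = data['infos'] if isinstance(data, dict) else data
    return all('gt_forecasting_locs' in info for info in infos)
```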
Hi, thanks for your great work!
I notice that you use SimpleTrack as the baseline in your 'tracking-by-detection' experiment. I am wondering if you could provide the config.yaml of SimpleTrack for nuScenes to reproduce the results in your paper (AMOTA: 0.402, AMOTP: 1.324, IDS: 2053).
Besides, I'd also like to know whether you use the f1_petr_800x320 or the f3_petr_800x320 detection results as the input to SimpleTrack, and whether you run SimpleTrack at 2 Hz or 20 Hz.
Looking forward to your reply!
Hi @ziqipang, have you tried to implement the motion prediction evaluation? There is only test_tracking.py. Thanks.
I got some errors when testing tracking.
My environment differs slightly from the provided one: mmcv-full==1.6.0, mmdet==2.28.2, mmdet3d==1.0.0rc6, and torch==1.13.1.
I could train and validate. However, when I tried to test tracking, I got this error:
KeyError: 'NuScenesTrackingDataset is not in the dataset registry'
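This registry error usually means the project's plugin package was never imported, so the @DATASETS.register_module() decorator on NuScenesTrackingDataset never ran. mmdet3d-style configs typically enable project plugins with flags like the following (field names follow the common convention; check this repo's configs for the exact values):

```python
# Config fragment (mmdet3d plugin convention, assumed values):
plugin = True
plugin_dir = 'projects/tracking_plugin/'

# Equivalently, importing the plugin package before building the dataset
# triggers registration:
# import projects.tracking_plugin  # noqa: F401
```

Version mismatches between mmcv/mmdet/mmdet3d can also silently break registration, so it is worth checking that the plugin imports cleanly under your versions.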