
qd-3dt's People

Contributors

eborboihuc, fyu, royyang0714, saschahornauer, tobiasfshr


qd-3dt's Issues

CUDA out of memory

I ran your training process on the nuScenes dataset with the default settings. My environment is 4x RTX 3090, but I get a "CUDA out of memory" error. How can I adjust the parameters to use less GPU memory?
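One common mitigation in mmdetection-style codebases (which this repo follows) is to shrink the per-GPU batch size in the experiment config. A minimal sketch, assuming the config uses the usual mmdetection `data = dict(...)` layout; the values below are illustrative, not the repo defaults:

# In the experiment config (e.g. the quasi_r101_dcn_... config under
# configs/Nusc/), lower the per-GPU batch to trade speed for memory.
data = dict(
    imgs_per_gpu=1,     # fewer images per GPU directly lowers activation memory
    workers_per_gpu=2,  # dataloader processes; affects CPU, not GPU, memory
    # train/val/test dataset dicts stay as in the original config
)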

Information about 2D bounding boxes

In the final result, the generated txt files contain 3D information. I would like to know how the result of part (a) in the figure of your paper is transformed into part (b). Could you point me to where this step is reflected in the code?

eval_det_nusc.txt: No such file or directory

I followed the instructions to install the environment and downloaded the nuScenes dataset successfully.
When I try to run the command below, I get the following errors:


  • python scripts/eval_nusc_det.py --version=v1.0-trainval --root=data/nuscenes/ --work_dir=work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/output_val_box3d_deep_depth_motion_lstm_3dcen --gt_anns=data/nuscenes/anns/tracking_val.json | tee work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/output_val_box3d_deep_depth_motion_lstm_3dcen/eval_det_nusc.txt

    tee: work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/output_val_box3d_deep_depth_motion_lstm_3dcen/eval_det_nusc.txt: No such file or directory
    File "scripts/eval_nusc_det.py", line 190
        print(f'Loading tracking result from {tracking_result_path}')

It says the following files are missing:

eval_det_nusc.txt
eval_mot_nusc.txt
eval_mot_02_nusc.txt

However, I cannot find them. Where can I find these files, and how can I generate them?

Many thanks!
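For what it's worth, tee prints "No such file or directory" when the parent directory of its output file does not exist, and these .txt files are written by tee itself rather than shipped with the repo. A minimal pre-check, sketched under the assumption that the work_dir from the command above is the intended output location:

import os

out_dir = ('work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_'
           'clsrot_sep_aug_confidence_scale_no_filter/'
           'output_val_box3d_deep_depth_motion_lstm_3dcen')
# tee can only create eval_det_nusc.txt once this directory exists.
os.makedirs(out_dir, exist_ok=True)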

ValueError: Unknown CUDA arch (8.6) or GPU not supported & RuntimeError: Error compiling objects for extension

My laptop environment is:

NVIDIA RTX 3080
cuda 11.1
NVIDIA Driver Version: 470.57.02

When I run bash install.sh after pip install -r requirements.txt, I get

ValueError: Unknown CUDA arch (8.6) or GPU not supported

so I installed the CUDA 11.1 build of PyTorch.

With PyTorch on CUDA 11.1, when I run bash install.sh I get this error:

Traceback (most recent call last):
  File "setup.py", line 200, in <module>
    zip_safe=False)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/site-packages/setuptools/command/develop.py", line 34, in run
    self.install_for_development()
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/site-packages/setuptools/command/develop.py", line 136, in install_for_development
    self.run_command('build_ext')
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 653, in build_extensions
    build_ext.build_extensions(self)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
    depends=ext.depends)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 482, in unix_wrap_ninja_compile
    with_cuda=with_cuda)
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1238, in _write_ninja_file_and_compile_objects
    error_prefix='Error compiling objects for extension')
  File "/home/han/anaconda3/envs/3dt/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1538, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
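"Unknown CUDA arch (8.6)" comes from torch.utils.cpp_extension failing to map an Ampere GPU to compile flags when it auto-detects the device; pinning TORCH_CUDA_ARCH_LIST makes it skip that detection. A sketch of one workaround, placed before the build runs (exporting the variable in the shell before bash install.sh works equally well); whether this is enough for your exact torch build is not guaranteed:

# At the very top of setup.py, before torch.utils.cpp_extension is used.
import os

# Pin the target architecture so cpp_extension does not auto-detect the GPU;
# 8.6 is the RTX 3080's compute capability and needs CUDA >= 11.1 to compile.
os.environ.setdefault('TORCH_CUDA_ARCH_LIST', '8.6')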

AttributeError When Evaluating on nuScenes data

I am currently trying to reproduce the nuScenes results as shown in the Getting Started page, but am running into an error when I try to run the run_eval_nusc.sh script. See the output trace below.

+ python3 -u ./tools/test_eval_video_exp.py nuscenes configs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter.py ./work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/latest.pth ./work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/output/output.pkl --data_split_prefix val --full_frames
Using agg as matplotlib backend
Starting ./work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/output_val_box3d_deep_depth_motion_lstm_3dcen ...
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Traceback (most recent call last):
  File "./tools/test_eval_video_exp.py", line 987, in <module>
    main()
  File "./tools/test_eval_video_exp.py", line 960, in main
    best_model(args, out_path)
  File "./tools/test_eval_video_exp.py", line 295, in best_model
    best_model_Nusc(args, out_path)
  File "./tools/test_eval_video_exp.py", line 380, in best_model_Nusc
    run_inference_and_evaluate(args, cfg, out_path_exp)
  File "./tools/test_eval_video_exp.py", line 80, in run_inference_and_evaluate
    run_inference(cfg, args.checkpoint, out_path, show_time=args.show_time)
  File "./tools/test_eval_video_exp.py", line 109, in run_inference
    dataset = build_dataset(cfg.data.test)
  File "/qd-3dt/qd3dt/datasets/builder.py", line 36, in build_dataset
    dataset = build_from_cfg(cfg, DATASETS)
  File "/qd-3dt/qd3dt/utils/registry.py", line 74, in build_from_cfg
    return obj_type(**args)
  File "/qd-3dt/qd3dt/datasets/video/bdd_vid_3d.py", line 20, in __init__
    super(BDDVid3DDataset, self).__init__(**kwargs)
  File "/qd-3dt/qd3dt/datasets/video/video_dataset.py", line 61, in __init__
    super(VideoDataset, self).__init__(*args, **kwargs)
  File "/qd-3dt/qd3dt/datasets/custom.py", line 69, in __init__
    self.img_infos = self.load_annotations(ann_file)
  File "/qd-3dt/qd3dt/datasets/video/video_dataset.py", line 131, in load_annotations
    self.cat_ids = api.getCatIds()
AttributeError: 'NoneType' object has no attribute 'getCatIds'

A series of other errors occur as the script attempts to execute subsequent commands after test_eval_video_exp.py errors out.

I am currently using Docker and have followed the installation and dataset setup instructions, which seem to have succeeded with no issues. Any ideas as to what is causing this error? Thanks.
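One plausible cause (an assumption, not confirmed): the annotation JSON the test dataset points at loaded empty or without the expected COCO-style keys, since "Done (t=0.00s)" indicates a near-instant load, which would leave the dataset's annotation api unset. A quick sanity check, with the annotation path taken from the nuScenes eval commands elsewhere on this page (adjust to your setup):

import json

# Path from the nuScenes eval commands in other issues; adjust as needed.
ann_path = 'data/nuscenes/anns/tracking_val.json'
with open(ann_path) as f:
    anns = json.load(f)
# A COCO/video-style annotation file needs a non-empty 'categories' list
# for getCatIds() to have anything to return.
for key in ('videos', 'images', 'annotations', 'categories'):
    print(key, len(anns.get(key, [])))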

Coordinate frame for camera pose

Hi everyone,

I am building a data pipeline to run with qd-3dt as follows:

  1. Extract RGB frames from a monocular video (I have the camera intrinsics)
  2. Generate depth maps using a depth detector (packnet-sfm/monodepth2, etc)
  3. Generate camera trajectory pose using RGBD SLAM (ORB-SLAM3)
  4. Pass the camera trajectory and the RGB frames to qd-3dt to get the 3D detections.

The camera trajectory from ORB-SLAM3 has the format [timestamp, tx, ty, tz, qx, qy, qz, qw], where (tx, ty, tz) is the translation and (qx, qy, qz, qw) is the orientation as a quaternion. The frame axes for these poses are (z-forward, y-left, x-down).

What coordinate frame does the camera pose need to be in when passed to qd-3dt? I tried rotating the translation vector by 270 degrees in the XZ plane to get an (x-forward, y-right, z-down) frame; however, it does not seem to work. The vehicle trajectory is somehow rendered pointing upwards (screenshot: https://imgur.com/a/cAl3ptD).

Has anyone converted the TUM camera trajectory to work with this project?
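For what it's worth, here is a sketch of the kind of basis change involved. The target frame (x-forward, y-right, z-down) is the goal stated above, not qd-3dt's documented convention, so treat the permutation below as an illustration of the mechanics rather than the answer:

import numpy as np
from scipy.spatial.transform import Rotation

def convert_tum_pose(tx, ty, tz, qx, qy, qz, qw):
    """Map one TUM pose from (x-down, y-left, z-forward) to (x-forward, y-right, z-down)."""
    R_src = Rotation.from_quat([qx, qy, qz, qw]).as_matrix()
    t_src = np.array([tx, ty, tz])
    # Rows express the target axes in source coordinates:
    # forward = z_src, right = -y_src, down = x_src.
    P = np.array([[0., 0., 1.],
                  [0., -1., 0.],
                  [1., 0., 0.]])
    T = np.eye(4)
    T[:3, :3] = P @ R_src @ P.T  # conjugate the rotation into the new basis
    T[:3, 3] = P @ t_src
    return T

Note that the rotation must be conjugated (P R Pᵀ), not merely premultiplied; applying the basis change to the translation alone, as described above, leaves the orientations in the old frame, which would produce exactly the kind of tilted trajectory seen in the screenshot.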

How to render the results into a video?

How can I render the results (all three) into a video? I did not find a method for this in the README files.

KF3D in ablation study

Hi,
Having read your paper, I find that you achieve high performance with the Kalman filter. Could you release the KF3D method?
[screenshot of the ablation-study table]

Thank you

Inferencing the Model

Is there any documentation on how to run inference with this model? If it is available, please share.

Also, is it possible to test on custom 3D point clouds, if an inference script is available for that?

Thanks,

ValueError: current limit exceeds maximum limit

Hi, thanks for the great work.
I am getting the following error:
Traceback (most recent call last):
  File "./tools/test_eval_video_exp.py", line 10, in <module>
    from qd3dt.datasets import build_dataloader, build_dataset
  File "/home/husam/qd-3dt/qd3dt/datasets/__init__.py", line 3, in <module>
    from .loader import GroupSampler, DistributedGroupSampler, build_dataloader
  File "/home/husam/qd-3dt/qd3dt/datasets/loader/__init__.py", line 1, in <module>
    from .build_loader import build_dataloader
  File "/home/husam/qd-3dt/qd3dt/datasets/loader/build_loader.py", line 15, in <module>
    resource.setrlimit(resource.RLIMIT_NOFILE, (65535, rlimit[1]))
ValueError: current limit exceeds maximum limit

when trying to reproduce your results on the KITTI data set using the command:
./scripts/test_eval_exp.sh kitti configs/KITTI/quasi_dla34_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_subtrain_mod_anchor_ratio_small_strides_GTA.py 0 1 --data_split_prefix subval_dla34_regress_GTA_VeloLSTM --add_ablation_exp all

I carefully followed the steps in the GETTING_STARTED.md file and I believe the previous steps succeeded. I don't know why I am getting this error now.
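The traceback points at qd3dt/datasets/loader/build_loader.py line 15, which raises the soft open-file limit to a hard-coded 65535; that call fails whenever the process's hard limit (rlimit[1]) is below 65535. A local workaround, sketched below, is to clamp the request to the hard limit (alternatively, raise the limit in your shell with ulimit -n before launching):

import resource

# Same call as in qd3dt/datasets/loader/build_loader.py, but clamped so the
# requested soft limit never exceeds the OS-reported hard limit.
rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(65535, rlimit[1]), rlimit[1]))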

Expected 88 from C header, got 80 from PyObject

I installed the project following the instructions and prepared the KITTI data only. Pre-trained weights are placed in the related folders.
When I try test mode with the script below:
./scripts/test_eval_exp.sh kitti configs/KITTI/quasi_dla34_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_subtrain_mod_anchor_ratio_small_strides_GTA.py 0 1 --data_split_prefix subval_dla34_regress_GTA_VeloLSTM --add_ablation_exp all

but got the following error:

Traceback (most recent call last):
  File "./tools/test_eval_video_exp.py", line 10, in <module>
    from qd3dt.datasets import build_dataloader, build_dataset
  File "/home/kid/workspace/qd-3dt/qd3dt/datasets/__init__.py", line 1, in <module>
    from .custom import CustomDataset
  File "/home/kid/workspace/qd-3dt/qd3dt/datasets/custom.py", line 12, in <module>
    from .extra_aug import ExtraAugmentation
  File "/home/kid/workspace/qd-3dt/qd3dt/datasets/extra_aug.py", line 5, in <module>
    from qd3dt.core.evaluation.bbox_overlaps import bbox_overlaps
  File "/home/kid/workspace/qd-3dt/qd3dt/core/__init__.py", line 3, in <module>
    from .evaluation import *  # noqa: F401, F403
  File "/home/kid/workspace/qd-3dt/qd3dt/core/evaluation/__init__.py", line 4, in <module>
    from .coco_utils import coco_eval, fast_eval_recall, results2json
  File "/home/kid/workspace/qd-3dt/qd3dt/core/evaluation/coco_utils.py", line 3, in <module>
    from pycocotools.coco import COCO
  File "/home/kid/anaconda3/envs/3dt/lib/python3.7/site-packages/pycocotools/coco.py", line 55, in <module>
    from . import mask as maskUtils
  File "/home/kid/anaconda3/envs/3dt/lib/python3.7/site-packages/pycocotools/mask.py", line 3, in <module>
    import pycocotools._mask as _mask
  File "pycocotools/_mask.pyx", line 1, in init pycocotools._mask
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

My environment is Ubuntu 18, but I don't think this is related to the system version.

How to get tracking_output_train.json and tracking_output_val.json ?

Hi, I am trying to reproduce this code following GETTING_STARTED.md.

However, when I try to run

CUDA_VISIBLE_DEVICES=1 python qd3dt/models/detectrackers/tracker/motion_lstm.py nuscenes train \
    --session batch128_min10_seq10_dim7_VeloLSTM \
    --min_seq_len 10 --seq_len 10 \
    --lstm_model_name VeloLSTM --tracker_model_name KalmanBox3DTracker \
    --input_gt_path data/nuscenes/anns/tracking_train.json \
    --input_pd_path data/nuscenes/anns/tracking_output_train.json \
    --cache_name work_dirs/LSTM/nuscenes_train_pure_det_min10.pkl \
    --loc_dim 7 -b 128 --is_plot --show_freq 500

this command fails because the two JSON files are not there. I have checked all the files and the generation steps, but I still cannot find these two files. Could you release them? That is quite important.

The instructions say: "Soft-link pure detection results under data/${DATASET}/anns as tracking_output_train.json and tracking_output_val.json."

Many thanks for your help if you could update this issue soon; I think many people have this problem as well.
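For reference, the quoted step just means creating symlinks named tracking_output_train.json / tracking_output_val.json that point at the pure-detection output generated earlier (the --pure_det run mentioned elsewhere on this page). A sketch, where the source paths are hypothetical placeholders for wherever that output landed on your machine:

import os

# Hypothetical placeholders: point these at your actual pure-detection outputs.
links = {
    '/path/to/pure_det_output_train.json': 'data/nuscenes/anns/tracking_output_train.json',
    '/path/to/pure_det_output_val.json': 'data/nuscenes/anns/tracking_output_val.json',
}
for src, dst in links.items():
    os.symlink(os.path.abspath(src), dst)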

Interpreting the Output of the QuasiDense3DSepUncertainty Model

I have been attempting to utilize your model with full 3D monocular tracking on custom data, and for that I would like to make use of the inference api. Although I want to use custom data, I am currently trying to run and visualize the model on the nuscenes dataset to verify that the API is working correctly. I am using the included monocular 3D Detection/Tracking result for nuscenes from the model zoo with the corresponding QuasiDense3DSepUncertainty model.

In order to work with the nuscenes configuration of the model, I had to modify the img_meta created in the api during _prepare_data as shown below. I believe this is necessary because this api was originally intended for a different model configuration.

def _prepare_data(img, calib, pose, img_transform, cfg, device):
    ori_shape = img.shape
    img, img_shape, pad_shape, scale_factor = img_transform(
        img,
        scale=cfg.data.test.img_scale,
        keep_ratio=cfg.data.test.get('resize_keep_ratio', True))
    img = to_tensor(img).to(device).unsqueeze(0)
    img_meta = [
        dict(
            ori_shape=ori_shape,
            img_shape=img_shape,
            pad_shape=pad_shape,
            scale_factor=scale_factor,
            flip=False,
            calib=calib,
            pose=pose,
            img_info=dict(
                type="TRK",
                cali=calib,
                pose=pose
            )
        )
    ]
    return dict(img=[img], img_meta=[img_meta])

I am now attempting a 3D visualization of the model output, basing my approach on the scripts/plot_tracking.py code. However, the resulting model output is not what I would expect.

results, use_3d_center = inference_detector(model, img_path, calib, pose, nuscenes_categories)
print(len(results["depth_results"]))
print(len(results["alpha_results"]))
print(results["track_results"])

A common output of this code would look like this:

30
30
defaultdict(<class 'list'>, {0: {'bbox': array([ 427.682,  518.581,  446.410,  540.689,  0.056], dtype=float32), 'label': 8}})

My main issue stems from the fact that track_results always seems to include only one item, whereas tools/general_output.py seems to imply that the number of items should match the length of the other results (depth_results, alpha_results, etc.).

I have found that by associating the 3D information (depth_results, dim_results, alpha_results) with the 2D bbox information output by the model, I can get 3D bboxes that seem to work to an extent, but not of the quality seen when using the inference and detection scripts that read from your converted dataset format. See some examples below:

[example screenshots]

In short, I would appreciate any insight into the direct usage of the QuasiDense3DSepUncertainty model, which doesn't seem to behave as expected when using the api provided in qd3dt/api/inference.py. It seems, based on the code used to run inference in tools/test_eval_video_exp.py and tools/general_output.py, that the track_results returned in the output should have more items, but instead it only outputs one item every time.

Is my assessment of the track_results output correct? What should the track_results output actually look like? Are there any assumptions that this inference API makes that would cause issues when attempting to use it with this model with full 3D tracking?

Thank you for your time and assistance.

How to visualize the result?

Following GETTING_STARTED.md, I reproduced your result on the nuScenes dataset, but I don't know how to visualize it.

Minimalistic inference example

Hi

Nice work. Congrats!

Would it be possible to provide or give directions as to where to find a minimalistic inference example? Something like

  • Install (probably using instructions already provided)
  • Download models (same)
  • Run something like python predict.py -i input_video.mp4 --output results.json --overlay augm_video.mp4, potentially with some extra arguments to locate the pretrained models, and produce results (3D boxes, tracking) plus, optionally but very nice to have, the video with overlays?

Thank you.
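In the meantime, a rough sketch of what such a script's core loop might look like, pieced together from the inference API discussed in the "Interpreting the Output of the QuasiDense3DSepUncertainty Model" issue above (qd3dt/api/inference.py and its inference_detector(model, img, calib, pose, categories) call). How to construct the model, the per-frame calibration/pose inputs, and the category list are all assumptions here, not a confirmed recipe:

import pickle

import cv2

# inference_detector is in qd3dt/api/inference.py per the issue above; it is
# shown there being called with an image path, so passing a decoded frame is
# an assumption and may require writing frames to disk instead.
from qd3dt.api.inference import inference_detector

def run(model, video_path, calib, pose, categories, out_path='results.pkl'):
    """Per-frame loop: read a frame, run detection + tracking, collect results.

    `model` must be a constructed QuasiDense3DSepUncertainty instance
    (construction is deliberately omitted here); `calib`/`pose` must be in
    whatever format the model expects -- both are assumptions.
    """
    cap = cv2.VideoCapture(video_path)
    outputs = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results, use_3d_center = inference_detector(
            model, frame, calib, pose, categories)
        outputs.append(results)  # depth/alpha/dim/track results per frame
    cap.release()
    with open(out_path, 'wb') as f:
        pickle.dump(outputs, f)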

Nuscenes Conversion Process Killed without Error Message

I am currently attempting to reproduce the nuScenes dataset results according to the instructions on the Getting Started page, but the conversion stopped as follows:

Done loading in 44.116 seconds.
======
Reverse indexing ...
Done reverse indexing in 12.9 seconds.
======
total scene num: 850
exist scene num: 850
train scene: 700, val scene: 150
=====
Converting training set
=====
converting CAM_FRONT
100%|█████████████████████████████████████| 34149/34149 [09:24<00:00, 60.53it/s]
converting CAM_FRONT_RIGHT
100%|█████████████████████████████████████| 34149/34149 [08:31<00:00, 66.74it/s]
converting CAM_BACK_RIGHT
100%|█████████████████████████████████████| 34149/34149 [08:18<00:00, 68.45it/s]
converting CAM_BACK
100%|█████████████████████████████████████| 34149/34149 [09:40<00:00, 58.79it/s]
converting CAM_BACK_LEFT
 47%|█████████████████▏                   | 15902/34149 [14:47<59:00,  5.15it/s]
Killed

The lack of any error message makes it unclear what went wrong and how to avoid this error to get the proper conversion. Any idea why this occurs?

RuntimeError: CUDA out of memory. Tried to allocate 84.00 MiB (GPU 0; 3.82 GiB total capacity; 2.37 GiB already allocated; 76.44 MiB free; 2.52 GiB reserved in total by PyTorch)

Hi,
can you please share with us a way to solve this error:

RuntimeError: CUDA out of memory. Tried to allocate 84.00 MiB (GPU 0; 3.82 GiB total capacity; 2.37 GiB already allocated; 76.44 MiB free; 2.52 GiB reserved in total by PyTorch)

At first I thought it might be a compatibility issue, even though the message makes it quite clear that it is simply running out of memory, so nothing along those lines worked for me.
Now I am having a hard time figuring out how to solve it; I would appreciate some help. Thanks

dependencies motmetrics==1.2.0 and nuscenes-devkit==1.1.1 clash

Hi guys,

impressive work! I am in the process of reproducing some results but think I found a dependency issue:

Pip complained, and I verified against the requirements file of nuscenes-devkit 1.1.1: the versions of these two libraries pinned in the requirements file are not compatible. Pip reported:

ERROR: Cannot install -r requirements.txt (line 11) and motmetrics==1.2.0 because these package versions have conflicting dependencies.

The conflict is caused by:
The user requested motmetrics==1.2.0
nuscenes-devkit 1.1.1 depends on motmetrics<=1.1.3

My solution at the moment is to install nuscenes-devkit 1.1.3 instead, but I am not sure yet whether that breaks something. I will update this ticket if I find anything not working.

results on validation set

Hi,

Thanks for your excellent work.

I wonder whether you could provide the inference results on the nuScenes validation set (the .json file, which I think can be produced by running the first part of scripts/run_eval_nusc.sh), in the submission format that can be consumed by the evaluation tools provided by nuscenes-devkit.

That would make it easier for us to visualize your algorithm and study its failure cases, and easier for other people to analyze its strengths and weaknesses.

Best,
Tianyuan

Training on simulation and testing on real-world benchmark

Thank you for sharing your interesting work. Have you tried training your model on the GTA data set and testing on real-world images? It would be interesting to see how a model trained on synthetic data responds to real-world data.

Running the FP16 version

Thanks for the impressive work!
I have a question about how to run the training script with FP16 precision (i.e., how to update the config file accordingly).
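For reference, upstream mmdetection (which this codebase is derived from) enables mixed precision with a one-line config entry picked up by its Fp16OptimizerHook; whether qd3dt kept that hook is an assumption, so treat this as a sketch:

# Added at the top level of the experiment config; loss_scale is illustrative.
fp16 = dict(loss_scale=512.)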

Visualization of the output.json

Hi Roy,

I already have the trained output.json as well as detection_result.json & tracking_result.json for the nuScenes dataset. I am not sure how to visualize these results or the dataset itself. Could you provide some examples in the README for data visualization? That would be greatly appreciated! For example, how do I use plot_tracking.py or plot_utils.py from the command line?

Thank you very much!

which is "gt_folder" to use Plot_tracking.py?

Hi, thanks for providing this great work!

I now have the estimation results for nuScenes under the work_dir folder, as txt files of the detection results. I would like to use plot_tracking.py to plot them. From reading the file, the command should be:

python plot_tracking --dataset --gt_folder --res_folder

For dataset I use nuscenes, and for res_folder I use qd-3dt-main/work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/output_val_box3d_deep_depth_motion_lstm_3dcen/txts

However, I don't know which folder I should use as gt_folder.

When I use the dataset folder, category.json, such as:
qd-3dt-main/work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/output_val_box3d_deep_depth_motion_lstm_3dcen/txts

I get an error like:

File "/media/data1/yanran/ObjectsDetection/qd-3dt-main/scripts/object_ap_eval/coco_format.py", line 125, in read_file
for cat_dict in coco_annos['categories']
TypeError: list indices must be integers or slices, not str


Could you show me how to use plot_tracking.py and plot_utils.py? For example:

python plot_tracking --dataset --gt_folder --res_folder

Which folder should I use as gt_folder for the nuScenes dataset?

I hope to get an answer soon for our project. If you could update the README, that would be a great help to me and many other followers. Thank you very much! :)

ModuleNotFoundError: No module named 'qd3dt.version'

  • dataset=nuscenes
  • config_path=configs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter.py
  • gpu_ids=0
  • gpu_nums=1
  • PY_ARGS='--data_split_prefix train --pure_det'
  • root=.
    ++ dirname Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter.py
  • folder=work_dirs/Nusc
    ++ basename -s .py configs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter.py
  • config=quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter
  • cd mmcv
    ++ pwd
  • export PYTHONPATH=/root/docker2/qd-3dt/mmcv:
  • PYTHONPATH=/root/docker2/qd-3dt/mmcv:
  • cd ..
  • CUDA_VISIBLE_DEVICES=0
  • python3 -u ./tools/test_eval_video_exp.py nuscenes configs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter.py ./work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/latest.pth ./work_dirs/Nusc/quasi_r101_dcn_3dmatch_multibranch_conv_dep_dim_cen_clsrot_sep_aug_confidence_scale_no_filter/output/output.pkl --data_split_prefix train --pure_det
    Traceback (most recent call last):
      File "./tools/test_eval_video_exp.py", line 10, in <module>
        from qd3dt.datasets import build_dataloader, build_dataset
      File "/root/docker2/qd-3dt/qd3dt/__init__.py", line 1, in <module>
        from .version import __version__, short_version
    ModuleNotFoundError: No module named 'qd3dt.version'

About DistributedDataParallel

Hi,
I can see that the source code only uses non-distributed training, even when training with multiple GPUs.
Is there a particular reason you use non-distributed training?

bash install.sh error

qd-3dt/qd3dt/ops/roi_align/src/roi_align_kernel.cu:3:10: fatal error: ATen/cuda/Atomic.cuh: No such file or directory
 #include <ATen/cuda/Atomic.cuh>
          ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/user/.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1672, in _run_ninja_build
    env=env)
  File "/home/user/anaconda3/envs/qdt_cuda11/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

I got this error during installation, following your new requirements (torch 1.12.0).
Do you know of any solution?
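ATen/cuda/Atomic.cuh ships inside recent PyTorch installs (older releases had THC/THCAtomics.cuh instead), so one hedged sanity check is to confirm which torch the build is actually seeing and whether that install contains the header:

import os
import torch

print(torch.__version__, torch.version.cuda)
torch_dir = os.path.dirname(torch.__file__)
hdr = os.path.join(torch_dir, 'include', 'ATen', 'cuda', 'Atomic.cuh')
# False here in a supposedly torch==1.12.0 env suggests the extension build
# is picking up a different, older torch than the one imported above.
print(hdr, os.path.exists(hdr))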
