mvp_benchmark's People

Contributors

caizhongang, liuziwei7, paul007pl, wutong16

mvp_benchmark's Issues

Unable to make submission in point cloud completion

I uploaded my submission file, but the upload is not reflected in the leaderboard after waiting for more than 2 hours. My zip file is 1.25 GB and has the required format. Since this is my first submission, I'm not sure how long an upload generally takes to complete or whether my submission file size is normal. That said, my internet connection is good enough to upload a file of that size in the elapsed time. Is there an ongoing submission issue, or is the problem on my end? Please help me out.

TypeError: zip argument #1 must support iteration

At the end of training, when entering the eval() function for testing, the following problem occurred:

return type(out)(map(gather_map, zip(*outputs)))
TypeError: zip argument #1 must support iteration

Here are the details of the training run:

`INFO:root:Munch({'batch_size': 32, 'workers': 0, 'nepoch': 100, 'model_name': 'vrcnet', 'load_model': None, 'start_epoch': 0, 'num_points': 2048, 'work_dir': 'log/', 'flag': 'debug', 'loss': 'cd', 'manual_seed': None, 'use_mean_feature': False, 'step_interval_to_print': 500, 'epoch_interval_to_save': 1, 'epoch_interval_to_val': 1, 'varying_constant': '0.01, 0.1, 0.5, 1', 'varying_constant_epochs': '5, 15, 30', 'lr': 0.0001, 'lr_decay': True, 'lr_decay_interval': 40, 'lr_decay_rate': 0.7, 'lr_step_decay_epochs': None, 'lr_step_decay_rates': None, 'lr_clip': 1e-06, 'optimizer': 'Adam', 'weight_decay': 0, 'betas': '0.9, 0.999', 'layers': '1, 1, 1, 1', 'distribution_loss': 'KLD', 'knn_list': '16', 'pk': 10, 'local_folding': True, 'points_label': True, 'num_coarse_raw': 1024, 'num_fps': 2048, 'num_coarse': 2048, 'save_vis': False, 'eval_emd': False})
(62400, 2048, 3)
(2400, 2048, 3) (62400,)
(41600, 2048, 3)
(1600, 2048, 3) (41600,)
INFO:root:Length of train dataset:62400
INFO:root:Length of test dataset:41600
INFO:root:Random Seed: 785
Jitting Chamfer 3D
Loaded JIT 3D CUDA chamfer distance
Loaded JIT 3D CUDA emd
INFO:root:vrcnet_cd_debug_2021-08-08T14:50:26 train [0: 0/1950]  loss_type: cd, fine_loss: 0.183416 total_loss: 4.883119 lr: 0.000100 alpha: 0.01
INFO:root:vrcnet_cd_debug_2021-08-08T14:50:26 train [0: 500/1950]  loss_type: cd, fine_loss: 0.041089 total_loss: 0.644301 lr: 0.000100 alpha: 0.01
INFO:root:vrcnet_cd_debug_2021-08-08T14:50:26 train [0: 1000/1950]  loss_type: cd, fine_loss: 0.039373 total_loss: 0.594741 lr: 0.000100 alpha: 0.01
INFO:root:vrcnet_cd_debug_2021-08-08T14:50:26 train [0: 1500/1950]  loss_type: cd, fine_loss: 0.034346 total_loss: 0.527504 lr: 0.000100 alpha: 0.01
INFO:root:Saving net...
INFO:root:Testing...
Traceback (most recent call last):
  File "/home/zhjp/project/MVP_Benchmark/completion/train.py", line 214, in <module>
    train()
  File "/home/zhjp/project/MVP_Benchmark/completion/train.py", line 153, in train
    val(net, epoch, val_loss_meters, dataloader_test, best_epoch_losses)
  File "/home/zhjp/project/MVP_Benchmark/completion/train.py", line 171, in val
    result_dict = net(inputs, gt, prefix="val")
  File "/home/zhjp/miniconda3/envs/mvp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zhjp/miniconda3/envs/mvp/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward
    return self.gather(outputs, self.output_device)
  File "/home/zhjp/miniconda3/envs/mvp/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather
    return gather(outputs, output_device, dim=self.dim)
  File "/home/zhjp/miniconda3/envs/mvp/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
    res = gather_map(outputs)
  File "/home/zhjp/miniconda3/envs/mvp/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
    for k in out))
  File "/home/zhjp/miniconda3/envs/mvp/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr>
    for k in out))
  File "/home/zhjp/miniconda3/envs/mvp/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
    return type(out)(map(gather_map, zip(*outputs)))
TypeError: zip argument #1 must support iteration

Process finished with exit code 1

I trained with four GPUs. From what I found online, this problem occurs with multi-GPU training, but after searching for a long time I could not find a working solution.

inputs = inputs.float().cuda()
gt = gt.float().cuda()
inputs = inputs.transpose(2, 1).contiguous()
result_dict = net(inputs, gt, prefix="val")   # this is the line that fails
for k, v in val_loss_meters.items():
    v.update(result_dict[k].mean().item(), curr_batch_size)
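
A hedged workaround sketch (not from the repository): DataParallel's gather step raises this TypeError when the dict returned by forward() contains values it cannot concatenate across GPU replicas (e.g. None or plain Python scalars in "val" mode). One way around it is to bypass the gather by calling the wrapped module directly on a single GPU during validation. The helper name below is hypothetical; it assumes net is an nn.DataParallel wrapper that accepts (inputs, gt, prefix=...) as in the snippet above.

```python
import torch

def val_single_gpu(net, inputs, gt):
    # Unwrap DataParallel so validation runs on one GPU and never reaches the
    # gather step that raises "zip argument #1 must support iteration".
    model = net.module if isinstance(net, torch.nn.DataParallel) else net
    with torch.no_grad():
        inputs = inputs.float().cuda()
        gt = gt.float().cuda()
        inputs = inputs.transpose(2, 1).contiguous()
        return model(inputs, gt, prefix="val")
```

Alternatively, making forward() return a tensor (rather than None or a Python scalar) for every key of result_dict in "val" mode should also let the gather succeed.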

Benchmark with DCP, DeepGMR, and IDAM

Hi,
Thanks for the terrific dataset! Since several methods are provided in this repo, I am wondering if there are any existing results with these methods, i.e. DCP, DeepGMR, and IDAM?
Thanks in advance,
Jessy

How to convert training data to ply file

Can anyone please give a hint on how to convert each row of complete_pcds into a PLY file, with the aim of eventually viewing it as a 3D model? Any suggestion on how to visualize these training files as 3D models would be valuable. Thanks.
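
A minimal sketch of one way to do this with h5py and Open3D; the file name "MVP_Train_CP.h5" is illustrative (point it at the completion training file you downloaded), and the dataset key complete_pcds follows the description above.

```python
import h5py
import numpy as np
import open3d as o3d

# Illustrative file name for the MVP completion training data.
with h5py.File("MVP_Train_CP.h5", "r") as f:
    points = np.asarray(f["complete_pcds"][0])  # one complete point cloud, shape (2048, 3)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
o3d.io.write_point_cloud("complete_0.ply", pcd)   # save as a PLY file
o3d.visualization.draw_geometries([pcd])          # quick interactive 3D view
```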

Training on multiple gpus

Dear Authors,

I noticed that you train the completion model "using the Adam optimizer with initial learning rate 1e−4
(decayed by 0.7 every 40 epochs) and batch size 32 by NVIDIA TITAN Xp GPU", as mentioned in your paper. Did you mean that you used a single GPU for training, or multiple GPUs?

Thanks for your work; I look forward to your reply.

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab repos and their 1.0 / 2.0 branches:

| Project | OpenMMLab 1.0 branch | OpenMMLab 2.0 branch |
| --- | --- | --- |
| MMEngine | | 0.x |
| MMCV | 1.x | 2.x |
| MMDetection | 0.x, 1.x, 2.x | 3.x |
| MMAction2 | 0.x | 1.x |
| MMClassification | 0.x | 1.x |
| MMSegmentation | 0.x | 1.x |
| MMDetection3D | 0.x | 1.x |
| MMEditing | 0.x | 1.x |
| MMPose | 0.x | 1.x |
| MMDeploy | 0.x | 1.x |
| MMTracking | 0.x | 1.x |
| MMOCR | 0.x | 1.x |
| MMRazor | 0.x | 1.x |
| MMSelfSup | 0.x | 1.x |
| MMRotate | 1.x | 1.x |
| MMYOLO | | 0.x |

Attention: please create a new virtual environment for OpenMMLab 2.0.

Questions about pre-trained weight and so on

Hello, I have three questions about this competition.

  1. Will it be against the rules to use pre-trained weights from other methods, like "PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers"? The result could be better if I train the network based on the pre-trained PCN/C3D weights.
  2. Is the submission on the competition website still effective during the private submission stage?
  3. Can we only give the private submission without showing it on the leaderboard?

Thank you very much!

Missing super resolution dataset

Hi,

thanks for releasing the code and dataset. However, in the paper you provide results at different resolutions (2,048, 4,096, 8,192 and 16,384 points), which are not present in the currently released dataset. Why have they been removed?

Thanks
Riccardo

cascade_gan

Could you please provide the code for cascade_gan?

Is it possible to provide your script for rendering partial point clouds (or some hints)?

Hi, thank you for your great work! I have also recently been trying to render partial point clouds from meshes in my own dataset. I'm using pyrender, but it turns out to be very slow. Would it be possible for you to provide the rendering script? Or could you share how you rendered the partial point clouds (e.g. what software/library you used) and how long it took? Thanks!
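
One possible approach, sketched below, is to cast camera rays against the mesh with Open3D's RaycastingScene (available in recent Open3D releases), which is usually much faster than a full renderer such as pyrender. This is not the authors' pipeline; the mesh path and camera parameters are illustrative.

```python
import open3d as o3d

# Load a mesh and build a raycasting scene from it.
mesh = o3d.io.read_triangle_mesh("model.obj")  # illustrative path
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh))

# Pinhole camera looking at the origin; only surfaces visible from this
# viewpoint produce hits, which yields a partial (self-occluded) point cloud.
rays = o3d.t.geometry.RaycastingScene.create_rays_pinhole(
    fov_deg=60.0, center=[0, 0, 0], eye=[0, 0, 2], up=[0, 1, 0],
    width_px=640, height_px=480)
ans = scene.cast_rays(rays)

# Keep only rays that hit the mesh and reconstruct the 3D hit points.
hit = ans['t_hit'].isfinite()
points = rays[hit][:, :3] + rays[hit][:, 3:] * ans['t_hit'][hit].reshape((-1, 1))
pcd = o3d.t.geometry.PointCloud(points)
o3d.io.write_point_cloud("partial.ply", pcd.to_legacy())
```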

ERROR: Could not build wheels for pycocotools, which is required to install pyproject.toml-based projects

While trying to run pip install git+https://github.com/open-mmlab/mmdetection.git
in the mm3d_pn2 directory, I am getting the following error. How can I fix this?

$ pip install git+https://github.com/open-mmlab/mmdetection.git
Collecting git+https://github.com/open-mmlab/mmdetection.git
Cloning https://github.com/open-mmlab/mmdetection.git to c:\users\sshre34\appdata\local\temp\pip-req-build-gmxamwua
Running command git clone --filter=blob:none --quiet https://github.com/open-mmlab/mmdetection.git 'C:\Users\sshre34\AppData\Local\Temp\pip-req-build-gmxamwua'
Resolved https://github.com/open-mmlab/mmdetection.git to commit 94afcdc6aa1e2c849c9d8334ec7730bb52b17922
Preparing metadata (setup.py) ... done
Requirement already satisfied: matplotlib in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from mmdet==2.27.0) (3.5.3)
Requirement already satisfied: numpy in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from mmdet==2.27.0) (1.21.5)
Collecting pycocotools
Using cached pycocotools-2.0.6.tar.gz (24 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: scipy in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from mmdet==2.27.0) (1.7.3)
Requirement already satisfied: six in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from mmdet==2.27.0) (1.16.0)
Collecting terminaltables
Using cached terminaltables-3.1.10-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from matplotlib->mmdet==2.27.0) (2.8.2)
Requirement already satisfied: pillow>=6.2.0 in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from matplotlib->mmdet==2.27.0) (9.3.0)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from matplotlib->mmdet==2.27.0) (4.38.0)
Requirement already satisfied: packaging>=20.0 in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from matplotlib->mmdet==2.27.0) (22.0)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from matplotlib->mmdet==2.27.0) (1.4.4)
Requirement already satisfied: cycler>=0.10 in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from matplotlib->mmdet==2.27.0) (0.11.0)
Requirement already satisfied: pyparsing>=2.2.1 in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from matplotlib->mmdet==2.27.0) (3.0.9)
Requirement already satisfied: typing-extensions in c:\users\sshre34\anaconda3\envs\mvp\lib\site-packages (from kiwisolver>=1.0.1->matplotlib->mmdet==2.27.0) (4.4.0)
Building wheels for collected packages: mmdet, pycocotools
Building wheel for mmdet (setup.py) ... done
Created wheel for mmdet: filename=mmdet-2.27.0-py3-none-any.whl size=1468682 sha256=ee7389458986d8bcfb5c5f486cee48393c239c2200caf362b0b4d31e5a6348ff
Stored in directory: C:\Users\sshre34\AppData\Local\Temp\pip-ephem-wheel-cache-iu2x_jfa\wheels\8d\1d\4c\5ba147e9294578f513772158a509161804e08d1b5f62e95705
Building wheel for pycocotools (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for pycocotools (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [16 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-37
creating build\lib.win-amd64-cpython-37\pycocotools
copying pycocotools\coco.py -> build\lib.win-amd64-cpython-37\pycocotools
copying pycocotools\cocoeval.py -> build\lib.win-amd64-cpython-37\pycocotools
copying pycocotools\mask.py -> build\lib.win-amd64-cpython-37\pycocotools
copying pycocotools\__init__.py -> build\lib.win-amd64-cpython-37\pycocotools
running build_ext
cythoning pycocotools/_mask.pyx to pycocotools_mask.c
building 'pycocotools._mask' extension
C:\Users\sshre34\AppData\Local\Temp\pip-build-env-02b_xbrm\overlay\Lib\site-packages\Cython\Compiler\Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: C:\Users\sshre34\AppData\Local\Temp\pip-install-3k0didqo\pycocotools_c99855141cd14b4c944e7ec99dcb18aa\pycocotools_mask.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycocotools
Successfully built mmdet
Failed to build pycocotools
ERROR: Could not build wheels for pycocotools, which is required to install pyproject.toml-based projects

About the trained model for baseline methods

Hi, @paul007pl ,

Thanks for releasing the code for the benchmark and baseline models. Do you also provide the trained models (checkpoints) for the baseline methods (e.g. [1] PCN; [2] ECG; [3] VRCNet)? It would be useful for making consistent comparisons.

Thanks~

ImportError: cannot import name 'ball_query_ext' from 'mm3d_pn2.ops.ball_query' (urgent, waiting online)

When I run python train.py -c ./cfgs/vcrnet.yaml

the error message is: ImportError: cannot import name 'ball_query_ext' from 'mm3d_pn2.ops.ball_query'

Below are the contents of the mm3d_pn2/ops/ball_query directory; there is indeed no ball_query_ext file in it:

utils
metrics
mm3d_pn2
ops
ball_query
src
ball_query.cpp
ball_query_cuda.cu
__init__.py
ball_query.py

The file ball_query.py contains this import statement:

import torch
from torch.autograd import Function

from . import ball_query_ext  # this module does not exist in the current directory; how can this error be fixed?


class BallQuery(Function):
    """Ball Query.

    Find nearby points in spherical space.
    """

    @staticmethod
    def forward(ctx, min_radius: float, max_radius: float, sample_num: int,
                xyz: torch.Tensor, center_xyz: torch.Tensor) -> torch.Tensor:
            ................
            ...................
            .................

from . import ball_query_ext  # there is no ball_query_ext.py module file in the current directory; how can I resolve this error?
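
A hedged note: ball_query_ext is not a .py file but a compiled CUDA extension module, and it is normally produced when the mm3d_pn2 ops are built (e.g. by running pip install -v -e . inside utils/mm3d_pn2, the same step discussed in the installation issue further below, whose log shows extensions such as ops.spconv.sparse_conv_ext being compiled). If that build step failed or was skipped, the *_ext modules will be missing and this ImportError appears, so re-running the build and checking its output for compilation errors is usually the first thing to try.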

Some problems about private test

  1. Is the dataset for the private test the same as the dataset for the public test?
  2. How do we get feedback on the private test?
  3. What is the experimental environment of the private test (both software and hardware), and do we need to configure it ourselves?

About the GPU usage and training time of several methods in registration.

Hello,
I tried to test the point cloud registration code in the benchmark, but I found that GPU utilization for these methods is very low (jumping between 0% and 20%) and the training process is very slow. I used multiple GPUs; did you use a single GPU for training, or more?
Could you share your hardware environment and the approximate time required for a full training run, or for one epoch?
I'm not sure whether the problem is my environment or the code; I'd appreciate your help.

Two test environments: a) CUDA 10.2, 2080 Ti GPU, PyTorch 1.5.0; b) CUDA 11.1, 3090 GPU, PyTorch 1.7.0.

Tensor full of nan during the train of vrcnet

Hi,
thanks for your work!
During the training of the vrcnet, the following problem occurs:

[screenshot: NaN values during vrcnet training]

Specifically, the problem occurs within the function edge_preserve_sampling, which is called inside the forward of SA_SKN_Res_encoder:

ds_features, p_idx, pn_idx, ds_points = edge_preserve_sampling(features, points, sample_num, k)

and the feature tensors become full of NaNs.
Is there a way to solve this problem? Did I do something wrong during training?
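
A hedged debugging sketch (not from the repository): enabling autograd anomaly detection and asserting that tensors are finite around the sampling call can narrow down where the values first blow up. The assert_finite helper is hypothetical; the commented lines show where it could sit relative to the edge_preserve_sampling call quoted above.

```python
import torch

torch.autograd.set_detect_anomaly(True)  # report the op that produced NaN/Inf during backward

def assert_finite(name, tensor):
    # Hypothetical helper: fail fast as soon as a tensor contains NaN/Inf.
    if not torch.isfinite(tensor).all():
        raise RuntimeError(f"{name} contains NaN/Inf values")

# Inside SA_SKN_Res_encoder.forward, around the sampling call:
# assert_finite("features", features)
# ds_features, p_idx, pn_idx, ds_points = edge_preserve_sampling(features, points, sample_num, k)
# assert_finite("ds_features", ds_features)
```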

Error with function vis_utils.plot_single_pcd

Hi,

Thank you for the amazing work. I tried to use the function plot_single_pcd to plot a point cloud from the training dataset, and I got the following error:


NotImplementedError                 Traceback (most recent call last)
<ipython-input-12-768b3faf1813> in <module>
----> 1 plot_single_pcd(Input['incomplete_pcds'][1],'incomplete.png')

~/internship/MVP_Benchmark/completion/vis_utils.py in plot_single_pcd(points, save_path)
     36     fig = plt.figure()
     37     ax = fig.add_subplot(111, projection='3d')
---> 38     ax.set_aspect('equal')
     39     pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
     40     rotation_matrix = np.asarray([[1, 0, 0, 0], [0, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

~/anaconda3/lib/python3.8/site-packages/mpl_toolkits/mplot3d/axes3d.py in set_aspect(self, aspect, adjustable, anchor, share)
    321         """
    322         if aspect != 'auto':
--> 323             raise NotImplementedError(
    324                 "Axes3D currently only supports the aspect argument "
    325                 f"'auto'. You passed in {aspect!r}."

NotImplementedError: Axes3D currently only supports the aspect argument 'auto'. You passed in 'equal'.

Then I changed 'equal' to 'auto', and the error disappeared. Seems like a bug?
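
For reference, a minimal sketch of the change described above (assuming Matplotlib 3.1 or newer, where set_aspect('equal') on 3D axes raises NotImplementedError); only the set_aspect call in plot_single_pcd needs to change.

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (only needed on older Matplotlib)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_aspect('auto')  # was 'equal'; newer Matplotlib only supports 'auto' for 3D axes
```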

About MVP-40 Dataset.

Thanks for your excellent work.
I have a request: could you provide the dataset? Thank you very much.

How to register as a team?

Hello,
I tried to register and participate, but I did not find an option to register as a team. How many members can a team contain?

about train.log

Thank you for your amazing work! I have met a problem that I cannot solve; could you give me some advice?
Traceback (most recent call last):
  File "/raid/MVP_Benchmark/completion/train.py", line 211, in <module>
    logging.basicConfig(level=logging.INFO, handlers=[logging.FileHandler(os.path.join(log_dir, 'train.log')),
  File "/home/anaconda3/envs/mvp/lib/python3.7/logging/__init__.py", line 1087, in __init__
    StreamHandler.__init__(self, self._open())
  File "/home/anaconda3/envs/mvp/lib/python3.7/logging/__init__.py", line 1116, in _open
    return open(self.baseFilename, self.mode, encoding=self.encoding)
FileNotFoundError: [Errno 2] No such file or directory: '/raid/jiangzw/MVP_Benchmark/completion/log/vrcnet_cd_debug_2021-06-18T16:36:20/train.log'
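
A hedged workaround sketch: the FileHandler fails because the run's log directory does not exist when logging is configured, so creating it first avoids the error. The path below is illustrative (it mirrors the traceback); in train.py the same makedirs call would go right before logging.basicConfig.

```python
import logging
import os

log_dir = "log/vrcnet_cd_debug_2021-06-18T16:36:20"  # illustrative run directory
os.makedirs(log_dir, exist_ok=True)                  # make sure the directory exists first
logging.basicConfig(
    level=logging.INFO,
    handlers=[logging.FileHandler(os.path.join(log_dir, "train.log")),
              logging.StreamHandler()])
```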

doesn't support CUDA >=11.0

Hi, @paul007pl ,

According to setup.sh, CUDA 10.1 is used. However, my GPU (an RTX 30xx series card) only supports CUDA >= 11.0. The error output after running the command python train.py -c ./cfgs/pcn.yaml is shown below; it is related to the CUDA version problem.

root@milton-LabPC:/data/code13/MVP_Benchmark/completion# python train.py -c ./cfgs/pcn.yaml
INFO:root:Munch({'batch_size': 32, 'workers': 0, 'nepoch': 100, 'model_name': 'pcn', 'load_model': None, 'start_epoch': 0, 'num_points': 2048, 'work_dir': 'log/', 'flag': 'debug', 'loss': 'cd', 'manual_seed': None, 'use_mean_feature': False, 'step_interval_to_print': 500, 'epoch_interval_to_save': 1, 'epoch_interval_to_val': 1, 'varying_constant': '0.01, 0.1, 0.5, 1', 'varying_constant_epochs': '5, 15, 30', 'lr': 0.0001, 'lr_decay': True, 'lr_decay_interval': 40, 'lr_decay_rate': 0.7, 'lr_step_decay_epochs': None, 'lr_step_decay_rates': None, 'lr_clip': 1e-06, 'optimizer': 'Adam', 'weight_decay': 0, 'betas': '0.9, 0.999', 'save_vis': True, 'eval_emd': False})
(62400, 2048, 3)
(2400, 2048, 3) (62400,)
(41600, 2048, 3)
(1600, 2048, 3) (41600,)
INFO:root:Length of train dataset:62400
INFO:root:Length of test dataset:41600
INFO:root:Random Seed: 6693
Jitting Chamfer 3D
Traceback (most recent call last):
  File "train.py", line 213, in <module>
    train()
  File "train.py", line 48, in train
    model_module = importlib.import_module('.%s' % args.model_name, 'models')
  File "/root/anaconda3/envs/pytorch1.5_4d_pls/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/media/root/mdata/data/code13/MVP_Benchmark/completion/models/pcn.py", line 10, in <module>
    from model_utils import gen_grid_up, calc_emd, calc_cd
  File "/media/root/mdata/data/code13/MVP_Benchmark/completion/model_utils.py", line 20, in <module>
    from metrics import cd, fscore, emd
  File "../utils/metrics/__init__.py", line 1, in <module>
    from .CD import (cd, fscore)
  File "../utils/metrics/CD/__init__.py", line 1, in <module>
    from .chamfer3D.dist_chamfer_3D import chamfer_3DDist as cd
  File "../utils/metrics/CD/chamfer3D/dist_chamfer_3D.py", line 15, in <module>
    "/".join(os.path.abspath(__file__).split('/')[:-1] + ["chamfer3D.cu"]),
  File "/root/anaconda3/envs/pytorch1.5_4d_pls/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 898, in load
    is_python_module)
  File "/root/anaconda3/envs/pytorch1.5_4d_pls/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1086, in _jit_compile
    with_cuda=with_cuda)
  File "/root/anaconda3/envs/pytorch1.5_4d_pls/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1179, in _write_ninja_file_and_build_library
    with_cuda=with_cuda)
  File "/root/anaconda3/envs/pytorch1.5_4d_pls/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1469, in _write_ninja_file_to_build_library
    cuda_flags = common_cflags + COMMON_NVCC_FLAGS + _get_cuda_arch_flags()
  File "/root/anaconda3/envs/pytorch1.5_4d_pls/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1316, in _get_cuda_arch_flags
    raise ValueError("Unknown CUDA arch ({}) or GPU not supported".format(arch))
ValueError: Unknown CUDA arch (8.6) or GPU not supported

Any suggestions to fix this issue?

Thanks~
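
A hedged diagnostic sketch (not from the repository): the error means the installed PyTorch/CUDA toolchain cannot target the sm_86 architecture of RTX 30xx GPUs; PyTorch 1.5 with CUDA 10.1 predates Ampere support, so a PyTorch build compiled against CUDA 11.x is usually needed before the JIT Chamfer/EMD extensions will compile. The checks below use standard torch APIs (get_arch_list is only available in newer PyTorch releases).

```python
import torch

print(torch.version.cuda)                  # CUDA version the PyTorch build was compiled with, e.g. '11.1'
print(torch.cuda.get_device_capability())  # expected (8, 6) on an RTX 30xx GPU
# 'sm_86' must appear in this list for JIT-compiled CUDA extensions to target an Ampere GPU.
print(torch.cuda.get_arch_list())
```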

Questions about installation

Thanks for your great work!
There is a problem when I run pip install -v -e . under /utils/mm3d_pn2.
Running setup.py develop for mmdet3d
    Running command /home/lq/anaconda3/envs/mvp/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/setup.py'"'"'; __file__='"'"'/home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup;setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
    running develop
    running egg_info
    writing mmdet3d.egg-info/PKG-INFO
    writing dependency_links to mmdet3d.egg-info/dependency_links.txt
    writing requirements to mmdet3d.egg-info/requires.txt
    writing top-level names to mmdet3d.egg-info/top_level.txt
    listing git files failed - pretending there aren't any
    reading manifest file 'mmdet3d.egg-info/SOURCES.txt'
    writing manifest file 'mmdet3d.egg-info/SOURCES.txt'
    running build_ext
    building 'ops.spconv.sparse_conv_ext' extension
    Emitting ninja build file /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/build.ninja...
    Compiling objects...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    1.10.2.git.kitware.jobserver-1
    g++ -pthread -shared -B /home/lq/anaconda3/envs/mvp/compiler_compat -L/home/lq/anaconda3/envs/mvp/lib -Wl,-rpath=/home/lq/anaconda3/envs/mvp/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/all.o /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/reordering.o /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/reordering_cuda.o /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/indice.o /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/indice_cuda.o /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/maxpool.o /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/maxpool_cuda.o -L/home/lq/anaconda3/envs/mvp/lib/python3.7/site-packages/torch/lib -L/usr/local/cuda-10.1/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.7/ops/spconv/sparse_conv_ext.cpython-37m-x86_64-linux-gnu.so
    g++: error: /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/all.o: No such file or directory
    g++: error: /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/reordering.o: No such file or directory
    g++: error: /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/reordering_cuda.o: No such file or directory
    g++: error: /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/indice.o: No such file or directory
    g++: error: /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/indice_cuda.o: No such file or directory
    g++: error: /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/maxpool.o: No such file or directory
    g++: error: /home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/build/temp.linux-x86_64-3.7/ops/spconv/src/maxpool_cuda.o: No such file or directory
    error: command 'g++' failed with exit status 1
ERROR: Command errored out with exit status 1: /home/lq/anaconda3/envs/mvp/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/setup.py'"'"'; __file__='"'"'/home/lq/New_p/MVP_Benchmark-main/utils/mm3d_pn2/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
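
A hedged observation on the log above: the g++ link step fails because the listed .o object files were never produced, which usually means the earlier nvcc/ninja compilation of the CUDA sources failed. Re-running the build with a single worker (MAX_JOBS=1, the environment variable mentioned in the ninja message above) typically surfaces the underlying compiler error, which is often a CUDA/compiler version mismatch.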
