
pointnet2_pytorch's People

Contributors

amp180, chenzhutian, erikwijmans, innovarul, lejafar, matinjugou, mfxox, noahstier, rig8f, wassname


pointnet2_pytorch's Issues

Using additional Features like Normals or Intensity

Hello,
I would like to classify 3D data using additional features, such as normals (Nx3) or intensity (Nx1), to improve my results.

What do I have to change, besides the number of input_channels for the model and the data loader?
I have already updated the dataset loader, whose __getitem__ method now returns the points, label, normals and intensity, but I don't know how to change the loss calculation. In particular, how do I unwrap the batch into inputs and labels in the model_fn?

Greetings
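For what it's worth, a minimal sketch of one way to wire this up. Everything here is hypothetical: the batch layout assumes the updated __getitem__ returns (points, normals, intensity, label), and the model is assumed to take a single (B, N, 3 + C) tensor as the models in this repo do.

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()

    def model_fn(model, batch):
        # Hypothetical batch layout from the updated dataset:
        # points (B, N, 3), normals (B, N, 3), intensity (B, N, 1), labels (B,) or (B, N)
        points, normals, intensity, labels = batch
        # Concatenate the extra per-point features onto xyz -> (B, N, 3 + 4),
        # matching a model built with input_channels=4 (3 normal + 1 intensity channels).
        inputs = torch.cat([points, normals, intensity], dim=-1).cuda()
        labels = labels.cuda()
        preds = model(inputs)
        loss = criterion(preds, labels)
        return preds, loss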

error when build _ext module

Hello,
Thanks for updating the pytorch 1.0 version.
My environment is:
python 2.7, pytorch 1.0, Ubuntu 16.04, Cuda 9.0.176, cudnn 7.1.3, gcc 5.5, Nvidia GPU driver: 390.87

I got the following errors when running 'python setup.py build_ext --inplace':

/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9220): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9231): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9244): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9255): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9268): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9279): error: argument of type "const void *" is incompatible with parameter of type "const float *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9292): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9303): error: argument of type "const void *" is incompatible with parameter of type "const double *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9316): error: argument of type "const void *" is incompatible with parameter of type "const int *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9401): error: argument of type "const void *" is incompatible with parameter of type "const long long *"
/usr/lib/gcc/x86_64-linux-gnu/5/include/avx512fintrin.h(9410): error: argument of type "void *" is incompatible with parameter of type "float *"
...

I am guessing it is caused by a gcc or CUDA version mismatch.
May I know your environment, or do you have any suggestions?
Thanks in advance.

performance degradation after using higher dimensional data

Hi,
I'm trying to find edges in point clouds. First I down-sampled my dataset to 5k-point clouds and got the following performance:

precision 0.9722454458109377
recall 0.9774251897764888
specificity 0.9976463102707986
F1 score 0.9748284372091011

Then I tried down-sampling my dataset to 10k-point clouds and was surprised that the performance decreased immensely:
precision 0.6414813848515061
recall 0.4429522551934075
specificity 0.9649306344666823
F1 score 0.5240442855918716

I should add that in the 10k dataset I kept all my edge vertices. That means that instead of having one edge vertex out of every 30 vertices, I now have one edge vertex out of every 10 vertices. (I'm not sure the numbers are 100% correct, but the main point is that I used to have very, very few edge vertices and now I have slightly more of them in the samples I feed the algorithm.)

Any idea what could cause the performance degradation? I would have expected that with more data and more positive vertices the algorithm would learn better.

If I am not mistaken, all I had to change in the code was the "num_points" in the train function and loading the 10k instead of the 5k data.

Any help would be much appreciated
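Not an answer from the repo, but one thing worth ruling out with this kind of class imbalance is an unweighted loss; a minimal sketch with made-up inverse-frequency weights (estimate them from your actual label counts):

    import torch
    import torch.nn as nn

    # Hypothetical weights for [non-edge, edge], roughly inverse to the ~1-in-10 edge frequency.
    class_weights = torch.tensor([1.0, 9.0]).cuda()
    criterion = nn.CrossEntropyLoss(weight=class_weights)

    # preds: (B, num_classes, N) per-point logits, labels: (B, N) int64 targets
    # loss = criterion(preds, labels)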

Training on ScanNet

Hi all,

has anybody tried to apply the network on ScanNet data?
I am currently trying to achieve similar performance as mentioned in PointNet++ paper, which is around 83% accuracy. However, when training with this model, I can hardly achieve values greater than 60%.

I tried using the same hyperparameters as in the original PointNet++ setting (i.e. changed lr to 1e-3).

Any suggestions or help would be appreciated.

Best,
Sophia

ImportError: No module named 'pointnet2.utils._ext'

Hello, I ran into this problem :)

Traceback (most recent call last):
File "E:/Github/Pointnet2_PyTorch/pointnet2/train/train_sem_seg.py", line 11, in <module>
from pointnet2.models import Pointnet2SemMSG as Pointnet
File "E:\Github\Pointnet2_PyTorch\pointnet2\__init__.py", line 1, in <module>
from . import utils
File "E:\Github\Pointnet2_PyTorch\pointnet2\utils\__init__.py", line 1, in <module>
from . import pointnet2_utils
File "E:\Github\Pointnet2_PyTorch\pointnet2\utils\pointnet2_utils.py", line 10, in <module>
from pointnet2.utils._ext import pointnet2
ImportError: No module named 'pointnet2.utils._ext'

setup.py compiling error: '.../nvcc' failed with exit status 2. Cannot compile with CUDA=8.0?

When I try to run the python setup command I get an error. Here is the full output:

python setup.py build_ext --inplace                                                                                        
running build_ext
building 'pointnet2._ext' extension
gcc -pthread -B /home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/TH -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/include/python3.6m -c pointnet2/_ext-src/src/sampling.cpp -o build/temp.linux-x86_64-3.6/pointnet2/_ext-src/src/sampling.o -O2 -Ipointnet2/_ext-src/include -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/TH -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/include/python3.6m -c pointnet2/_ext-src/src/group_points.cpp -o build/temp.linux-x86_64-3.6/pointnet2/_ext-src/src/group_points.o -O2 -Ipointnet2/_ext-src/include -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/TH -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/include/python3.6m -c pointnet2/_ext-src/src/ball_query.cpp -o build/temp.linux-x86_64-3.6/pointnet2/_ext-src/src/ball_query.o -O2 -Ipointnet2/_ext-src/include -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/TH -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/include/python3.6m -c pointnet2/_ext-src/src/bindings.cpp -o build/temp.linux-x86_64-3.6/pointnet2/_ext-src/src/bindings.o -O2 -Ipointnet2/_ext-src/include -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/TH -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/include/python3.6m -c pointnet2/_ext-src/src/interpolate.cpp -o build/temp.linux-x86_64-3.6/pointnet2/_ext-src/src/interpolate.o -O2 -Ipointnet2/_ext-src/include -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/local/cuda/bin/nvcc -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/TH -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/include/python3.6m -c pointnet2/_ext-src/src/group_points_gpu.cu -o build/temp.linux-x86_64-3.6/pointnet2/_ext-src/src/group_points_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -O2 -Ipointnet2/_ext-src/include -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
/home/likewise-open/SENSETIME/qiudi/anaconda3/envs/pointnet2/lib/python3.6/site-packages/torch/lib/include/c10/Half-inl.h(21): error: identifier "__half_as_short" is undefined

1 error detected in the compilation of "/tmp/tmpxft_00007f64_00000000-7_group_points_gpu.cpp1.ii".
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 2

After some searching, it seems to be caused by the CUDA driver version? What do you think the way out is?
Thanks so much!

Low Accuracy on Indoor3DSeg Dataset

Hi! Could you include your trained model & accuracy on these two datasets?

I tried the default configuration on Indoor3DSemSeg using MSG. However, the accuracy is pretty low (~0.33) on the test set. Could you let me know whether this is not a full implementation or whether I just missed something?

Thanks!

Best classification results in ModelNet40

I have noticed that you had a closed issue about best results in ModelNet40.
You said that before the change, your results had only a 0.X% accuracy gap with the paper. But now I have run the code with 8 GPUs and only got 0.9023 (i.e. 90.23%) accuracy on ModelNet40. I use the default hyper-parameters. So would you please tell me what exact accuracy you got, and why I still can't match the performance from the paper? Thx!

Visdom does not work

Hello, thank you for your great work. I have the following problem when running in Docker:
python -m pointnet2.train.train_sem_seg --visdom
But the system displays
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f38901e03c8>: Failed to establish a new connection: [Errno 111] Connection refused
Can you help me?
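For reference, this connection error usually just means no Visdom server is listening on port 8097 inside the container; a minimal check, assuming the visdom package is installed:

    # Start the server first, in another terminal or another docker service:
    #   python -m visdom.server -port 8097
    import visdom

    viz = visdom.Visdom(server="http://localhost", port=8097)
    assert viz.check_connection(), "No Visdom server reachable on localhost:8097"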

Performance on segmentation

Hi, I'm wondering about your performance on the segmentation experiment. I only achieve ~81.0% accuracy, which is far from the 85% reported in the paper. Any suggestions for getting performance similar to the original paper? Thanks a lot.

Mapping point-cloud to custom output

Hi Erik,

I have several point clouds that represent different sub-maps of a terrain map.
I don't need to perform segmentation or classification.
Instead, I need to output the raw response of a submap.
I then want to input this response into another neural network, which outputs a physically meaningful vector.

Could you suggest how to do this within Python? (not using the terminal)
Suppose I have a tensor of 20 submaps, each having 1000 3D points (x.shape = (20, 1000, 3)).
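A rough sketch of one way this could look in Python, assuming the classification model is exposed as Pointnet2ClsMSG (as in the training script) and takes a (B, N, 3) tensor; the 128-dimensional "response" size and the downstream head are illustrative only:

    import torch
    import torch.nn as nn
    from pointnet2.models import Pointnet2ClsMSG

    # Repurpose the classifier as a submap encoder: 'num_classes' here just sets
    # the size of the raw response vector (128 is an arbitrary choice).
    backbone = Pointnet2ClsMSG(input_channels=0, num_classes=128, use_xyz=True).cuda()

    # Hypothetical second network mapping the response to a physical vector (size 6 is made up).
    head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 6)).cuda()

    x = torch.rand(20, 1000, 3).cuda()   # 20 submaps, 1000 3D points each
    response = backbone(x)               # (20, 128) raw submap responses
    output = head(response)              # (20, 6) physically meaningful vectors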

Multi-Gpu support

It seems that the code currently doesn't enable multi-GPU training via nn.DataParallel?
Do we need to enable multi-GPU support manually?
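A minimal sketch of what enabling it manually could look like, assuming the model constructor arguments mirror the semantic-segmentation training script (they may differ in your version):

    import torch.nn as nn
    from pointnet2.models import Pointnet2SemMSG as Pointnet

    # Constructor arguments are illustrative; build the model exactly as train_sem_seg.py does.
    model = Pointnet(num_classes=13, input_channels=6, use_xyz=True).cuda()
    model = nn.DataParallel(model)   # replicates the model and splits each batch across visible GPUs
    # preds = model(inputs)          # inputs: (B, N, 9); B should be a multiple of the GPU count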

Error in building CUDA kernel

I can not successfully build '_pointnet2.so'. I encountered the following error:

Scanning dependencies of target pointnet2_ext
[ 20%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/utils/csrc/cuda_compile_generated_ball_query_gpu.cu.o
[ 40%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/utils/csrc/cuda_compile_generated_interpolate_gpu.cu.o
[ 60%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/utils/csrc/cuda_compile_generated_group_points_gpu.cu.o
[ 80%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/utils/csrc/cuda_compile_generated_sampling_gpu.cu.o
[100%] Generating ../utils/_ext/pointnet2/_pointnet2.so
usage: build_ffi.py [-h] [--build | --clean]
build_ffi.py: error: unrecognized arguments: --objs //build/CMakeFiles/cuda_compile.dir/utils/csrc/./cuda_compile_generated_ball_query_gpu.cu.o //build/CMakeFiles/cuda_compile.dir/utils/csrc/./cuda_compile_generated_interpolate_gpu.cu.o //build/CMakeFiles/cuda_compile.dir/utils/csrc/./cuda_compile_generated_group_points_gpu.cu.o //build/CMakeFiles/cuda_compile.dir/utils/csrc/./cuda_compile_generated_sampling_gpu.cu.o
CMakeFiles/pointnet2_ext.dir/build.make:67: recipe for target '../utils/_ext/pointnet2/_pointnet2.so' failed
make[2]: *** [../utils/_ext/pointnet2/_pointnet2.so] Error 2
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/pointnet2_ext.dir/all' failed
make[1]: *** [CMakeFiles/pointnet2_ext.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

If I build '_pointnet2.so' by running 'build_ffi.py' directly, I get the following error:

ImportError: /*/utils/_ext/pointnet2/_pointnet2.so: undefined symbol: three_interpolate_grad_kernel_wrapper

Pytorch 1.0.0 support issue

Hello, erikwijmans, I really appreciate that you wrote this awesome code in Pytorch.
I tried to run this code on my conda environment (Pytorch 1.0.0 installed), got this error:

(jmpark_py36) rit@rit:/media/rit/HDD/pdm/Pointnet2_PyTorch/build$ make
Scanning dependencies of target pointnet2_ext
[ 20%] Building NVCC (Device) object CMakeFiles/cuda_compile_1.dir/pointnet2/utils/csrc/cuda_compile_1_generated_ball_query_gpu.cu.o
[ 40%] Building NVCC (Device) object CMakeFiles/cuda_compile_1.dir/pointnet2/utils/csrc/cuda_compile_1_generated_group_points_gpu.cu.o
[ 60%] Building NVCC (Device) object CMakeFiles/cuda_compile_1.dir/pointnet2/utils/csrc/cuda_compile_1_generated_interpolate_gpu.cu.o
[ 80%] Building NVCC (Device) object CMakeFiles/cuda_compile_1.dir/pointnet2/utils/csrc/cuda_compile_1_generated_sampling_gpu.cu.o
[100%] Generating ../pointnet2/utils/_ext/pointnet2/_pointnet2.so
Traceback (most recent call last):
File "/media/rit/HDD/pdm/Pointnet2_PyTorch/pointnet2/utils/build_ffi.py", line 4, in
from torch.utils.ffi import create_extension
File "/home/rit/anaconda3/envs/jmpark_py36/lib/python3.6/site-packages/torch/utils/ffi/init.py", line 1, in
raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
CMakeFiles/pointnet2_ext.dir/build.make:67: recipe for target '../pointnet2/utils/_ext/pointnet2/_pointnet2.so' failed
make[2]: *** [../pointnet2/utils/_ext/pointnet2/_pointnet2.so] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/pointnet2_ext.dir/all' failed
make[1]: *** [CMakeFiles/pointnet2_ext.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

I think this is caused by a version mismatch. Can you give me any suggestions?
Thank you!

Regards,

Best results in ModelNet40

Thanks for your work! PyTorch is more elegant for me.
I want to ask: what is your best classification result on ModelNet40 using the default hyper-parameters? And what is the best accuracy when you tune the hyper-parameters appropriately?
I'm training the model using your code and I would appreciate it if you could post the best results.

Time performance

Hi, thanks for your great work!
Could you please provide some runtime performance data (compared to the original TensorFlow implementation)?

Error python setup.py build_ext --inplace

Hello,
I have an issue building the project.
This is the command line and the generated error message.

Thanks in advance,
Sebastian

system:

win10

CUDA 9.0
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin>nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2017 NVIDIA Corporation Built on Fri_Sep__1_21:08:32_Central_Daylight_Time_2017 Cuda compilation tools, release 9.0, V9.0.176

MSVC
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin>cl Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24215.1 for x86 Copyright (C) Microsoft Corporation. All rights reserved.
(no environmental variables)

Pytorch 1.0.1
(installed via PIP)

Error Messages:

When I run python setup.py build_ext --inplace:

...
running build_ext
C:\python36\lib\site-packages\torch\utils\cpp_extension.py:184: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
building 'pointnet2._ext' extension

When I run python setup.py install:

File "C:\python36\lib\distutils\command\build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "C:\python36\lib\distutils\command\build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "C:\python36\lib\site-packages\setuptools\command\build_ext.py", line 199, in build_extension _build_ext.build_extension(self, ext) File "C:\python36\lib\distutils\command\build_ext.py", line 558, in build_extension target_lang=language) File "C:\python36\lib\distutils\ccompiler.py", line 717, in link_shared_object extra_preargs, extra_postargs, build_temp, target_lang) File "C:\python36\lib\distutils_msvccompiler.py", line 501, in link build_temp = os.path.dirname(objects[0]) IndexError: list index out of range

Difference pytorch 0.4.1 vs 1.1

Hi,
This may not be the correct place to ask this, but I thought it's worth a shot. First off, thanks for releasing this code, it has been very helpful and I learned a lot. I am evaluating this model for semantic segmentation of large scale laser scan data and I have so far used the older pytorch 0.4 build with CUDA 8. For several reasons, I recently tried out the newest release with pytorch 1.1 + CUDA 10 + CUdnn 7.3.1.

I am noticing significantly worse convergence behaviour (depending on the model setup and training data, anywhere from 2 to 8% worse accuracy by the time no further gains are made) and visibly worse segmentation results on test data with the newer build, when using the same model parameters, same hyperparameters, same data loading, same hardware, same everything.
Also, when loading the same model checkpoint and evaluating with model.eval() on the exact same data, there are some minor differences even after the first SA layer, although I am not sure this is the reason for the worse training performance.

I came across some tidbits here and there about different behaviour of log_softmax in the newer version of nn.CrossEntropyLoss(), or maybe the reason is that I had to compile PyTorch 1.1 myself vs installing a pre-compiled version of 0.4. For building PyTorch 1.1 and your extensions I used gcc 7.3 with nvcc 10. Everything is in anaconda3 on an Ubuntu box.
If you have any ideas, I'd be very happy to hear them!
Cheers,
Johannes
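One way to localize where the two builds diverge is to dump intermediate activations with forward hooks and compare them offline; a self-contained sketch with a toy model (swap in the real network and a real batch):

    import torch
    import torch.nn as nn

    # Toy stand-in for the real network; replace with the PointNet++ model and real data.
    model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 4)).eval()
    inputs = torch.rand(2, 3)

    activations = {}

    def save_hook(name):
        def hook(module, inp, out):
            activations[name] = out.detach().cpu()
        return hook

    handles = [m.register_forward_hook(save_hook("layer%d" % i)) for i, m in enumerate(model)]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()

    torch.save(activations, "activations_this_env.pt")
    # Run the same script in the other environment, load both files,
    # and compare layer by layer with torch.allclose to see where they first differ.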

Error: module 'pytorch_utils' has no attribute 'SharedMLP'

Hi Erik,
Thank you for sharing pointnet source.
When I run your code I get this error:

Traceback (most recent call last):
  File "/home/minhnc-lab/WORKSPACES/Python/GRASP_DETECTION/PointNet/Pointnet2_PyTorch/train_cls.py", line 118, in <module>
    model = Pointnet(input_channels=0, num_classes=40, use_xyz=True)
  File "/home/minhnc-lab/WORKSPACES/Python/GRASP_DETECTION/PointNet/Pointnet2_PyTorch/models/pointnet2_msg_cls.py", line 62, in __init__
    use_xyz=use_xyz
  File "/home/minhnc-lab/WORKSPACES/Python/GRASP_DETECTION/PointNet/Pointnet2_PyTorch/models/../utils/pointnet2_modules.py", line 107, in __init__
    self.mlps.append(pt_utils.SharedMLP(mlp_spec, bn=bn))
AttributeError: module 'pytorch_utils' has no attribute 'SharedMLP'

I cannot find the pytorch_utils module.
Could you share that file with me?

Thank you very much.
Minhh

how to disable progress bar

When running train.py in Pycharm, I see such outputs:

epoch 3
                                                       
epochs:   0%|          | 1/200 [00:01<01:45,  1.89it/s]
                                                       
epochs:   0%|          | 1/200 [00:01<01:45,  1.89it/s]
                                                       
epochs:   0%|          | 1/200 [00:01<01:45,  1.89it/s]
                                                       
epochs:   0%|          | 1/200 [00:01<01:45,  1.89it/s]
                                                       
epochs:   0%|          | 1/200 [00:01<01:45,  1.89it/s]
epochs:   1%|          | 2/200 [00:01<01:42,  1.93it/s]
train: 100%|██████████| 1/1 [00:00<00:00,  4.37it/s, total_it=2]
epochs:   1%|          | 2/200 [00:01<01:42,  1.93it/s]
                                                                
val:   0%|          | 0/1 [00:00<?, ?it/s]
val: 100%|██████████| 1/1 [00:00<00:00,  8.19it/s]
                                                  
train:   0%|          | 0/1 [00:00<?, ?it/s]
train:   0%|          | 0/1 [00:00<?, ?it/s, total_it=3]=== Training Progress ===
acc  --- train: 0.2829	val: 0.1620	

loss --- train: 2.7759	val: 12.3251	

How can I disable the progress bar?

"Training Progress" can stay, and that one is located in site-packages... viz.py or so.

ImportError:_pointnet2.so: undefined symbol: PyInt_FromLong

hi erikwijmans:
My environment is Ubuntu 16.04, CUDA 9.0, PyTorch 0.4.1 and a GTX 1060.
I just followed your steps:
1. mkdir build && cd build
2. cmake .. && make
and they completed successfully.
But when I run train_sem_seg.py, I get this error:
/usr/bin/python3.5 /home/mengzhen/PointNet/Pointnet2_PyTorch/pointnet2/train/train_sem_seg.py
Traceback (most recent call last):
File "/home/mengzhen/PointNet/Pointnet2_PyTorch/pointnet2/train/train_sem_seg.py", line 11, in
from pointnet2.models import Pointnet2SemMSG as Pointnet
File "/home/mengzhen/PointNet/Pointnet2_PyTorch/pointnet2/init.py", line 1, in
from . import utils
File "/home/mengzhen/PointNet/Pointnet2_PyTorch/pointnet2/utils/init.py", line 1, in
from . import pointnet2_utils
File "/home/mengzhen/PointNet/Pointnet2_PyTorch/pointnet2/utils/pointnet2_utils.py", line 10, in
from pointnet2.utils._ext import pointnet2
File "/home/mengzhen/PointNet/Pointnet2_PyTorch/pointnet2/utils/_ext/pointnet2/init.py", line 3, in
from ._pointnet2 import lib as _lib, ffi as _ffi
ImportError: /home/mengzhen/PointNet/Pointnet2_PyTorch/pointnet2/utils/_ext/pointnet2/_pointnet2.so: undefined symbol: PyInt_FromLong

Process finished with exit code 1
Could you help me?
Thank you very much!
Smm

core dumped problem

Hi, thanks for your selfless code sharing. I encountered the following error when I run
python -m pointnet2.train.train_cls.
{'batch_size': 16,
'bn_momentum': 0.5,
'bnm_decay': 0.5,
'checkpoint': None,
'decay_step': 200000.0,
'epochs': 200,
'lr': 0.01,
'lr_decay': 0.7,
'num_points': 1024,
'run_name': 'cls_run_1',
'visdom': False,
'visdom_port': 8097,
'weight_decay': 1e-05}
epochs: 0%| | 0/200 [00:00<?, ?it/s]
Segmentation fault (core dumped)

I don't know how to solve it by myself; I hope you can help me. THANKS!

Issues while running setup.py

Hi.

I am running this on Windows 10, Pytorch 1.2.0 with CUDA 10.0. After executing setup.py, I tried

'import pointnet2._ext as _ext'

This gives me an error as follows:
'>>> import pointnet2._ext as _ext
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\amanja\Pointnet2_PyTorch\pointnet2\__init__.py", line 17, in <module>
from pointnet2 import utils
File "C:\Users\amanja\Pointnet2_PyTorch\pointnet2\utils\__init__.py", line 9, in <module>
from . import pointnet2_modules
File "C:\Users\amanja\Pointnet2_PyTorch\pointnet2\utils\pointnet2_modules.py", line 11, in <module>
import etw_pytorch_utils as pt_utils
File "C:\Users\amanja\AppData\Local\Continuum\anaconda3\envs\pytorch\lib\site-packages\etw_pytorch_utils\__init__.py", line 11, in <module>
from .persistent_dataloader import DataLoader
File "C:\Users\amanja\AppData\Local\Continuum\anaconda3\envs\pytorch\lib\site-packages\etw_pytorch_utils\persistent_dataloader.py", line 20, in <module>
_mp_ctx = multiprocessing.get_context('forkserver')
File "C:\Users\amanja\AppData\Local\Continuum\anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 238, in get_context
return super().get_context(method)
File "C:\Users\amanja\AppData\Local\Continuum\anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 192, in get_context
raise ValueError('cannot find context for %r' % method) from None
ValueError: cannot find context for 'forkserver''

Does this error have something to do with this being a Windows installation instead of a Linux one?

I tried changing multiprocessing.get_context to 'spawn' instead of 'forkserver', but then it gives me a CUDA out-of-memory error.

Validation set

Hi Erik, thanks for this amazing pytorch implementation.
Forget about that, I solved it, thanks!

python2.7 support

Probably a good idea to support python2.7. Supporting python2.7 will involve:

  1. Switch to pre-python3.6 typing.
  2. Add the necessary from __future__ import statements (a sketch is given below).

These steps will also need to be repeated for my pytorch utils library: https://github.com/erikwijmans/etw_pytorch_utils

Finally, py35 and py27 should be added to the envlist in tox.ini.
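For step 2, the module header would look something like this (a sketch of the usual set of imports):

    # Python 2/3 compatible module header
    from __future__ import (
        absolute_import,
        division,
        print_function,
        unicode_literals,
    )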

How to test the model?

Hello
I am able to train using your code but how do I test the model to see the results and visualize them? I don't see the script for that.
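There doesn't seem to be a dedicated test script referenced here, but a minimal evaluation loop can be sketched as below. The checkpoint path, the state-dict key, and the class name Pointnet2ClsMSG are assumptions; the dummy data stands in for a real test DataLoader.

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from pointnet2.models import Pointnet2ClsMSG as Pointnet

    model = Pointnet(input_channels=0, num_classes=40, use_xyz=True).cuda()
    ckpt = torch.load("checkpoints/pointnet2_cls_best.pth.tar")   # path and key are assumptions
    model.load_state_dict(ckpt["model_state"])
    model.eval()

    # Stand-in for the real test split; replace with your ModelNet40 test loader.
    test_loader = DataLoader(
        TensorDataset(torch.rand(32, 1024, 3), torch.randint(0, 40, (32,))), batch_size=16
    )

    correct, total = 0, 0
    with torch.no_grad():
        for points, labels in test_loader:
            points, labels = points.cuda(), labels.cuda()
            preds = model(points).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print("test accuracy:", correct / total)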

Wrong results for the grouping_operation.

Dear @erikwijmans, thank you for your great work on the pytorch version of pointnet2. I tested the grouping_operation, but the result is not as expected. I checked the cuda implementation, and could not find any bug. Could you please check the code?
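For anyone debugging the same thing, a small self-check that compares grouping_operation against plain tensor indexing (assuming the usual (B, C, N) features and (B, npoint, nsample) int indices; the import path is the one used elsewhere in this repo):

    import torch
    from pointnet2.utils.pointnet2_utils import grouping_operation

    B, C, N, npoint, nsample = 2, 4, 16, 5, 3
    features = torch.rand(B, C, N).cuda()
    idx = torch.randint(0, N, (B, npoint, nsample)).int().cuda()

    out = grouping_operation(features, idx)   # expected shape (B, C, npoint, nsample)

    # Naive reference: index each batch element directly.
    ref = torch.stack([features[b][:, idx[b].long()] for b in range(B)])
    print(out.shape, torch.allclose(out, ref))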

Why the initial value of `bn_momentum` is 0.9?

Hi, I have a silly question.
According to https://github.com/charlesq34/pointnet2/blob/master/scannet/train.py#L57, the initial value of BN_DECAY should be 0.5.

In the meantime, the 0.9 for momentum is used for momentum optimizer. (https://github.com/charlesq34/pointnet2/blob/master/scannet/train.py#L30)

So I wonder: does the 0.9 bn_momentum come from the original implementation?
https://github.com/erikwijmans/Pointnet2_PyTorch/blob/master/pointnet2/train/train_sem_seg.py#L44

Compilation error: CUDA can't be found

I symlinked /usr/local/cuda to /usr/local/cuda-9.0.
Other applications that use CUDA work fine, and I also exported CUDA_HOME.
And I got the error below, which indicates CUDA can't be found.

(py36) spk921@spk ~/git/Pointnet2_PyTorch [master*]$ python setup.py build_ext
running build_ext
building 'pointnet2._ext' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include -I/home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include/TH -I/home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda:/usr/local/cuda:/usr/local/cuda:/usr/local/cuda:/include -I/usr/include/python3.6m -I/home/spk921/pyenv/py36/include/python3.6m -c pointnet2/_ext-src/src/bindings.cpp -o build/temp.linux-x86_64-3.6/pointnet2/_ext-src/src/bindings.o -O2 -Ipointnet2/_ext-src/include -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include -I/home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include/TH -I/home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda:/usr/local/cuda:/usr/local/cuda:/usr/local/cuda:/include -I/usr/include/python3.6m -I/home/spk921/pyenv/py36/include/python3.6m -c pointnet2/_ext-src/src/interpolate.cpp -o build/temp.linux-x86_64-3.6/pointnet2/_ext-src/src/interpolate.o -O2 -Ipointnet2/_ext-src/include -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
In file included from /home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include/ATen/cuda/CUDAContext.h:5:0,
from pointnet2/_ext-src/include/utils.h:2,
from pointnet2/_ext-src/src/interpolate.cpp:2:
/home/spk921/pyenv/py36/lib/python3.6/site-packages/torch/lib/include/ATen/cuda/CUDAStream.h:6:30: fatal error: cuda_runtime_api.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

Error running the train_sem_seg.py

----- Train Epoch 001 -----

0%| | 0/5230 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train_sem_seg.py", line 152, in <module>
best_loss=best_loss
File "/home/rewreu/notebook/Zhe/pyTorch/PointNet/Pointnet2_PyTorch/utils/pytorch_utils/pytorch_utils.py", line 842, in train
self._train_epoch(epoch, train_loader, self.eval_frequency)
File "/home/rewreu/notebook/Zhe/pyTorch/PointNet/Pointnet2_PyTorch/utils/pytorch_utils/pytorch_utils.py", line 713, in _train_epoch
self.model, data, epoch=epoch
File "/home/rewreu/notebook/Zhe/pyTorch/PointNet/Pointnet2_PyTorch/models/pointnet2_msg_sem.py", line 29, in model_fn
preds = model(xyz, points)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/rewreu/notebook/Zhe/pyTorch/PointNet/Pointnet2_PyTorch/models/pointnet2_msg_sem.py", line 114, in forward
li_xyz, li_points = self.SA_modules[i](l_xyz[i], l_points[i])
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/rewreu/notebook/Zhe/pyTorch/PointNet/Pointnet2_PyTorch/models/../utils/pointnet2_modules.py", line 50, in forward
new_points
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/conv.py", line 176, in forward
self.padding, self.dilation, self.groups)
TypeError: conv1d(): argument 'input' (position 1) must be Tensor, not builtin_function_or_method

Any idea where the issue might come from?

Indoor3D data structure

Dear erikwijmans,

Thank you very much for your great codes!
I have a question about how the data is organized in the '.h5' files.
I ran your code 'python -m pointnet2.train.train_sem_seg', and the data was automatically downloaded and unzipped.
When I open an '.h5' file, the data dimension is (1000, 4096, 9). It seems that every '.h5' file holds 1000 point clouds, and each point cloud (item?) has 4096 points with xyz and 6 other features.
During training, the dataloader gets 'items' (point clouds) and samples 'num_points' from them for training.
I extracted several point clouds (4096, 3) and visualized them. It seems each small point cloud was not randomly sampled from one large point cloud. I'm wondering how to properly split a large point cloud into small pieces so that each piece has exactly 4096 points.

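A quick way to inspect the files (the file name is illustrative; the 9 channels are commonly xyz, rgb and normalized room coordinates in the S3DIS h5 releases, but check your download):

    import h5py

    with h5py.File("indoor3d_sem_seg_hdf5_data/ply_data_all_0.h5", "r") as f:
        print(list(f.keys()))      # typically ['data', 'label']
        print(f["data"].shape)     # e.g. (1000, 4096, 9)
        print(f["label"].shape)    # e.g. (1000, 4096)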

Performance Pointnet2:Acc,mAcc,mIoU

Hello @erikwijmans ,

I tested this code on the S3DIS dataset and got 81.7 accuracy (val). In which paper can I find this accuracy value, and how do I compute mAcc and mIoU?
Code for acc:
acc = (classes == labels).float().sum() / labels.numel()
Thanks in advance,
Guesmi
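For reference, mAcc and mIoU are usually computed per class over all evaluated points and then averaged; a minimal NumPy sketch (the class count of 13 assumes S3DIS):

    import numpy as np

    def mean_acc_and_iou(preds, labels, num_classes=13):
        """preds, labels: 1-D integer arrays over all evaluated points."""
        accs, ious = [], []
        for c in range(num_classes):
            tp = np.sum((preds == c) & (labels == c))
            fp = np.sum((preds == c) & (labels != c))
            fn = np.sum((preds != c) & (labels == c))
            if tp + fn > 0:                      # only count classes present in the ground truth
                accs.append(tp / (tp + fn))      # per-class recall ("class accuracy")
                ious.append(tp / (tp + fp + fn))
        return float(np.mean(accs)), float(np.mean(ious))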

number of points (xyz) is varying

If I do print(xyz.shape) inside the forward function of the QueryAndGroup class in pointnet2_utils.py, I find the number of points varying.

I am using num_points = 1024. However, when I print, I see it varying.

Pytorch 1.0 running problem

Hello, when I try to run the project with python -m pointnet2.train.train_sem_seg, I get the following error.

=====>
Initializing visdom env [main]
server: http://localhost, port: 8097
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connection.py", line 159, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/util/connection.py", line 80, in create_connection
    raise err
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/util/connection.py", line 70, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.5/http/client.py", line 1106, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.5/http/client.py", line 1151, in _send_request
    self.endheaders(body)
  File "/usr/lib/python3.5/http/client.py", line 1102, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python3.5/http/client.py", line 934, in _send_output
    self.send(msg)
  File "/usr/lib/python3.5/http/client.py", line 877, in send
    self.connect()
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connection.py", line 181, in connect
    conn = self._new_conn()
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connection.py", line 168, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7facd10fefd0>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/util/retry.py", line 398, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7facd10fefd0>: Failed to establish a new connection: [Errno 111] Connection refused',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/visdom/__init__.py", line 446, in _send
    data=json.dumps(msg),
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/api.py", line 116, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7facd10fefd0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Without the incoming socket you cannot receive events from the server or register event handlers to your Visdom client.
<=====
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connection.py", line 159, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/util/connection.py", line 80, in create_connection
    raise err
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/util/connection.py", line 70, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.5/http/client.py", line 1106, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.5/http/client.py", line 1151, in _send_request
    self.endheaders(body)
  File "/usr/lib/python3.5/http/client.py", line 1102, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python3.5/http/client.py", line 934, in _send_output
    self.send(msg)
  File "/usr/lib/python3.5/http/client.py", line 877, in send
    self.connect()
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connection.py", line 181, in connect
    conn = self._new_conn()
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connection.py", line 168, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7facd111ff60>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/urllib3/util/retry.py", line 398, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7facd111ff60>: Failed to establish a new connection: [Errno 111] Connection refused',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/visdom/__init__.py", line 446, in _send
    data=json.dumps(msg),
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/api.py", line 116, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/requests/adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7facd111ff60>: Failed to establish a new connection: [Errno 111] Connection refused',))
epochs:   0%|                                           | 0/200 [00:00<?, ?it/s]
Traceback (most recent call last):                                              
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/custom/1_project/post_201812/Pointnet2_PyTorch-pytorch-1.0.0/pointnet2/train/train_sem_seg.py", line 151, in <module>
    best_loss=best_loss
  File "/custom/1_project/post_201812/Pointnet2_PyTorch-pytorch-1.0.0/pointnet2/utils/pytorch_utils/pytorch_utils.py", line 756, in train
    res = self._train_it(it, batch)
  File "/custom/1_project/post_201812/Pointnet2_PyTorch-pytorch-1.0.0/pointnet2/utils/pytorch_utils/pytorch_utils.py", line 692, in _train_it
    _, loss, eval_res = self.model_fn(self.model, batch)
  File "/custom/1_project/post_201812/Pointnet2_PyTorch-pytorch-1.0.0/pointnet2/models/pointnet2_msg_sem.py", line 19, in model_fn
    preds = model(inputs)
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/custom/1_project/post_201812/Pointnet2_PyTorch-pytorch-1.0.0/pointnet2/models/pointnet2_msg_sem.py", line 143, in forward
    li_xyz, li_features = self.SA_modules[i](l_xyz[i], l_features[i])
  File "/home/bistu/.virtualenvs/py3torch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/custom/1_project/post_201812/Pointnet2_PyTorch-pytorch-1.0.0/pointnet2/utils/pointnet2_modules.py", line 42, in forward
    1, 2).contiguous() if self.npoint is not None else None
  File "/custom/1_project/post_201812/Pointnet2_PyTorch-pytorch-1.0.0/pointnet2/utils/pointnet2_utils.py", line 46, in forward
    return _ext.furthest_point_sampling(xyz, npoint)
AttributeError: module 'pointnet2._ext' has no attribute 'furthest_point_sampling'

The main problem seems to be that the compiled _ext module doesn't have a function named furthest_point_sampling.
How can I solve this problem?
Thanks

Issue running the training after a successful install

Hello,

I have an issue running the training after a successful install of the repo.
This is the command line and the generated error message.

Thanks in advance,
Raouf

python -m pointnet2.train.train_cls

Traceback (most recent call last):
File "/home/raouf/workspace/gitprojects/Pointnet2_PyTorch/pointnet2/utils/pointnet2_utils.py", line 20, in
import pointnet2._ext as _ext
ImportError: /home/raouf/workspace/gitprojects/Pointnet2_PyTorch/pointnet2/_ext.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKSs

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/raouf/anaconda3/lib/python3.6/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/home/raouf/anaconda3/lib/python3.6/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "/home/raouf/workspace/gitprojects/Pointnet2_PyTorch/pointnet2/__init__.py", line 17, in <module>
from pointnet2 import utils
File "/home/raouf/workspace/gitprojects/Pointnet2_PyTorch/pointnet2/utils/__init__.py", line 8, in <module>
from . import pointnet2_utils
File "/home/raouf/workspace/gitprojects/Pointnet2_PyTorch/pointnet2/utils/pointnet2_utils.py", line 24, in <module>
"Could not import _ext module.\n"
ImportError: Could not import _ext module.
Please see the setup instructions in the README: https://github.com/erikwijmans/Pointnet2_PyTorch/blob/master/README.rst

Why GatherOperation needs a backward method?

Hello, Erik. Thanks for sharing your PyTorch implementation of PointNet++. Actually, I want to know why we need a backward method for GatherOperation in ./utils/pointnet2_utils.py. I believe there is no gradient in the gather_points function, so do we really need the gather_points_grad function?

class GatherOperation(Function):
    @staticmethod
    def forward(ctx, features, idx):
        # type: (Any, torch.Tensor, torch.Tensor) -> torch.Tensor
        r"""
        Parameters
        ----------
        features : torch.Tensor
            (B, C, N) tensor
        idx : torch.Tensor
            (B, npoint) tensor of the features to gather
        Returns
        -------
        torch.Tensor
            (B, C, npoint) tensor
        """

        _, C, N = features.size()

        ctx.for_backwards = (idx, C, N)

        return _ext.gather_points(features, idx)

    @staticmethod
    def backward(ctx, grad_out):
        idx, C, N = ctx.for_backwards

        grad_features = _ext.gather_points_grad(grad_out.contiguous(), idx, N)
        return grad_features, None
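For intuition only (not an official answer): gather_points selects columns of the (B, C, N) feature tensor, and that selection is differentiable with respect to features, so the gradient of anything downstream has to be scattered back into the full-size tensor, which is presumably what gather_points_grad implements. A pure-PyTorch equivalent where autograd builds that backward automatically:

    import torch

    B, C, N, npoint = 2, 4, 16, 5
    features = torch.rand(B, C, N, requires_grad=True)
    idx = torch.randint(0, N, (B, npoint))

    # Same selection as gather_points, written with torch.gather.
    gathered = torch.gather(features, 2, idx.unsqueeze(1).expand(-1, C, -1))  # (B, C, npoint)
    gathered.sum().backward()
    print(features.grad.shape)   # (B, C, N), non-zero only at the gathered columns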

Inconsistency description in _PointnetSAModuleBase

In pointnet2.utils.pointnet2_modules.py
The forward method of the _PointnetSAModuleBase class has a problematic description:

Returns
-------
new_xyz : torch.Tensor
    (B, npoint, 3) tensor of the new features' xyz
new_features : torch.Tensor
    (B, npoint, \sum_k(mlps[k][-1])) tensor of the new_features descriptors
"""
But in the end new_features actually has the shape (B, \sum_k(mlps[k][-1]), npoint).
This may introduce confusion; perhaps it would be better to return it in the form (B, npoint, \sum_k(mlps[k][-1])).

why is network architecture different from original?

Hi Erik,

Regarding the network architecture in your pytorch implementation. I noticed that in the SA and FP modules, the mlp / conv2d channel input and output dimensions differ from the dimensions used in the 'official' PointNet++ code by Charles Qi. Additionally, per SA layer, there are two SharedMLP's instead of one, as in Charles' code.

What are the reasons for the differences?

Thank you!
Tam

point clouds with various size

Hi Erik,

is it possible to use point clouds of various sizes? I am struggling to understand whether the sampling and grouping steps allow me to do that.

Thanks
