dgcnn.pytorch's Issues

S3DIS training Error

Hi, I am trying to train a model on the S3DIS dataset using your code.
I think I have followed the correct steps: download the Aligned version from the link, and download the other dataset
from https://github.com/charlesq34/pointnet/blob/master/sem_seg/download_data.sh, all placed under the /data folder.
However, when I run the code it gives the following error, and I am not sure how to fix it.

Traceback (most recent call last):
File "main_semseg.py", line 318, in
test(args, io)
File "main_semseg.py", line 192, in test
test_loader = DataLoader(S3DIS(partition='test', num_points=args.num_points, test_area=test_area),
File "/home/ubuntu/shkim/dgcnn.pytorch/data.py", line 246, in init
self.data, self.seg = load_data_semseg(partition, test_area)
File "/home/ubuntu/shkim/dgcnn.pytorch/data.py", line 147, in load_data_semseg
data_batches = np.concatenate(data_batchlist, 0)
File "<array_function internals>", line 6, in concatenate
ValueError: need at least one array to concatenate

Could you please give me some advice on how to fix this issue? Thank you!
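
(Not an official fix, but a small diagnostic sketch may help narrow this down. It assumes the layout the training script prepares, i.e. data/indoor3d_sem_seg_hdf5_data_test with an all_files.txt index, and that the entries in all_files.txt are paths relative to data/; the "need at least one array to concatenate" error usually means this list came back empty or none of the listed h5 files were found.)

import os

data_root = "data"
index_file = os.path.join(data_root, "indoor3d_sem_seg_hdf5_data_test", "all_files.txt")

with open(index_file) as f:
    entries = [line.strip() for line in f if line.strip()]

print(f"{len(entries)} entries listed in {index_file}")
for entry in entries:
    path = os.path.join(data_root, entry)  # entries are assumed to be paths relative to data/
    print(path, "OK" if os.path.exists(path) else "MISSING")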

question about gt

Here, the label fed into the network is only the object class of the point cloud. Why not also feed in the segmentation labels (seg) to improve the fit during training?

Visualizing Feature Map

Hello again, the original DGCNN paper was able to show the feature map as below.
Do you know how to construct and visualize a feature map like this?
I would like to visualize the feature map for each EdgeConv layer.
Thank you.

[feature-map figure from the DGCNN paper omitted]
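
(For reference, a minimal sketch of the kind of visualization shown in that figure: given per-point features from one EdgeConv layer, shape (B, C, N), extracted however you like, e.g. with a forward hook followed by the max over the k neighbours, colour each point by its feature-space distance to a chosen reference point. The shapes and the extraction step are assumptions, not code from this repository.)

import torch

def feature_distance_to_point(feat, ref_idx):
    # feat: (B, C, N) per-point features from one EdgeConv layer
    # ref_idx: index of the reference point to measure distances from
    ref = feat[:, :, ref_idx:ref_idx + 1]                # (B, C, 1)
    dist = torch.norm(feat - ref, dim=1)                 # (B, N) distance in feature space
    # normalise to [0, 1] so the values can be passed through any colormap
    dist = (dist - dist.min()) / (dist.max() - dist.min() + 1e-9)
    return dist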

Could you share the training log of S3DIS and ShapeNet Part Training?

Thank you for your impressive work.

Could you share the training log of S3DIS and ShapeNet Part Training?

If possible, it would be really helpful for researchers; this is about the reproducibility of your code in our local environment, and I want to be sure that your training logs are similar to ours.

Thank you.

About the partseg task

I found that every time I restart training the DGCNN part-segmentation model, the training accuracy is exactly the same from the very beginning, even though I stopped the previous run before it finished. Do you know the reason for this strange phenomenon?
Run log:

Namespace(batch_size=16, class_choice=None, dataset='data/shapenetcore_partanno_segmentation_benchmark_v0', dropout=0.5, emb_dims=1024, epochs=300, eval=False, exp_name='partseg', k=20, lr=0.001, model='dgcnn', model_path='', momentum=0.9, no_cuda=False, num_points=1024, scheduler='cos', seed=1, test_batch_size=8, use_sgd=True)
Using GPU : 0 from 1 devices
Train 0, loss: 1.797249, train acc: 0.801439, train avg acc: 0.399633, train iou: 0.639207
Test 0, loss: 1.556620, test acc: 0.881248, test avg acc: 0.567119, test iou: 0.728525
Namespace(batch_size=16, class_choice=None, dataset='data/shapenetcore_partanno_segmentation_benchmark_v0', dropout=0.5, emb_dims=1024, epochs=300, eval=False, exp_name='partseg', k=20, lr=0.001, model='dgcnn', model_path='', momentum=0.9, no_cuda=False, num_points=1024, scheduler='cos', seed=1, test_batch_size=8, use_sgd=True)
Using GPU : 0 from 1 devices
Train 0, loss: 1.797249, train acc: 0.801439, train avg acc: 0.399633, train iou: 0.639207
Test 0, loss: 1.556620, test acc: 0.881248, test avg acc: 0.567119, test iou: 0.728525

Segmentation fault

Hi, it shows a segmentation fault when I run python main_partseg.py --exp_name=partseg_airplane_eval --class_choice=airplane --eval=True --model_path=pretrained/model.partseg.airplane.t7

I used pdb to debug:
-> model = DGCNN_partseg(args, seg_num_all).to(device)
(Pdb) n
Segmentation fault

Question about Edge Conv

Hi, I have a question about your EdgeConv implementation.
EdgeConv builds a local graph over K neighbours, giving a tensor of shape batch_size * C * num_points * K.
In your implementation a 1 * 1 kernel is applied, which means the same kernel weights are shared across all K neighbours.
However, in image convolution the weights at each kernel position are different.
For example, with a 3 * 3 convolution each of the nine positions has its own weights, whereas the current DGCNN shares one set of weights across the neighbourhood.
Then, doesn't it make more sense to use a (1 * K) kernel, like X-Conv from PointCNN, to capture local features?
What is your reason for using a 1 * 1 convolution rather than a 1 * K convolution?
Thank you!
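
(For concreteness, a small sketch contrasting the two options discussed above; the channel sizes are illustrative, not the repository's. The 1 * 1 kernel shares one set of weights across all K neighbours and relies on a max over K for permutation invariance, while a (1, K) kernel gives each neighbour position its own weights and therefore assumes the K neighbours have a meaningful order.)

import torch
import torch.nn as nn

B, C, N, K = 4, 64, 1024, 20
edge_feat = torch.randn(B, 2 * C, N, K)    # concat of (x_j - x_i, x_i) for each of the K neighbours

# DGCNN-style EdgeConv: shared 1x1 kernel, then max over the neighbour dimension
shared_mlp = nn.Conv2d(2 * C, 128, kernel_size=1, bias=False)
out_shared = shared_mlp(edge_feat).max(dim=-1)[0]            # (B, 128, N)

# the (1, K) alternative from the question: one weight per neighbour position
per_neighbor = nn.Conv2d(2 * C, 128, kernel_size=(1, K), bias=False)
out_per_neighbor = per_neighbor(edge_feat).squeeze(-1)       # (B, 128, N)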

questions about the function 'prepare_test_data_semseg( )'

Hi, thank you for your good work.
I noticed that in the function prepare_test_data_semseg() you run the scripts 'prepare_data/collect_indoor3d_data.py' and 'prepare_data/gen_indoor3d_h5.py' to transform the raw dataset 'Stanford3dDataset_v1.2_Aligned_Version' into 'indoor3d_sem_seg_hdf5_data_test'.
However, as described in the pointnet repo https://github.com/charlesq34/pointnet/tree/master/sem_seg#dataset, these two scripts transform the raw data into the training dataset, which samples 4096 points in each block; that is not suitable for testing, which requires all the points rather than a sampled subset.

Problems with semseg

Excuse me, I have printed data_batchlist and label_batchlist and they are not empty, but it still raises the error: ValueError: need at least one array to concatenate.

Another error I met is as follows:
Traceback (most recent call last):
File "prepare_data/collect_indoor3d_data.py", line 28, in
indoor3d_util.collect_point_label(anno_path, os.path.join(output_folder, out_filename), 'txt')
File "D:\dgcnn.pytorch-V2\prepare_data\indoor3d_util.py", line 65, in collect_point_label
fout = open(out_filename, 'w')
FileNotFoundError: [Errno 2] No such file or directory: 'D:\dgcnn.pytorchV2\data/stanford_indoor3d\Stanford3dDataset_v1.2_Aligned_Version\Area_1_conferenceRoom_1.txt'
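
(Not a definitive fix, but two things may be worth checking here. open(out_filename, 'w') cannot create missing parent directories, so creating them first avoids this particular FileNotFoundError; and the mix of '/' and '\' separators in the printed path suggests that on Windows the room name may not be extracted from the annotation path as intended, so printing out_filename before opening it is also worth doing. A hedged sketch of the first check:)

import os

# create the output directory before writing; out_filename is the variable from
# indoor3d_util.collect_point_label shown in the traceback above
os.makedirs(os.path.dirname(out_filename), exist_ok=True)
fout = open(out_filename, 'w')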

about part segmentation visualization script

Hi,
Thank you very much for sharing the code for DGCNN. I have completed training for part segmentation and want to visualize the results. I read your suggestions in #8 and #26, but I only exported the points (x, y, z) and cannot assign a colour (R, G, B) to each point from its label. Could you share your visualization script? Thank you very much!
I'm sorry to trouble you with such a simple question; I am a beginner in point cloud work, so I hope you understand. Thank you very much.
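
(Not the author's script, but a minimal sketch of one way to do this: map each part label to a fixed RGB colour and write an "x y z r g b" text file that MeshLab or CloudCompare can display. The array names and the palette are illustrative.)

import numpy as np

def save_colored_cloud(points, labels, path="part_seg_vis.txt"):
    # points: (N, 3) xyz array; labels: (N,) integer part label per point
    palette = np.array([
        [65, 105, 225], [220, 20, 60], [34, 139, 34], [255, 165, 0],
        [148, 0, 211], [0, 206, 209], [255, 215, 0], [128, 128, 128],
    ], dtype=np.float64)
    colors = palette[labels % len(palette)]                 # one RGB triple per point
    cloud = np.hstack([points, colors])
    np.savetxt(path, cloud, fmt="%.6f %.6f %.6f %d %d %d")  # one "x y z r g b" line per point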

Training Time

Hi @antao97

Thank you for the great work! I wonder what the training time is for the part segmentation task with the default setting (4 GPUs, batch size 32), both for the full dataset and with a class choice. I really appreciate your help! Thank you!

Best,
Yongcheng

Problems when training semantic segmentation

Hello! Sorry to bother you again so soon.
I want to run your code for training the semantic segmentation model, i.e. python main_semseg.py --exp_name=semseg_6 --test_area=6.

Before running it, I downloaded the datasets Stanford3dDataset_v1.2_Aligned_Version.zip and indoor3d_sem_seg_hdf5_data.zip. Following the instructions, I created a data/ folder under your project, put Stanford3dDataset_v1.2_Aligned_Version.zip into it, and then ran python main_semseg.py --exp_name=semseg_6 --test_area=6. After it finished, however, no folder had been generated under data/; instead a Stanford3dDataset_v1.2_Aligned_Version folder was created in the parent directory. I moved it into the data/ folder, unzipped indoor3d_sem_seg_hdf5_data.zip, put it into data/ as well, and ran the code again. This produced two folders under data/: a stanford_indoor3d folder containing a pile of .npy files, and an indoor3d_sem_seg_hdf5_data_test folder containing an all_files.txt and a room_filelist.txt, where the first txt is empty and the second contains many entries like Area_1_conferenceRoom_1. At this point the code fails with the following error:
Namespace(batch_size=32, dataset='S3DIS', dropout=0.5, emb_dims=1024, epochs=100, eval=False, exp_name='semseg_6', k=20, lr=0.001, model='dgcnn', model_root='', momentum=0.9, no_cuda=False, num_points=4096, scheduler='cos', seed=1, test_area='6', test_batch_size=16, use_sgd=True)
Using GPU : 0 from 1 devices
Traceback (most recent call last):
File "main_semseg.py", line 314, in
train(args, io)
File "main_semseg.py", line 55, in train
test_loader = DataLoader(S3DIS(partition='test', num_points=args.num_points, test_area=args.test_area),
File "/home/zpf/chenhao/dgcnn.pytorch-master/data.py", line 247, in init
self.data, self.seg = load_data_semseg(partition, test_area)
File "/home/zpf/chenhao/dgcnn.pytorch-master/data.py", line 148, in load_data_semseg
data_batches = np.concatenate(data_batchlist, 0)
File "<array_function internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
I have tried several times and it is always like this. I do not know how to solve it or which step I missed. I hope you can let me know, thank you very much!

Problem in the training of seg

For the segmentation part, I have the dataset ready, but when I run the code something still goes wrong (see the screenshot below).
How can I solve it? Thank you.
[error screenshot omitted]

Accuracy when testing each object class separately in part segmentation

Hello, I used the part segmentation model I trained to test each object class separately following your code, and ran into a problem. I saw that other people have raised this issue as well, and I read your reply in #10:
If you want to evaluate performance in a particular class with full dataset trained, the only thing you need to do is to calculate Acc and mIoU in some chosen dims of the 50-dim vector.
I tried to modify the calculate_shape_IoU function as you described, but have not succeeded; I do not quite understand your reply in #10.
Could you explain it in more detail when you have time? Many thanks!

Test results.

Have you looked into why your implementation scores higher than the original paper? Are there any changes to the network structure, or did you use some tricks?

About the transform_net

Thanks for your great PyTorch implementation of DGCNN! I want to ask why the cls and semseg models do not have the transform_net while the partseg model adds it. Does the transform_net affect the accuracy of the result?

question about test on one class choice

I directly trained on the full dataset, then used this trained model to test on a single class choice,
but there was a problem:
RuntimeError: Error(s) in loading state_dict for DataParallel:
size mismatch for module.conv11.weight: copying a param with shape torch.Size([50, 256, 1]) from checkpoint, the shape in current model is torch.Size([4, 128, 1]).

I hope you can answer. Thank you very much!

Problem when training with the semseg script

Sorry to bother you, but I have one question when I train the semseg script.

I have downloaded 2 datasets and unzipped them in 'data', then I have 'indoor3d_sem_seg_hdf5_data' and 'stanford_indoor3d'. However, when I run the script 'main_semseg.py', it shows 'ValueError: need at least one array to concatenate'.

Did I miss some steps? Could you help me? Thanks.

Stanford shapenet site certificate expired

--2020-08-01 13:22:41-- https://shapenet.cs.stanford.edu/media/shapenet_part_seg_hdf5_data.zip
Resolving shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)... 171.67.77.19
Connecting to shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)|171.67.77.19|:443... connected.
ERROR: cannot verify shapenet.cs.stanford.edu's certificate, issued by ‘/C=US/ST=MI/L=Ann Arbor/O=Internet2/OU=InCommon/CN=InCommon RSA Server CA’:
Issued certificate has expired.
To connect to shapenet.cs.stanford.edu insecurely, use `--no-check-certificate'.

The returned feature after knn is not contiguous, slowing down the forward process tremendously.

This code
https://github.com/AnTao97/dgcnn.pytorch/blob/8d287d92479a79fc6ff5f70b5ad7eacbf3ecdd38/model.py#L60
should be modified as

 feature = torch.cat((feature-x, x), dim=3).permute(0, 3, 1, 2).contiguous()

Use this simple code to test the forward speed during training.

import torch
from model import DGCNN_partseg
from tqdm import tqdm

# minimal stand-in for the argparse namespace; only the fields DGCNN_partseg reads
class A:
    pass

args = A()
args.k = 16
args.emb_dims = 1024
args.dropout = 0.5

model = DGCNN_partseg(args, 10).cuda()
x = torch.randn((8, 3, 2048)).cuda()    # batch of 8 clouds with 2048 points each
l = torch.zeros((8, 16, 1)).cuda()      # category label input (all zeros, just for timing)

model.train()
for i in tqdm(range(1000)):
    out = model(x, l)

The following are the speed results on my computer (RTX 2080Ti):

  1. Not contiguous

100%|██████████| 1000/1000 [02:39<00:00, 6.27it/s]

  2. Contiguous

100%|██████████| 1000/1000 [00:46<00:00, 21.44it/s]

about partseg visualization

Thanks for sharing your work!
May I ask how to visualize the part segmentation results, and could you please share the code for it? I'm a beginner and this question has been bothering me for several days!
Many thanks!

Part Segmentation visualization

Hi,
I have already gone through all the steps for part segmentation and I want to visualize the final results. I saw you posted a really detailed explanation in issue #8 about how to visualize them, but I cannot figure out where the output file is that tells me which points belong to the same class. Could you please explain how to store the class information? Sorry for asking such a silly question; I'm a beginner at this and it has actually been bothering me for a few days :)
Thank you so much!
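
(Not the repository's method, but a hedged sketch of one way to keep the class information around: inside your evaluation loop, save each shape's xyz coordinates together with the predicted part label of every point. The names points and pred_labels are hypothetical stand-ins for arrays you would already have after taking the argmax over the part dimension of the network output.)

import os
import numpy as np

def save_predictions(points, pred_labels, shape_id, out_dir="outputs/partseg_vis"):
    # points: (N, 3) xyz array for one shape; pred_labels: (N,) predicted part label per point
    os.makedirs(out_dir, exist_ok=True)
    np.save(os.path.join(out_dir, f"shape_{shape_id}_xyz.npy"), points)
    np.save(os.path.join(out_dir, f"shape_{shape_id}_label.npy"), pred_labels)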

About CLS model

Hello,
I trained the network from source, but I can't reach 92.2% accuracy. Can you tell me how to reach that accuracy? Thank you.
Below are the training results.
Train 240, loss: 1.343585, train acc: 0.991972, train avg acc: 0.987482
Test 240, loss: 1.442848, test acc: 0.921394, test avg acc: 0.894500
Train 241, loss: 1.344278, train acc: 0.991463, train avg acc: 0.987030
Test 241, loss: 1.440947, test acc: 0.920989, test avg acc: 0.894244
Train 242, loss: 1.343533, train acc: 0.991565, train avg acc: 0.987816
Test 242, loss: 1.445475, test acc: 0.921394, test avg acc: 0.896413
Train 243, loss: 1.343830, train acc: 0.990650, train avg acc: 0.986168
Test 243, loss: 1.442397, test acc: 0.925041, test avg acc: 0.899953
Train 244, loss: 1.342421, train acc: 0.991463, train avg acc: 0.987199
Test 244, loss: 1.445072, test acc: 0.921394, test avg acc: 0.887744
Train 245, loss: 1.342044, train acc: 0.991565, train avg acc: 0.986175
Test 245, loss: 1.439755, test acc: 0.920989, test avg acc: 0.890576
Train 246, loss: 1.344775, train acc: 0.990955, train avg acc: 0.986748
Test 246, loss: 1.443565, test acc: 0.923825, test avg acc: 0.901326
Train 247, loss: 1.342665, train acc: 0.992683, train avg acc: 0.989145
Test 247, loss: 1.442239, test acc: 0.924230, test avg acc: 0.895576
Train 248, loss: 1.341297, train acc: 0.992276, train avg acc: 0.988662
Test 248, loss: 1.441331, test acc: 0.923825, test avg acc: 0.892285
Train 249, loss: 1.339861, train acc: 0.992785, train avg acc: 0.988616
Test 249, loss: 1.440879, test acc: 0.923015, test avg acc: 0.894663

Question about the features returned by get_graph_feature

Hello, I am a beginner who has just entered this field. In equation (8) of the original paper, the authors apply a separate function (weight) to (xj - xi) and to xi, but I cannot find anything corresponding to these weights in your code. I'm confused and hope you can tell me why. Many thanks!
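
(For what it's worth, one way to see where the two weights end up: applying a single linear map, i.e. the 1x1 convolution, to the concatenation of (xj - xi) and xi is the same as applying two separate weight matrices, one to each part, because the two halves of the weight matrix play the roles of the paper's theta and phi. A small numerical check with illustrative sizes:)

import torch

C_in, C_out, N = 3, 8, 5

W = torch.randn(C_out, 2 * C_in)            # weight of a 1x1 conv / linear layer
theta, phi = W[:, :C_in], W[:, C_in:]       # the two halves act as theta and phi in Eq. (8)

xi = torch.randn(N, C_in)
xj = torch.randn(N, C_in)

concat = torch.cat([xj - xi, xi], dim=1)    # what get_graph_feature concatenates
out_concat = concat @ W.t()
out_split = (xj - xi) @ theta.t() + xi @ phi.t()

print(torch.allclose(out_concat, out_split, atol=1e-6))   # True: same operation, written two ways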

Training | Testing in seg

Fold    Training (Area #)    Testing (Area #)
1       1, 2, 3, 4, 6        5
2       1, 3, 5, 6           2, 4
3       2, 4, 5              1, 3, 6
Certain areas in the dataset represent parts of buildings with similarities in their appearance and architectural features, thus we define standard training and testing splits so that no areas from similarly looking buildings appear in both. We split the 6 areas in the dataset as per the Table below and follow a 3-fold cross-validation scheme.
This is copied from the dataset's documentation. So I think using the 5th area as the test set instead of the 6th would be better.

Results of ShapeNet dataset

Hi, I just have a question regarding the ShapeNet dataset.
Is the official result for the ShapeNet dataset based on the test data or the validation data?
Thank you.

Question about the KNN input channel

Thank you for the PyTorch implementation. I was looking at model.py and found that the input channels to the kNN may vary.

def get_graph_feature(x, k=20, idx=None, dim9=False):
    batch_size = x.size(0)
    num_points = x.size(2)
    x = x.view(batch_size, -1, num_points)
    if idx is None:
        if dim9 == False:
            idx = knn(x, k=k)            # (batch_size, num_points, k); neighbours over all channels
        else:
            idx = knn(x[:, 6:], k=k)     # 9-dim S3DIS input: neighbours over the last 3 channels only

I am wondering when I should pass all the features through the kNN and when I should pass only some of them.
(In S3DIS, each point is 9-dimensional: XYZ, RGB, and normalized XYZ.)

about data augmentation

Hi, @antao97 ,

It seems that your code does not implement data augmentation (e.g. random translation, random anisotropic scaling). With augmentation added, I think your implementation would give even higher accuracy than the official one. Do you plan to add it?

Thanks~
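
(For reference, a minimal sketch of the augmentations mentioned above, random anisotropic scaling and random translation, applied per point cloud. The ranges are common choices in PointNet-style pipelines, not values taken from this repository.)

import numpy as np

def augment_pointcloud(points, scale_low=0.8, scale_high=1.25, shift_range=0.1):
    # points: (N, 3) array of xyz coordinates for one cloud
    scale = np.random.uniform(scale_low, scale_high, size=(1, 3))      # anisotropic scaling
    shift = np.random.uniform(-shift_range, shift_range, size=(1, 3))  # random translation
    return points * scale + shift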

Part segmentation pretrained model

Hi,
I want to run the evaluation script with the pretrained model for the class 'car', but I couldn't find the pretrained model for car. Could you please upload the car pretrained model? Thank you so much :)

Some questions about part-seg

Dear author,
Sorry to bother you; I'd like to ask some questions that confuse me. How do you output the accuracy and IoU of each category in part-seg? I see that only the mIoU is calculated in your code. How can I change the code so that each training epoch also outputs the IoU of each category? I am a coding beginner, so I hope you can give me some help.
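
(Not the author's answer, but a hedged sketch of how per-category IoU is usually reported: given one IoU value per test shape, for example from calculate_shape_IoU, and the category index of each shape, group the shape IoUs by category and average them. ShapeNet Part has 16 object categories.)

import numpy as np

def per_category_iou(shape_ious, shape_categories, num_categories=16):
    # shape_ious: one IoU value per test shape
    # shape_categories: category index of each shape, in the same order
    shape_ious = np.asarray(shape_ious, dtype=np.float64)
    shape_categories = np.asarray(shape_categories)
    for c in range(num_categories):
        mask = shape_categories == c
        if mask.any():
            print(f"category {c}: IoU {shape_ious[mask].mean():.4f} over {int(mask.sum())} shapes")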

CLS Model

Hi @antao97

Just want to double-check: the classification model and training scripts are identical to Yue's original branch, right?
I've checked and found no differences.

About Part Seg

Hello!
I want to ask how many epochs it took to reach the best mIoU (85.2%) when you trained the model.
Also, I noticed this line (line 117):
seg = seg - seg_start_index
I don't understand the purpose of this line; could you please explain?

Overfitting problems

Thank you very much for your code.

When I used your DGCNN model, I ran into overfitting problems. The mF1 metric on the training set reaches nearly 90%, but it is only about 40% on the test set. Weight decay and dropout do not help, and the same thing happens across multiple runs. Could you help me solve it? Thanks in advance!

Results on part segmentation

Hi, I have another question to ask you.
I am currently using your part-seg model, but I don't seem to get the IoU stated in the paper.
However, you have said in other posts that you re-ran the same experiment multiple times, is that correct?
Why does that make the model perform better, and what is the difference between increasing the number of epochs and re-running the same experiment multiple times?
Also, how many times did you re-run the experiment to get the reported values?
Thank you.

No of Epochs in cls modelnet40

Thanks.
I trained the same network but am not getting 93.3% accuracy. Could you tell me how many epochs are needed for that? Also, what is the use of the T-Net (transform network)?

Question about hyper parameters

Thanks for your PyTorch implementation of DGCNN; it's very helpful. I wonder how you set the hyperparameters for the classification task on ModelNet40. I used the default settings but couldn't reach the performance shown in the README; I only got 92.7 overall accuracy and 89.2 mean class accuracy, and other adjustments helped only a little. I would be glad if you could tell us how to set the parameters, thanks a lot!

Batch size

Hi, thank you for providing the PyTorch version!
I am currently training with the PyTorch DGCNN model, but due to the size of my dataset and model I am trying to train with stochastic gradient descent (batch size = 1).
However, this does not seem straightforward because of the use of nn.BatchNorm1d, and I get errors when I set batch_size=1.
Is there any solution or workaround so that I can train my model with batch size 1?
Thank you
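
(Not something this repository provides, but one common workaround is to swap the BatchNorm layers for GroupNorm, whose statistics do not depend on the batch dimension, so a batch size of 1 works; another is gradient accumulation over several batch-size-1 steps. A hedged sketch of the first option; the group count is illustrative, must divide each layer's channel count, and the new normalization layers start from fresh parameters.)

import torch.nn as nn

def bn_to_gn(module, num_groups=8):
    # recursively replace BatchNorm1d/2d layers with GroupNorm over the same channels
    for name, child in module.named_children():
        if isinstance(child, (nn.BatchNorm1d, nn.BatchNorm2d)):
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            bn_to_gn(child, num_groups)
    return module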
