
bsp-net-pytorch's People

Contributors

czq142857, naynasa


bsp-net-pytorch's Issues

SVR from RGB image input using pretrained model

Hi,

Thank you for making the code available publicly. I am interested in using your pretrained model to reconstruct a 3D mesh from an RGB image (for testing). The BSP_SVR class in modelSVR.py has a method test_mesh_obj_material() that generates a 3D mesh with color/material in .obj format, which is the format I need. Can this method be used to reconstruct a mesh directly from an image input? This method requires pixel data (self.data_pixels) as input in the following line of code.
for t in range(config.start, min(len(self.data_pixels), config.end)):

In your code, self.data_pixels is extracted from the hdf5 files. For new test images for which we don't have the hdf5 files, could you please let us know what kind of data processing we need to do to build self.data_pixels from an RGB image input?
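For context, here is the kind of preprocessing I have been assuming would be needed (a rough sketch of my own; the 128x128 resolution, grayscale conversion, and [0, 1] scaling are guesses, not taken from your pipeline):

import numpy as np
from PIL import Image

# Hypothetical preprocessing: turn one RGB test image into an array that could
# stand in for a single entry of self.data_pixels. Resolution, grayscale
# conversion and value range are assumptions about what the pretrained SVR
# model expects.
def image_to_data_pixels(path, size=128):
    img = Image.open(path).convert("L")            # grayscale, like the rendered views
    img = img.resize((size, size), Image.BILINEAR)
    pixels = np.asarray(img, dtype=np.float32) / 255.0
    return pixels[np.newaxis, np.newaxis, :, :]    # add batch and view/channel dims

Is something along these lines correct, or does the model expect a different resolution, cropping, or value range?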

thanks

Net outs collapse to zero values in continuous phase on 2D experiments?

Hi there!

I have been experimenting with the 2D version of your BSP_SVR model in a PyTorch implementation of mine. I have been able to reproduce the results on the toy dataset. However, when I switched to more complex binary masks (such as segmentation pipeline outputs), I'm noticing that the bsp_loss_sp loss initially converges and then, after a few epochs, collapses to a constant value in the continuous phase. When this happens, the net_outs tend to collapse to a tensor of all zero values.
However, initializing the model from these weights and continuing to train in the discrete phase still seems to move in the expected direction, although the losses converge very slowly.

I've noticed that the model only works well with a very low learning rate (0.00002) in the 2D case. However, it appears the model takes a very long time to train using this learning rate on more complex binary masks. Do you have any suggestions or insights as to why the model collapses to all zero values (in the 2D case where binary masks are more complex than the toy dataset)? And on how this could be tackled?
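For what it's worth, this is roughly how I detect the collapse during training (a small monitoring sketch of my own; net_out stands for the tensor I called net_outs above):

import torch

def is_collapsed(net_out, eps=1e-7):
    # True once the network output has numerically collapsed to all zeros
    return bool(net_out.detach().abs().max() < eps)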

Looking forward to your reply. Thank you.

Sensitive to initial value?

Hi Zhiqin,

I really liked your work and thanks for the awesome code.

I'm trying to extend BSP-Net to point clouds, but training becomes unstable and sensitive to changes in "p_dim", "c_dim", and the torch random seed.

The first training iteration sometimes generates an empty or a completely solid shape and kills all the gradients, probably due to the clamping operations in the generator.
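To illustrate what I mean by clamping killing the gradients (a minimal standalone example, not your actual generator code):

import torch

# Entries pushed outside the clamp range receive zero gradient, so they can
# never recover once the shape becomes fully empty or fully solid.
x = torch.tensor([-2.0, 0.5, 3.0], requires_grad=True)
y = torch.clamp(x, 0.0, 1.0).sum()
y.backward()
print(x.grad)  # tensor([0., 1., 0.])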

I don't know if you have faced similar issues before; any suggestions are really appreciated.

Best,

Daxuan

sample files

Hi,

Thank you for the code. I trained the AE model using train_ae.sh and the prepared ShapeNet dataset you provided. I did not make any changes to the code, except pointing to the correct data_dir in main.py. After training with sizes 16 and 32, I looked at the sample files generated during training and testing. The generated sample .ply files do not have any vertices; all of them contain just the header shown below. Is this the expected output? If not, could you please tell me how I can fix the problem and get the correct sample output with vertex info? The training logs are also shown below.

Also, can the 4 progressive training steps (size 16 and 32 with 8M iterations and size 64 with 16M iterations) be run in parallel or do they have to be trained sequentially as shown in train_ae.sh?

Thank you.

---------sample ply file generated during training and testing----------
ply
format ascii 1.0
element vertex 0
property float x
property float y
property float z
element face 0
property list uchar int vertex_index
end_header

The following is a snippet of the training logs:
----------net summary----------
training samples 35019

32 Epoch: [ 0/228] time: 171.8615, loss_sp: 0.252307, loss_total: 0.259578
32 Epoch: [ 1/228] time: 344.7302, loss_sp: 0.252319, loss_total: 0.259573
32 Epoch: [ 2/228] time: 517.7831, loss_sp: 0.252298, loss_total: 0.259548
32 Epoch: [ 3/228] time: 690.9526, loss_sp: 0.252300, loss_total: 0.259559
32 Epoch: [ 4/228] time: 864.0516, loss_sp: 0.252307, loss_total: 0.259560

training time

Hi!

Thank you for sharing the code. Roughly how long does training take on the all-category data?

Questions about the segmentation experiment

Hi,
Thanks for sharing your good research.
In the segmentation experiment, you said that for each point sample, the label is assigned by measuring the distance to nearby primitives. Is there code for this in your repo?
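My rough guess at the procedure, in case it clarifies what I am asking (my own sketch, not code from your repo): assign each sampled point the part label of the nearest convex, approximating the distance to a convex by its largest signed plane distance.

import numpy as np

def label_points(points, convex_planes, convex_labels):
    # points: (N, 3); convex_planes: list of (K_i, 4) arrays [a, b, c, d]
    # describing the planes a*x + b*y + c*z + d <= 0 that bound each convex;
    # convex_labels: one part label per convex
    labels = np.empty(len(points), dtype=np.int64)
    for n, p in enumerate(points):
        dists = [np.max(planes[:, :3] @ p + planes[:, 3]) for planes in convex_planes]
        labels[n] = convex_labels[int(np.argmin(dists))]
    return labels

Is this roughly what you did, or is the distance measured differently?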

Phase 3, overlap loss

Thank you for your highly interesting work and for releasing the code. I have a few questions.

I see in the code that there is a training loss not described in the paper, the one where you encourage the values in T to move towards 0 or 1. How does this loss perform? If using phase 3, should it come after phase 0 training, or should we start from scratch with phase 3?
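To make sure I understand the extra loss, here is a generic formulation of the kind of term I have in mind (my own sketch, not necessarily the exact term in your code): a penalty such as T * (1 - T), which is zero exactly at 0 and 1 and maximal at 0.5.

import torch

def binarization_loss(T):
    # pushes every entry of T towards 0 or 1
    return (T * (1.0 - T)).mean()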

Also, I saw that the implementation of the overlap loss in the code seems different from what is described in the paper. What is the reason for this difference?

Finally, are the pretrained network weights you provided trained with or without the overlap loss?

small question

No such file or directory: 'model_checkpoint_path: "BSP_AE.model-228"'
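In case it helps: the error suggests that the whole first line of the checkpoint pointer file ('model_checkpoint_path: "BSP_AE.model-228"') is being used as a filename. This is the kind of parsing I assume is intended (a guess on my part; the pointer file name and location are assumptions):

import os

def resolve_checkpoint(checkpoint_dir):
    # read the pointer file, strip the key and the quotes, and join the bare
    # model name with the checkpoint directory
    with open(os.path.join(checkpoint_dir, "checkpoint")) as f:
        first_line = f.readline().strip()
    model_name = first_line.split('"')[1]   # -> BSP_AE.model-228
    return os.path.join(checkpoint_dir, model_name)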

Train loss

Hi,

Thank you for the code. I am using my own data to train the AE model and have run into some problems. When I train with size 16, the training loss looks normal, roughly between 0.001 and 0.002. But when I train with size 32, the loss is about 0.02, and with size 64 it is about 0.05. I want to ask whether these losses are normal. The batch size is 24 for all training.

SVR training with RGB images

Hi,

I am interested in testing your code with RGB images of different resolutions. Is it possible to use "modelSVR.py" with RGB images as well? What changes need to be made?
As for the image resolution, is it necessary to add further convolution levels?
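To make the question concrete, this is the kind of change I have in mind (a standalone sketch of my own, not the encoder from modelSVR.py; the channel counts and the assumption of a 128x128 single-channel baseline are mine):

import torch
import torch.nn as nn

class RGBEncoder(nn.Module):
    # first conv takes 3 channels for RGB; one extra stride-2 conv (conv0)
    # brings a 256x256 input down to the spatial size a 128x128 input would give
    def __init__(self, z_dim=256):
        super().__init__()
        self.conv0 = nn.Conv2d(3, 32, 4, stride=2, padding=1)     # 256 -> 128
        self.conv1 = nn.Conv2d(32, 32, 4, stride=2, padding=1)    # 128 -> 64
        self.conv2 = nn.Conv2d(32, 64, 4, stride=2, padding=1)    # 64 -> 32
        self.conv3 = nn.Conv2d(64, 128, 4, stride=2, padding=1)   # 32 -> 16
        self.conv4 = nn.Conv2d(128, 256, 4, stride=2, padding=1)  # 16 -> 8
        self.fc = nn.Linear(256 * 8 * 8, z_dim)

    def forward(self, x):
        for conv in (self.conv0, self.conv1, self.conv2, self.conv3, self.conv4):
            x = torch.relu(conv(x))
        return self.fc(x.flatten(1))

Would something like this be enough, or are other parts of the SVR pipeline tied to the original input size and channel count?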

Question about the processed data from ShapeNet

Hi ZhiQin!

Thanks for your work and releasing scripts.
I want to know the way you prepare the voxel data, I notice that there are "data_voxels" and "coords" needed for the 3d AE.
The "data_voxels" are the model voxel martrixs and the "coords" are float coordinates of the voxels. The "coords" are splited to 8 subsets, values of which are limited to[-0.4922, 0.4766]. After you get the point clouds from ShapeNet, how you normalize the data and how you voxelize the data after normalization?

Best,
hmax

About texture

Hi! Thank you so much for this wonderful work.

I'm interested in applying textures to the output meshes, as shown in the paper. The textures seem continuous across different convexes. May I ask how you do the UV mapping to apply the textures? Thank you!

Pretrained model of SVR

Thank you for your code. I have seen in other issues that the pretrained model was trained without the overlap loss and that the code of the generator class in modelSVR.py corresponds to Phase 1.
So, to produce better results for shape reconstruction, should I retrain the AE model on phase 0 → phase 3, change the code of the generator class in modelSVR.py, and then retrain the SVR model?

Train SVR with different models

Hi,

Can I train the SVR with 3D models and renderings created by me, while still using the pre-trained autoencoder you provide?
In this case my models would always be from the same classes used to train the autoencoder (chairs, tables, planes, ...).
