
if-net's Issues

"508 heavily distorted objects" and shapenet preprocessing

Thank you for releasing and maintaining your code!

What's the difference between the 15 meshes to skip in "voxelization_32.npy" and the 508 heavily distorted objects mentioned in your paper?

Also, do you mind elaborating on the watertight mesh preprocessing in Wu and Wang et al. (DISN)? I wasn't able to find any details in their paper.

Problem with building dependencies

Hi Julian,

Thanks for providing the code for IF-Net. I ran into a compilation issue with the dependencies.

I fixed it with the following two changes to the setup.py file in libmesh (a sketch of the resulting file is shown below):

added import numpy
changed setup(...) to setup(name='libmesh', ext_modules=cythonize("*.pyx"), include_dirs=[numpy.get_include()])
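
For reference, a minimal sketch of the resulting setup.py with both changes applied (assuming a setuptools/Cython based file, which may differ slightly from the original):

# libmesh/setup.py -- sketch only, with the two changes applied
from setuptools import setup
from Cython.Build import cythonize
import numpy

setup(name='libmesh',
      ext_modules=cythonize("*.pyx"),
      include_dirs=[numpy.get_include()])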

Might be helpful for other users of the project.

Cheers!

Single-View Human Reconstruction

The reconstruction results of your approach are amazing!
Could you provide an inference example for single-view human reconstruction?

conda create env fail

Hi Julian, I followed your instructions to create the env for if-net, but it fails with this error:

Preparing transaction: done
Verifying transaction: done
Executing transaction: failed
ERROR conda.core.link:_execute(502): An error occurred while installing package 'conda-forge::cloudpickle-1.3.0-py_0'.
FileNotFoundError(2, "No such file or directory: '/home/yuhan/anaconda3/envs/if-net/bin/python3.7'")
Attempting to roll back.

Rolling back transaction: done

FileNotFoundError(2, "No such file or directory: '/home/yuhan/anaconda3/envs/if-net/bin/python3.7'")

Could you please help? Or is there another way to create the same env? Thank you.

Query point p and surrounding points with distance d

Hi @jchibane, I noticed that the value of d differs between models, e.g. 0.035 in ShapeNet32Vox and 0.0722 in SVR. May I ask how the value of d is decided? Also, since the convolutions already extract deep features and should capture information from neighbours, is it necessary to add the additional neighbouring points? Thanks.

Human reconstruction from a point cloud captured with an Azure Kinect

Hi,

Since there was no script for human reconstruction, I wrote a script and followed the pipeline in the readme file. These are the results I get, which are nowhere near as good as the examples you have provided in the paper.

Here is my script (I put it in the main directory of ifnet and ran it from there; the paths need to be corrected):

kiya_recounstruct_pc.txt

Here is the point cloud I used (it's a CSV file):

frame_20.txt

I should mention that for the reconstruction to work, the point cloud should have a specific orientation (since conv layers are not rotation invariant): the head should point in the positive y direction, and the human should face positive z (I guess). I have done this for my point cloud.

This is my result with default parameters (points sampled from the point cloud: 3,000; input res: 125; output res: 256):

deafult_params_ifnet

This is the best result I got, which is just a little better (points sampled from the point cloud: 20,000; input res: 256; output res: 400):

best_result_ifnet

I was wondering if I'm doing something wrong, and which hyperparameters you used (res, sample size, retrieval_res, etc.) to reconstruct the examples in the paper (say, the BUFF dataset)?

Maybe the point clouds the network was trained on and the ones from the Kinect are different, since if I remember correctly you create your point clouds synthetically from the scans. But the difference in reconstruction quality is large, and I don't think this alone could explain the problem.

I was also wondering whether you used the same checkpoint you have provided here (for the SVR model) for your reconstructions?

thanks

Issue with custom PointCloud on SVR

When using my custom point cloud, I was getting a flat result, as shown below.

Screenshot (68)

Then I tested with the BMan0201-HD2-O04P05-S_3DSV_H250_W250_res256.npz file you have provided. The result was amazing.

To find the issue, I read the BMan0201-HD2-O04P05-S_3DSV_H250_W250_res256.off file and used its vertices to compute the npz file with the code below. The output was not flat as before, but the reconstruction was different and parts above a certain point were missing, as shown below.

So the issue seems to be with the voxelization. How do I do it correctly? Thank you very much for this awesome project.

Screenshot (70)

# imports presumably used (added for completeness)
import os
import numpy as np
import trimesh
from scipy.spatial import cKDTree as KDTree

bb_min = -0.5
bb_max = 0.5
res = 256

def create_grid_points_from_bounds(minimum, maximum, res):
    x = np.linspace(minimum, maximum, res)
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
    X = X.reshape((np.prod(X.shape),))
    Y = Y.reshape((np.prod(Y.shape),))
    Z = Z.reshape((np.prod(Z.shape),))

    points_list = np.column_stack((X, Y, Z))
    del X, Y, Z, x
    return points_list

def voxelized_pointcloud_sampling(path, grid_points, kdtree):
    off_path = path
    out_file = r"./" + off_path.split(os.sep)[-1] + ".npz"

    point_cloud = trimesh.load(path)
    vertices = point_cloud.vertices

    # mark the grid point nearest to each vertex as occupied
    occupancies = np.zeros(len(grid_points), dtype=np.int8)
    _, idx = kdtree.query(vertices)
    occupancies[idx] = 1

    compressed_occupancies = np.packbits(occupancies)

    np.savez(out_file, point_cloud=vertices, compressed_occupancies=compressed_occupancies,
             bb_min=bb_min, bb_max=bb_max, res=res)
    print(out_file)
    print('Finished {}'.format(path))

def convert_to_npz(path):
    grid_points = create_grid_points_from_bounds(bb_min, bb_max, res)
    kdtree = KDTree(grid_points)
    voxelized_pointcloud_sampling(path, grid_points, kdtree)
    

This is the test script I have used:

# net, exp_name and checkpoint are set up beforehand (e.g. as in generate.py);
# voxel_path = path to the .npz file created above; res = 256
gen = Generator(net, 0.5, exp_name, checkpoint=checkpoint, resolution=256, batch_points=1000000)
occupancies = np.unpackbits(np.load(voxel_path)['compressed_occupancies'])
input = np.reshape(occupancies, (res,) * 3)
data = {}
data["inputs"] = np.array(input, dtype=np.float32)
logits = gen.generate_mesh(data)
mesh = gen.mesh_from_logits(logits)

libmesh setup error

Hello

I get this when I run python setup.py build_ext --inplace

Compiling triangle_hash.pyx because it changed.
[1/1] Cythonizing triangle_hash.pyx
/home/logi/anaconda3/envs/if-net/lib/python3.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /home/logi/Desktop/ifnet/if-net/data_processing/libmesh/triangle_hash.pyx
  tree = Parsing.p_module(s, pxd, full_module_name)
running build_ext
building 'triangle_hash' extension
creating build
creating build/temp.linux-x86_64-3.7
gcc -pthread -B /home/logi/anaconda3/envs/if-net/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/logi/anaconda3/envs/if-net/include/python3.7m -c triangle_hash.cpp -o build/temp.linux-x86_64-3.7/triangle_hash.o
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
triangle_hash.cpp:626:10: fatal error: numpy/arrayobject.h: No such file or directory
  626 | #include "numpy/arrayobject.h"
      |          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.

Any help is appreciated.

pretrained model missing

Thanks for the codebase~
I tried to download your pretrained model, but it has been deleted. Would you upload it again or restore it from the trash?

Test own pointcloud for SVR

Thank you for your codebase.
Do you have a simple example going from one's own point cloud to a human shape using your SVR model?

Questions about the evaluation on point cloud completion

As I understand it, the network input is (grid_coords, input voxelized points). I found that grid_coords is sampled by boundary_sampling, but those points are sampled from the ground-truth .off model, and you add displacements to each point. It seems the evaluation is also done this way. But if I have a raw incomplete point cloud, how can I decide the grid_coords?
I don't know if I've misunderstood your work. Thank you so much.

Custom dataset

Hi. Thank you for the great work, Julian!

I need your advice for my research.

I want to reconstruct human mesh data as an implicit representation,

but this mesh has big holes.

I have been thinking about how I can make occupancy data from this.

Can your watertight mesh construction algorithm turn this into a filled occupancy grid?

Or can you think of any other good ways to do this?

Thank you again!

How to set the value of displacement in the decoder?

I am trying to run further experiments, based on your code, with other voxel resolutions.
If I want to change the value of the displacement in the decoder, do you have any advice? Or what rules should I follow?

Best regards,

Will you release the trained model?

Hi,

Thanks for your contribution.
I am interested in human reconstruction. Will you release the trained SVR model for single-view human reconstruction?

Thanks!

About Human Reconstruction

I am really interested in your work, especially human reconstruction using implicit functions. However, I didn't find any code or checkpoints for human reconstruction in this repo. Do you plan to release the relevant code and checkpoints in the future?

data_processing/convert_to_scaled_off.py

image

I cannot understand why you transform obj to off format.

Why not just change line 33 from "mesh = trimesh.load(path + '/isosurf.off', process=False)" to "mesh = trimesh.load(path + '/isosurf.obj', process=False)"?

Is there some error when loading an obj file? Looking forward to your reply.

Best,
Ying

error when building triangle_hash

copying build/lib.linux-x86_64-3.7/utils/libmesh/triangle_hash.cpython-37m-x86_64-linux-gnu.so -> utils/libmesh
error: could not create 'utils/libmesh/triangle_hash.cpython-37m-x86_64-linux-gnu.so': No such file or directory

errors when converting obj to off

Dear Julian
image
image

I noticed the commented-out command line above and tried uncommenting it, but it still fails. My environment is Ubuntu 18, and I found that I can't find the xvfb-run package. Help!

human dataset

I really appreciate your work. I want to ask where I can get the preprocessed human dataset, or how I can process my own dataset; the readme file seems to only explain how to deal with the ShapeNet dataset.

about the trained models

Hi,

I really appreciate your great work. I am following the commands to prepare the data and train, and it has been taking a long time.
Will you release the trained models? I would like to see the results first.

Thanks!

question about boundary_sampling

# quoted from the repo's boundary sampling script; the imports below are the ones it
# presumably relies on (args and sample_num come from the script's argparse setup)
import os
import traceback
import numpy as np
import trimesh
import implicit_waterproofing as iw

def boundary_sampling(path):
    try:

        if os.path.exists(path + '/boundary_{}_samples.npz'.format(args.sigma)):
            return

        off_path = path + '/isosurf_scaled.off'
        out_file = path + '/boundary_{}_samples.npz'.format(args.sigma)

        mesh = trimesh.load(off_path)
        points = mesh.sample(sample_num)

        # add Gaussian displacements to the surface samples, then build the query coordinates
        boundary_points = points + args.sigma * np.random.randn(sample_num, 3)
        grid_coords = boundary_points.copy()
        grid_coords[:, 0], grid_coords[:, 2] = boundary_points[:, 2], boundary_points[:, 0]

        grid_coords = 2 * grid_coords

        occupancies = iw.implicit_waterproofing(mesh, boundary_points)[0]

        np.savez(out_file, points=boundary_points, occupancies=occupancies, grid_coords=grid_coords)
        print('Finished {}'.format(path))
    except:
        print('Error with {}: {}'.format(path, traceback.format_exc()))

Hi, thank you for your excellent work! I have two questions about boundary_sampling; can you help me?

  1. Why do you exchange grid_coords[:, 0] and grid_coords[:, 2]? Do you want to rotate grid_coords?
  2. Regarding grid_coords = 2 * grid_coords: before this you normalize the range to [-0.5, 0.5] when generating isosurf_scaled.off, so why do you change it to [-1, 1] here? (My current guess is sketched right after this list.)
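
Here is a minimal sketch of my current guess, assuming the deep features are later queried with PyTorch's F.grid_sample (please correct me if this is not what actually happens):

# my assumption, not code from the repo: grid_sample expects coordinates in [-1, 1],
# and the last coordinate channel indexes the *last* spatial dimension of the
# (N, C, X, Y, Z) volume. So a point (x, y, z) in [-0.5, 0.5] has to be passed as
# (z, y, x) and scaled by 2 -- which would explain both lines.
import torch
import torch.nn.functional as F

volume = torch.rand(1, 1, 32, 32, 32)          # (N, C, X, Y, Z) feature/occupancy grid
points = torch.rand(1, 100, 3) - 0.5           # query points (x, y, z) in [-0.5, 0.5]

grid = points.clone()
grid[..., 0], grid[..., 2] = points[..., 2], points[..., 0]   # swap x and z
grid = 2 * grid                                               # [-0.5, 0.5] -> [-1, 1]

features = F.grid_sample(volume, grid[:, :, None, None, :])   # (1, 1, 100, 1, 1)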

Split file

I am having trouble because of the split file. After inspecting the file, each item contains an empty list:

train
[]

test
[]

val
[]

I am running this with Python 3.7.
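
For reference, this is the kind of content I expected to find; a hypothetical sketch of writing a non-empty split file, assuming each key holds a list of per-sample paths (the entries below are made up, and the exact entry format expected by the data loader may differ):

import numpy as np

# hypothetical example of a non-empty split file
np.savez('split.npz',
         train=['shapenet/data/03001627/sample_0001'],
         test=['shapenet/data/03001627/sample_0002'],
         val=['shapenet/data/03001627/sample_0003'])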

Warnings occur when installing libraries

Hi Julian, when I follow the commands to install the libmesh and libvoxelize libraries, some warnings are shown. Do they matter, or can I just ignore them? Thank you.

When installing libmesh:

Compiling triangle_hash.pyx because it changed.
[1/1] Cythonizing triangle_hash.pyx
/home/yuhan/anaconda3/envs/if-net/lib/python3.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /disk2/yuhan/Proj/if-net/data_processing/libmesh/triangle_hash.pyx
  tree = Parsing.p_module(s, pxd, full_module_name)
running build_ext
building 'triangle_hash' extension
creating build
creating build/temp.linux-x86_64-3.7
gcc -pthread -B /home/yuhan/anaconda3/envs/if-net/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/yuhan/anaconda3/envs/if-net/include/python3.7m -c triangle_hash.cpp -o build/temp.linux-x86_64-3.7/triangle_hash.o
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /usr/include/numpy/ndarraytypes.h:1809:0,
                 from /usr/include/numpy/ndarrayobject.h:18,
                 from /usr/include/numpy/arrayobject.h:4,
                 from triangle_hash.cpp:626:
/usr/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
 #warning "Using deprecated NumPy API, disable it by " \
  ^~~~~~~
triangle_hash.cpp: In function ‘PyObject* __pyx_f_13triangle_hash_12TriangleHash_query(__pyx_obj_13triangle_hash_TriangleHash*, __Pyx_memviewslice, int)’:
triangle_hash.cpp:3400:33: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_21; __pyx_t_6+=1) {
                       ~~~~~~~~~~^~~~~~~~~~~~
triangle_hash.cpp:3423:33: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_21; __pyx_t_6+=1) {
                       ~~~~~~~~~~^~~~~~~~~~~~
creating build/lib.linux-x86_64-3.7
g++ -pthread -shared -B /home/yuhan/anaconda3/envs/if-net/compiler_compat -L/home/yuhan/anaconda3/envs/if-net/lib -Wl,-rpath=/home/yuhan/anaconda3/envs/if-net/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.7/triangle_hash.o -o build/lib.linux-x86_64-3.7/triangle_hash.cpython-37m-x86_64-linux-gnu.so
copying build/lib.linux-x86_64-3.7/triangle_hash.cpython-37m-x86_64-linux-gnu.so ->

When installing libvoxelize:

Compiling voxelize.pyx because it changed.
[1/1] Cythonizing voxelize.pyx
/home/yuhan/anaconda3/envs/if-net/lib/python3.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /disk2/yuhan/Proj/if-net/data_processing/libvoxelize/voxelize.pyx
  tree = Parsing.p_module(s, pxd, full_module_name)
running build_ext
building 'voxelize' extension
creating build
creating build/temp.linux-x86_64-3.7
gcc -pthread -B /home/yuhan/anaconda3/envs/if-net/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/yuhan/anaconda3/envs/if-net/include/python3.7m -c voxelize.c -o build/temp.linux-x86_64-3.7/voxelize.o
voxelize.c:2648:12: warning: ‘__pyx_f_8voxelize_test_triangle_aabb’ defined but not used [-Wunused-function]
 static int __pyx_f_8voxelize_test_triangle_aabb(__Pyx_memviewslice __pyx_v_boxcenter, __Pyx_memviewslice __pyx_v_boxhalfsize, __Pyx_memviewslice __pyx_v_triverts) {
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
creating build/lib.linux-x86_64-3.7
gcc -pthread -shared -B /home/yuhan/anaconda3/envs/if-net/compiler_compat -L/home/yuhan/anaconda3/envs/if-net/lib -Wl,-rpath=/home/yuhan/anaconda3/envs/if-net/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.7/voxelize.o -o build/lib.linux-x86_64-3.7/voxelize.cpython-37m-x86_64-linux-gnu.so
copying build/lib.linux-x86_64-3.7/voxelize.cpython-37m-x86_64-linux-gnu.so ->

Cannot find reference 'voxelize' in 'imported module data_processing.libvoxelize'

Hi,
When I run python data_processing/voxelized_pointcloud_sampling.py -res 128 -num_points 300, I get this problem:

Traceback (most recent call last):
File "data_processing/voxelized_pointcloud_sampling.py", line 1, in
import implicit_waterproofing as iw
File "/home/ubuntu/Documents/if-net/data_processing/implicit_waterproofing.py", line 3, in
from data_processing.libmesh.inside_mesh import check_mesh_contains
ModuleNotFoundError: No module named 'data_processing'
But the data_processing directory exists in the project.
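
A possible workaround I am considering (just a sketch, assuming the scripts are meant to be imported as part of the data_processing package from the repository root; there may be an intended way) is to put the repo root on the import path before the package-style import:

# hypothetical workaround, not part of the repo: make the repository root importable
# when launching `python data_processing/voxelized_pointcloud_sampling.py`
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)))

from data_processing.libmesh.inside_mesh import check_mesh_contains  # should now resolve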

errors of running SVR test inference

Hi, Julian

I really appreciate your great work!
I want to run the SVR test inference, but ran into some problems.
I downloaded your trained SVR model "checkpoint_epoch_6.tar" and the input data "test_inference_example".
I created the folder "if-net-master/experiments/iPC3000_dist-0.5_0.5_sigmas-0.2_0.015_v256_mSVR/checkpoints" and put "checkpoint_epoch_6.tar", "BMan0201-HD2-O04P05-S_3DSV_H250_W250_res256.npz" and "BMan0201-HD2-O04P05-S_3DSV_H250_W250_res256.off" all in that folder.
Then I ran the command:
python generate.py -pointcloud -pc_samples 3000 -std_dev 0.2 0.015 -res 256 -m SVR -checkpoint 6 -batch_points 100000
But something goes wrong:

Loaded checkpoint from: /home/ang/if-net-master/models/../experiments/iPC3000_dist-0.5_0.5_sigmas-0.2_0.015_v256_mSVR/checkpoints/checkpoint_epoch_6.tar
experiments/iPC3000_dist-0.5_0.5_sigmas-0.2_0.015_v256_mSVR/evaluation_6_@256/
<torch.utils.data.dataloader.DataLoader object at 0x7f33bc5d7dd0>
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "generate.py", line 63, in <module>
    gen_iterator(out_path, dataset, gen)
  File "/home/ang/if-net-master/generation_iterator.py", line 29, in gen_iterator
    for i, data in tqdm(enumerate(loader)):
  File "/home/ang/anaconda3/envs/if-net/lib/python3.7/site-packages/tqdm/std.py", line 1165, in __iter__
    for obj in iterable:
  File "/home/ang/anaconda3/envs/if-net/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 560, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/ang/anaconda3/envs/if-net/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 560, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/ang/if-net-master/models/data/voxelized_data_shapenet.py", line 55, in __getitem__
    occupancies = np.unpackbits(np.load(voxel_path)['compressed_occupancies'])
  File "/home/ang/anaconda3/envs/if-net/lib/python3.7/site-packages/numpy/lib/npyio.py", line 422, in load
    fid = open(os_fspath(file), "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'shapenet/data//04256520/8918d572cff6b15df36ecf951968a8b0/voxelized_point_cloud_256res_3000points.npz'

I'm really new to this, and I hope you can give me some advice. Thanks a lot!
2020-11-07 22-21-29folder
2020-11-07 22-22-04error

Questions on "Implicit Functions"

Hi, Julian

Your work is so interesting and amazing, and I've read your paper several times, along with some of its references.
But I'm really new to "implicit functions", and I don't fully understand what an implicit function is.
Could you please give me a brief explanation of implicit functions in 3D reconstruction and their role in it?
Or could you recommend some articles to help me understand implicit functions in 3D reconstruction?
I would really appreciate it!

Best wishes~

About the usage of `grid_coords`

Hi, good work! But I am a little confused about the usage of the variable grid_coords.
In the training part, it seems to come from the ground-truth mesh (while in the generation part it is created directly from a cube).
I thought that in SVR mode the IF-Net should only take sampled points as input, and I don't understand why there is a grid from the GT mesh. Can I use cube-created grid_coords in training? Thanks.

Voxel visualization

Hi,

I have been preparing the training data recently.
I am not sure if the generated voxels are correct.
Could you tell me how to visualize the voxel data?
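
For what it's worth, this is the kind of quick check I had in mind (only a sketch; the file name is a hypothetical example, and I'm assuming the voxelization is stored like the voxelized point cloud .npz files mentioned elsewhere, i.e. packed occupancies that unpack to a (res, res, res) grid):

# rough visualization sketch (my assumptions about file name and layout, not from the repo)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: needed for the 3D projection on older matplotlib

res = 128                                                    # resolution used during voxelization
voxel_path = 'voxelized_point_cloud_128res_300points.npz'    # hypothetical example file
occ = np.unpackbits(np.load(voxel_path)['compressed_occupancies'])
occ = occ.reshape((res,) * 3).astype(bool)

ax = plt.figure().add_subplot(projection='3d')
ax.voxels(occ)      # fine for small grids; very slow for 256^3
plt.show()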

How to train SVR

I am confused about how to train IF-Net in SVR mode. Since the human reconstruction data is not available now, I want to apply IF-Net to my own one-side-view point cloud.
In voxelized_data_shapenet.py:

else:
    voxel_path = path + '/voxelized_point_cloud_{}res_{}points.npz'.format(self.res, self.pointcloud_samples)
    occupancies = np.unpackbits(np.load(voxel_path)['compressed_occupancies'])
    input = np.reshape(occupancies, (self.res,)*3)
points = []
coords = []
occupancies = []
for i, num in enumerate(self.num_samples):
    boundary_samples_path = path + '/boundary_{}_samples.npz'.format(self.sample_sigmas[i])
    boundary_samples_npz = np.load(boundary_samples_path)
    boundary_sample_points = boundary_samples_npz['points']
    boundary_sample_coords = boundary_samples_npz['grid_coords']
    boundary_sample_occupancies = boundary_samples_npz['occupancies']
    subsample_indices = np.random.randint(0, len(boundary_sample_points), num)
    points.extend(boundary_sample_points[subsample_indices])
    coords.extend(boundary_sample_coords[subsample_indices])
    occupancies.extend(boundary_sample_occupancies[subsample_indices])

Should I change voxelized_point_cloud*.npz to a voxelized_oneside_point_cloud*.npz created from the one-side-view point cloud, and keep the boundary_*_samples.npz the same, i.e. created from the ground-truth mesh?

loss value

Dear author,

I retrained the ShapeNet model with a sparse point cloud as input. The initial loss values are about 20000. Is this normal? The loss seems really large to me.

Do you remember the final loss value?

Best regards,
Yingjie CAI

pretrained models are not available

Hi!

This is really amazing work!

But when I try to download the pretrained models, the link seems to have expired. Would you mind uploading the models again? Thanks a lot.

Best regards.

voxelize.py vs voxelized_pointcloud_sampling.py

Hi, Julian
How are you doing? Thanks for sharing your excellent work with the public.

I have a question about voxelization.
Do voxelize.py and voxelized_pointcloud_sampling.py do exactly the same thing but with different code?
I see you use voxelize.py for the voxel super-resolution work and voxelized_pointcloud_sampling.py for point cloud completion.
For generating the input occupancy volume, is there any difference between the two?
Is the occupancy volume from voxelize.py obtained directly from the mesh, so a voxel is occupied only if it crosses any triangle of the surface (the input occupancy volume is always dense)?
And does voxelized_pointcloud_sampling.py first sample a predefined number of points on the mesh, so the voxelized occupancy volume depends on both the number of points and the voxel resolution? In that case there could be empty voxels even where a voxel is supposed to lie on the surface (the input occupancy volume may be sparse)?

Yet another question: for the voxel super-resolution work, I see from the paper that a voxel resolution of 128 gives better reconstruction. Is that what you want to show in the paper, that a higher input voxel resolution gives better reconstruction (super-resolution)?

For the three tasks of voxel super-resolution, point cloud completion and SVR, all the networks take an input occupancy volume (but at different resolutions) and output occupancies, right?

Thanks a lot for your time.

Best regards

How to use the pretrained models to perform inference on my own data ?

Hello

After looking at the readme and some of the issues, I still don't know how I can use your pretrained models to reconstruct a human from a single-view point cloud (I captured the point cloud using a Kinect).

I am also confused about the names of the pretrained models. From the paper it seems that you used 2 datasets for training the human reconstruction, and ShapeNet for reconstructing solid objects. Does this mean I should use the model named SVR to do human reconstruction?

Any help would be appreciated.
thanks

AttributeError: 'Trainer' object has no attribute 'val_data_iterator'

I can't find val_data_iterator. The code is in training.py:
def compute_val_loss(self):
    self.model.eval()

    sum_val_loss = 0
    num_batches = 15
    for _ in range(num_batches):
        try:
            val_batch = self.val_data_iterator.next()
        except:
            self.val_data_iterator = self.val_dataset.get_loader().__iter__()
            val_batch = self.val_data_iterator.next()

        sum_val_loss += self.compute_loss(val_batch).item()

    return sum_val_loss / num_batches

"IndexError: index 0 is out of bounds for axis 0 with size 0" when running voxelized_pointcloud_sampling.py

Hi, Julian

I know you've been really busy recently, sorry to bother you!
I found something strange when running these 2 commands:
(1) python data_processing/convert_to_scaled_off.py
(2) python data_processing/voxelized_pointcloud_sampling.py -res 256 -num_points 3000

When I use the ShapeNet models, the above 2 commands run correctly and generate the correct files, such as isosurf.off, isosurf_scaled.off, and voxelized_point_cloud_256res_3000points.npz.

BUT, when I use your released test models (e.g. BMan0201-HD2-O04P05-S_3DSV_H250_W250_res256.off) or my own single-view human point cloud models (I've already renamed them to "isosurf.off" to run the commands), the above 2 commands fail:

(1) When running python data_processing/convert_to_scaled_off.py, the errors are:

$ python data_processing/convert_to_scaled_off.py
111
Error with shapenet/data/123/BMan0201-HD2-O04P05-S_3DSV_H250_W250_res256
Finished shapenet/data/123/BMan0201-HD2-O04P05-S_3DSV_H250_W250_res256

PS: I added 2 lines of code in "convert_to_scaled_off.py" to print some debug output; the screenshots are below.
'111' is printed, which shows that the file "isosurf.off" is loaded,
but mesh.bounds[0] cannot be printed, so I guess something goes wrong there.
2020-11-14 19-45-20-codes
2020-11-14 19-53-31-Error

(2) When running python data_processing/voxelized_pointcloud_sampling.py -res 256 -num_points 3000, the errors are:

$ python data_processing/voxelized_pointcloud_sampling.py -res 256 -num_points 3000
Error with shapenet/data/123/BMan0201-HD2-O04P05-S_3DSV_H250_W250_res256/: Traceback (most recent call last):
  File "data_processing/voxelized_pointcloud_sampling.py", line 29, in voxelized_pointcloud_sampling
    point_cloud = mesh.sample(args.num_points)
  File "/home/ang/anaconda3/envs/if-net/lib/python3.7/site-packages/trimesh/base.py", line 2132, in sample
    samples, index = sample.sample_surface(mesh=self, count=count)
  File "/home/ang/anaconda3/envs/if-net/lib/python3.7/site-packages/trimesh/sample.py", line 52, in sample_surface
    tri_origins = tri_origins[face_index]
IndexError: index 0 is out of bounds for axis 0 with size 0

2020-11-14 19-54-57-IndexError

I've been searching for related information on Google, but still cannot find a way to solve this.
So I need your advice. Sorry again to bother you! Thanks a lot!

Best Wishes!

Chamfer L2 * 10^-2

Hi @jchibane, I notice that you report the Chamfer L2 results with a ×10^-2 factor; may I ask why? Where does this 10^-2 come from? Thanks!
