simpleig / geo-pifu
This repository is the official implementation of Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction.
Firstly, congratulations on your paper's acceptance!
I have been following this repository for code updates since I saw the preprint of your paper. I wonder when the training code for Geo-PIFu will be available?
I am training Geo-PIFu with the DeepHuman dataset (same as the authors).
The training data required for TrainDataset.py is
OBJ, RENDER, MASK, PARAM, UV_RENDER, UV_MASK, UV_NORMAL, and UV_POS.
And render_mesh.py provides only a few of them.
How are we supposed to get the other items?
Are we supposed to use the PIFu GitHub repo to get these?
And even if we train with the PIFu code, we don't have a UV texture map provided with the DeepHuman dataset.
So,
do we need to create textures with UVTextureConverter.UVConverter.create_texture() and then use the PIFu GitHub code to get the above?
Sorry for the doubts. I am new to all this.
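For anyone hitting the same question, a quick sanity check of which of the folders listed above are still missing can be scripted. This is a minimal sketch, assuming the subfolder names match the list from TrainDataset.py; "./deephuman_dataset" is a hypothetical dataset root, not a path from the repo.

```python
import os

# Hypothetical dataset root; folder names taken from the list above.
root = "./deephuman_dataset"
required = ["OBJ", "RENDER", "MASK", "PARAM",
            "UV_RENDER", "UV_MASK", "UV_NORMAL", "UV_POS"]

# Report which of the expected subfolders do not exist yet.
missing = [d for d in required if not os.path.isdir(os.path.join(root, d))]
print("missing folders:", missing)
```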
Hi, I have finished the data preprocessing following your instructions and started to train the model. However, I found that training Geo-PIFu is very time-consuming. The first stage, i.e., apps.train_shape_coarse,
has been running for 3 days but is still at the 5th epoch (30 epochs in total). The README claims this stage only takes 2 days, so what is the problem? I train with 4 TITAN RTX GPUs and set the batch size to 16; it should not be this slow, I think.
I'm using a GTX 1080 Ti (11 GB memory) to test the given model with the pre-trained checkpoints. When I run test_shape_iccv, it displays CUDA out of memory. Does the model require more than 11 GB of memory for inference?
Thanks in advance.
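One thing worth checking (a hedged suggestion, not a confirmed fix for this repo): if the test script builds an autograd graph during inference, wrapping the forward pass in torch.no_grad() avoids storing activations and usually lowers GPU memory use substantially. The Linear layer here is only a stand-in for the reconstruction network.

```python
import torch

# Stand-in model; in practice this would be the Geo-PIFu network.
model = torch.nn.Linear(128, 128)
x = torch.randn(4, 128)

# Inference without autograd bookkeeping: activations are not retained.
with torch.no_grad():
    y = model(x)

print(y.requires_grad)  # False: no autograd graph was built
```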
Hi, great work!
Is there demo code so that a single image can be reconstructed? I see the test code for the whole test set, but not for a single example.
Thanks
First, congratulations on the acceptance! We are really interested in your paper and hope to build on it in our future work. Could you please share your codebase as soon as possible?
First of all, thank you very much for your valuable work; I have found it inspiring!
I am trying to understand the code and I am reading the part where 3D features are queried according to the points sampled for a forward pass, after image features are pre-computed and before the 2D & 3D features are passed to the surface classifier. Here, I would like to ask a question about the use of normalized z values, e.g., in lines 145-165 of Geo-PIFu/geopifu/lib/model/HGPIFuNet.py.
Geo-PIFu/geopifu/lib/model/HGPIFuNet.py
Lines 145 to 165 in 0b57080
As in line 150, the 2D pixel-aligned features corresponding to the sampled query points are associated with normalized depth values (z_feat). However, in lines 156 and 159, the 3D features are queried based on depth values that are not yet normalized (z). May I ask what the considerations are behind using un-normalized z values for querying the 3D features?
I further looked into how the 2D & 3D features are combined in the surface classifier (line 162), but I do not see any special manipulation involving the depth values.
If I am missing or misunderstanding anything there, I will appreciate a lot if you could let me know. Thank you.
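For context on the question above (an illustration only, not the authors' code): PyTorch's F.grid_sample expects query coordinates normalized to [-1, 1] along each axis, so whether z needs an extra normalization step depends entirely on the convention used when the sampling grid is built. The shapes below are made up for the sketch.

```python
import torch
import torch.nn.functional as F

# (B, C, D, H, W) latent voxel features; shapes are illustrative assumptions.
feat3d = torch.randn(1, 8, 16, 16, 16)

# 100 query points already scaled into grid_sample's [-1, 1] convention.
pts = torch.rand(1, 100, 3) * 2.0 - 1.0
grid = pts.view(1, 1, 1, 100, 3)  # grid_sample wants (B, D, H, W, 3)

sampled = F.grid_sample(feat3d, grid, align_corners=True)
print(sampled.shape)  # torch.Size([1, 8, 1, 1, 100])
```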
How do we use your code to test our own images/videos? Thank you in advance.
Your paper looks super interesting and I'd love to try running it on my computer.
Any plans to publish the code?
While training with train_query.py we need a deepVoxels tensor.
That tensor can be created using get_deepVoxels() from TrainDataset.py
with the help of a NumPy file named "__index__deepVoxels.npy":
deepVoxels_path = "%s/%06d_deepVoxels.npy" % (self.opt.deepVoxelsDir, idx)
I have gone through render_mesh.py but couldn't find a way to create the deepVoxels .npy files.
Can anyone tell me how to create them?
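As far as I can tell, render_mesh.py only handles rendering; the deepVoxels features would come from the coarse 3D stage, not from the renderer. The sketch below only illustrates the save/load round trip behind the path template above; the (C, D, H, W) shape and the values are assumptions, not the repo's actual output.

```python
import numpy as np
import torch

# Hypothetical directory and index, matching the path template above.
deepVoxelsDir = "."
idx = 0
path = "%s/%06d_deepVoxels.npy" % (deepVoxelsDir, idx)

# Whatever network produces the latent voxel features, saving is just np.save.
np.save(path, np.zeros((8, 32, 32, 32), dtype=np.float32))  # (C, D, H, W) assumed

# ...and get_deepVoxels() would essentially load it back as a float tensor.
deepVoxels = torch.from_numpy(np.load(path)).float()
print(deepVoxels.shape)  # torch.Size([8, 32, 32, 32])
```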
Hi,
Thanks for the code. Can we use render_mesh.py with another dataset? I have a dataset that consists of only SMPL OBJ files and the corresponding mesh OBJ files.
May I know how to proceed with the data generation?
Thanks,
Sai
The instructions describe how to configure the conda environment using a file that does not exist in the current master HEAD:
the geopifu_requirements.yml file is missing.
conda env create -f geopifu_requirements.yml
conda activate geopifu
Are these instructions obsolete?
Excuse me, I would like to ask: what is the difference between the CD and PSD metrics for evaluating the quality of mesh reconstruction?
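On my own reading (not an authoritative answer for this repo): CD, the Chamfer distance, averages nearest-neighbor distances between two point clouds in both directions, while PSD, the point-to-surface distance, is one-sided, measuring distances from sampled points to the reconstructed mesh surface. A brute-force CD sketch:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).
    Brute-force sketch; real evaluations use KD-trees or GPU ops."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = a + np.array([0.1, 0.0, 0.0])  # b is a shifted by 0.1 along x
print(chamfer_distance(a, a))            # 0.0 for identical sets
print(round(chamfer_distance(a, b), 6))  # 0.2 (0.1 in each direction)
```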
I have tried every method to set up the environment from the provided opendr_requirements.yml file.
All methods were run from the Anaconda prompt (base).
I know you are busy, but my dream depends on this project.
I need to finish it somehow.
So please respond.
I am interested in your work. Your codebase is rich: it includes rendering, DeepHuman, and PIFu, which is beneficial for my research. As far as I know, PIFu needs additional rendering results. However, there is no dataset in the same format as the original PIFu's, and only the code is available. Both PIFu and Geo-PIFu use the TrainDatasetICCV class. What's the difference between TrainDataset and TrainDatasetICCV?
I have been trying for the past 3 days to create query points on Colab Pro, but it keeps crashing.
I also tried on my PC (a high-end gaming PC), but it still crashed.
The problem is that the code gets stuck at:
Geo-PIFu/geopifu/apps/prepare_shape_query.py
Line 225 in 0b57080
My solution
I noticed you did 3 things:
So I tested only one of them at a time (replacing the other 2 with the original PIFu defaults).
Then I tested 2 of them at a time.
The problem still continued.
Then I took the mesh (DeepHuman dataset #10198) and sampled points using the PIFu script:
It worked after I changed
Geo-PIFu/geopifu/apps/prepare_shape_query.py
Line 216 in 0b57080
to
sample_points = surface_points + np.random.normal(scale=5.0, size=surface_points.shape) #(as given in PIFu for Gaussian noise)
1. Can anyone tell me why this happened?
Please tell me how I can proceed further.
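For reference, the replacement line above is the PIFu-style spatial sampling scheme: query points are mesh surface samples perturbed by Gaussian noise (scale in dataset units). A self-contained sketch, where the uniform points are only a stand-in for real mesh surface samples:

```python
import numpy as np

# Stand-in for points sampled on the mesh surface (real code uses trimesh).
rng = np.random.default_rng(0)
surface_points = rng.uniform(-100.0, 100.0, size=(5000, 3))

# Perturb each surface point with Gaussian noise, as in the line above.
sample_points = surface_points + np.random.normal(scale=5.0, size=surface_points.shape)
print(sample_points.shape)  # (5000, 3)
```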