
pergamo's Introduction

PERGAMO

[Teaser image]

[Project website] [Dataset] [Video]

Abstract

Clothing plays a fundamental role in digital humans. Current approaches to animating 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: high computational run-time cost, which hinders their development, and the simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues we propose PERGAMO, a data-driven approach to learn a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match the real-world behaviour, and generalizes to unseen body motions extracted from motion capture datasets.

Install instructions

Python dependencies

The IGL Python bindings officially support and recommend Anaconda. However, the environment can also be set up using only pip by installing the IGL bindings from source.

The general steps are as follows (a quick sanity check is sketched after the list):

  1. Install PyTorch according to your system ( https://pytorch.org/get-started/locally/ )
  2. See the requirements.txt file for the required packages
    • This is usually done with pip install -r requirements.txt, but Anaconda may handle it differently
  3. Install the IGL bindings ( https://github.com/libigl/libigl-python-bindings )
  4. Install Kaolin ( https://kaolin.readthedocs.io/en/latest/notes/installation.html )
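
After these steps, a quick sanity check is to import the main dependencies. This is a minimal sketch; the printed versions and CUDA availability depend on your local setup:

# Sanity check for the environment set up above.
# Printed versions and CUDA availability depend on your local setup.
import torch
import igl      # libigl Python bindings
import kaolin

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("Kaolin:", kaolin.__version__)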

Models

  • You can download the weights from OneDrive. Place the weights folder from OneDrive into the data folder of this repository.
  • PERGAMO needs the SMPL body model. You can download it from the SMPL website. Rename the file from basicmodel_neutral_lbs_10_207_0_v1.1.0.pkl to smpl_neutral.pkl and save it under data/smpl/ (a quick load check is sketched below).
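
As a quick check that the renamed model is in place, the .pkl file can be opened directly. This is a minimal sketch; the latin1 encoding is needed because the file was pickled under Python 2, and unpickling it also requires the chumpy package to be installed:

import pickle

# Verify that the renamed SMPL model loads from the expected location.
# encoding="latin1" is needed because the file was pickled under Python 2;
# unpickling also requires the chumpy package.
with open("data/smpl/smpl_neutral.pkl", "rb") as f:
    smpl = pickle.load(f, encoding="latin1")

print(sorted(smpl.keys()))  # expect keys such as 'v_template', 'weights', 'f'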

Running the project

To run the reconstruction, please check out run_recons.sh.

To run the regression, there are two sets of three scripts. Please check out run_regression.sh to see how they work.

Visualizing regression results

The output is generated under data (test_sequence for the AMASS scripts, train/validation_sequence for the scripts on reconstructed sequences).

To visualize using Blender, load the .obj file with the option Geometry > Keep Vert Order. Then, add a Mesh Cache modifier to the loaded mesh. Change the type to PC2 and then load the .pc2 file adjacent to the .obj.
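
The same setup can be scripted from Blender's Python console. The sketch below assumes Blender's legacy .obj importer (Blender 3.x or earlier), where split_mode='OFF' corresponds to the Keep Vert Order option; the file paths are hypothetical:

import bpy

# Import the garment mesh, preserving vertex order (legacy .obj importer,
# Blender <= 3.x; split_mode='OFF' equals Geometry > Keep Vert Order).
bpy.ops.import_scene.obj(filepath="data/test_sequence/garment.obj", split_mode='OFF')
obj = bpy.context.selected_objects[0]

# Add a Mesh Cache modifier and point it at the .pc2 file next to the .obj.
mod = obj.modifiers.new(name="MeshCache", type='MESH_CACHE')
mod.cache_format = 'PC2'
mod.filepath = "data/test_sequence/garment.pc2"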

Datasets

You can download a dataset from OneDrive.

Structure

Each dataset has the following folder hierarchy:

DataDanXXXXX
├─ clips (video files)
| ├─ dan-X01.mp4
| ├─ dan-X02.mp4
| ├─ ...
├─ reconstruction_input
| ├─ dan-X01
| | ├─ dan-X01 (video frames)
| | ├─ dan-X01_expose
| | ├─ dan-X01_parsing
| | ├─ dan-X01_pifu
| | ├─ dan-X01_smpl
| ├─ dan-X02
| | ├─ ...
| ├─ ...
├─ reconstruction_output (reconstructed garment meshes)
| ├─ dan-X01
| ├─ dan-X02
| ├─ ...
├─ regressor_training_data
| ├─ train_sequences
| | ├─ meshes (reconstructed garment meshes in T-pose)
| | | ├─ dan-X01
| | | ├─ dan-X02
| | | ├─ ...
| | ├─ poses (encoded poses using the SoftSMPL encoding)
| | | ├─ dan-X01
| | | ├─ dan-X02
| | | ├─ ...
| ├─ validation_sequences (same structure as train)
├─ ...
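
For scripting against this layout, here is a minimal sketch that enumerates the per-sequence reconstruction inputs (DataDanXXXXX and the folder suffixes are taken from the tree above; adjust the root path to your download location):

from pathlib import Path

# Enumerate per-sequence reconstruction inputs in the layout shown above.
# "DataDanXXXXX" is the placeholder dataset name from this README.
root = Path("data/DataDanXXXXX")

for seq in sorted((root / "reconstruction_input").iterdir()):
    if seq.is_dir():
        # Each sequence holds its video frames plus per-method subfolders.
        for suffix in ("", "_expose", "_parsing", "_pifu", "_smpl"):
            print(seq / f"{seq.name}{suffix}")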

For reconstruction

Datasets for the reconstruction script are made by processing each frame with:

  • ExPose (the output is SMPL-X and needs to be converted to SMPL)
  • PifuHD
  • Self-Correction-Human-Parsing

The necessary files are provided in the reconstruction_input folder. We also provide reconstructed meshes for each dataset (reconstruction_output folder) and the same meshes in T-pose space (inside the meshes folder in regressor_training_data).

For training

Our regressors predict wrinkles (vertex displacements with respect to a template mesh) from SMPL poses encoded using the SoftSMPL encoding. We provide such encoded poses for the DataDanGrey dataset, as well as the scripts to generate this encoding from arbitrary SMPL parameters.
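
As a rough illustration of this setup (not the actual PERGAMO architecture; the pose-encoding dimension, vertex count, and layer sizes below are hypothetical), such a regressor maps an encoded pose to per-vertex displacements that are added to the garment template:

import torch
import torch.nn as nn

# Illustrative sketch only -- not the actual PERGAMO architecture.
# POSE_DIM and NUM_VERTS are hypothetical placeholders.
POSE_DIM, NUM_VERTS = 60, 5000

class WrinkleRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # MLP from encoded pose to per-vertex 3D displacements.
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, NUM_VERTS * 3),
        )

    def forward(self, pose_enc, template_verts):
        disp = self.net(pose_enc).view(-1, NUM_VERTS, 3)
        return template_verts + disp  # deformed garment vertices

pose = torch.randn(1, POSE_DIM)          # SoftSMPL-encoded pose (hypothetical size)
template = torch.zeros(1, NUM_VERTS, 3)  # garment template vertices
print(WrinkleRegressor()(pose, template).shape)  # torch.Size([1, 5000, 3])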

For regression

You can use AMASS sequences by placing the .npz files under data/test_sequence.

Alternatively, you can run the regression on sequences of SMPL poses saved as .pkl files. Check the set of scripts for reconstructed sequences.
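
For reference, AMASS .npz sequences store per-frame axis-angle body poses under the 'poses' key, alongside fields such as 'trans' and 'betas'. A minimal inspection sketch (the filename is hypothetical):

import numpy as np

# Inspect an AMASS sequence placed under data/test_sequence.
# "walk.npz" is a hypothetical filename.
seq = np.load("data/test_sequence/walk.npz")
print(seq.files)        # typically includes 'poses', 'trans', 'betas'
poses = seq["poses"]    # (num_frames, num_pose_params), axis-angle per joint
print(poses.shape)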

Citation

@article{casado2022pergamo,
    journal = {Computer Graphics Forum (Proc. of SCA)},
    title = {{PERGAMO}: Personalized 3D Garments from Monocular Video},
    author = {Casado-Elvira, Andrés and Comino Trinidad, Marc and Casas, Dan},
    year = {2022}
}

pergamo's People

Contributors

andrescasado, marccomino


pergamo's Issues

how to use events.out.tfevents.xxxxx?

I have downloaded DataDanBrown and ran reconstruction_script.py. Then I got many events.out.tfevents.xxxxx.xxxx.xxx files. Sorry, but I don't know how to use them.

How are the .pkl files in the poses of the training set obtained?

Hello, @AndresCasado
I'm very sorry to bother you again. Recently, I've encountered an issue regarding the .pkl files in the poses folder of my training dataset. Are these .pkl files simply obtained by processing the .ply files generated by the ExPose code through the smplx2smpl code, or are there further operations involved? Thank you for taking the time to read my message; I look forward to your reply.

The directory named '0001.jpg_094' which contains the results from running the expose-master code

Hello, @AndresCasado
I am in great need of your assistance. For each frame extracted from the original video dan-005, I applied the expose-master method but obtained results different from yours. Your outcomes were as follows:
[screenshot]
while mine were as such:
[screenshot]
After processing the original video through a series of steps, I am not obtaining the correct results, and I suspect there might be an issue here. The reconstructed garment looks like this:
[screenshot]
[screenshot]
Could you explain whether there is any special significance to the folder naming convention '0001.jpg_094'? Thank you for taking the time to read and respond to my message. I look forward to your reply.

DatosGreen doesn't match the expected filenames

Hello
I am trying to recreate the results using the DataDanGreen folder.

I got the following error when running run_recons.sh

RuntimeError:  Folder "smpl" does not exist for sequence "clips". Full path tested: "data/DataDanGreen/clips/clips_smpl".
Uncomment one of the commands and edit the path

Was hoping to get some help resolving this

Thank you

Can't find 'poses'

Hello. After I successfully installed ExPose, PifuHD, and Self-Correction-Human-Parsing on Ubuntu 18.04 (Windows 10 before) and ran their demos, I still have some issues. Could you please help me?
I extract photos from my own video with ffmpeg and try to get SMPL parameters using ExPose. The error is "KeyError: 'poses' is not a file in the archive". I find that the *.npz files don't have the 'poses' key (they contain things like 'body_pose.npy' and 'full_pose.npy'). So it seems we need to use OpenPose together with ExPose and PifuHD?

Thanks a lot!

Some problems about `Running the project`

Sorry, I am a newbie.

I have finished the Install instructions, but now I have some problems with Running the project.
I want to input my own video to drive the SMPL model. It seems that we need to convert the video (mp4) into AMASS-style motion sequences, but I can't find the method or a third-party tool (like AlphaPose) to do this. I see that there are many sequences in the dataset, so could you tell me how to convert a video (mp4) into the motion sequences used in this project?

Thank you!

The program has no errors, but it does not produce any results.

Hello @AndresCasado
I am trying to recreate the results using DatosDanCompressed.zip.
However, when I run run_recons.sh, it executes successfully but yields no results. I suspect there might be an error with my path settings; despite repeatedly modifying them, I still cannot get any results.
[screenshot]
[screenshot]
I am replicating the code in a Windows environment and have installed Git for Windows. I am very much looking forward to your reply.

Visualized result only shows the clothes, not the body model. It should be the body model wearing the clothes, right?

Hello, @AndresCasado
The method you guided me through in Blender didn't combine the clothes with the body model. What exactly is the role of _body.pc2? Currently, my visualization result only shows the clothes. I'm a bit lost on how to solve this issue, so I'd like to ask for your advice. Also, regarding the animation of the clothes, are they supposed to be colorless?
[screenshot]
Looking forward to your response.

Can't run the code

Hello, the code seems incomplete and I cannot get the project to run. Could you provide detailed instructions on how to run it? Thank you!

How to convert SMPL-X to SMPL?

Hello, @AndresCasado
I apologize for bothering you again. I wanted to ask about the method you used to convert the results from ExPose, which are in SMPL-X format, into SMPL format. The readme mentions that the output of ExPose is SMPL-X. I've tried several methods for this conversion, but the results haven't been accurate.
[screenshot]
I would greatly appreciate it if you could take the time to review my message. I'm looking forward to hearing back from you soon.

other paper

Do you have the source code for the article "ULNeF: Untangled Layered Neural Fields for Mix-and-Match Virtual Try-On"? I'm quite interested in running it.

The partial code is now runnable, but there are still some minor issues.

Hello @AndresCasado:
I'm sorry for bothering you again with another question.
After several days of debugging, the code can now run preliminarily. However, I'm a bit unclear about which files correspond to the training and testing processes. If I want to train, what sequence of files should I execute? And if I want to test? My current understanding is that during training I should first run run_recons.sh and then train_regressor.py, while during testing I should first run run_recons.sh and then run_regression.sh. I'm not sure if my understanding is correct.
When I run train_regressor.py, it throws the following error:
[screenshot]
Is the dataset originally included in the project, as shown in the image, still needed?
[screenshot]
I'm eagerly awaiting your response. Thank you so much in advance.

What is the function of reconstruction_script.py?

Hello, @AndresCasado
The command python reconstruction_script.py --dir dataset\DatosDan\sequences executes this Python script. However, no .pkl files are generated in the output, and these are needed as input for the subsequent regressor.
[screenshot]
Additionally, the code mentions the generation of rendering results, which are also not produced. How can this issue be resolved?
[screenshot]
Looking forward to your response.

Chumpy uses deprecated imports from Numpy

The environment setup as explained in the repo no longer works, because Chumpy is not up to date with NumPy.

A temporary solution is to install an old version of NumPy (for example with pip install numpy==1.21.2, which I confirm works) and then recompile/reinstall Kaolin (go to its folder and rerun python setup.py develop).

dataset problem

When I try to download the data, I am prompted with this problem, making it impossible to log in:
[screenshot]

test

Which test set did you use from the AMASS dataset, or can this project predict on all of AMASS?

SoftSMPL?

Hello, @AndresCasado
Thank you very much for your guidance; the previous problem was solved successfully. Now I have a small question about the poses under the train_sequences folder, as shown in the following image:
[screenshot]
The readme mentions using the SoftSMPL method; does this refer to a particular method? I found a paper, "SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans", but it has no code, so I'm not sure it's this one. Thank you very much for taking the time to read my message; I hope to get your guidance and look forward to your reply.

run

When I run python predict_amass_sequences.py, I get:

OSError: dlopen: cannot load any more object with static TLS
