
human_body_prior's Introduction

A GitHub Pages template for academic websites. This was forked (then detached) by Stuart Geiger from the Minimal Mistakes Jekyll Theme, which is © 2016 Michael Rose and released under the MIT License. See LICENSE.md.

I think I've got things running smoothly and fixed some major bugs, but feel free to file issues or make pull requests if you want to improve the generic template / theme.

Note: if you are using this repo and now get a notification about a security vulnerability, delete the Gemfile.lock file.

Instructions

  1. Register a GitHub account if you don't have one and confirm your e-mail (required!)
  2. Fork this repository by clicking the "fork" button in the top right.
  3. Go to the repository's settings (rightmost item in the tabs that start with "Code", should be below "Unwatch"). Rename the repository "[your GitHub username].github.io", which will also be your website's URL.
  4. Set site-wide configuration and create content & metadata (see below -- also see this set of diffs showing what files were changed to set up an example site for a user with the username "getorg-testacct")
  5. Upload any files (like PDFs, .zip files, etc.) to the files/ directory. They will appear at https://[your GitHub username].github.io/files/example.pdf.
  6. Check status by going to the repository settings, in the "GitHub pages" section
  7. (Optional) Use the Jupyter notebooks or python scripts in the markdown_generator folder to generate markdown files for publications and talks from a TSV file.

See more info at https://academicpages.github.io/

To run locally (not on GitHub Pages, to serve on your own computer)

  1. Clone the repository and make updates as detailed above
  2. Make sure you have ruby-dev, bundler, and nodejs installed: sudo apt install ruby-dev ruby-bundler nodejs
  3. Run bundle clean to clean up the directory (no need to run --force)
  4. Run bundle install to install ruby dependencies. If you get errors, delete Gemfile.lock and try again.
  5. Run bundle exec jekyll liveserve to generate the HTML and serve it from localhost:4000; the local server will automatically rebuild and refresh the pages on change.

Changelog -- bugfixes and enhancements

There is one logistical issue with a ready-to-fork template theme like academicpages that makes it a little tricky to get bug fixes and updates to the core theme: if you fork this repository, customize it, and then pull again, you'll probably get merge conflicts. If you want to keep your various .yml configuration files and markdown files, you can delete the repository and fork it again, or you can patch manually.

To support this, all changes to the underlying code appear as a closed issue with the tag 'code change' -- get the list here. Each issue thread includes a comment linking to the single commit or a diff across multiple commits, so those with forked repositories can easily identify what they need to patch.

human_body_prior's People

Contributors

michaeljblack, nghorbani, salmedina


human_body_prior's Issues

sampled models' vertices are out of order

Hello, I tried to sample novel bodies, but the vertices of the sampled meshes do not seem to be in correspondence. Is it possible to obtain meshes with corresponding vertices?
Thanks.

TypeError due to unexpected argument `dtype` while loading body_model

Attempting to load the smplh body model results in a TypeError. This is because the lbs method in vchoutas/smplx does not take dtype as an input argument (unlike body_model/lbs.py). See https://github.com/vchoutas/smplx/blob/master/smplx/lbs.py#L152-L162. I verified that removing the dtype argument in the call to lbs() in body_model/body_model.py works for my use case.

  File "/Users/dgopinath/code/venv/fairmotion/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/dgopinath/code/venv/fairmotion/lib/python3.7/site-packages/human_body_prior/body_model/body_model.py", line 233, in forward
    dtype=self.dtype)
TypeError: lbs() got an unexpected keyword argument 'dtype'

Would this be the right fix? If so, I can send a pull request. Since I'm using human_body_prior as a dependency via PyPI, could you help me by uploading a new package with the fix?

Searching for 'github_data/dmpl_sample.npz'

Hello,
I am working on bringing DMPL into a real-time collision setting, like in Unity.
Are the contents of the dmpl_sample.npz file from the DMPL visualization .ipynb notebook the Dyna registrations?
Meaning, you need the eigenvectors of the errors between the vertices of SMPL and Dyna, correct?
Also, I did not see much of a difference between the animation with DMPLs and without. Is this because the number of DMPL blendshape principal components provided by SMPL is not enough?
Keep up the good work!
Thanks
Abishek

vp.decode outputs NaN values... any help?

I am using the cvpr19 branch, CUDA 10.0, PyTorch 1.1.0, and Windows 11.

I downloaded the trained model and the SMPL-X model from the smplify-x repo.

In my case I cannot run any of the given Jupyter tutorials.

From the code below:

# Sample a 32-dimensional vector from a normal distribution
poZ_body_sample = torch.from_numpy(np.random.randn(1,32).astype(np.float32)).to('cuda')
pose_body = vp.decode(poZ_body_sample, output_type='aa').view(-1, 63)

print('poZ_body_sample.shape', poZ_body_sample.shape)
print('pose_body.shape', pose_body.shape)

images = render_smpl_params(bm, pose_body).reshape(1,1,1,400,400,3)
img = imagearray2file(images)
show_image(np.array(img)[0])

I got some NaN values for pose_body, as below:

tensor([[    nan,     nan,     nan,  0.0290, -0.2194, -0.3330,  0.0783,  0.0997,
         -0.0150,     nan,     nan,     nan,     nan,     nan,     nan,     nan,
             nan,     nan, -0.0117,  0.1610, -0.0294,     nan,     nan,     nan,
             nan,     nan,     nan,     nan,     nan,     nan,     nan,     nan,
             nan,     nan,     nan,     nan,  0.0286,  0.0723, -0.4455, -0.0173,
          0.0512,  0.3359,     nan,     nan,     nan,  0.0081, -0.1747, -0.9129,
          0.0860,  0.3054,  0.8218, -0.0635, -0.8694,  0.3641,  0.0156,  1.1161,
         -0.2180,     nan,     nan,     nan,     nan,     nan,     nan]],
       device='cuda:0', grad_fn=<ViewBackward>)

What is wrong with this?

vposer.ipynb Tutorial Does not Work

The tutorial is very unclear, but attempting to follow the instructions in the Jupyter notebook by downloading the SMPL-X Model v1.1 from the website does not appear to work. I get the following error:

KeyError: 'posedirs is not a file in the archive'

About versions of VPoser

Hello.
Thank you for your wonderful work!

I want to ask about the versions of this project.

I just noticed that the VPoser from SMPLify-X (CVPR 2019, Version 1) and the one on your master branch (Version 2) are quite different.
The loss functions and the joints defined for training are different, and I think this will produce different results.
Did you update because Version 2 produces better results?
I'm not sure which version I should use.
Are there any plans to report results in papers with this new VPoser?

Additionally, I want to ask about my trained results with version 2.

I trained VPoser with your master branch; nothing was modified except for debugging some errors.
I used your sample dataset, which has about 500k poses, for training.

These are the training log and the results of the tutorials.
Because of the EarlyStopping module, training stops at epoch 8.
(screenshot: training log)

  1. From your pretrained model V02_05, the tutorial result of encoding and decoding a sample pose is shown below.
    (screenshot)

  2. And my results show that, whatever the input pose, the decoded result has the arm bent backwards.
    (screenshot)

I even tried removing the EarlyStopping module and training for as long as I wanted, but there was still a problem.

I'd appreciate it if you could give me a suggestion about this.

Thanks.

TypeError: can't assign a NoneType to a Variable[CUDAType]

Hi,
Thanks for your excellent work. When I follow the VPoser PoZ Space for Body Models tutorial, I get an error in the function bm.randomize_pose().

Found Trained Model: /home/zhao/human_body_prior/vposer_v1_0/snapshots/TR00_E096.pt
Traceback (most recent call last):
File "./snippet.py", line 12, in
bm.randomize_pose()
File "/home/zhao/human_body_prior/human_body_prior/body_model/body_model.py", line 398, in randomize_pose
self.root_orient.data[:] = root_orient
TypeError: can't assign a NoneType to a Variable[CUDAType]

How to deal with it?

Question about vposer.encode

If I have a body_pose ([1, 63]) and I want to encode the body_pose into a pose_embedding ([1, 32]), what should I do?
Looking forward to your kind reply!

e.g.

body_pose = torch.zeros([1, 63], dtype=torch.float32)
a = vposer.encode(body_pose)
print(a)

return

Normal(loc: torch.Size([1, 32]), scale: torch.Size([1, 32]))
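
(Not an official answer, just a hedged sketch: encode returns a torch.distributions.Normal over the 32-D latent space, so taking its mean, or drawing a sample, gives a concrete [1, 32] embedding. Assumes vposer is already loaded as in the tutorials.)

import torch

body_pose = torch.zeros([1, 63], dtype=torch.float32)
q_z = vposer.encode(body_pose)      # Normal with loc/scale of shape [1, 32]
pose_embedding = q_z.mean           # deterministic [1, 32] embedding
# pose_embedding = q_z.rsample()    # or draw a stochastic sample instead
print(pose_embedding.shape)         # torch.Size([1, 32])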

IndexError: list index out of range

Hi guys,
it presented me with this error:

Processing: DATA_FOLDER/images/ariba.jpg
Traceback (most recent call last):
File "smplifyx/main.py", line 272, in
main(**args)
File "smplifyx/main.py", line 262, in main
**args)
File "/home/luisa/vsmpl2/programas/smplify-x/smplifyx/fit_single_frame.py", line 188, in fit_single_frame
vposer, _ = load_vposer(vposer_ckpt, vp_model='snapshot')
File "/home/luisa/vsmpl2/lib/python3.6/site-packages/human_body_prior/tools/model_loader.py", line 56, in load_vposer
ps, trained_model_fname = expid2model(expr_dir)
File "/home/luisa/vsmpl2/lib/python3.6/site-packages/human_body_prior/tools/model_loader.py", line 31, in expid2model
best_model_fname = sorted(glob.glob(os.path.join(expr_dir, 'snapshots', '*.pt')), key=os.path.getmtime)[-1]
IndexError: list index out of range

Could you help me with this?
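
(For reference, the failing line indexes [-1] into an empty glob result, which usually means vposer_ckpt points at a directory with no snapshots/*.pt files in it. A hypothetical defensive rewrite of that line, with expr_dir as in model_loader.py, that surfaces the real problem:)

import glob
import os

# expr_dir is the vposer_ckpt directory passed to load_vposer()
snapshots = sorted(glob.glob(os.path.join(expr_dir, 'snapshots', '*.pt')),
                   key=os.path.getmtime)
if not snapshots:
    raise FileNotFoundError('no .pt snapshots found under %s' % expr_dir)
best_model_fname = snapshots[-1]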

How to penalize the impossible poses

Thanks for the great work!
As you mentioned in the description, VPoser can penalize impossible poses. May I ask how that works?
I'd like to use VPoser to replace the Gaussian mixture prior of SMPL to penalize impossible poses predicted by my model.
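
(A hedged sketch of how such a prior term is typically wired into SMPLify-X style fitting, using the v1-style decode signature; vposer is assumed loaded and the names are illustrative: the pose is optimized through a 32-D latent code, and the squared norm of that code is added to the objective, so poses far from the learned manifold cost more.)

import torch

pose_embedding = torch.zeros([1, 32], requires_grad=True)  # optimized variable
pose_body = vposer.decode(pose_embedding, output_type='aa').view(1, -1)
prior_loss = (pose_embedding ** 2).sum()  # replaces the Gaussian-mixture prior term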

installation error--ERROR: No matching distribution found for torch==1.1.0 (from human-body-prior==0.9.3.0)

Could you advise how to fix the following installation problem, or provide an alternative method?
$ pip install git+https://github.com/nghorbani/configer
Collecting git+https://github.com/nghorbani/configer
  Cloning https://github.com/nghorbani/configer to /tmp/pip-req-build-5fe5vdpd
Collecting configparser
  Downloading configparser-5.0.1-py3-none-any.whl (22 kB)
Building wheels for collected packages: configer
  Building wheel for configer (setup.py) ... done
  Created wheel for configer: filename=configer-1.4.1-py3-none-any.whl size=7848 sha256=350a2ce6730e7f4195f153d2845827b6543ddbf94de249e6e685ded679d452bd
  Stored in directory: /tmp/pip-ephem-wheel-cache-k4y99j5t/wheels/05/22/ea/db8ac88e6ea839287675cbe7eb5923e2307ad8ee28ac29d58e
Successfully built configer
Installing collected packages: configparser, configer
Successfully installed configer-1.4.1 configparser-5.0.1
$ pip install git+https://github.com/nghorbani/human_body_prior
Collecting git+https://github.com/nghorbani/human_body_prior
  Cloning https://github.com/nghorbani/human_body_prior to /tmp/pip-req-build-nmpstyti
ERROR: Could not find a version that satisfies the requirement torch==1.1.0 (from human-body-prior==0.9.3.0) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1)
ERROR: No matching distribution found for torch==1.1.0 (from human-body-prior==0.9.3.0)

vpose_v1.0 no longer able to load?

human_body_prior/src/human_body_prior/tools/model_loader.py", line 33, in exprdir2model
    assert len(available_ckpts) > 0, ValueError('No checck points found at {}'.format(model_snapshots_dir))
AssertionError: No checck points found at data/vposer_v1_0/snapshots

but

ls data/vposer_v1_0/snapshots                                                                     
TR00_E096.pt

There is only a .pt file there, not .ckpt files.

error running BodyModelWithPoser with the smplx model

I have an issue with this code snippet (https://github.com/nghorbani/human_body_prior/blob/master/human_body_prior/body_model/README.md).

It seems that the code tries to load something related to the MANO model, but this path/file is defined as “None”

self.poser_handL_pt, self.poser_handL_ps = poser_loader(mano_exp_dir)

I did find some files and information related to smplh and mano here

https://github.com/vchoutas/smplx/tree/master/tools

But these all seem to be .pkl files and not trained model .pt files...

Any suggestions?

Thanks,

-Maarten

Problem with sample_poses

Hi!

I'm having some problems with the vposer_sampling notebook's sample_poses call.
Below is the stack trace. Thanks!

(screenshot: stack trace)

\smplifyx\fit_single_frame.py", line 46, in <module>
from human_body_prior.tools.model_loader import load_vposer
ImportError: cannot import name 'load_vposer' from 'human_body_prior.tools.model_loader' (C:\ProgramData\Anaconda3\lib\site-packages\human_body_prior\tools\model_loader.py)

SMPL body models can't be loaded

I downloaded the SMPL body models from the website and they're .pkl files. The if statement in the main body model class doesn't check for .pkl suffix: https://github.com/nghorbani/human_body_prior/blob/master/src/human_body_prior/body_model/body_model.py#L57-L61

I tried rewriting the SMPL body model to an .npz archive but that won't load either because it doesn't contain "shapedirs":

  File "/home/gngdb/miniconda3/envs/gait/lib/python3.9/site-packages/human_body_prior/body_model/body_model.py", line 89, in __init__
    num_total_betas = smpl_dict['shapedirs'].shape[-1]
  File "/home/gngdb/miniconda3/envs/gait/lib/python3.9/site-packages/numpy/lib/npyio.py", line 260, in __getitem__
    raise KeyError("%s is not a file in the archive" % key)
KeyError: 'shapedirs is not a file in the archive'
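
(In case it helps, a heavily hedged sketch of the conversion that usually works: the official SMPL .pkl stores chumpy arrays and a scipy-sparse J_regressor, and both must be turned into plain numpy arrays before np.savez, otherwise keys like 'shapedirs' get dropped. The path is hypothetical.)

import pickle

import numpy as np
import scipy.sparse

with open('SMPL_NEUTRAL.pkl', 'rb') as f:  # hypothetical path
    data = pickle.load(f, encoding='latin1')

out = {}
for k, v in data.items():
    if scipy.sparse.issparse(v):
        out[k] = np.asarray(v.todense())   # e.g. J_regressor
    elif hasattr(v, 'r'):
        out[k] = np.asarray(v.r)           # chumpy array -> plain numpy
    elif isinstance(v, np.ndarray):
        out[k] = v
np.savez('SMPL_NEUTRAL.npz', **out)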

psbody_mesh_tools missing for IK

Hello, thanks for the code!

It seems to me that some function/file is missing for the IK engine. In ik_engine.py line 27 it imports from body_visualizer, but I believe the file is missing from the original repo.

Thanks again for the help!

David

Problem

Hello, my name is Long.
This is my problem:
Found Trained Model: /home/long/resource/model/human_pose_prior/vposer_v1_0/snapshots/TR00_E096.pt
Traceback (most recent call last):
File "run.py", line 14, in
vertices = c2c(bm.forward().v)[0]
File "/media/long/data/human_body_prior/human_body_prior/body_model/body_model.py", line 323, in forward
new_body = super(BodyModelWithPoser, self).forward(pose_body=pose_body, **kwargs)
File "/media/long/data/human_body_prior/human_body_prior/body_model/body_model.py", line 227, in forward
dtype=self.dtype)
File "/home/long/.local/lib/python3.6/site-packages/smplx/lbs.py", line 195, in lbs
pose_offsets = torch.matmul(pose_feature, posedirs)
RuntimeError: size mismatch, m1: [1 x 207], m2: [486 x 31425] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:268

Thx.

Getting joint position from body model

Hi.

I am trying to extract joint positions from the body model.
I rendered the mesh image and it looked fine.
However, the Jtr return values did not give the correct result.

The code that I used is as follows:

import numpy as np
import trimesh

def render_smpl_params(bm, pose_body, pose_hand=None, trans=None, betas=None, root_orient=None):
    '''
    :param bm: pytorch body model with batch_size 1
    :param pose_body: Nx21x3
    :param trans: Nx3
    :param betas: Nxnum_betas
    :return: N x 400 x 400 x 3
    '''
    from human_body_prior.tools.omni_tools import copy2cpu as c2c
    from human_body_prior.tools.omni_tools import colors
    from human_body_prior.mesh.mesh_viewer import MeshViewer

    faces = c2c(bm.f)
    imw, imh = 400, 400
    mv = MeshViewer(width=imw, height=imh, use_offscreen=True)

    images = []
    for fIdx in range(0, len(pose_body)):
        bm.pose_body.data[0, :] = bm.pose_body.new(pose_body[fIdx].reshape(1, -1))
        if pose_hand is not None: bm.pose_hand.data[0, :] = pose_hand
        if trans is not None: bm.trans.data[0, :] = bm.trans.new(trans[fIdx])
        if betas is not None: bm.betas.data[0, :len(betas[fIdx])] = bm.betas.new(betas[fIdx])
        if root_orient is not None: bm.root_orient.data[0, :] = bm.root_orient.new(root_orient[fIdx])

        v = c2c(bm.forward().v)[0]
        jtr = c2c(bm.forward().Jtr)[0]
        mesh = trimesh.base.Trimesh(v, faces, vertex_colors=np.ones_like(v) * colors['grey'])
        mv.set_meshes([mesh], 'static')
        images.append(mv.render())

    return np.array(images).reshape(len(pose_body), imw, imh, 3)

I used jtr from bm.forward().Jtr to extract key points.
The mesh I got is as follows:

(screenshot: rendered mesh, 2020-09-11 22:05:50)

And when I plotted the joint positions from the jtr values, I got this:

(screenshot: plotted joints, 2020-09-11 22:06:50)

Where do you think I went wrong?
How can I directly extract joint positions from the body model?
I mean, the shoulders look too large, and the legs aren't supposed to lean backwards...
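
(For what it's worth, joint positions don't require any rendering code; a minimal sketch using the same bm and c2c helper as in the snippet above. Calling bm.forward() once and reusing its output also avoids the double forward pass in the loop.)

from human_body_prior.tools.omni_tools import copy2cpu as c2c

body = bm.forward()        # single forward pass with the currently set pose
joints = c2c(body.Jtr)[0]  # (n_joints, 3) joint positions
verts = c2c(body.v)[0]     # (n_verts, 3) vertices from the same pass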

RuntimeError: Early stopping conditioned on metric `val_loss` which is not available. Pass in or modify your `EarlyStopping` callback to use any of the following: ``

Hello,
I'm trying to train VPoser with my own train and val datasets, but it always says val_loss is not available. I guessed it might be caused by the small validation dataset, but after I reduced the batch size the error still exists. I found that some versions of pytorch-lightning might have this issue. Could you please tell me the version you use, and give me some advice if you have run into this issue as well?

Epoch 0: 88%|████████▊ | 15/17 [00:00<00:00, 37.29it/s, loss=89.1, v_num=29]
Validating: 0it [00:00, ?it/s]
Validating: 0%| | 0/2 [00:00<?, ?it/s]{'weighted_loss': {'loss_kl': tensor(0.0516, device='cuda:0'), 'loss_mesh_rec': tensor(81.0408, device='cuda:0'), 'matrot': tensor(3.5944, device='cuda:0'), 'loss_total': tensor(84.6868, device='cuda:0')}, 'unweighted_loss': {'v2v': tensor(55.1946, device='cuda:0'), 'loss_total': tensor([55.1946], device='cuda:0')}}
{'weighted_loss': {'loss_kl': tensor(0.0580, device='cuda:0'), 'loss_mesh_rec': tensor(86.9938, device='cuda:0'), 'matrot': tensor(3.5297, device='cuda:0'), 'loss_total': tensor(90.5815, device='cuda:0')}, 'unweighted_loss': {'v2v': tensor(59.5597, device='cuda:0'), 'loss_total': tensor([59.5597], device='cuda:0')}}
[1] -- Epoch 0: val_loss:57.38
[1] -- lr is [0.001]
Traceback (most recent call last):
File "/home/drow/human_body_prior/src/train.py", line 54, in
main()
File "/home/drow/human_body_prior/src/train.py", line 50, in main
train_vposer_once(job)
File "/home/drow/human_body_prior/src/human_body_prior/train/vposer_trainer.py", line 351, in train_vposer_once
trainer.fit(model)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 552, in fit
self._run(model)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 917, in _run
self._dispatch()
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 985, in _dispatch
self.accelerator.start_training(self)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self._results = trainer.run_stage()
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 995, in run_stage
return self._run_train()
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1044, in _run_train
self.fit_loop.run()
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
epoch_output = self.epoch_loop.run(train_dataloader)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/loops/base.py", line 118, in run
output = self.on_run_end()
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 235, in on_run_end
self._on_train_epoch_end_hook(processed_outputs)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 276, in _on_train_epoch_end_hook
trainer_hook(processed_epoch_output)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py", line 109, in on_train_epoch_end
callback.on_train_epoch_end(self, self.lightning_module)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/callbacks/early_stopping.py", line 170, in on_train_epoch_end
self._run_early_stopping_check(trainer)
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/callbacks/early_stopping.py", line 185, in _run_early_stopping_check
logs
File "/home/drow/anaconda3/envs/vae/lib/python3.7/site-packages/pytorch_lightning/callbacks/early_stopping.py", line 134, in _validate_condition_metric
raise RuntimeError(error_msg)
RuntimeError: Early stopping conditioned on metric val_loss which is not available. Pass in or modify your EarlyStopping callback to use any of the following: ``
Epoch 0: 100%|██████████| 17/17 [00:00<00:00, 35.66it/s, loss=89.1, v_num=29]
Epoch 0: 100%|██████████| 17/17 [00:00<00:00, 32.31it/s, loss=89.1, v_num=29]

Body pose format

Hi,

Could you please point me to the pose format VPoser uses (a specification of the joint indices for the 21x3 tensor it produces)? I was not able to find this information anywhere after researching for an entire afternoon.

Thank you very much beforehand!
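
(Not authoritative documentation, but for reference: pose_body is generally understood to follow the standard SMPL/SMPL-X kinematic-tree order, i.e. the model's joints 1..21 with the pelvis/global orientation excluded. Verify against your model file before relying on it.)

# assumed ordering of the 21 body joints in a 21x3 pose_body tensor
SMPL_BODY_JOINTS = [
    'left_hip', 'right_hip', 'spine1', 'left_knee', 'right_knee',
    'spine2', 'left_ankle', 'right_ankle', 'spine3', 'left_foot',
    'right_foot', 'neck', 'left_collar', 'right_collar', 'head',
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
    'left_wrist', 'right_wrist',
]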

differences between v1.0 and v2.5?

The v2.5 model seems larger than v1.0: 87 MB vs 2.5 MB.

So what are the differences between them? Which one is better? v2.5 also seems very slow compared with v1.0, presumably because the model is much bigger.

penalize impossible poses

Hi, I am curious how I can use VPoser to penalize an impossible SMPL body_pose (63-dim parameters).

The encoder takes the angles and outputs a distribution. Can I expect the encoded distribution to have mean ≈ 0 and std ≈ 1 for a valid pose?
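
(A hedged sketch of one way to turn that intuition into a score; vposer is assumed loaded and body_pose is a [1, 63] tensor: compare the encoder's posterior against the N(0, I) prior, and expect a small divergence for training-like poses.)

import torch
import torch.distributions as D

q_z = vposer.encode(body_pose)  # Normal with loc/scale of shape [1, 32]
p_z = D.Normal(torch.zeros_like(q_z.mean), torch.ones_like(q_z.mean))
penalty = D.kl_divergence(q_z, p_z).sum(-1)  # lower = more plausible pose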

Are all dependencies actually necessary?

There are a lot of dependencies included in vposer, mostly related to visualization, it seems. Any interest in breaking out some of the visualization utils into their own project, so that I can use this with my own visualization tools and don't have to fight the extra dependencies and pinned versions?

install_requires=[
  'torch==1.1.0',
  'tensorboardX>=1.6',
  'torchgeometry==0.1.2',
  'opencv-python>=4.1.0.25',
  'configer>=1.4',
  'configer',
  'imageio>=2.5.0',
  'transforms3d>=0.3.1',
  'trimesh',
  'smplx',
  'pyrender',
  'moviepy'
],

Question about vposer decoder

Can you tell me how to use the VPoser decoder to recover, from the 32-parameter pose embedding produced by smplify-x (which uses the SMPL model style), the 72-parameter pose vector used by smplify?
Thanks
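
(Not an official recipe, but a sketch of how the two formats are commonly stitched together, using the v1-style decode call; vposer and pose_embedding are assumed given. VPoser's decoder covers only the 21 body joints (63 values); a full 72-D SMPL pose additionally contains the 3-D global orientation and the two 3-D hand joints, which VPoser does not model.)

import torch

body_pose = vposer.decode(pose_embedding, output_type='aa').view(1, 63)
global_orient = torch.zeros(1, 3)  # comes from the fitting, not from VPoser
hand_pose = torch.zeros(1, 6)      # left_hand/right_hand joints, not in VPoser
full_pose = torch.cat([global_orient, body_pose, hand_pose], dim=1)  # (1, 72)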

About ContinuousRotReprDecoder

I do not fully understand the principles behind ContinuousRotReprDecoder. Is there any tutorial on why you designed a decoder like this to get continuous rotations? Why don't you just decode to a [batch_size x 3 x 3] rotation matrix?
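
(Not the authors' own explanation, but the idea comes from Zhou et al., "On the Continuity of Rotation Representations in Neural Networks", CVPR 2019: a raw 3x3 network output is generally not a valid rotation matrix, whereas Gram-Schmidt orthonormalization of two predicted 3-vectors always yields one, and the resulting 6-D representation is continuous, which helps optimization. A minimal sketch of that map:)

import torch
import torch.nn.functional as F

def rot6d_to_matrix(x):
    # x: (N, 6) network output; returns (N, 3, 3) valid rotation matrices
    a1, a2 = x[:, :3], x[:, 3:]
    b1 = F.normalize(a1, dim=-1)
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-1)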

UV Maps for SMPLX Mesh

Thank you for this project.

Is it possible to texturize these SMPL-X meshes similarly to SMPL meshes?
Thank you.

Share trained model

Hello,
I started to work with VPoser and I realized that there is no public trained model, is there?
I am looking for the files inside the "vposer_v2_05" folder.
Can you share these files so I can get VPoser up and running?
Thanks!

Untangling interpenetration

Hi, thanks for your great work.
I wonder why you removed the self-interpenetration untangling code in body_model_vposer.py yet kept 'mesh-intersection' as a dependency in the main README.

It turns out that when I sample SMPL parameters with the pretrained VPoser and reconstruct a 3D mesh with unusual identity parameters, there are countless interpenetration cases.

Do you think it would be better to just train VPoser from scratch with those unusual identities?
Or somehow figure out a way to use the mesh-intersection library?

SMPL pose parameters

While reading the original SMPL paper I had the same confusion: we use 3 + 3*23 = 72 parameters to represent the pose of each frame, with an axis-angle representation, so for any single joint we only have three parameters. But an axis-angle representation should have four parameters: the rotation axis and the rotation angle. I looked up some references; "14 Lectures on Visual SLAM" describes the rotation vector like this: any rotation can be represented by a rotation axis and a rotation angle, so we use a vector whose direction coincides with the rotation axis and whose length equals the rotation angle. Such a vector is called a rotation vector (or axis-angle), and a single three-dimensional vector suffices to describe a rotation.
So the three parameters of a given SMPL joint represent a vector, right? The direction of this vector is the rotation axis, and the norm of the vector is the rotation angle? I see these parameters are mostly between -1 and 1, so the rotation angles computed this way seem very small; the result is wrong, and I cannot recover the intended pose.
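
(To make the representation concrete, a small worked example using OpenCV's Rodrigues, which is already among the project's quoted dependencies: the vector's direction is the axis and its Euclidean norm is the angle in radians, so components in [-1, 1] can still encode rotations up to sqrt(3) ≈ 1.73 rad ≈ 99°, which is not small.)

import numpy as np
import cv2

aa = np.array([0.3, -0.5, 0.1])  # one joint's three pose parameters
theta = np.linalg.norm(aa)       # rotation angle in radians (~0.59 here)
axis = aa / theta                # unit rotation axis
R, _ = cv2.Rodrigues(aa)         # equivalent 3x3 rotation matrix
print(np.degrees(theta))         # ~33.9 degrees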

Error Installing Human Body Prior

I am installing human_body_prior following the same procedure as shown on GitHub, and I am getting the following errors. I installed configer through pip3 as described in this GitHub repo.

Could not find a version that satisfies the requirement configer>=1.4 (from human-body-prior==0.9.3.0) (from versions: 0.9, 1.1, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.3.0, 1.3.1)
No matching distribution found for configer>=1.4 (from human-body-prior==0.9.3.0)

Dimension error running VPoser Decoder with basic SMPL model

Hi,

I'm trying to run the VPoser Decoder demo found here with the standard SMPL model (not SMPL-X). I updated the path accordingly, and when I run the snippet

from human_body_prior.body_model.body_model import BodyModel
bm = BodyModel(bm_path=bm_path, batch_size=1).to('cuda')  

I get the error

ValueError                                Traceback (most recent call last)
<ipython-input-9-1fdea2200188> in <module>()
      2 from human_body_prior.body_model.body_model import BodyModel
      3 
----> 4 bm = BodyModel(bm_path=bm_path, batch_size=1).to('cuda')

/content/drive/MyDrive/human_body_prior-master/human_body_prior/body_model/body_model.py in __init__(self, bm_path, params, num_betas, batch_size, v_template, num_dmpls, path_dmpl, num_expressions, use_posedirs, dtype)
    113 
    114         shapedirs = smpl_dict['shapedirs'][:, :, :num_betas]
--> 115         self.register_buffer('shapedirs', torch.tensor(shapedirs, dtype=dtype))
    116 
    117         if self.model_type == 'smplx':

ValueError: too many dimensions 'Select'

Any advice for how to fix this would be appreciated!

decoder for pose_hand, pose_jaw and pose_eye?

Hello.

I successfully decoded the body parameters, which changes the number of data columns from 32 to 63.

I noticed that in order to render hands, I need 15*3*2 = 90 values (which, to my knowledge, is 15 joints * x/y/z * left and right hand). However, I couldn't figure out how to decode the hand pose parameter, which has only 12 pose parameters.

It looks like I need to change the 12 parameters into 45 data columns.

How can I solve it?
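
(Not verified against this codebase, but the usual mechanism in MANO/SMPL-X: each hand stores a PCA basis plus a mean pose, so a handful of coefficients per hand map back to 45 axis-angle values with a matrix product. The key names follow the SMPL-X .npz layout and the path is hypothetical; treat both as assumptions.)

import numpy as np

model = np.load('SMPLX_NEUTRAL.npz')        # hypothetical path
comps_l = model['hands_componentsl'][:12]   # (12, 45) PCA basis, left hand
coeffs_l = np.zeros(12)                     # the 12 hand pose parameters
pose_hand_l = model['hands_meanl'] + coeffs_l @ comps_l  # (45,) axis-angle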

Installation discrepancies

Hi there,

I'd like to address two minor issues in the installation process. Firstly, in the README there's an error in the line:

python install -r requirements.txt

Either python should be replaced by pip (which isn't recommended by the documentation), or by python -m pip.

Additionally, on a fresh Ubuntu install (such as a Docker image), the Boost libraries have to also be installed, which isn't mentioned in the readme. For anyone bumping into the issue, on Ubuntu-like systems this can be resolved with:

$ sudo apt-get install libboost-all-dev    
