I’m currently a PhD student at ETH Zurich. I received my bachelor's and master's degrees from Tsinghua University in 2019 and 2022.
In my daily life, I enjoy cycling and running.
Please check my homepage for more details.
NumPy, TensorFlow, and PyTorch implementations of the SMPL human body model and the SMIL infant body model.
License: MIT License
I ran preprocess.py for the male and female pkl files separately and got two converted models. However, when I ran smpl_torch.py for inference, it failed with "_pickle.UnpicklingError: pickle data was truncated" when loading the converted models.
Traceback (most recent call last):
File "/home/yuan/neural_body_fitting/SMPL/smpl_torch.py", line 208, in
test_gpu([1])
File "/home/yuan/neural_body_fitting/SMPL/smpl_torch.py", line 202, in test_gpu
model = SMPLModel(device=device)
File "/home/yuan/neural_body_fitting/SMPL/smpl_torch.py", line 13, in init
params = pickle.load(f)
_pickle.UnpicklingError: pickle data was truncated
Process finished with exit code 1
@CalciferZh
I ran your code but found that the arms are always flat.
Is this correct?
Thank you for your work; I am reading the paper now.
I know the pose parameters represent the rotation vectors of the joints,
but I didn't find the specific meaning of the shape parameters. There are ten of them; could you please tell me what each of them means specifically (like weight, height)?
Thank you very much.
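As far as I understand, the ten shape parameters are not named measurements like weight or height; they are PCA coefficients of a body-shape space learned from scans, so each beta scales one principal direction of shape variation (the first components tend to correlate with overall size, but only loosely). A minimal sketch with made-up toy arrays in place of the real model data:

```python
import numpy as np

# Toy stand-ins for the real model arrays (shapes match SMPL's).
n_verts, n_betas = 6890, 10
rng = np.random.default_rng(0)
v_template = rng.standard_normal((n_verts, 3))          # mean body shape
shapedirs = rng.standard_normal((n_verts, 3, n_betas))  # PCA shape directions

beta = np.zeros(n_betas)
beta[0] = 2.0  # move along the first principal direction of shape variation

# Shape blend-shape formula: v = v_template + shapedirs . beta
v_shaped = v_template + shapedirs.dot(beta)
print(v_shaped.shape)  # (6890, 3)
```

So asking "what does beta[3] mean" has no exact answer; the usual way to see it is to sweep one beta while zeroing the others and watch how the mesh changes.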
Hey, why does the first term have a cosine function?
Line 154 in b8ace71
Hi! Do you know how to get the indices of the vertices that belong to a specific part of the human body? I know there are 24 parts and 6890 vertices, but I don't know how to match them. Thanks!
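To my knowledge the released model has no explicit part segmentation, but a common approximation is to assign each vertex to the joint with the largest skinning weight in `params['weights']` (shape (6890, 24)). A sketch with toy weights standing in for the real ones:

```python
import numpy as np

# Toy LBS weights in place of params['weights']; rows sum to 1 like real ones.
rng = np.random.default_rng(0)
weights = rng.random((6890, 24))
weights /= weights.sum(axis=1, keepdims=True)

# Assign each vertex to its most influential joint, then group by joint.
dominant_joint = weights.argmax(axis=1)  # (6890,) joint id per vertex
part_vertices = {j: np.where(dominant_joint == j)[0] for j in range(24)}
print(sum(len(v) for v in part_vertices.values()))  # 6890
```

This gives a hard partition of the 6890 vertices into 24 parts; vertices near joints genuinely belong to several parts, so treat the boundaries as approximate.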
Hi,
I can't understand the mathematical reason for "removing the transformation due to the rest pose". Can you give me some hints? I know it is equation (3) in the paper, but I can't explain why it is implemented like this.
Thank you very much.
Lines 109 to 115 in f7a2eb3
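One way to see it: the chained global transform G_k is built in world coordinates, so it carries the rest-pose joint location J_k along with it; subtracting pack(G_k @ [J_k, 0]) cancels exactly that contribution, leaving a transform that moves vertices only by the *change* from the rest pose. A sanity-check sketch (toy numbers, my own `pack` helper): under the zero pose every rotation is the identity, so G_k = [I | J_k], and after the correction the vertices must not move at all.

```python
import numpy as np

def pack(x):
    """Put a 4-vector into the last column of an otherwise-zero 4x4 matrix."""
    out = np.zeros((4, 4))
    out[:, 3] = x
    return out

J_k = np.array([0.1, 0.5, -0.2])  # rest-pose position of joint k
G_k = np.eye(4)
G_k[:3, 3] = J_k                  # zero-pose global transform: [I | J_k]

# Equation (3): remove the transformation due to the rest pose.
G_adj = G_k - pack(G_k @ np.append(J_k, 0.0))

v_rest = np.array([0.3, -0.1, 0.7, 1.0])  # a rest vertex (homogeneous)
v_posed = G_adj @ v_rest
print(np.allclose(v_posed, v_rest))  # True: zero pose leaves vertices fixed
```

Without the subtraction, applying G_k directly would translate every rest vertex by J_k even when no joint is rotated.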
I noticed we regress the joints twice in the scripts smpl_torch_batch.py and SMIL_torch_batch.py, and I was wondering about the correctness of this.
In the SMPL paper, the joint positions seem to be defined by a regression on the rest pose, without pose blend shapes; here we regress the final joint positions from the posed vertices, which include the pose blend shapes.
If this is an error, it can be fixed in SMIL_torch_batch.py by, e.g., adding after line 190:
Jtr = G[..., :4, 3]
And then just deleting line 200, where we regress the joints a second time.
The difference is negligible, but it may be significant if you are doing 3D pose estimation. I saw a difference of at most 4 mm for the infant model, so I would estimate a worst-case error of 4 cm for the SMPL dataset.
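A sketch of the proposed fix on toy data: the posed joint locations are already available as the translation column of each joint's global transform G, so they can be read off directly instead of being regressed from the posed vertices a second time (the `G` below is a random stand-in for the real transform stack).

```python
import numpy as np

# Stand-in global transforms: identity rotations with random translations.
n_joints = 24
rng = np.random.default_rng(0)
G = np.tile(np.eye(4), (n_joints, 1, 1))
G[:, :3, 3] = rng.standard_normal((n_joints, 3))

# Posed joints = translation column of each joint's global transform.
Jtr = G[..., :3, 3]  # (24, 3)
print(Jtr.shape)
```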
Hello,
I used the pose and beta from a model file generated by SMPLify, but when the obj file is opened in Blender, it is not facing up.
I used smpl_np.py:
File "smpl_torch.py", line 132, in forward
v_shaped = torch.tensordot(self.shapedirs, betas, dims=([2], [0])) + self.v_template
AttributeError: module 'torch' has no attribute 'tensordot'
Any ideas why, and how this could be fixed?
Hi, I have a question about this:
lrotmin = tf.squeeze(tf.reshape((R_cube - I_cube), (-1, 1)))
I think this code may correspond to eq. 9 in the paper, but eq. 9 subtracts R_n(theta*) rather than an identity matrix. Do they have the same value? Thanks a lot. BTW, I have read the code offered by the paper's authors, but I am still confused by this.
Thanks a lot:)
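As far as I can tell, they coincide because the rest pose theta* in SMPL is the zero pose, and the Rodrigues rotation of a zero axis-angle vector is the identity matrix, so R_n(theta*) = I. A quick numerical check with a standalone Rodrigues implementation (my own helper, not the repo's):

```python
import numpy as np

def rodrigues(r):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

# Rest pose theta* is the zero pose, so R_n(theta*) is the identity,
# and subtracting I is exactly eq. 9 for this model.
print(np.allclose(rodrigues(np.zeros(3)), np.eye(3)))  # True
```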
What do the first three parameters in pose mean? Are they radians around the x, y, and z axes? The second and third parameters do not seem to follow this rule: I tried changing the second parameter by a certain angle, but the rotation of the SMPL model did not match the input rotation angle.
I have thought about it for hours but still don't get it.
Please help me understand it.
results = stacked - pack(
    tf.matmul(
        stacked,
        tf.reshape(
            tf.concat((J, tf.zeros((24, 1), dtype=tf.float64)), axis=1),
            (24, 4, 1)
        )
    )
)
If I have a .obj file in SMPL format (6890 vertices and 13766 faces), is it possible to regress the joint positions?
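If the .obj really follows the SMPL topology with the same vertex ordering, then yes: the joints are a fixed linear combination of the vertices via the model's J_regressor (24 x 6890). A sketch with toy stand-ins for both the vertices and the regressor:

```python
import numpy as np

# Toy data: verts would come from parsing the .obj; J_regressor from the pkl.
rng = np.random.default_rng(0)
verts = rng.standard_normal((6890, 3))
J_regressor = rng.random((24, 6890))
J_regressor /= J_regressor.sum(axis=1, keepdims=True)  # rows sum to 1

joints = J_regressor.dot(verts)  # (24, 3) joint positions
print(joints.shape)
```

This only works if the vertex order matches SMPL's; a re-exported or re-meshed .obj would need to be re-aligned first.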
Hello, thanks for your great work; it has helped me a lot. But when I use the PyTorch version of SMPL, I run into some conflicts. For example:
self.shapedirs = torch.from_numpy(params['shapedirs']).type(torch.float64)
gives:
TypeError: expected np.ndarray (got Ch)
There are many more errors like this. I want to know whether the versions in my environment cause the problem. Could we discuss this offline or by email?
Thanks again for your great work!
I don't understand the meaning of the SMPL pose parameters. There are 72 of them; does every parameter have a specific geometrical or physical meaning?
Hello,
It's a bit off-topic here, but could you please show me how to rotate a temporal SMPL sequence about the z axis?
Specifically, we have an SMPL animation that consists of N translations and global rotations. Can you suggest how to rotate the whole animation by an angle about the Z axis?
Do I need to rotate both the translation and the global rotation simultaneously with the same rotation matrix?
Thanks
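A minimal sketch of one way to do this with SciPy, using made-up per-frame data: since the rotation is in the world frame, apply the same world Z rotation to both the translations and the global orientations, left-multiplying the latter.

```python
import numpy as np
from scipy.spatial.transform import Rotation

a = np.pi / 2                        # rotate the whole sequence by 90 degrees
Rz = Rotation.from_rotvec([0.0, 0.0, a])

# Toy per-frame sequence in place of a real animation.
n_frames = 5
rng = np.random.default_rng(0)
trans = rng.standard_normal((n_frames, 3))          # root translations
global_orient = rng.standard_normal((n_frames, 3))  # axis-angle per frame

# Rotate every translation, and left-compose every global orientation.
new_trans = trans @ Rz.as_matrix().T
new_orient = (Rz * Rotation.from_rotvec(global_orient)).as_rotvec()
print(new_trans.shape, new_orient.shape)
```

So yes: both quantities need the same world-frame rotation, and the order matters for the orientations (world rotation on the left).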
I noticed that there is joint-regressor information in the PyTorch batch implementation, but these parameters are not available in the model provided by SMPL. Where does this model come from? Thank you!
When I run SMIL_torch_batch.py, it gives me "No module named 'file_utils'".
Also, find_mini_rgbd and MicroRGBD cannot be found either; I guess they are from the file_utils package.
Thank you for your useful code.
In smpl_np.py, I do not know the valid range for normalizing the parameters, such as the pose.
I'm sorry to trouble you, but could you help me with this?
Thank you for this great implementation of SMPL!
Now I have a question: I wonder how to calculate the joint positions after applying a specific pose. In the SMPL paper, we can calculate the joint positions of the rest pose using the J_regressor matrix, and I think the posed joint positions should be obtained by applying the joint transformation matrices (equation 3 in the SMPL paper) to the rest-pose joint positions, instead of directly applying J_regressor to the vertices after Linear Blend Skinning. Can you give me some hints on this question?
Line 23 in 20787fb
Hi,
Could you please specify the SMPL joint indices and their names?
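This is a commonly used convention for the 24 SMPL joints (index to name); I'd verify it against your own model, since downstream codebases occasionally differ:

```python
# Commonly used SMPL joint ordering; please double-check against your model.
SMPL_JOINT_NAMES = [
    "Pelvis", "L_Hip", "R_Hip", "Spine1", "L_Knee", "R_Knee", "Spine2",
    "L_Ankle", "R_Ankle", "Spine3", "L_Foot", "R_Foot", "Neck",
    "L_Collar", "R_Collar", "Head", "L_Shoulder", "R_Shoulder",
    "L_Elbow", "R_Elbow", "L_Wrist", "R_Wrist", "L_Hand", "R_Hand",
]
print(len(SMPL_JOINT_NAMES))  # 24
```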
Hi,
In the paper of SMPL, the Rodrigues formula is used as:
However, in your smpl_np.py
, I find that,
R = cos * i_cube + (1 - cos) * dot + np.sin(theta) * m
(2)
Converting the notation, (2) can be rewritten in standard mathematical form.
And, to my knowledge of the Rodrigues formula (e.g., https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula),
it should be written as your formula, not as the formula used in the SMPL paper, right?
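For what it's worth, the two forms agree: the Wikipedia form I + sin(t) K + (1 - cos(t)) K^2 and the form in smpl_np.py, cos(t) I + (1 - cos(t)) r r^T + sin(t) K, coincide because K^2 = r r^T - I for a unit axis r. A quick numerical check with a toy axis-angle:

```python
import numpy as np

theta = 0.7
r = np.array([1.0, 2.0, -0.5])
r = r / np.linalg.norm(r)          # unit rotation axis
K = np.array([[0, -r[2], r[1]],
              [r[2], 0, -r[0]],
              [-r[1], r[0], 0]])   # cross-product (skew) matrix of r

# Wikipedia form vs. the form used in smpl_np.py.
R_wiki = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
R_code = np.cos(theta) * np.eye(3) + (1 - np.cos(theta)) * np.outer(r, r) \
         + np.sin(theta) * K
print(np.allclose(R_wiki, R_code))  # True
```

So the code and the Wikipedia formula are the same rotation written two ways, not a deviation from the paper.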
Hi, when I run
python preprocess.py ../smpl_python_v.1.0.0/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl
I got this error. I checked the keys of src_data and found that there is no cocoplus_regressor.
Did I load the wrong pkl file? Thank you in advance.
Thanks for the great work!
I am trying to understand the code, and I have some problems that have confused me for a long time.
I am not a native English speaker, so I am sorry for my unclear expression; I'd appreciate any help :)
Hi,
I am trying to convert 3D joints positions to pose rotations in order to create a SMPL model from these positions.
In the SMPL paper, it is said : "The pose of the body is defined by a standard skeletal rig, where w_k denotes the axis-angle representation of the relative rotation of part k with respect to its parent in the kinematic tree."
I have tried to compute the axis and the angle of each joint with respect to its parents, but without success. The method is simple: if J is the joint position, P the parent position, and G the grandparent position, then
axis = GP × PJ
angle = arccos(GP · PJ / (norm(GP) · norm(PJ)))
To get the axis-angle, I normalise the axis and compute normalized_axis × angle.
What am I doing wrong? Should I use a T-pose to find the correct poses?
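One possible issue, as a hedged guess: the SMPL pose is the rotation of each bone relative to its *rest-pose* direction (expressed in the parent frame), not relative to the grandparent bone, so the reference direction should come from the T-pose skeleton. A minimal single-bone sketch with my own helper and made-up bone directions:

```python
import numpy as np

def axis_angle_between(u, v):
    """Axis-angle vector rotating direction u onto direction v."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)
    s = np.linalg.norm(axis)
    if s < 1e-12:                        # already aligned (or exactly opposite)
        return np.zeros(3)
    angle = np.arctan2(s, np.dot(u, v))  # more stable than arccos near 0 / pi
    return axis / s * angle

rest_bone = np.array([0.0, -1.0, 0.0])    # bone direction in the rest pose
target_bone = np.array([1.0, -1.0, 0.0])  # desired bone direction
print(axis_angle_between(rest_bone, target_bone))
```

Even this leaves the twist around the bone axis undetermined, which is why position-only IK for SMPL is usually solved with an optimizer rather than closed form.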
Your work is great!
I read the paper. The authors just say "72 pose parameters; 3 for each part plus 3 for the root orientation". I understand these can be computed as axis-angles, but what is the meaning of the root orientation?
Hello. Thank you for the work. Do you think it's possible to write to .fbx format using the output of the SMPL model? I have the following parameters:
pred_cam (n_frames, 3) # weak perspective camera parameters in cropped image space (s,tx,ty)
orig_cam (n_frames, 4) # weak perspective camera parameters in original image space (sx,sy,tx,ty)
verts (n_frames, 6890, 3) # SMPL mesh vertices
pose (n_frames, 72) # SMPL pose parameters
betas (n_frames, 10) # SMPL body shape parameters
joints3d (n_frames, 49, 3) # SMPL 3D joints
joints2d (n_frames, 21, 3) # 2D keypoint detections by STAF if pose tracking enabled otherwise None
bboxes (n_frames, 4) # bbox detections (cx,cy,w,h)
frame_ids (n_frames,) # frame ids in which subject with tracking id #1 appears
Now I have a 3D scan of a human model (.ply), but I can't find a way to fit it to an SMPL model. Can an SMPL model only be generated from parameters?
I want to make my SMPL model move around, so I need sequences of betas and thetas to change the pose of the SMPL model frame by frame.
I wonder how to get ground-truth betas and thetas, instead of model-predicted ones like VIBE's.
Thanks
When running smpl_torch_batch.py in the Google Colaboratory environment, I encounter a problem.
command1:
!python3 smpl_torch_batch.py
error1:
Traceback (most recent call last):
File "smpl_torch_batch.py", line 228, in
test_gpu([1])
File "smpl_torch_batch.py", line 211, in test_gpu
model = SMPLModel(device=device)
File "smpl_torch_batch.py", line 18, in init
np.array(params['joint_regressor'].T.todense())
KeyError: 'joint_regressor'
command2:
!python3 SMIL_torch_batch.py
error2:
Traceback (most recent call last):
File "SMIL_torch_batch.py", line 217, in
from smil_np import SMILModel
ModuleNotFoundError: No module named 'smil_np'
Hi, thanks for your SMPL_in_TF; it is very useful.
I tried the code and it works very well, but one issue is a dimension error when the input is batched. I used your code in a CNN network and my input is [N, 10] for betas; for now I use tf.while_loop to work around this. Maybe later we could consider improving it for batch inputs. Thanks again for your code.
I tested this project and the official implementation, and the same pose parameters give different results. Is that expected, or did I make a mistake?
Thanks for your work on SMPL. I'm a beginner in human modeling, and I have a question about how to get the pose parameters given a set of joint coordinates. Is it a matter of calculating the angle between two adjacent bones and representing it in axis-angle form as the pose parameter? Do I need to subtract the rest pose? Could you please provide detailed calculation steps?
Hello! Thank you for your great work!
Do you know how to map the vertices of SMPL to joints?
I know that I can get the joint positions from the vertices through the J_regressor matrix,
but how do you know which joint a vertex is bound to? Or, if a vertex is bound to multiple joints, which joint has the largest weight?
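The binding lives in the skinning-weight matrix, `params['weights']` (shape (6890, 24)): each row gives one vertex's LBS weights over all joints, so the argmax of the row is the joint that influences that vertex most. A sketch with toy weights in place of the real ones:

```python
import numpy as np

# Toy LBS weights standing in for params['weights']; rows sum to 1.
rng = np.random.default_rng(1)
weights = rng.random((6890, 24))
weights /= weights.sum(axis=1, keepdims=True)

dominant = weights.argmax(axis=1)  # strongest joint per vertex
strength = weights.max(axis=1)     # that joint's weight (how dominant it is)
print(dominant.shape, strength.shape)
```

Vertices near joints typically have two or three non-trivial weights, so "the" bound joint is only well-defined up to this dominant-weight convention.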
Hi.
Thanks for the great work.
I am trying to export the result to a Unity humanoid rig.
After running SMPL, I got 24*3=72 pose parameters.
How would I be able to convert these parameters to the joint rotations of the rig?
From my understanding, the pose parameters are axis-angle.
Converting from axis-angle requires an angle theta along with an x, y, z axis, but SMPL's pose parameters are just three numbers per joint.
The values I got were mostly in the range [-1, 1], which is unlikely to be Euler degrees...
Would anyone have a good idea how to solve this?
Thank you
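To my understanding, each triplet is a rotation vector: its direction is the axis and its norm is the angle in radians, which is why the raw values usually sit in a small range. A sketch converting all 24 triplets to quaternions with SciPy (toy pose vector; the 90-degree rotation on joint 1 is just an example):

```python
import numpy as np
from scipy.spatial.transform import Rotation

pose = np.zeros(72)                  # stand-in for a real 72-dim SMPL pose
pose[3:6] = [0.0, 0.0, np.pi / 2]    # e.g. rotate joint 1 by 90 deg about z

# Each row is an axis-angle (rotation vector): direction = axis, norm = angle.
rotvecs = pose.reshape(24, 3)
quats = Rotation.from_rotvec(rotvecs).as_quat()  # (24, 4), x-y-z-w order
print(quats.shape)
```

Note that Unity uses a left-handed coordinate system while SMPL's is right-handed, so an axis flip is usually also needed before assigning the quaternions to the rig.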
Is nobody here?
Hi, I have a question: why does SMPL choose the axis-angle representation? I find I can't rotate a part to the position I want by giving angles. Why not use Euler angles? Euler angles seem more intuitive. Are there other considerations?
In the paper, the pose parameters are trained using a multi-pose dataset. How can we train like that?
What do 'J_regressor_prior', 'pose', 'f', 'J_regressor', 'betas', 'kintree_table', 'J', 'weights_prior', 'trans', 'weights', 'vert_sym_idxs', 'posedirs', 'pose_training_info', 'bs_style', 'v_template', 'shapedirs', and 'bs_type' in the SMPL pkl file mean? What values do they define, and what is each of them used for? Thanks in advance.
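A hedged summary, based on the public SMPL release, of what each key appears to hold (several keys are extras from model fitting and unused at inference; shapes are for the 10-beta model):

```python
# Descriptions are best-effort; verify against your own pkl.
SMPL_PKL_KEYS = {
    "v_template": "mean template mesh, (6890, 3)",
    "f": "triangle faces, (13776, 3) vertex indices",
    "shapedirs": "shape blend shapes, (6890, 3, 10)",
    "posedirs": "pose blend shapes, (6890, 3, 207)",
    "J_regressor": "sparse (24, 6890) matrix: vertices -> rest joints",
    "weights": "LBS skinning weights, (6890, 24)",
    "kintree_table": "parent of each joint (kinematic tree)",
    "bs_style": "blend-skinning style, 'lbs'",
    "bs_type": "blend-shape type, 'lrotmin'",
    "pose": "a stored pose vector (not needed for inference)",
    "betas": "a stored shape vector (not needed for inference)",
    "trans": "a stored translation (not needed for inference)",
    "J": "rest-pose joint locations, (24, 3)",
    "J_regressor_prior": "regressor used as a prior during model fitting",
    "weights_prior": "skinning-weight prior used during fitting",
    "vert_sym_idxs": "index of each vertex's left/right mirror vertex",
    "pose_training_info": "metadata from pose-parameter training",
}
print(len(SMPL_PKL_KEYS))  # 17
```

For running the model, only v_template, f, shapedirs, posedirs, J_regressor, weights, and kintree_table matter; the rest are fitting-time artifacts.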
Thanks for your great work.
I have a question about your implementation: is it based on HMR or on the original SMPL code?
Thanks !
Hi, when I run smpl_torch_batch.py, I get:
File "C:\Users\xjsxu\Anaconda3\lib\site-packages\torch\functional.py", line 755, in norm
return torch._C._VariableFunctions.frobenius_norm(input, dim, keepdim=keepdim)
RuntimeError: Could not run 'aten::conj.out' with arguments from the 'CUDATensorId' backend. 'aten::conj.out' is only available for these backends: [CPUTensorId, VariableTensorId].
CPU version is fine.
Anaconda 3.7.3
torch: 1.4.0
Thanks for the great work!
I noticed that the joint positions (smpl.J) after smpl.set_params are different from the correct positions.
I think smpl.update may need to update the joints in its last line.
If that is not acceptable for performance reasons, implementing it as an option might be good.
Here is the code.
Here is code.
import smpl_np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
def smpl_plot():
smpl = smpl_np.SMPLModel(os.path.join(os.environ['HOME'],"smpl/smpl/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl"))
np.random.seed(9608)
pose = (np.random.rand(*smpl.pose_shape) - 0.5) * 0.4
beta = (np.random.rand(*smpl.beta_shape) - 0.5) * 0.06
trans = np.ones(smpl.trans_shape)*0.1
smpl.set_params(beta=beta, pose=pose, trans=trans)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
mesh = Poly3DCollection(smpl.verts[smpl.faces], alpha=0.05)
mesh.set_edgecolor((0.3,0.3,0.3))
mesh.set_facecolor((0.7,0.7,0.7))
ax.add_collection3d(mesh)
J = smpl.J
for j in range(len(J)):
pos1 = J[j]
ax.scatter3D([pos1[0]], [pos1[1]], [pos1[2]], label=f"{j}")
plt.legend()
plt.show()
smpl_plot()
import os
import numpy as np
import smpl_np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

def smpl_plot():
    smpl = smpl_np.SMPLModel(os.path.join(os.environ['HOME'], "smpl/smpl/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl"))
    np.random.seed(9608)
    pose = (np.random.rand(*smpl.pose_shape) - 0.5) * 0.4
    beta = (np.random.rand(*smpl.beta_shape) - 0.5) * 0.06
    trans = np.ones(smpl.trans_shape) * 0.1
    smpl.set_params(beta=beta, pose=pose, trans=trans)
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    mesh = Poly3DCollection(smpl.verts[smpl.faces], alpha=0.05)
    mesh.set_edgecolor((0.3, 0.3, 0.3))
    mesh.set_facecolor((0.7, 0.7, 0.7))
    ax.add_collection3d(mesh)
    # modification: regress joints from the posed vertices instead of smpl.J
    J = smpl.J_regressor.dot(smpl.verts)
    for j in range(len(J)):
        pos1 = J[j]
        ax.scatter3D([pos1[0]], [pos1[1]], [pos1[2]], label=f"{j}")
    plt.legend()
    plt.show()

smpl_plot()
In test.py
Lines 58 to 59 in 3a572a6
What do 0.4 and 0.06 mean in this context? Are they really arbitrary, or is there an intuitive interpretation?
Thanks in advance.
(SMPL) huangxinkai@ubuntu:/project/SMPL$ python preprocess.py smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl
(SMPL) huangxinkai@ubuntu:/project/SMPL$ python preprocess.py smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl
(SMPL) huangxinkai@ubuntu:/project/SMPL$ python smpl_torch.py
(SMPL) huangxinkai@ubuntu:/project/SMPL$ python smpl_torch_batch.py
cuda
Traceback (most recent call last):
File "smpl_torch_batch.py", line 228, in
test_gpu([1])
File "smpl_torch_batch.py", line 211, in test_gpu
model = SMPLModel(device=device)
File "smpl_torch_batch.py", line 18, in init
np.array(params['joint_regressor'].T.todense())
KeyError: 'joint_regressor'
Running python smpl_torch.py produces nothing, and an error occurs when running python smpl_torch_batch.py.
here's my package list:
Package Version
chumpy 0.69
numpy 1.17.3
pip 19.3.1
scipy 1.3.1
setuptools 41.6.0
six 1.12.0
torch 1.0.0
wheel 0.33.6
My Python version is 3.6.
I have seen "lrotmin = torch.reshape(R_cube - I_cube, (-1, 1)).squeeze()" in the PyTorch version of the code. What does lrotmin mean? Why is I_cube subtracted from R_cube?
@CalciferZh I'm trying to use the TensorFlow version, as I think it may be faster. However, how do I load the whole model and all its parameters into TensorFlow only once, and then keep changing the output by only changing beta? Is this possible? Otherwise it seems to run slower than the CPU version.
Run python preprocess.py /PATH/TO/THE/DOWNLOADED/MODEL.
However, the downloaded package has two models: one male and one female. How do I combine them so that the model works for both men and women?