
Comments (20)

Wasserwecken commented on June 27, 2024

For that, look at the first example in the readme: you can set the selection of channels and their order for each joint, and the lib will convert the values to the new set and order.
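As a generic illustration of the Euler-order part of that conversion (plain scipy here, not bvhio's internals, and the angle values are made up): the same rotation is simply re-expressed with different channels.

    from scipy.spatial.transform import Rotation as R

    # Hypothetical keyframe values stored as ZXY rotation channels (degrees)
    angles_zxy = [30.0, 10.0, -5.0]
    rot = R.from_euler('ZXY', angles_zxy, degrees=True)  # uppercase = intrinsic, like BVH channel order
    angles_zyx = rot.as_euler('ZYX', degrees=True)       # same rotation, expressed as ZYX channels
    print(angles_zyx)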


Wasserwecken commented on June 27, 2024

Nope, the z-score is applied to the positions and the rotations; the line before states that r and p are concatenated into x.

But whatever, my point is that they are already optimizing their data. If you want to improve the preprocessing, you have to adapt their code, or use my lib and create a whole new preprocessing pipeline that fits their model.
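A generic sketch of that kind of preprocessing, not the paper's actual code: per-frame rotation features r and positions p are concatenated into one feature vector x, which is then z-score normalized with dataset statistics (all shapes below are made up).

    import torch

    # Hypothetical shapes: 30 frames, 22 joints
    r = torch.randn(30, 22 * 6)        # flattened per-joint rotation features
    p = torch.randn(30, 3)             # root/global positions
    x = torch.cat([r, p], dim=-1)      # "r and p are concatenated into x"

    x_mean, x_std = x.mean(dim=0), x.std(dim=0)
    x_norm = (x - x_mean) / (x_std + 1e-8)   # z-score over the concatenated features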


Wasserwecken commented on June 27, 2024

I'm not sure if I understood the problem correctly, but I had to restore the T-pose for my thesis. This is the major reason why I created this lib. You can edit the T-pose of .bvh files without losing the actual motion.

That means you can change the T-pose, and the keyframe data is adjusted such that the resulting motion is equal to the original one. Furthermore, I don't know which kind of output you need, but you can access the edited data as matrices, quaternions, or Euler angles. Please look here for the joint data access: https://github.com/Wasserwecken/spatial-transform

This is for the LAFAN1-Dataset:
https://gist.github.com/Wasserwecken/0802f3a678931e8408f7b78ecb99b00a

And this for the BandaiNamco-Dataset:
https://gist.github.com/Wasserwecken/58ae6579be8ac43508b9b347956afc9a

I'm not sure if these snippets produce the traditional T-pose, because later in my work I needed an approximate average pose over all animations. But this should be a good start for fiddling around.

Basically, you have to convert your .bvh files. Just load, edit, and save them, so you don't have to redo the transformation every time.
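A minimal sketch of that load/edit/save loop, using only the bvhio calls that appear in this thread and the linked gists (readAsHierarchy, layout(), setEuler(), writeHierarchy); the file names are placeholders, and the full rest-pose bookkeeping that keeps the motion unchanged is in the gists above.

    import bvhio

    # Load the .bvh as an editable joint hierarchy
    root = bvhio.readAsHierarchy('lafan1_clip.bvh')

    # layout() flattens the tree; entry[0] is the joint (same indexing as in the gists)
    layout = root.layout()

    # Example edit from the LAFAN1 gist: rotate LeftUpLeg in the rest pose
    layout[1][0].setEuler((0, 0, 180))  # 1 LeftUpLeg

    # ...further rest-pose edits and keyframe handling as in the gists...

    # Save the converted file so the transformation only has to run once
    bvhio.writeHierarchy('lafan1_clip_tpose.bvh', root, 1/30)  # 1/30 = frame time for 30 fps data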


HospitableHost commented on June 27, 2024

In this code https://gist.github.com/Wasserwecken/0802f3a678931e8408f7b78ecb99b00a, the format of the resulting bvh file is different from the raw LAFAN1 bvh file: the channels of some joints are changed to the six channels "Xposition Yposition Zposition Zrotation Xrotation Yrotation" instead of the raw three "Zrotation Yrotation Xrotation". How do I modify your code to keep only the 3 rotation channels and the ZYX order? Thanks very much!


HospitableHost commented on June 27, 2024

@Wasserwecken Thanks very much, I successfully transferred the joint rotations of the human poses in LAFAN1 (in which the T-pose is strange) to the rotations that correspond to the correct T-pose.

I still have a problem: how do I reverse this process? What I mean is, assuming that the bvh files all have the correct T-pose now, how can I modify them back to the T-pose of LAFAN1 while ensuring that the motion remains unchanged?


Wasserwecken commented on June 27, 2024

Ahm... why would you want to do that? I mean, you just have to use the same script but set the pose to the original values. But then you're back to the original data, so why change it in the first place?


HospitableHost commented on June 27, 2024

@Wasserwecken A lot of works trained their models on the raw LAFAN1 dataset; as a result, when using the T-pose-restored LAFAN1 dataset to evaluate these models, they fail to work.


Wasserwecken commented on June 27, 2024

I see, but that's only partially true ;) A lot of papers mention a normalisation of the data. That does not result in a T-pose but in a kind of "average pose" of the motion.

But if you're bound to a fixed preprocessing, just store the original rest pose values and set them back later. There is no caching or anything related in the lib; you have to implement that on your own.

But a hint for the dataset: the rest pose is equal for all animations except for two files, and those two differ only in their hand rotations.


HospitableHost commented on June 27, 2024

I tested the paper "Motion In-betweening via Two-stage Transformers". The results are invalid, so I don't think data normalization will solve the problem.
You can see in the screenshot that the result of its motion in-betweening is wrong because the model was trained on the original dataset. I fixed the T-pose on the test set using your code and then used the new bvh files for evaluation.
[image: screenshot of the incorrect in-betweening result]


HospitableHost commented on June 27, 2024

So, what I need to do is turn each angle in this code (https://gist.github.com/Wasserwecken/0802f3a678931e8408f7b78ecb99b00a) into its inverse?
Do I have to do the same for lines 63 to 74 in the code?


Wasserwecken commented on June 27, 2024

The results are invalid, so I don't think data normalization will solve the problem.

That is not what I meant; the code itself does the normalisation already, you don't have to do anything. But you're bound to their preprocessing.

So, what I need to do is turn each angle in this code into its inverse?

You have to set the angles either manually, or copy the values from the original files. But it is not the inverse!


HospitableHost commented on June 27, 2024

For example, modify this line: layout[ 1][0].setEuler(( 0, 0, 180)) # 1 LeftUpLeg to layout[ 1][0].setEuler(( 0, 0, -180)) # 1 LeftUpLeg.
Is this wrong?


HospitableHost commented on June 27, 2024

I want to post-optimize the results of "Motion In-betweening via Two-stage Transformers."
However, this method is trained on the original LAFAN1, so I have to fix the T-pose first and then post-optimize. After post-optimization, I have to change it back to the original LAFAN1 format for comparison.


HospitableHost commented on June 27, 2024

The essence of the matter is the conversion of joint rotation angles between two different T-pose systems. Thank you very much for your code and your help.


Wasserwecken commented on June 27, 2024

I don't know, because it is not the inverse. I suggest that you read the rest pose from an original file with the second snippet in the readme. You can get the angles with joint.getEuler(order='ZYX').

Then, 'in theory' (I never tried that), you can use these values for your modified animations.
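A rough sketch of that idea, equally untested and with placeholder file names; the exact rest-pose access is shown in the readme's second snippet.

    import bvhio

    original = bvhio.readAsHierarchy('lafan1_original.bvh')           # a raw LAFAN1 file
    modified = bvhio.readAsHierarchy('result_with_fixed_tpose.bvh')   # the T-pose-corrected result

    # Read the rest-pose angles of the original file (entry[0] is the joint, as in the gists)...
    rest_angles = [entry[0].getEuler(order='ZYX') for entry in original.layout()]

    # ...and write them into the corrected file to restore the original rest pose
    for entry, angles in zip(modified.layout(), rest_angles):
        entry[0].setEuler(angles, order='ZYX')  # assumes setEuler accepts the same order argument

    bvhio.writeHierarchy('result_with_original_tpose.bvh', modified, 1/30)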

However, this method is trained on the original LAFAN1, so I have to fix the T-pose first

Are you sure? Because their code should already do that in the background, as far as I understood it.


HospitableHost commented on June 27, 2024

Are you sure? Because their code should already do that in the background, as far as I understood it.

Yes, the dataset class just loads the bvh files and reads the raw rotations. And they don't modify the bvh files of LAFAN1.


Wasserwecken commented on June 27, 2024

Unfortunately, if you edit the dataset and then push it through the code from your paper, that does not work. They are already optimizing it in the background.

Look at the comment in the file https://github.com/victorqin/motion_inbetweening/blob/master/packages/motion_inbetween/benchmark.py

Note:
In order to get the same loss metric as Robust Motion Inbetweening, we should:
1) Use mean centered clip data (data_utils.to_mean_centered_data())
2) Get mean and std of global position (mean_rmi, std_rmi = rmi.get_rmi_benchmark_stats_torch())
3) For global pos loss, apply zscore normalization before calculating the loss
For global quaternion loss, no need to apply zscore.

They are modifying the data after loading with https://github.com/victorqin/motion_inbetweening/blob/fa9b6dc5f0791fd28bfccb6783e6bfd26d578515/packages/motion_inbetween/data/utils_torch.py#L416C9-L416C9

Which they also point out in their paper:
[image: excerpt from the paper describing the data normalization]
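For reference, a hedged sketch of what the quoted note describes, in generic PyTorch rather than the repository's code: the position loss is computed on z-scored global positions, while the quaternion loss is left un-normalized. All tensors and statistics below are stand-ins.

    import torch

    def zscore(x, mean, std, eps=1e-8):
        # z-score normalization with dataset statistics
        return (x - mean) / (std + eps)

    # Hypothetical tensors: global joint positions and quaternions for one clip
    pos_pred, pos_gt = torch.randn(2, 65, 22, 3).unbind(0)
    quat_pred, quat_gt = torch.randn(2, 65, 22, 4).unbind(0)

    # Stand-ins for the precomputed benchmark statistics (mean_rmi, std_rmi in the note)
    mean_rmi, std_rmi = pos_gt.mean(dim=(0, 1)), pos_gt.std(dim=(0, 1))

    # Position loss on z-scored global positions, quaternion loss without z-score
    pos_loss = torch.norm(zscore(pos_pred, mean_rmi, std_rmi)
                          - zscore(pos_gt, mean_rmi, std_rmi), dim=-1).mean()
    quat_loss = torch.norm(quat_pred - quat_gt, dim=-1).mean()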


HospitableHost commented on June 27, 2024

The z-score is only applied to the global position, not to the rotation: https://github.com/victorqin/motion_inbetweening/blob/fa9b6dc5f0791fd28bfccb6783e6bfd26d578515/packages/motion_inbetween/benchmark.py#L92


HospitableHost commented on June 27, 2024

You are right. I changed the statistics (the related pkl files), and the model can now complete the motion in-betweening task correctly on the LAFAN1 dataset after modifying the T-pose. Thank you very much for your help, and I wish you good health and academic success.
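For readers who hit the same issue, a hedged sketch of recomputing such statistics over the T-pose-fixed data and storing them as a pkl; the file names, array layout, and dict keys are hypothetical and need to be adapted to the repository's actual format.

    import pickle
    import numpy as np

    # Hypothetical input: mean-centered global joint positions of all T-pose-fixed
    # clips, stacked as (frames_total, joints, 3)
    positions = np.load('global_positions_tpose_fixed.npy')

    stats = {
        'mean': positions.mean(axis=0),  # per-joint, per-axis mean
        'std': positions.std(axis=0),    # per-joint, per-axis std
    }

    with open('benchmark_stats_tpose_fixed.pkl', 'wb') as f:
        pickle.dump(stats, f)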


Wasserwecken commented on June 27, 2024

Thank you! Have success too!
