
jump's Introduction

SIGGRAPH 2021: Discovering Diverse Athletic Jumping Strategies

project page

paper

demo video


Prerequisites

Important Notes

We suspect there are bugs in Linux gcc > 9.2 or kernel > 5.3, or that our code is somehow incompatible with them: with newer C++ compilers, our code produces large numerical errors from an unknown source. Please use an older C++ compiler or test the project on Windows.

C++ Setup

This project has C++ components. There is a CMake project inside the Kinematic folder. We have set up the CMake project so that it builds on both Linux and Windows; use cmake, cmake-gui, or Visual Studio to build it. It requires the Eigen library.
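If you are building from the command line, a typical out-of-source CMake build looks like this (assuming the CMakeLists.txt sits directly inside Kinematic, which we have not verified):

cd Kinematic
mkdir build && cd build
cmake ..                           # point CMake at Eigen via -DEigen3_DIR=... if it is not found automatically
cmake --build . --config Release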

Python Setup

Install the Python requirements listed in requirements.txt. The exact versions shouldn't matter; you should be safe installing the latest versions of these packages.
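A standard way to do this from the project root:

pip install -r requirements.txt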

Rendering Setup

To visualize training results, please set up our simulation renderer.

  • Clone and follow the build instructions in UnityKinematics. This is a flexible networking utility that sends raw simulation geometry data to Unity for rendering purposes.
  • Copy [UnityKinematics build folder]/pyUnityRenderer to this root project folder.
  • Here's a sample Unity project called SimRenderer in which you can render the scenes for this project. Clone SimRenderer outside this project folder.
  • After building UnityKinematics, copy [UnityKinematics build folder]/Assets/Scripts/API to SimRenderer/Assets/Scripts. Start Unity and load the SimRenderer project; it is then ready to use.

Training P-VAE

We have included a pre-trained model in results/vae/models/13dim.pth. If you would like to retrain the model, run the following:

python train_pose_vae.py

This will generate a new model in results/vae/test**/test.pth. Copy the .pth file and the associated .pth.norm.npy file into results/vae/models, then edit the model key in presets/default/vae/vae.yaml to point to your new model.
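For illustration, the edit would look something like this (the surrounding structure of vae.yaml is an assumption on our part; only the model key is documented above, and the file name is a placeholder):

model: results/vae/models/test.pth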

Train Run-ups

python train.py runup

Modify presets/custom/runup.yaml to change the target take-off features; refer to Appendix A in the paper for reference parameters.
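For illustration, the take-off feature keys look like this (the key names come from a user-reported configuration in the Issues section below; the values are that user's, not recommendations — see Appendix A):

env.jumper_run2.angular_v: [-3.0, -3.0, 1.0]
env.jumper_run2.linear_v_z: -2.4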

After training, run

python once.py runup no_render results/runup***/checkpoint_2000.tar

to generate the take-off state file in .npy format, which is used to train the take-off controller.

Train Jumpers

Open presets/custom/jump.yaml, change env.highjump.initial_state to the path of the generated take-off state file, e.g. results/runup***/checkpoint_2000.tar.npy. Then change env.highjump.wall_rotation to specify the wall orientation (in degrees). Refer to Appendix A in the paper to see reference parameters (note that we use radians in the paper); a sketch of these two keys follows the training command below. Run

python train.py jump

to start training.
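For reference, the two keys edited above would look roughly like this in presets/custom/jump.yaml (the path keeps the placeholder from above, and the rotation value is ours, not a recommendation):

# take-off state file generated by once.py during run-up training
env.highjump.initial_state: results/runup***/checkpoint_2000.tar.npy
# wall orientation in degrees (the paper's Appendix A lists radians)
env.highjump.wall_rotation: 0.0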

Start the provided SimRenderer (in Unity), enter play mode, then run

python evaluate.py jump results/jump***/checkpoint_***.tar

to evaluate and visualize the motion at any time. Note that env.highjump.initial_wall_height must be set to the training height at the time of the checkpoint for correct evaluation. The training height is available through the training logs, both in the console and in TensorBoard. You can start TensorBoard with

python -m tensorboard.main --bind_all --port xx --logdir results/jump***/


jump's Issues

Bayesian Diversity Search

Hi,

I found your paper very interesting!

I found the P-VAE training (train_pose_vae.py) and the PPO training for motion control (train.py).
Can you tell me which part of the released code implements Bayesian diversity search?
The paper says it is implemented using GPflow, but I found no code using it.

Thank you.

Question about novelty reward term

Jump/env/jumper/highjump.py, lines 184 to 199 at commit 1c9c1bd:

def generalized_falling_rwd(self):
    avg_penalty = self.offset_penalty / self.num_step
    rwd = 1.0 - np.clip(np.power(avg_penalty / self.offset_penalty_coeff, 2), 0, 1)
    avg_ang_penalty = self.angv_penalty / self.num_step
    rwd *= np.exp(-self.angv_penalty_coeff * avg_ang_penalty)
    if rwd > 0:
        q = Quaternion.fromWXYZ(self.root_quat)
        min_angle = np.pi
        for qb in self.q_bases:
            q_base = Quaternion.fromWXYZ(np.array(qb))
            q_diff = q.mul(q_base.conjugate())
            angle = np.abs(q_diff.angle())
            min_angle = np.min([min_angle, angle])
        rwd *= np.clip(min_angle/(np.pi/2), 0.01, 1)

Hi, I read your paper and got interested in this reward term.
I could not find the intention of the reward term on lines 191-199 explained in your paper.
I wonder what the role of the term is, and what q_bases is. (Why should the minimum angle difference from the root quaternion be larger than pi/2?)
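(Not an answer from the authors, but for readers puzzling over the same lines: below is a minimal, self-contained reading of the snippet above. The hand-rolled quaternion helpers stand in for the repo's Quaternion class, and the interpretation of q_bases as root orientations of previously discovered strategies is our assumption.)

import numpy as np

def quat_conjugate(q):
    # q = [w, x, y, z]
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mul(a, b):
    # Hamilton product of two wxyz quaternions
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotation_angle(q):
    # rotation angle encoded by a unit quaternion, in [0, pi]
    return 2.0 * np.arccos(np.clip(np.abs(q[0]), 0.0, 1.0))

def novelty_factor(root_quat, q_bases):
    # Smallest rotation needed to align the current root orientation
    # with any previously found strategy. Orientations at least 90 degrees
    # away from every base keep the full reward (factor 1); closer ones
    # are scaled down, with a floor of 0.01.
    min_angle = np.pi
    for qb in q_bases:
        q_diff = quat_mul(root_quat, quat_conjugate(np.array(qb, dtype=float)))
        min_angle = min(min_angle, rotation_angle(q_diff))
    return np.clip(min_angle / (np.pi / 2), 0.01, 1)

Under this reading, pi/2 is not a hard requirement; it is simply the angle at which the novelty factor saturates at 1, so landings oriented like an already-discovered strategy earn proportionally less reward.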

Input layer size VAE

Hi,

I noticed that you used an input layer size of 200, but in the end you seem to use only the joint positions. Is this something I'm missing? Do you train on a larger feature set (e.g. including velocity) to get more motion differentiation, and then use only a small portion (the joint positions) to which you add the offset afterwards?

Thank you.

Cannot learn to jump.

I cannot get the same results as in the paper. When training the jump policy, I always get a reward of 0.

The default values to learn the run-up policy are:

algorithm.max_iterations: 2000
experiment.env: jumper_run2
env.jumper_run2.angular_v: [-3.0, -3.0, 1.0]
env.jumper_run2.linear_v_z: -2.4

The jump policy is trained with the following parameters, which are the recommended ones for learning the Fosbury flop:

algorithm.max_iterations: 12000
experiment.env: highjump
# initial state file generated by the run-up training
env.highjump.initial_state: results/runup-2022-Feb-10-175005/checkpoint_2000.tar.npy
# wall orientation in degrees
env.highjump.wall_rotation: -0.05
# must correspond to the training height of the checkpoint
env.highjump.initial_wall_height: 0.5
