
mvp's People

Contributors

ir413, toruowo


mvp's Issues

Image normalization

Thanks for sharing the pre-trained model! I noticed that you normalize the image input by im_mean and im_std in the simulation environment. To use the pre-trained model, are we expected to normalize image inputs (from the real world or another simulation environment) by the same mean and std?

Thanks!
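For context, fixed per-channel normalization of this kind can be sketched as follows; the mean/std values below are the common ImageNet statistics used as placeholders, not necessarily the exact im_mean/im_std from this repo:

```python
import numpy as np

def normalize_image(img, im_mean, im_std):
    """Normalize an HxWx3 uint8 image with fixed per-channel statistics."""
    img = img.astype(np.float32) / 255.0  # scale to [0, 1]
    return (img - im_mean) / im_std       # per-channel standardization

# Placeholder statistics (ImageNet values are a common choice; check the repo).
im_mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
im_std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

img = np.zeros((224, 224, 3), dtype=np.uint8)
out = normalize_image(img, im_mean, im_std)
```

The key point is that whatever statistics the encoder saw during pretraining must be reused at deployment, regardless of where the images come from.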

About IsaacGym Preview 2

Hi, thanks for your awesome work. I am trying to reproduce it by following the instructions in the README. However, IsaacGym Preview 2 can no longer be accessed since Preview 3 was released on the official website. Would you please share the IsaacGym Preview 2 package with me? Thanks in advance.

License?

Hi, thanks for the great project. There appears to be no indication of a license in this repository.
Could you please specify the license?

72956 segmentation fault

Hi! Thanks for sharing this great work!

I encountered a segmentation fault (process 72956) when trying to train a task with the Pixels suffix, such as FrankaPickPixels.

Training the same task without the Pixels suffix, however, finishes successfully, so the segmentation fault does not seem to be triggered by PyTorch.

I'm using Isaac Gym Preview 4 on Ubuntu 20.04.

Here is the output after running python tools/train_ppo.py task=FrankaPickPixels:

Importing module 'gym_37' (/home/xzc/Downloads/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_37.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/xzc/Downloads/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.10.0
Device count 1
/home/xzc/Downloads/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/xzc/.cache/torch_extensions/py37_cu113 as PyTorch extensions root...
Emitting ninja build file /home/xzc/.cache/torch_extensions/py37_cu113/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
/home/xzc/Downloads/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/torch_utils.py:135: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  def get_axis_params(value, axis_idx, x_value=0., dtype=np.float, n_dims=3):
tools/train_ppo.py:13: UserWarning: 
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
  @hydra.main(config_name="config", config_path="../configs/ppo")
/home/xzc/mambaforge/envs/mvp/lib/python3.7/site-packages/hydra/_internal/defaults_list.py:251: UserWarning: In 'config': Defaults list is missing `_self_`. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/default_composition_order for more information
  warnings.warn(msg, UserWarning)
/home/xzc/mambaforge/envs/mvp/lib/python3.7/site-packages/hydra/_internal/defaults_list.py:415: UserWarning: In config: Invalid overriding of hydra/job_logging:
Default list overrides requires 'override' keyword.
See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/defaults_list_override for more information.

  deprecation_warning(msg)
/home/xzc/mambaforge/envs/mvp/lib/python3.7/site-packages/hydra/_internal/hydra.py:127: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
  configure_logging=with_log_configuration,
task: 
    name: FrankaPick
    env: 
        numEnvs: 256
        envSpacing: 1.5
        episodeLength: 500
        object_pos_init: [0.5, 0.0]
        object_pos_delta: [0.1, 0.2]
        goal_height: 0.8
        obs_type: pixels
        im_size: 224
        cam: 
            w: 298
            h: 224
            fov: 120
            ss: 2
            loc_p: [0.04, 0.0, 0.045]
            loc_r: [180, -90.0, 0.0]
        dofVelocityScale: 0.1
        actionScale: 7.5
        objectDistRewardScale: 0.08
        liftBonusRewardScale: 4.0
        goalDistRewardScale: 1.28
        goalBonusRewardScale: 4.0
        actionPenaltyScale: 0.01
        asset: 
            assetRoot: assets
            assetFileNameFranka: urdf/franka_description/robots/franka_panda.urdf
    sim: 
        substeps: 1
        physx: 
            num_threads: 4
            solver_type: 1
            num_position_iterations: 12
            num_velocity_iterations: 1
            contact_offset: 0.005
            rest_offset: 0.0
            bounce_threshold_velocity: 0.2
            max_depenetration_velocity: 1000.0
            default_buffer_size_multiplier: 5.0
            always_use_articulations: False
    task: 
        randomize: False
train: 
    seed: 0
    torch_deterministic: False
    encoder: 
        name: vits-mae-hoi
        pretrain_dir: /home/xzc/Documents/mvp/tmp/pretrained
        freeze: True
        emb_dim: 128
    policy: 
        pi_hid_sizes: [256, 128, 64]
        vf_hid_sizes: [256, 128, 64]
    learn: 
        agent_name: franka_ppo
        test: False
        resume: 0
        save_interval: 50
        print_log: True
        max_iterations: 2000
        cliprange: 0.1
        ent_coef: 0
        nsteps: 32
        noptepochs: 10
        nminibatches: 4
        max_grad_norm: 1
        optim_stepsize: 0.001
        schedule: cos
        gamma: 0.99
        lam: 0.95
        init_noise_std: 1.0
        log_interval: 1
physics_engine: physx
pipeline: gpu
sim_device: cuda:0
rl_device: cuda:0
graphics_device_id: 0
num_gpus: 1
test: False
resume: 0
logdir: /home/xzc/Documents/mvp/tmp/debug
cptdir: 
headless: True
Wrote config to: /home/xzc/Documents/mvp/tmp/debug/config.yaml
Setting seed: 0
Setting sim options
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
num franka bodies:  11
num franka dofs:  9
[1]    72956 segmentation fault  python tools/train_ppo.py task=FrankaPickPixels

Looking forward to any comments! Thanks!
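Not an answer to the crash itself, but a generic way to localize a segfault that originates in a native extension is Python's built-in faulthandler, which prints the Python-level traceback on SIGSEGV; a minimal sketch:

```python
import faulthandler

# Enable the fault handler as early as possible (ideally before importing
# isaacgym), so a segfault inside a native extension still dumps the
# Python-level call stack and narrows down where the crash happens.
faulthandler.enable()

# Equivalent, without editing the script:
#   python -X faulthandler tools/train_ppo.py task=FrankaPickPixels
```

This at least shows which Python call (e.g. camera sensor creation vs. the encoder forward pass) triggered the native crash.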

Ego Dataset

Hi, thanks for your excellent work.
Could you provide the dataset named 'Ego' that is mentioned in your paper?
It would help me a lot! Thanks.

Code for fine-tuning MAE on custom data

I would like to fine-tune the MAE on a custom dataset after loading the pretrained "in-the-wild" weights provided in the repo. Would it be possible to access your code for training the MAE on the given dataset? The training code would be helpful not only for obtaining the decoder weights of the MAE, but also for writing fine-tuning code.

Thanks!
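As a stopgap while the training code is unavailable, a common pattern for warm-starting from encoder-only weights is a non-strict state-dict load; the nested "model" key here is an assumption, not this repo's confirmed checkpoint format:

```python
import torch
import torch.nn as nn

def load_encoder_weights(model: nn.Module, ckpt_path: str) -> nn.Module:
    """Load matching encoder weights; freshly initialized parts (e.g. a new
    decoder) keep their random init. Key names are assumptions."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt.get("model", ckpt)  # some checkpoints nest weights under "model"
    missing, unexpected = model.load_state_dict(state, strict=False)
    print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")
    return model
```

With strict=False, any decoder parameters absent from the encoder-only checkpoint are simply reported as missing rather than raising an error.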

How do you observe the actions of the robots?

Hi, sorry to bother you; I am new to reinforcement learning.
I read in your paper that you use wrist-mounted cameras to capture the movements of the robots, and I wonder what a wrist-mounted camera actually is. Is it a default setting of the simulator, or a real camera in the physical world?
Thank you :)

Can't reproduce the MVP performance on FrankaPick

Hi, thanks for the great project!

I can reproduce the performance of the oracle state model, which is quite stable.
However, when I try to reproduce the MVP results on FrankaPick, I run into difficulties.
I use the pretrained MAE you provided and 5 seeds: 333, 444, 555, 666, 777.
(Note that IsaacGym Preview 3 can now only be downloaded from the official website.)
I use wandb to log the results. The success rates are shown in the figure below; the success rate stayed at zero in two of the five runs.

mvp_FrankaPick

The final mean success rate is around 53%, shown below, which is far behind the performance (about 85%) reported in the paper.

mvp_FrankaPick_mean

How can I obtain the results reported in the paper? Is it normal for the success rate on this task to often be 0? The paper suggests that MVP exhibits good stability across different seeds (low variance, Figure 5); which seeds did you use?
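For what it's worth, aggregating the per-seed final success rates explicitly makes the instability visible; a small sketch with illustrative placeholder numbers (two failed seeds out of five, giving a mean near 53):

```python
import statistics

# Final success rates (%) from five seeds -- illustrative placeholders only.
final_success = [0.0, 0.0, 82.0, 88.0, 95.0]

mean = statistics.mean(final_success)
stdev = statistics.pstdev(final_success)
print(f"mean={mean:.1f}%, stdev={stdev:.1f}%")  # a large stdev flags unstable seeds
```

Reporting both mean and spread, rather than the mean alone, distinguishes "uniformly mediocre" from "bimodal: some seeds never take off".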

Can't get results with ViT-L model (config file for using ViT-L)

Hi,

I cannot get good results with the ViT-L model.

Below is the mean success rate.
Screenshot from 2023-05-19 11-06-59

Command: python tools/train_ppo.py task=FrankaPickPixelsLarge

Here is my config for FrankaPickPixelsLarge:


env:
  numEnvs: 256
  envSpacing: 1.5
  episodeLength: 500

  object_pos_init: [0.5, 0.0]
  object_pos_delta: [0.1, 0.2]

  goal_height: 0.8

  obs_type: pixels
  im_size: 256

  cam:
    w: 298
    h: 256
    fov: 120
    ss: 2
    loc_p: [0.04, 0.0, 0.045]
    loc_r: [180, -90.0, 0.0]

  dofVelocityScale: 0.1
  actionScale: 7.5

  objectDistRewardScale: 0.08
  liftBonusRewardScale: 4.0
  goalDistRewardScale: 1.28
  goalBonusRewardScale: 4.0
  actionPenaltyScale: 0.01

  asset:
    assetRoot: "assets"
    assetFileNameFranka: "urdf/franka_description/robots/franka_panda.urdf"

sim:
  substeps: 1
  physx:
    num_threads: 4
    solver_type: 1
    num_position_iterations: 12
    num_velocity_iterations: 1
    contact_offset: 0.005
    rest_offset: 0.0
    bounce_threshold_velocity: 0.2
    max_depenetration_velocity: 1000.0
    default_buffer_size_multiplier: 5.0
    always_use_articulations: False

task:
  randomize: False

The only change I've made relative to FrankaPickPixels.yaml is raising im_size and cam.h from 224 to 256.
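One thing worth double-checking when changing the resolution is that im_size stays divisible by the ViT patch size, and that the positional-embedding grid changes with it (224 and 256 give different token counts, so pretrained positional embeddings would need interpolation); a quick sketch, assuming the usual patch size of 16:

```python
def num_patches(im_size: int, patch_size: int = 16) -> int:
    """Number of ViT tokens for a square image; im_size must divide evenly."""
    assert im_size % patch_size == 0, "image size must be a multiple of patch size"
    return (im_size // patch_size) ** 2

print(num_patches(224))  # 14 * 14 = 196 tokens
print(num_patches(256))  # 16 * 16 = 256 tokens -- a different pos-embed grid
```

If the pretrained ViT-L weights were trained at 224, feeding 256-pixel images without interpolating the positional embeddings could by itself explain degraded results.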

I could get the following results with the ViT-S model by running python tools/train_ppo.py task=FrankaPickPixels as specified in getting_started.
Screenshot from 2023-05-19 11-04-44

VR controller for real data collection

Hi,

Thanks for your excellent work!

When you use an HTC Vive controller to collect demonstrations, do you also use a VR headset? We currently find that the controller cannot be paired in Steam without a headset.

Thanks for your help!

Reset management for multiple envs

Hi, thanks for this project and for making your code available.

I have been playing with PixMC and your model for a few days, but I still have a pending question.

In the task configs an episode length is specified, whereas in mvp/ppo.py the environments are never explicitly reinitialized (apply_reset is set to False). I deduce that resets happen automatically, and similarly that a given environment is reinitialized automatically once it reaches its goal. Am I right? If not, how can I reset the environments when the goal states are reached?
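For context, Isaac Gym-style vectorized tasks usually handle this with a per-env reset buffer updated inside step itself: a progress counter is incremented each step, envs are marked done on timeout or success, and the marked indices are reset before the next observation, so the RL loop never calls reset explicitly. A simplified sketch of that pattern (not this repo's exact code):

```python
import torch

class ToyVecTask:
    """Minimal illustration of the reset-buffer pattern in vectorized tasks."""

    def __init__(self, num_envs: int = 4, max_episode_length: int = 500):
        self.max_episode_length = max_episode_length
        self.progress_buf = torch.zeros(num_envs, dtype=torch.long)
        self.reset_buf = torch.zeros(num_envs, dtype=torch.long)

    def step(self, success: torch.Tensor):
        self.progress_buf += 1
        # Mark envs for reset on timeout or goal success.
        timeout = self.progress_buf >= self.max_episode_length
        self.reset_buf = (timeout | success.bool()).long()
        # Resets happen here, inside step, not from the PPO loop.
        env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
        if len(env_ids) > 0:
            self.progress_buf[env_ids] = 0  # reinitialize just those envs
```

Under this pattern, apply_reset=False in the PPO code is consistent with episodes still ending: the environment resets itself when episodeLength is exceeded or the goal is reached.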

Pretrained Decoder Weights

It seems that the pretrained weights only contain the encoder, if I understand correctly. Would it also be possible to release the decoder weights used for pre-training on the HOI data?

Thanks,
Wentao

training from pixels

Hi, thanks for the great project!

I installed the repo and ran the training example from states; it works well.

However, when I run the training example from pixels,

python tools/train.py task=FrankaPickPixels

I get the following error:

Traceback (most recent call last):
  File "tools/train.py", line 39, in train
    ppo = process_ppo(env, cfg, cfg_dict, cfg.logdir, cfg.cptdir)
  File "/home/haoyux/mvp/mvp/utils/hydra_utils.py", line 211, in process_ppo
    num_gpus=cfg.num_gpus
  File "/home/haoyux/mvp/mvp/ppo/ppo.py", line 80, in __init__
    policy_cfg
  File "/home/haoyux/mvp/mvp/ppo/actor_critic.py", line 197, in __init__
    emb_dim=emb_dim
  File "/home/haoyux/mvp/mvp/ppo/actor_critic.py", line 149, in __init__
    assert pretrain_type == "none" or os.path.exists(pretrain_path)
AssertionError

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
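The assertion fires because the pretrained encoder checkpoint was not found at the path built from the config. A quick pre-flight check before launching; the directory and filename layout below are assumptions, so inspect mvp/utils/hydra_utils.py for the exact path the repo composes:

```python
import os

pretrain_dir = "tmp/pretrained"  # set cfg.train.encoder.pretrain_dir to your download location
encoder_name = "vits-mae-hoi"    # cfg.train.encoder.name

# Hypothetical filename layout -- verify against the code that builds pretrain_path.
candidate = os.path.join(pretrain_dir, encoder_name + ".pth")
print(candidate, "exists:", os.path.exists(candidate))
```

If the file is absent, downloading the pretrained weights and pointing pretrain_dir at them should clear the AssertionError.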
