
orbit's Introduction

Example Tasks created with ORBIT


Orbit


Orbit is a unified and modular framework for robot learning that aims to simplify common workflows in robotics research (such as RL, learning from demonstrations, and motion planning). It is built upon NVIDIA Isaac Sim to leverage the latest simulation capabilities for photo-realistic scenes and fast and accurate simulation.

Please refer to our documentation page to learn more about the installation steps, features, tutorials, and how to set up your project with Orbit.

Announcements

  • [17.04.2024] v0.3.0: Several improvements and bug fixes to the framework. Includes cabinet opening and dexterous manipulation environments, terrain-aware patch sampling, and animation recording.

  • [22.12.2023] v0.2.0: Significant breaking updates to enhance the modularity and user-friendliness of the framework. Also includes procedural terrain generation, warp-based custom ray-casters, and legged-locomotion environments.

Contributing to Orbit

We wholeheartedly welcome contributions from the community to make this framework mature and useful for everyone. These may happen as bug reports, feature requests, or code contributions. For details, please check our contribution guidelines.

Troubleshooting

Please see the troubleshooting section for common fixes or submit an issue.

For issues related to Isaac Sim, we recommend checking its documentation or opening a question on its forums.

Support

  • Please use GitHub Discussions for discussing ideas, asking questions, and requesting new features.
  • GitHub Issues should only be used to track executable pieces of work with a definite scope and a clear deliverable. These can be fixing bugs, documentation issues, new features, or general updates.

Acknowledgement

NVIDIA Isaac Sim is available freely under an individual license. For more information about its license terms, please check here.

Orbit framework is released under BSD-3 License. The license files of its dependencies and assets are present in the docs/licenses directory.

Citing

If you use this framework in your work, please cite this paper:

@article{mittal2023orbit,
   author={Mittal, Mayank and Yu, Calvin and Yu, Qinxi and Liu, Jingzhou and Rudin, Nikita and Hoeller, David and Yuan, Jia Lin and Singh, Ritvik and Guo, Yunrong and Mazhar, Hammad and Mandlekar, Ajay and Babich, Buck and State, Gavriel and Hutter, Marco and Garg, Animesh},
   journal={IEEE Robotics and Automation Letters},
   title={Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments},
   year={2023},
   volume={8},
   number={6},
   pages={3740-3747},
   doi={10.1109/LRA.2023.3270034}
}

orbit's People

Contributors

abmorobotics, andrejorsula, arbhardwaj98, calvin0722, dhoeller19, farbod-farshidian, fyu-bdai, hhansen-bdai, jsmith-bdai, kouroshd, kshivakumar-nvidia, kylem73, ltesfaye, mayankm96, mbrauckmann-bdai, nburger-bdai, nikitardn, pascal-roth, phenomen21, renezurbruegg, romesco, rumblingturtle, sasayesh, shafeef901, toni-sm, vladimirfokow, yangcyself


orbit's Issues

[Bug Report] Not able to set the camera pose correctly

Describe the bug

When trying to set the camera pose using Camera.set_world_pose_ros(), the pose obtained in the data is different from the one being set. On digging deeper, the main issue seems to be the use of the Replicator API to create the camera in the Camera.spawn() method.

When Replicator creates a camera using rep.create.camera(), it creates an Xform prim with a Camera prim under it. The Camera prim is rotated by (90, 0, -90) relative to its parent, so that the camera's up-axis is Z instead of Y. When the camera pose is computed, the transformation of the Camera prim is used; however, when setting the pose, it is applied to the Xform prim (also called the camera rig). As a result, the Camera prim ends up at the input pose with an orientation offset of (90, 0, -90).
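A possible workaround sketch (assuming scipy, and assuming the Camera prim's world pose composes as world_rig * rig_to_camera; verify against your stage): pre-compose the inverse of the (90, 0, -90) offset so that setting the rig pose lands the child Camera prim at the desired pose.

from scipy.spatial.transform import Rotation as R

offset = R.from_euler("XYZ", (90, 0, -90), degrees=True)
# desired orientation from the snippet below, reordered from (w, x, y, z) to scipy's (x, y, z, w)
desired = R.from_quat([0.33985114, 0.82047324, -0.4247082, -0.1759199])
rig = desired * offset.inv()
x, y, z, w = rig.as_quat()
camera.set_world_pose_ros([5.0, 5.0, 5.0], [w, x, y, z])  # back to (w, x, y, z)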

Steps to reproduce

Modify the play_camera.py script with the following addition from L-140 onwards:

# set pose
# camera.set_world_pose_from_view(eye=[5.0, 5.0, 5.0], target=[0.0, 0.0, 0.0])
position = [5.0, 5.0, 5.0]
orientation = [-0.1759199, 0.33985114, 0.82047324, -0.4247082]
camera.set_world_pose_ros(position, orientation)

This gives the output:

Camera @ '/World/CameraSensor/Camera_Xform/Camera': 
	data types   : ['rgb', 'distance_to_image_plane', 'normals', 'motion_vectors', 'trigger_outputs'] 
	tick rate (s): 0
	timestamp (s): 0.0
	frame        : 33
	shape        : (480, 640)
	position     : [5. 5. 5.] 
	orientation  : [-0.27984815  0.88047624 -0.11591691  0.36470519] 

I expected the output orientation to match the input orientation.

System Info

Describe the characteristic of your environment:

  • Commit: 88d4a34
  • Isaac Sim Version: 2022.2.0
  • OS: Ubuntu 20.04
  • GPU: RTX 2080
  • CUDA: 11.6
  • GPU Driver: 510.108.03

Additional context

The camera methods currently print warnings that they may not work correctly.

2023-01-26 15:55:07 [14,045ms] [Warning] [omni.isaac.orbit.sensors.camera.camera] The function `set_world_pose_ros` is currently not implemented correctly.

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

[Question] Is "physics_sim_view.create_soft_body_view" available now?

Question

Hi. I want to set up a toy problem for soft-cube grasping.
Here is my modified version of soft_prim_view, which comes from /home/bong/.local/share/ov/pkg/isaac_sim-2022.2.1/exts/omni.isaac.core/omni/isaac/core/prims/rigid_prim_view.py

soft_prim_view.py

class SoftPrimView(XFormPrimView):
    ...

    def initialize(self, physics_sim_view: omni.physics.tensors.SimulationView = None) -> None:
        if physics_sim_view is None:
            physics_sim_view = omni.physics.tensors.create_simulation_view(self._backend)
            physics_sim_view.set_subspace_roots("/")
        carb.log_info("initializing view for {}".format(self._name))
        self._physics_sim_view = physics_sim_view
        self._physics_view = physics_sim_view.create_soft_body_view(self._regex_prim_paths.replace(".*", "*"))  # CHANGED!
        # self._num_shapes = self._physics_view.max_shapes  # for rigid
        carb.log_info("Soft Prim View Device: {}".format(self._device))
        # if self._track_contact_forces:  # for rigid
        #     self._contact_view.initialize(self._physics_sim_view)

but self._physics_view._backend is None, which causes the problem in api.py
api.py

class SoftBodyView:
    def __init__(self, backend, frontend):
        self._backend = backend
        self._frontend = frontend

    @property
    def count(self):
        return self._backend.count

Is the function physics_sim_view.create_soft_body_view currently unavailable, or am I missing something?

thanks!

[Question] How to make an inclined plane?

Question

I have a question. My task involves training a Franka Panda to push a cube on an inclined plane. The source code allows us to create a default ground plane, but I want to create an inclined plane with the angle as a parameter. How can I go about it?
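One idea I had (a hedged sketch, not an official Orbit API): spawn a thin static cuboid rotated by the desired angle and use it as the inclined surface. Would this be the right approach? FixedCuboid is from omni.isaac.core.objects; the sizes and tilt axis below are illustrative.

import numpy as np
from scipy.spatial.transform import Rotation as R
from omni.isaac.core.objects import FixedCuboid

def spawn_inclined_plane(angle_deg: float, prim_path: str = "/World/InclinedPlane"):
    # tilt a thin slab about the y-axis by the requested incline angle
    quat_xyzw = R.from_euler("Y", angle_deg, degrees=True).as_quat()
    orientation = np.roll(quat_xyzw, 1)  # reorder to the (w, x, y, z) convention
    return FixedCuboid(
        prim_path=prim_path,
        scale=np.array([2.0, 2.0, 0.01]),  # a wide, thin slab acting as the plane
        orientation=orientation,
    )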

[Question] Kp, Kd values for the Unitree A1 robot in reality

Question

In omni/isaac/orbit/robots/config/unitree.py, the Kp, Kd values of the Unitree A1 robot are 25 and 0.5.
What are these values in reality?
Specifically, are these parameters uniform across various robot behaviors (walking, turning, galloping, ...), in simulation or reality?
If not, what are the approximate values for each behavior?

Thank you!

[Question] Importing Partnet-mobility dataset

Hi, thanks for the great work of putting the paper, library and benchmark together.

In the paper you mention that Orbit supports both YCB and Partnet-Mobility. I found the example using YCB (play_ycb_objects.py) but didn't find any details on Partnet-Mobility. I also couldn't find the assets while looking through the Isaac Nucleus.

Do you have any details on how to import Partnet-Mobility into Isaac / Orbit? Thanks a lot.

[Question] How to assign dof/joint indices?

I'm trying to use LeggedMobileManipulator to combine a quadruped and a robot arm. I use 'convert_urdf.py' to convert a URDF to USD. However, after the robot is initialized, the joint indices are not ordered as [base_dof_nums, arm_dof_nums, tool_dof_nums (which is 0 in my case)]. (The dof indices are initialized in line 212 of articulation_view.py.) So there may be problems when I call robot.get_default_dof_state() or robot.apply_action(), etc.

My question is: how can I make sure the joint order is [base, arm, tool] when importing the USD file? Thanks a lot!
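A workaround sketch I am considering (assuming the robot's articulation view exposes dof_names, as in omni.isaac.core): build a permutation once and reorder states and commands explicitly instead of relying on the USD import order. Is this the recommended approach?

import torch

desired_order = base_joint_names + arm_joint_names  # your own name lists, in [base, arm] order
name_to_index = {name: i for i, name in enumerate(robot.articulations.dof_names)}
perm = torch.tensor([name_to_index[n] for n in desired_order])

dof_pos_ordered = dof_pos[:, perm]      # simulator order -> [base, arm] order
actions_sim = torch.empty_like(actions)
actions_sim[:, perm] = actions          # [base, arm] order -> simulator order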

[Proposal] mimic_multiplier in GripperActuatorGroupCfg

The mimic_multiplier is multiplied twice in gripper_group.py: first, and second.

I applied a gripper mimic in the configuration file for sb3:

        "robotiq_hand": GripperActuatorGroupCfg(
            dof_names=["bot_joint_0_p", "top_joint_0_p", "bot_joint_1_p", "top_joint_1_p", "bot_joint_2_p", "top_joint_2_p"],
            model_cfg=ImplicitActuatorCfg(velocity_limit=20, torque_limit=1000),
            control_cfg=ActuatorControlCfg(command_types=["p_abs"], stiffness={".*": 400}, damping={".*": 20}),
            mimic_multiplier={"bot_joint_0_p":1, "top_joint_0_p":1, "bot_joint_1_p":1, "top_joint_1_p":-1, "bot_joint_2_p":0, "top_joint_2_p":1},
            # speed=1,
            open_dof_pos=0,
            close_dof_pos=0.785398,

The mimic multiplier didn't work, since "top_joint_1_p": -1 is multiplied twice, elementwise in torch. After commenting out the second application, it worked.
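A small torch illustration of the effect (my own example, not the gripper_group.py code): applying the multiplier elementwise a second time squares each entry, so -1 flips back to +1 and the sign inversion is lost, while 0 stays 0.

import torch

mimic = torch.tensor([1.0, 1.0, 1.0, -1.0, 0.0, 1.0])
command = torch.full((6,), 0.5)
print(mimic * command)          # intended:       tensor([ 0.5,  0.5,  0.5, -0.5,  0.0,  0.5])
print(mimic * mimic * command)  # double-applied: tensor([ 0.5,  0.5,  0.5,  0.5,  0.0,  0.5])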

[Bug Report] Keyboard controller and IK controller

Describe the bug

Depending on the configuration of the robot, it seems like translation keyboard commands affect the orientation of the robot. In the attached video, I only press the "A" and "D" keys in the collect-demonstrations script, but the eef orientation also moves.

Steps to reproduce

I have attached a video, but the issue can be reproduced by moving the arm around using the keyboard controller, using the collect human demos script.

System Info

Describe the characteristic of your environment:

  • Commit: master
  • Isaac Sim Version: 2022.2.1
  • OS: Ubuntu 20.04
  • GPU: RTX 3090
  • CUDA: 11.7
  • GPU Driver: 515.43.04
orbit_keyboard_problem.mp4

[Question] ArticulatedObject Example

Could we have an example of how to create an articulated object?

In your Robotic Workflows/Reinforcement Learning video (https://isaac-orbit.github.io/) you have an example of the robot opening a drawer; we also see you have an ArticulatedObject class. However, we did not find any example of how to create such an object. It would be really appreciated if you could release this example, along with a sample .usd file of an articulated object.

Thank you very much in advance

[Question] Training issue in lift environment

Thanks for releasing this, really exciting work! Just wanted to share some issues I encountered when trying it out.

When I tried to install the repo, I got an error complaining that 'pip' was not found at this line: https://github.com/NVIDIA-Omniverse/Orbit/blob/main/orbit.sh#L114. After changing it to use the isaac_sim Python binary, it worked.

Also, the Franka lift env doesn't seem to train (I got reward ~13 with rl_games and ~18 with rsl_rl; in both cases the arm just moved away, ignoring the object).

[Question] Error occurs when running sb3/play.py using the VSCode debugger with the "--num_envs=1" option.

Hi, I want to run sb3/play.py with my checkpoint.

I configured the conda environment, orbit, in VSCode:

  1. Press "F1"
  2. Python: Select Interpreter
  3. Select "orbit"

Problem: the --num_envs=1 option causes a problem when the first episode is done; --num_envs=2,3,4... is fine.
My guess: obs and reward are of type numpy.array, and some dimension gets squeezed away when --num_envs=1.
Thanks!

Error Message

Exception has occurred: IndexError
index 1 is out of bounds for axis 0 with size 1
  File "/home/bong/.local/share/ov/pkg/isaac_sim-2022.2.1/Orbit/source/standalone/workflows/sb3/play.py", line 83, in main
    obs, reward, done, env_info = env.step(actions)
  File "/home/bong/.local/share/ov/pkg/isaac_sim-2022.2.1/Orbit/source/standalone/workflows/sb3/play.py", line 96, in <module>
    main()
IndexError: index 1 is out of bounds for axis 0 with size 1
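One workaround I am considering (illustrative only; obs may be a dict in some tasks, and the proper fix likely belongs in the environment wrapper): restore the batch axis before indexing, so the arrays keep a leading num_envs dimension even when num_envs == 1.

import numpy as np

obs, reward, done, env_info = env.step(actions)
# keep a leading num_envs axis even when num_envs == 1
obs = np.atleast_2d(obs)
reward = np.atleast_1d(reward)
done = np.atleast_1d(done)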



And this is my launch.json:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "args" : ["--task","Lift-Robotiq-v0","--num_envs","1", "--checkpoint","./logs/sb3/Lift-Robotiq-v0/Apr12_07-12-28/model_16384000_steps.zip","--cpu"], // play
            "program": "${file}",
            "console": "integratedTerminal",
            "env": {
                "EXP_PATH": "${workspaceFolder}/_isaac_sim/apps",
                "RESOURCE_NAME": "IsaacSim"
            },
            "envFile": "${workspaceFolder}/.vscode/.python.env",
            "preLaunchTask": "setup_python_env"
        },
        {
            "name": "Python: Attach (windows-x86_64/linux-x86_64)",
            "type": "python",
            "request": "attach",
            "port": 3000,
            "host": "localhost",
        },
    ]
}

[Question] Attach Camera to a specific link

I am trying to attach a camera to my robot's link using the script from play_camera.py.
I initialized my robot as:
robot = my_world.scene.add(PR2(prim_path="/World/PR2", name="pr2"))
and spawned the camera as:
camera.spawn("/World/PR2/wide_stereo_r_stereo_camera_frame") where the prim_path is the link where I would like to attach the camera.
Instead of being attached to the provided prim_path, it is spawned at a distance from the given link (at a different spot and height in the air than the link).

Kindly let me know if there is a way to attach the camera to a link. Thanks
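For reference, a hedged sketch of what I would expect to work, assuming Camera.spawn accepts a local translation/orientation relative to the parent prim (as the play_camera.py example suggests):

# spawn the camera under the link with an explicit zero local offset
camera.spawn(
    "/World/PR2/wide_stereo_r_stereo_camera_frame",
    translation=(0.0, 0.0, 0.0),       # local offset from the link frame
    orientation=(1.0, 0.0, 0.0, 0.0),  # identity in (w, x, y, z)
)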

[Bug Report] Teleoperation demo script does not work

Describe the bug

The teleop example (source/standalone/environments/teleoperation/teleop_se3_agent.py) fails. Whether or not it is launched with the --cpu option, the script runs, but I'm not able to control the robot using the keyboard. I launch it in a terminal and press keys on my keyboard, which get printed in that terminal, but the robot does not move in the simulator. When pressing keys in the simulator window, 'K' does toggle the gripper (which is expected), but other keys simply select different modes in the left panel of the simulator.

Steps to reproduce

Run teleoperation demo in API Demos:

./orbit.sh -p source/standalone/environments/teleoperation/teleop_se3_agent.py --task Isaac-Lift-Franka-v0 --num_envs 1 --cpu --device keyboard

No error appears in the terminal.

System Info

  • Commit: bea049b
  • Isaac Sim Version: 2022.2.1
  • OS: Ubuntu 20.04
  • GPU: RTX 3060
  • CUDA: 12.0
  • GPU Driver: 525.85.05

Additional context

Bug first mentioned here. Potential cause of the error mentioned here.

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

[Question] Question about mobile_manipulator.py

Should line 102 in mobile_manipulator.py be

self._data.ee_jacobian[:] = jacobians[:, self.ee_body_index - 1, :, self.base_num_dof : self.base_num_dof+self.arm_num_dof]

?
The original code is

self._data.ee_jacobian[:] = jacobians[:, self.ee_body_index - 1, :, self.base_num_dof : self.arm_num_dof]

but the slice self.base_num_dof : self.arm_num_dof seems wrong: for example, with base_num_dof = 6 and arm_num_dof = 7, it selects the single column 6:7 instead of the seven arm columns 6:13.

[Proposal] Migrating the base `IsaacEnv`

Proposal

Gymnasium is the maintained version of OpenAI Gym that is now handled by Farama Foundation. It is currently the definition used in libraries such as rllib and tianshou, while others like stable-baselines3 will also update soon.

More recently, there has been an interesting proposal on RL environment definition in the framework torchrl which would fit well for our targeted applications.

This thread aims to serve as a voting/staging on what would be the best environment definition to use.

Motivation

Migrating to Gym 0.28

The current Gym definition (from 0.21.0) is outdated and depends on older libraries such as the importlib-metadata==4.1 package. This creates conflicts with newer packages, and it would be best to switch to the new Gym definition.

Related issues: OIGE #28, Sb3 #1327

Once SB3 also upgrades to Gymnasium, it would be best to update everything to use the latest definitions.

More information on Gymnasium:

Migrating to TorchRL definition

TorchRL's environment definition is the base class EnvBase, which relies heavily on tensordict.

The advantages of their base class are:

  • tensordict makes it easier to work at the scale possible with Isaac Sim and should be more efficient/flexible than a dict of torch.Tensor (see the sketch after this list)
  • It allows having complex datatypes (continuous/discrete, images/proprioception) for both observation and action spaces. This helps generalize to a wider audience (such as multi-agent learning)
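A minimal tensordict example (generic PyTorch, not Orbit code) of the batched structure described above:

import torch
from tensordict import TensorDict

num_envs = 4
td = TensorDict(
    {"observation": torch.zeros(num_envs, 24), "action": torch.zeros(num_envs, 7)},
    batch_size=[num_envs],
)
print(td[0])                             # indexing slices every entry along the batch axis at once
td["reward"] = torch.zeros(num_envs, 1)  # new entries are validated against the batch size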

Related Issues: torchrl #883

Effect on the remaining framework

Since there are wrappers for other RL frameworks (RL-Games and RSL-RL), this probably won't cause any breaking changes on that side. We will just need to adapt them based on the chosen IsaacEnv definition.

Checklist

  • I have checked that there is no similar issue in the repo (required)

[Question] Tethering object to the gripper in LeggedMobileManipulator

We’re trying to implement a simple gripper that works not by grasp friction but by tethering the object to the gripper when the gripper is close enough. I have used the class LeggedMobileManipulator in Isaac Orbit. I found a class named SurfaceGripper (https://docs.omniverse.nvidia.com/py/isaacsim/source/extensions/omni.isaac.surface_gripper/docs/index.html) that could solve this problem, but how can I add a SurfaceGripper to LeggedMobileManipulator in Isaac Orbit? Are there other solutions that can achieve this goal?

[Question] where can I set maximum velocity?

Question

Hi, this is my code. I modified franka.py and created my own version:

"""
# from omni.isaac.orbit.actuators.config.franka import PANDA_HAND_MIMIC_GROUP_CFG
from omni.isaac.orbit.actuators.group import ActuatorGroupCfg, GripperActuatorGroupCfg
from omni.isaac.orbit.actuators.group.actuator_group_cfg import ActuatorControlCfg
from omni.isaac.orbit.actuators.model import ImplicitActuatorCfg
# from omni.isaac.orbit.utils.assets import ISAAC_ORBIT_NUCLEUS_DIR
from omni.isaac.orbit.robots.single_arm import SingleArmManipulatorCfg

# _FRANKA_PANDA_ARM_INSTANCEABLE_USD = f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/FrankaEmika/panda_instanceable.usd"
# _ROBOTIQ_WRIST_INSTANCEABLE_USD = "/home/bong/.local/share/ov/pkg/isaac_sim-2022.2.1/Orbit/source/extensions/omni.isaac.orbit_envs/omni/isaac/orbit_envs/bong/robotiq_wrist.usd"
_ROBOTIQ_WRIST_INSTANCEABLE_USD = "/home/bong/.local/share/ov/pkg/isaac_sim-2022.2.1/Orbit/source/extensions/omni.isaac.orbit_envs/omni/isaac/orbit_envs/bong/robotiq_wrist_resized_direction.usd"
# _ROBOTIQ_WRIST_INSTANCEABLE_USD = "/home/bong/.local/share/ov/pkg/isaac_sim-2022.2.1/Orbit/source/extensions/omni.isaac.orbit_envs/omni/isaac/orbit_envs/bong/robotiq_wrist_resized_direction_mass.usd"

VELOCITY_LIMIT = 0.000000000000001
TORQUE_LIMIT = 100
STIFFNESS = 400

ROBOTIQ_WRIST_WITH_ROBOTIQ_CFG = SingleArmManipulatorCfg(
    meta_info=SingleArmManipulatorCfg.MetaInfoCfg(
        usd_path=_ROBOTIQ_WRIST_INSTANCEABLE_USD,
        arm_num_dof=6,
        tool_num_dof=4,
        tool_sites_names=["bot_link_1", "bot_link_2", "top_link_1", "top_link_2"],  # xform
    ),
    init_state=SingleArmManipulatorCfg.InitialStateCfg(   # revjoint
        pos=[0.2, 0, 0.3],
        rot=[0.707, 0, 0.707, 0],
        dof_pos={
            "trans_x": 0.0,
            "trans_y": 0.0,
            "trans_z": 0.0,
            "rev_x": 0.0,
            "rev_y": 0.0,
            "rev_z": 0.0,
            "bot_joint_0_p": 0.0,
            "top_joint_0_p": 0.0,
            "bot_joint_1_p": 0.0,
            "top_joint_1_p": 0.0,
            "bot_joint_2_p": 0.0,
            "top_joint_2_p" : 0.0,
        },
        dof_vel={".*": 0.0},

    ),
    ee_info=SingleArmManipulatorCfg.EndEffectorFrameCfg(
        body_name="base", pos_offset=(0.0, 0.0, 0.15), rot_offset=(1.0, 0.0, 0.0, 0.0)  # xform / head
    ),
    rigid_props=SingleArmManipulatorCfg.RigidBodyPropertiesCfg(
        # max_depenetration_velocity=5.0,
        max_depenetration_velocity=0.0000000000001,
    ),
    collision_props=SingleArmManipulatorCfg.CollisionPropertiesCfg(
        contact_offset=0.005,
        rest_offset=0.0,
    ),
    articulation_props=SingleArmManipulatorCfg.ArticulationRootPropertiesCfg(
        enable_self_collisions=True,
    ),
    actuator_groups={
        "wrist_trans": ActuatorGroupCfg(
            dof_names=["trans_[x,y,z]"],
            model_cfg=ImplicitActuatorCfg(velocity_limit=VELOCITY_LIMIT, torque_limit=TORQUE_LIMIT),
            control_cfg=ActuatorControlCfg(
                command_types=["p_abs"],
                stiffness={".*": 400.0},
                damping={".*": 20.0},
                dof_pos_offset={
                    "trans_x": 0.0,
                    "trans_y": 0.0,
                    "trans_z": 0.0,
                },
            ),
        ),
        "wrist_rev": ActuatorGroupCfg(
            dof_names=["rev_[x-z]"],
            model_cfg=ImplicitActuatorCfg(velocity_limit=VELOCITY_LIMIT, torque_limit=TORQUE_LIMIT),
            control_cfg=ActuatorControlCfg(
                command_types=["p_abs"],
                stiffness={".*": 400.0},
                damping={".*": 20.0},
                dof_pos_offset={"rev_x": 0, "rev_y": 0, "rev_z": 0.0},
            ),
        ),
        "robotiq_hand": GripperActuatorGroupCfg(
            dof_names=["bot_joint_0_p", "top_joint_0_p", "bot_joint_1_p", "top_joint_1_p", "bot_joint_2_p", "top_joint_2_p"],
            model_cfg=ImplicitActuatorCfg(velocity_limit=VELOCITY_LIMIT, torque_limit=TORQUE_LIMIT),
            control_cfg=ActuatorControlCfg(command_types=["p_abs"], stiffness={".*": 400}, damping={".*": 20}),
            mimic_multiplier={"bot_joint_0_p": 1, "top_joint_0_p": -1, "bot_joint_1_p": 1, "top_joint_1_p": -1, "bot_joint_2_p": -1, "top_joint_2_p": 1},
            # speed=1,
            open_dof_pos=0,
            close_dof_pos=0.785398,
        )
    },
)
"""Configuration of Franka arm with Franka Hand using implicit actuator models."""

As you can see, I set VELOCITY_LIMIT = 0.000000000000001 and max_depenetration_velocity=0.0000000000001.
But the robot still moves very fast.

How can I change this?
I also checked the docs, but I have no idea what is wrong.
I want to limit the velocity while keeping a proper torque.
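One hedged idea I am considering (unverified): velocity_limit on an implicit actuator may only clip the commanded values rather than being enforced by the PhysX solver itself. A sketch of setting the per-joint PhysX limit directly (the joint prim path is hypothetical; verify the attribute name in your Isaac Sim version):

from pxr import PhysxSchema
import omni.usd

stage = omni.usd.get_context().get_stage()
joint_prim = stage.GetPrimAtPath("/World/Robot/joints/rev_x")  # hypothetical path
physx_joint_api = PhysxSchema.PhysxJointAPI.Apply(joint_prim)
physx_joint_api.CreateMaxJointVelocityAttr(10.0)  # degrees/s for revolute joints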

thanks!

[Question] Parallel camera support

I have just checked the roadmap, and there is no parallel camera support.

I believe that parallel camera support (and an example) would be very useful for image-based DRL.

[Proposal] Replace certain uses of 'headless' with new flag

Proposal

Currently in certain demo scripts there are uses of 'headless' to indicate whether or not rendering should be enabled for the SimulationContext.

I think this could be confusing, because 'headless' already has a potentially conflicting meaning within SimulationApp. A user may pass 'headless' to one of these scripts and still want rendering. For instance, one might want to run headlessly on a remote server and use the Omniverse Streaming Client for a local GUI.

Pitch

I think this could be resolved by simply adding another flag 'render' (or alternatively 'no-render') which will take the place of 'headless' in these evaluations and allow 'headless' to retain its pure meaning in the SimulationApp API; see the sketch below.
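A minimal sketch of the proposed flag handling (flag names are illustrative, not a merged API):

import argparse
from omni.isaac.kit import SimulationApp

parser = argparse.ArgumentParser()
parser.add_argument("--headless", action="store_true", help="launch SimulationApp without a local GUI")
parser.add_argument("--render", action="store_true", help="keep rendering enabled, e.g. for the Omniverse Streaming Client")
args = parser.parse_args()

# 'headless' keeps its pure SimulationApp meaning; rendering is decided separately
simulation_app = SimulationApp({"headless": args.headless})
render = args.render or not args.headless
# later, in the simulation loop: sim.step(render=render)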

Alternatives

We could keep the existing coercion of headless, but it wouldn't work for people hoping to stream.

Checklist

  • I have checked that there is no similar issue in the repo (required)

[Bug Report] ValueError: articulation primitive path (play_arms demo)

Question

While developing a Docker container setup for Orbit (see here) I have encountered issues running the demos. In particular, I am seeing warnings related to accessing assets and an error generated by set_articulation_properties in the kit.py file. It seems like the path provided for the articulation primitive is incorrect in my setup, see here.

I am posting this here for visibility and in case there is a quick fix for this or something basic I am missing. All files that I am using can be found here.

Output

orbit_1      | [145.454s] Simulation App Startup Complete
orbit_1      | 2023-02-08 15:54:02 [147,443ms] [Warning] [omni.usd] Warning: in _ReportErrors at line 2830 of /buildAgent/work/ca6c508eae419cf8/USD/pxr/usd/usd/stage.cpp -- Could not open asset @http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/2022.1/Isaac/Props/Mounts/SeattleLabTable/table_instanceable.usd@ for reference on prim @anon:0x254cb410:World0.usd@,@anon:0x22ea3a60:World0-session.usda@</World/Table_1>. (recomposing stage on stage @anon:0x254cb410:World0.usd@ <0x2509eb30>)
orbit_1      | 
orbit_1      | 2023-02-08 15:54:02 [147,447ms] [Warning] [omni.usd] Warning: in _ReportErrors at line 2830 of /buildAgent/work/ca6c508eae419cf8/USD/pxr/usd/usd/stage.cpp -- Could not open asset @http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/2022.1/Isaac/Props/Mounts/SeattleLabTable/table_instanceable.usd@ for reference on prim @anon:0x254cb410:World0.usd@,@anon:0x22ea3a60:World0-session.usda@</World/Table_2>. (recomposing stage on stage @anon:0x254cb410:World0.usd@ <0x2509eb30>)
orbit_1      | 
orbit_1      | 2023-02-08 15:54:02 [147,645ms] [Warning] [omni.usd] Warning: in _ReportErrors at line 2830 of /buildAgent/work/ca6c508eae419cf8/USD/pxr/usd/usd/stage.cpp -- Could not open asset @http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/2022.1/Isaac/Samples/Orbit/Robots/FrankaEmika/panda_instanceable.usd@ for reference on prim @anon:0x254cb410:World0.usd@,@anon:0x22ea3a60:World0-session.usda@</World/Robot_1>. (recomposing stage on stage @anon:0x254cb410:World0.usd@ <0x2509eb30>)
orbit_1      | 
orbit_1      | Traceback (most recent call last):
orbit_1      |   File "./source/standalone/demo/play_arms.py", line 159, in <module>
orbit_1      |     main()
orbit_1      |   File "./source/standalone/demo/play_arms.py", line 94, in main
orbit_1      |     robot.spawn("/World/Robot_1", translation=(0.0, -1.0, 0.0))
orbit_1      |   File "/isaac-sim/Orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/robots/robot_base.py", line 150, in spawn
orbit_1      |     solver_velocity_iteration_count=self.cfg.articulation_props.solver_velocity_iteration_count,
orbit_1      |   File "/isaac-sim/Orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/utils/kit.py", line 274, in set_articulation_properties
orbit_1      |     raise ValueError(f"No articulation schema present for prim '{prim_path}'.")
orbit_1      | ValueError: No articulation schema present for prim '/World/Robot_1'.
orbit_1      | /isaac-sim/python.sh: line 40:    23 Segmentation fault      (core dumped) $python_exe "$@" $args
orbit_1      | There was an error running python
docker_orbit_1 exited with code 1

[Question] Running demos in headless mode

Question

I have in general been able to access isaac-sim simulations that are run in headless mode within a docker container through the Omniverse Streaming Client. Should I expect the same when running the Orbit demo scripts in headless mode? Currently the Omniverse Streaming Client doesn't connect for me on some of the demos.

[Proposal] Gamepad support as teleoperation device

Proposal

I propose adding the gamepad as a device for teleoperation.

Motivation

A gamepad will make teleoperation of the robot arms easier than the keyboard. Supporting the gamepad also makes the feature set more comprehensive.

Pitch

Implement gamepad support in source/extensions/omni.isaac.orbit/omni/isaac/orbit/devices/gamepad, providing classes such as Se2Gamepad and Se3Gamepad.

Alternatives

No

Additional context

The gamepad is already used for camera navigation, so users need to unselect the "Gamepad Camera Control" option in the GUI before using it for teleoperation.

Checklist

  • I have checked that there is no similar issue in the repo (required)

[Bug Report] Mismatch in the observations between data collection and training in robomimic example workflow


Describe the bug

When running the agent training step of the Robomimic example following the documentation, I get an error saying that the observation 'object_positions' doesn't exist.

Steps to reproduce

Just follow the exact steps in Running existing scripts/Imitation Learning on the documentation page.

# step a: collect data with keyboard
./orbit.sh -p source/standalone/workflows/robomimic/collect_demonstrations.py --task Isaac-Lift-Franka-v0 --num_envs 1 --num_demos 10 --device keyboard

# split data
./orbit.sh -p source/standalone/workflows/robomimic/tools/split_train_val.py logs/robomimic/Isaac-Lift-Franka-v0/hdf_dataset.hdf5 --ratio 0.2

./orbit.sh -p source/standalone/workflows/robomimic/train.py --task Isaac-Lift-Franka-v0 --algo bc --dataset logs/robomimic/Isaac-Lift-Franka-v0/hdf_dataset.hdf5

System Info

Describe the characteristic of your environment:

  • Commit: 9d27e4d
  • Isaac Sim Version: 2022.2.1-rc.14+2022.2.494.70497c06.tc
  • OS: Ubuntu 20.04
  • GPU: NVIDIA GeForce RTX 3080 Ti
  • CUDA: 12.0
  • GPU Driver: 525.105.17

Additional context

I think the problem is that lift_bc.json contains object_positions, whereas in lift_cfg.py it is commented out; there is object_relative_tool_positions instead.
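A quick way to confirm the mismatch (a sketch assuming the standard robomimic hdf5 layout of data/demo_N/obs):

import h5py

with h5py.File("logs/robomimic/Isaac-Lift-Franka-v0/hdf_dataset.hdf5", "r") as f:
    obs_group = f["data/demo_0/obs"]
    # expect 'object_relative_tool_positions' here, but no 'object_positions'
    print(list(obs_group.keys()))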

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

Orbit Install Extras Error

I was following the installation instructions for Orbit from
https://isaac-orbit.github.io/orbit/source/setup/installation.html

I ran ./orbit.sh -e

and I got

kaito@pop-os:~/Documents/Orbit$ ./orbit.sh -e
[INFO] Installing extra requirements such as learning frameworks...                                                                                                                                               
/home/kaito/.local/share/ov/pkg/isaac_sim-2022.2.1/kit/python/bin/python3: can't open file '/home/kaito/Documents/Orbit/source/extensions/omni.isaac.orbit_envs[all]': [Errno 2] No such file or directory
There was an error running python

[Bug Report] After Isaac-Velocity-Anymal-C-v0 is trained based on rsl-rl, it cannot walk.

Describe the bug

After Isaac-Velocity-Anymal-C-v0 is trained for 1000 steps based on rsl-rl, it cannot walk, but floats in the air.

Steps to reproduce

On commit bea049b:

./orbit.sh -p source/standalone/workflows/rsl_rl/train.py --task Isaac-Velocity-Anymal-C-v0 --headless
./orbit.sh -p source/standalone/workflows/rsl_rl/play.py --task Isaac-Velocity-Anymal-C-v0 --num_envs 1 --checkpoint /PATH/TO/model_1000.pth

System Info

Describe the characteristic of your environment:

  • Commit: bea049b
  • Isaac Sim Version: 2022.2.1
  • OS: Ubuntu 20.04
  • GPU: RTX 3080
  • CUDA: 11.3
  • GPU Driver: 525.60.11

Additional context

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

[Question] details about ZMQ + FCI for real Franka

"we use ZMQ to send joint commands from ORBIT to a computer running the real-time kernel for Franka Emika robot. To abide by the real-time safety constraints, we use a quintic interpolator to upsample the 60 Hz joint commands from the simulator to 1000 Hz for execution on the robot" (orbit paper)


This sounds very efficient for overcoming the impossibility of using NVIDIA drivers on the PC that talks to the robot (due to the PREEMPT_RT kernel)!

I have two series of questions (unfortunately I could not find any additional details in the documentation/paper):

  1. regarding the zmq interface:

    1. what serialization is used (maybe msgpack or protobuf)?
    2. :octocat: is the code available somewhere?
  2. regarding the computer running the RT kernel for the Franka Emika robot:

    1. it is probably using FCI via libfranka, isn't it?
    2. is the target joint position (called qc on this FCI figure) passed to a joint position motion generator, which is then processed by one of the FCI internal controllers, e.g. the joint impedance controller, as explained here? (green path on the attached image)
    3. or are the target torques (called to-d on this FCI figure) directly processed by an external controller? (purple path on the attached image)
    4. In Fig. 9, the target joint position issued at 1 kHz is converted to torques by a PD controller.
      1. is this PD controller the built-in one of FCI?
      2. is this PD controller also used in simulation during training (I do not think so)? And if so, are the PD parameters identical in simulation and on the real robot?
    5. :octocat: is the code available somewhere?


20230404_franka_zmq



fci

[Question] Will scripted / hand-crafted agents be available?

Question

Hi, thank you for the great project.
The paper says that the demo in Figure 6 was collected by "Demonstration of the designed tasks using hand-crafted state machines and task-space controllers.", but this agent was not found in the current code base. Also, there was no specific mention of a hand-crafted agent in the development roadmap.
Is this something you plan to release in the future?

Thanks!

[Bug Report] Bugs of body_index & skrl in Orbit

Describe the bug

Hello, I've found two bugs in Orbit. I'm not sure my description is accurate, but I hope this bug report can help you fix these bugs in the next version.

  1. Bug of body_index
    The function _process_info_cfg defines self.ee_body_index = body_index in "source/extensions/omni.isaac.orbit/omni/isaac/orbit/robots/single_arm.py", line 156.
    The ee_body_index should be reduced by 1. This variable is used to find the end-effector body's Jacobian. For example, my Baxter robot has 57 links and the index of the base is 0; however, the first dimension of the Jacobian is 56, excluding the base. Subtracting 1 from ee_body_index ensures a one-to-one correspondence between the Jacobian and the index.
def _process_info_cfg(self) -> None:
    """Post processing of configuration parameters."""
    # process parent config
    super()._process_info_cfg()
    # resolve regex expressions for indices
    # -- end-effector body
    self.ee_body_index = None
    for body_index, body_name in enumerate(self.body_names):
        if re.fullmatch(self.cfg.ee_info.body_name, body_name):
            self.ee_body_index = body_index - 1
  2. Bug of skrl
    This is a simple bug which I found when running the skrl demo, though I'm not sure whether the version of my environment causes it.
Traceback (most recent call last):
  File "source/standalone/workflows/skrl/train.py", line 167, in <module>
    main()
  File "source/standalone/workflows/skrl/train.py", line 107, in main
    **convert_skrl_cfg(experiment_cfg["models"]["policy"]),
TypeError: gaussian_model() argument after ** must be a mapping, not NoneType

I fixed it by adding "return d" at the end of update_dict in "source/standalone/workflows/skrl/config.py" (after line 90).

def update_dict(d):
    for key, value in d.items():
        if isinstance(value, dict):
            update_dict(value)
        else:
            if key in _direct_eval:
                d[key] = eval(value)
            elif key.endswith("_kwargs"):
                d[key] = value if value is not None else {}
            elif key in ["rewards_shaper_scale"]:
                d["rewards_shaper"] = reward_shaper_function(value)
    return d

System Info

Describe the characteristic of your environment:

  • Commit: db71160
  • Isaac Sim Version: 2022.2.0
  • OS: Ubuntu 20.04
  • GPU: RTX 4090
  • CUDA: 12.0
  • GPU Driver: 525.85.05

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

[Question] Visual Observations Setup?

Question

Awesome paper and work so far. I was curious about setting up environments with visual observations. Is that possible yet? If so, how do you configure it? I couldn't find anything about visual observations (for RL workflows) in the docs.

Thanks

[Question] How can I reduce action space while keeping actuation alive?

Hi,

I'm preparing a simple example to study Orbit. Currently, my robot is made up of a 6-DoF wrist and a 1-DoF gripper with a mimic.

Problem: the policy network currently returns a 7-DoF action.
What I want to do: I only need a 6-DoF action for the wrist, and I want to close the gripper manually in _step_impl.
Question: how can I reduce the action space of the policy network while keeping the articulation of the gripper alive?

Thank you for your attention!

Best,
Bong.


Here are my code snippets for the robot (which is similar to franka.py)

robotiq.py: please see comments

    actuator_groups={
        "wrist_trans": ActuatorGroupCfg(
            dof_names=["trans_[x,y,z]"],
            model_cfg=ImplicitActuatorCfg(velocity_limit=VELOCITY_LIMIT, torque_limit=TORQUE_LIMIT),
            control_cfg=ActuatorControlCfg(
                command_types=["p_abs"],
                stiffness={".*": 400.0},
                damping={".*": 20.0},
                dof_pos_offset={
                    "trans_x": 0.0,
                    "trans_y": 0.0,
                    "trans_z": 0.0,
                },
            ),
        ),
        "wrist_rev": ActuatorGroupCfg(
            dof_names=["rev_[x-z]"],
            model_cfg=ImplicitActuatorCfg(velocity_limit=VELOCITY_LIMIT, torque_limit=TORQUE_LIMIT),
            control_cfg=ActuatorControlCfg(
                command_types=["p_abs"],
                stiffness={".*": 400.0},
                damping={".*": 20.0},
                dof_pos_offset={"rev_x": 0, "rev_y": 0, "rev_z": 0.0},
            ),
        ),
        # If I comment out this part,
        # the policy network returns only six actions, which is good!
        # However, I then cannot control the gripper's end-effector. Please see _step_impl.
        "robotiq_hand": GripperActuatorGroupCfg(
            dof_names=["bot_joint_0_p", "top_joint_0_p", "bot_joint_1_p", "top_joint_1_p", "bot_joint_2_p", "top_joint_2_p"],
            model_cfg=ImplicitActuatorCfg(velocity_limit=VELOCITY_LIMIT, torque_limit=TORQUE_LIMIT),
            control_cfg=ActuatorControlCfg(command_types=["p_abs"], stiffness={".*": 400}, damping={".*": 20}),
            mimic_multiplier={"bot_joint_0_p": 1, "top_joint_0_p": -1, "bot_joint_1_p": 1, "top_joint_1_p": -1, "bot_joint_2_p": -1, "top_joint_2_p": 1},
            # speed=1,
            open_dof_pos=0,
            close_dof_pos=0.785398,
        )
    },

Below are snippets that are part of lift/lift_env.

_step_impl: please see my comments!

    def _step_impl(self, actions: torch.Tensor):
        ...
        elif self.cfg.control.control_type == "default":
            self.robot_actions[:] = self.actions  # without the comment in "robotiq.py" -> self.robot_actions.shape is [:, 7]
                                                  # with the comment in "robotiq.py" -> self.robot_actions.shape is [:, 6]
            # HERE IS MY CODE!!
            # Without the comment in "robotiq.py" it works, but there is a useless action dimension [:, -1] in the policy output!
            # With the comment in "robotiq.py", I cannot access the gripper's robot action!
            if is_ee_close_to_object():
                self.robot_actions[:, -1] = 0  # without the comment in "robotiq.py" -> closes the gripper
                                               # with the comment in "robotiq.py" -> it accesses the last joint of the wrist
        ...
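For reference, a hedged sketch of the behavior I'm after: keep the gripper actuator group (so its articulation stays alive) but always overwrite its command from a scripted condition, so the policy effectively controls only 6 DoFs. Here _is_ee_close_to_object is a hypothetical helper returning a boolean tensor per environment.

    def _step_impl(self, actions: torch.Tensor):
        ...
        elif self.cfg.control.control_type == "default":
            # policy supplies only the 6 wrist DOFs; the 7th slot is scripted
            self.robot_actions[:, :6] = actions[:, :6]
            close_mask = self._is_ee_close_to_object()  # hypothetical helper
            # scripted binary gripper command per environment: negative -> close, positive -> open
            self.robot_actions[:, -1] = torch.where(
                close_mask,
                torch.full_like(self.robot_actions[:, -1], -1.0),
                torch.full_like(self.robot_actions[:, -1], 1.0),
            )
        ...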

Slight error in lift_env reward manager

https://github.com/NVIDIA-Omniverse/Orbit/blob/d75e7ff63bf08b558266d2eff42fd4db15d29d6b/source/extensions/omni.isaac.orbit_envs/omni/isaac/orbit_envs/manipulation/lift/lift_env.py#L399-L401

Looks like this is missing the exp-kernel; assuming it should be something along the lines of:

  def reaching_object_position_exp(self, env: LiftEnv, sigma: float):
      """Penalize end-effector tracking position error using exp-kernel."""
      error = torch.sum(torch.square(env.robot.data.ee_state_w[:, 0:3] - env.object.data.root_pos_w), dim=1)
      return torch.exp(-error / sigma)

[Question] About February Features

Question

Hello, I like this framework very much, and I am also looking forward to the February features mentioned in the roadmap. May I know when the next version will be released?

[Question] Availability of environments in the paper

Hi, there seem to be several environments missing in the repo right now compared to the paper. I'm wondering if they are somewhere other than the orbit_envs directory, or are yet to be released? If the latter, is there any timeline for when they'll be available?

Thanks!

[Proposal] Compositional Configuration via Dataclasses (Hydra, etc.)

Proposal

Orbit has already made thoughtful design decisions that support really nice UX/UI paradigms for configuration management. Showing a few use-cases of how this might be done would be beneficial for the community, as it can be difficult to decide "how to start". We would like to have one or two best-practices examples that fully leverage configuration via dataclasses: preferably, one which makes no commitment to a configuration framework, and one which demonstrates a few tweaks that allow Hydra to be used at its most powerful capacity.

Pitch

We'll provide an example that shows how to set up an environment (robot/scene), an RL algorithm, and a training script, and configure all of this via dataclasses.

We'll then show how to extend this configuration paradigm via some key features of Hydra.

Alternatives

We can also look into what benefits other configuration frameworks have (one such being tyro), as long as they use dataclasses as the principal data structure.

Additional context

#57 and #67 are a related issue/PR pair, and a good start! We will build on this and provide more use-cases that leverage Hydra and are clean / easy to understand. See some of the previous discussion in #67.

Checklist

  • I have checked that there is no similar issue in the repo (required)

[Proposal] Rename "dof" to "joint" in core classes

Proposal

This thread is for discussing renaming all "DOF" attributes to "Joints" in the core framework.

Motivation

The framework uses "DOF" (degrees-of-freedom) as a convention taken from IsaacGym. For fixed-base arms, the joints and DOFs are equivalent. However, for floating-base systems, the DOFs also include the un-actuated base (i.e., the generalized coordinates).

It can be annoying to call these attributes "DOF", since nomenclature-wise it is incorrect for robotics.

Pitch

Rename the attributes that have "DOF" or "dof" to call them "Joint" or "joint" respectively.

To keep it backward compatible, we can add a check in a __getattr__ call that renames the accessed attribute to its joint-based name and emits a warning; see the sketch below. After v1.0, we can remove this function.
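A minimal sketch of such a shim (illustrative, not the final implementation):

import warnings

class ArticulationData:  # stand-in for any core class whose attributes are renamed
    def __getattr__(self, name: str):
        # only reached when normal attribute lookup fails
        if "dof" in name:
            new_name = name.replace("dof", "joint")
            warnings.warn(
                f"'{name}' is deprecated and will be removed after v1.0; use '{new_name}' instead.",
                DeprecationWarning,
            )
            return getattr(self, new_name)
        raise AttributeError(name)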

Alternatives

This is a breaking change but useful in the long run.

Checklist

  • I have checked that there is no similar issue in the repo (required)

[Question] How can I give dummy steps?

Question

Hello, I want to add dummy steps to the environment. For example, it would look like this:

Dummy action: The gripper moves 1 unit along the x-axis for 10 steps due to the dummy action, then closes to grab the object.
Action: The robot is driven by the agent. The robot performs a specific task (e.g., rotation).
Dummy action: After the rotation is performed, the gripper moves 1 unit along the y-axis due to the dummy action.
In this case, the environment has early termination and multiple parallel environments.

The important point here is that the rollout should not be contaminated. That is, while env_0's dummy action is performed, env_0 does not contribute any information to the rollout, while env_1 still collects rollouts; env_0 and env_1 are updated together through sim.step().

Is there already completed work for this? I'm asking because it's challenging to code from scratch due to the early termination and the multi-environment setup.

Thank you.

[Question] How to add user custom environment

Question

Hi, I want to create a custom environment.

For example, I want to upgrade the franka-lift-v0 environment, changing the rigid object and the manipulator (e.g., to a UR10).

What is the best way to do this? Can you recommend some docs for custom environments?

thanks!

[Question] Using Jax

Is it possible to use Jax as the learning framework without any loss in performance/simulation speed?

[Question] How to create a visual mesh?

Question

Hi! I want to create a visual mesh like this.

Unfortunately, the In-hand Manipulation env is not released yet, so I would like to edit the existing code. With these options,

  • collision_enabled=False
  • disable_gravity=True

I expected the same behavior as in the picture.
However, in my code, the visual object still has a collider, while gravity is disabled.
Can you give me some advice?

Thanks!
Reproduce steps

I just added some lines to the .cfg file.
lift_cfg.py

@configclass
class VisualObjectCfg(RigidObjectCfg):

    meta_info = RigidObjectCfg.MetaInfoCfg(
        usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Blocks/DexCube/dex_cube_instanceable.usd",  scale=(1, 1, 1))
    init_state = RigidObjectCfg.InitialStateCfg(
        pos=(0.4, 0.0, 0.075), rot=(1.0, 0.0, 0.0, 0.0), lin_vel=(0.0, 0.0, 0.0), ang_vel=(0.0, 0.0, 0.0))
    collision_props = RigidObjectCfg.CollisionPropertiesCfg(collision_enabled=False)
    rigid_props = RigidObjectCfg.RigidBodyPropertiesCfg(disable_gravity=True)

...

@configclass
class LiftEnvCfg(IsaacEnvCfg):
   ...
    # -- object
    object: ManipulationObjectCfg = ManipulationObjectCfg()
    # -- VISUAL OBJECT 
    visual_object: VisualObjectCfg = VisualObjectCfg()
    ...

After that, I added two lines to my env file.
lift_env.py

class LiftEnv(IsaacEnv):

    def __init__(self, cfg: LiftEnvCfg = None, headless: bool = False):
        ...
        self.object = RigidObject(cfg=self.cfg.object)
        self.visual_object = RigidObject(cfg=self.cfg.visual_object) # Here
        ...

    def _design_scene(self) -> List[str]:
        ...
        self.object.spawn(self.template_env_ns + "/Object")
        self.visual_object.spawn(self.template_env_ns + "/VisualObject") # Here
        ...
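A hedged workaround sketch: the referenced USD asset may already carry collision APIs on its child prims, which the config flag alone might not override. Walking the spawned subtree and disabling them explicitly could be worth trying (the prim path below follows the snippet above and may need adjusting):

from pxr import Usd, UsdPhysics
import omni.usd

stage = omni.usd.get_context().get_stage()
root = stage.GetPrimAtPath(self.template_env_ns + "/VisualObject")
for prim in Usd.PrimRange(root):
    if prim.HasAPI(UsdPhysics.CollisionAPI):
        UsdPhysics.CollisionAPI(prim).CreateCollisionEnabledAttr(False)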

[Bug Report] Program aborts execution due to an unhandled error when running API Demos 1 and 2

Describe the bug

Demos 1 and 2 in API Demos abort execution due to an unhandled error.

Steps to reproduce

Run demos 1 and 2 in API Demos

  • Spawn different quadrupeds, visualize feet markers, and make robots stand using position commands:

    ./orbit.sh -p source/standalone/demo/play_quadrupeds.py

    kit_20230118_201202.log


  • Spawn multiple Franka arms and apply random position commands:

    ./orbit.sh -p source/standalone/demo/play_arms.py --robot franka_panda

    kit_20230118_201501.log


System Info

Describe the characteristic of your environment:

  • Commit: dd24040
  • Isaac Sim Version: 2022.2.0
  • OS: Ubuntu 20.04
  • GPU: RTX 3080
  • CUDA: 11.8
  • GPU Driver: 520.61.05

Additional context

  • Demo 3 runs well


Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

[Question] Where to find other environments from paper and doc?

Question

Hello Orbit Team,

I went through the docs and was able to run the environments I could find, namely Cartpole, Ant, Humanoid, Velocity, Reach, and Lift, and I was wondering where the other examples showcased in your paper and docs (here: https://isaac-orbit.github.io/orbit/source/features/environments.html) are and how to access them.

The base examples only show basic observations like the position and orientation of the actor, object, and target; however, a more complex example that utilizes richer observations would be appreciated.

For example, Cloth Folding or Mobile Reach would be interesting, as these require observations that force the NN to learn spatial awareness.

I would greatly appreciate it if you published some of the more complex examples, as Isaac still has quite meager community support on how exactly to utilize the engine to its full potential.

Kind regards

[Proposal] Enabling Omniverse Streaming Client

Proposal

Omniverse Streaming Client is a lightweight application allowing users to connect to Omniverse applications deployed on the network or over the Internet.

https://docs.omniverse.nvidia.com/app_streaming-client/app_streaming-client/overview.html

Motivation

It is quite useful for people who are developing on remote servers. Thus, it would help to have a pipeline to enable this functionality easily.

Pitch

  • To enable live streaming, the user needs to load the default app experience in their standalone Python scripts
  • After that, adding the following piece of code to the script should enable live streaming:
import carb
from omni.isaac.core.utils.extensions import enable_extension
from omni.isaac.core.utils.carb import set_carb_setting

# settings interface
carb_settings = carb.settings.get_settings()
# set values
set_carb_setting(carb_settings, "/app/livestream/enabled", True)
set_carb_setting(carb_settings, "/app/window/drawMouse", True)
set_carb_setting(carb_settings, "/app/livestream/proto", "ws")
set_carb_setting(carb_settings, "/app/livestream/websocket/framerate_limit", 120)
set_carb_setting(carb_settings, "/ngx/enabled", False)
# enable extensions for live stream
enable_extension("omni.kit.livestream.native")
enable_extension("omni.services.streaming.manager")

Additional context

  • OmniIsaacGymEnvs has a way to enable live streaming easily from the command line.
  • OmniGibson has a nice way to handle remote operation by setting the REMOTE_ENABLED flag in the terminal (corresponding: PR).

Checklist

  • I have checked that there is no similar issue in the repo (required)

[Bug Report] Bug in rendering camera images in Gym environment

Describe the bug

  • I'm using a camera in a reinforcement learning framework, and when I set headless to True, the program stops running.

Steps to reproduce

The main bug

  • I checked the code and it seems to be caused by the experience parameter of SimulationApp; you can reproduce my error like this:
  • Open "Orbit/source/standalone/demo/play_camera.py"
  • Replace the following code (because the RL framework uses this code)
simulation_app = SimulationApp(config)
  • with
import os
config = {"headless": args_cli.headless}
# load cheaper kit config in headless
if args_cli.headless:
    app_experience = f"{os.environ['EXP_PATH']}/omni.isaac.sim.python.gym.headless.kit"
else:
    app_experience = f"{os.environ['EXP_PATH']}/omni.isaac.sim.python.kit"
# launch the simulator
simulation_app = SimulationApp(config, experience=app_experience)

System Info

Describe the characteristic of your environment:

  • Commit: none
  • Isaac Sim Version: 2022.2.0
  • OS: Ubuntu 20.04
  • GPU: RTX 3090ti
  • CUDA: 11.7
  • GPU Driver: 515.86.01


Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

[Bug Report] Cube not interactable in “Isaac-Lift-Franka-v0” task

Discussed in #26

Originally posted by Aqua12138 March 2, 2023
Why is the cube not responding physically when I run the lift env? I tried controlling the Franka to grip it, but there doesn't seem to be any collision response.

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

[Question] Collision error between mobile manipulator and YCB object

I have defined a mobile manipulator and imported a YCB object in Isaac Orbit.
Here is the code for importing the YCB object:

Obj_cfg = RigidObjectCfg()
Obj_cfg.meta_info.usd_path = f"{ISAAC_NUCLEUS_DIR}/Props/YCB/Axis_Aligned_Physics/006_mustard_bottle.usd"
rigid_object = RigidObject(Obj_cfg)

obj_quat = convert_quat(tf.Rotation.from_euler("XYZ", (-90, 90, 0), degrees=True).as_quat(), to="wxyz")
translation =(np.random.uniform(-0.1,0.1), np.random.uniform(-0.3,0), 0.8)
rigid_object.spawn("/World/Objects/mustard_bottle",translation=translation, orientation=obj_quat)

When I try to control the mobile manipulator to grasp the YCB object, there is no collision between the gripper and the object, but there is collision between the robotic arm and the object. I have checked the URDF file, and there is no difference between the arm and the gripper.

The following video shows the grasp process:
https://user-images.githubusercontent.com/41771334/231722830-79a4a5e2-c3ec-407b-b2a0-b8c5db67d322.mp4
