
voxposer's People

Contributors

huangwl18

voxposer's Issues

Running issue

My config:
WSL Ubuntu-22.04
Python 3.9

When I run the following code, the Jupyter kernel restarts.

config = get_config('rlbench')
# uncomment this if you'd like to change the language model (e.g., for faster speed or lower cost)
for lmp_name, cfg in config['lmp_config']['lmps'].items():
    cfg['model'] = 'gpt-3.5-turbo'

# initialize env and voxposer ui
visualizer = ValueMapVisualizer(config['visualizer'])
env = VoxPoserRLBench(visualizer=visualizer)
lmps, lmp_env = setup_LMP(env, config, debug=False)
voxposer_ui = lmps['plan_ui']

Jupyter debug info:

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/mnt/d/all_codes/VoxPoser/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, webgl, xcb.

QCoreApplication::applicationFilePath: Please instantiate the QApplication object first

When I run playground.ipynb, there are also some errors:

QCoreApplication::applicationFilePath: Please instantiate the QApplication object first
WARNING: QApplication was not created in the main() thread.
WARNING: QApplication was not created in the main() thread.
libGL error: MESA-LOADER: failed to open vmwgfx: /usr/lib/dri/vmwgfx_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: vmwgfx
libGL error: MESA-LOADER: failed to open vmwgfx: /usr/lib/dri/vmwgfx_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: vmwgfx
libGL error: MESA-LOADER: failed to open swrast: /usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: swrast

What is the solution for these errors? Thank you for your help.
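
Not an authoritative fix, but two things commonly cause this under WSL2: Qt's xcb plugin needs a reachable X display (DISPLAY), and PyRep/CoppeliaSim expect a few environment variables to be exported in the shell before Python or Jupyter starts. A minimal check from Python, assuming the usual PyRep setup conventions (the variable values are whatever your shell exported):

import os

# Usual PyRep/CoppeliaSim environment (must be exported before launching Python/Jupyter):
#   COPPELIASIM_ROOT            -> CoppeliaSim install directory
#   LD_LIBRARY_PATH             -> should include COPPELIASIM_ROOT
#   QT_QPA_PLATFORM_PLUGIN_PATH -> should point at COPPELIASIM_ROOT
#   DISPLAY                     -> a reachable X server (WSL2 needs WSLg or an X server on Windows)
for var in ('COPPELIASIM_ROOT', 'LD_LIBRARY_PATH', 'QT_QPA_PLATFORM_PLUGIN_PATH', 'DISPLAY'):
    print(f'{var} = {os.environ.get(var, "<not set>")}')

RLBench's Environment also accepts headless=True, which skips the Qt GUI entirely; whether the VoxPoserRLBench wrapper passes that flag through depends on your version of the code.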

TypeError: 'str' object is not callable

I tried to run playground.ipynb and got the following error; I'm not sure what happened.

OpenAI API call took 6.37s
"planner" generated code
context: "objects = ['bin', 'rubbish', 'tomato1', 'tomato2']"
objects = ['bin', 'rubbish', 'tomato1', 'tomato2']
Query: throw away the trash, leaving any other objects alone.
execute("grasp the rubbish")
execute("back to default pose")
execute("move to the bin")
execute("open gripper")
done

Error executing code:
objects = ['bin', 'rubbish', 'tomato1', 'tomato2']
execute("grasp the rubbish")
execute("back to default pose")
execute("move to the bin")
execute("open gripper")
done

TypeError Traceback (most recent call last)
Cell In[5], line 2
1 instruction = np.random.choice(descriptions)
----> 2 voxposer_ui(instruction)

File ~/Graduation/VoxPoser_Pro/VoxPoser/src/LMP.py:146, in LMP.__call__(self, query, **kwargs)
144 import pdb ; pdb.set_trace()
145 else:
--> 146 exec_safe(to_exec, gvars, lvars)
148 self.exec_hist += f'\n{to_log.strip()}'
150 if self._cfg['maintain_session']:

File ~/Graduation/VoxPoser_Pro/VoxPoser/src/LMP.py:189, in exec_safe(code_str, gvars, lvars)
187 except Exception as e:
188 print(f'Error executing code:\n{code_str}')
--> 189 raise e

File ~/Graduation/VoxPoser_Pro/VoxPoser/src/LMP.py:186, in exec_safe(code_str, gvars, lvars)
181 custom_gvars = merge_dicts([
182 gvars,
183 {'exec': empty_fn, 'eval': empty_fn}
184 ])
185 try:
--> 186 exec(code_str, custom_gvars, lvars)
187 except Exception as e:
188 print(f'Error executing code:\n{code_str}')

File <string>:2

File ~/Graduation/VoxPoser_Pro/VoxPoser/src/interfaces.py:100, in LMP_interface.execute(self, movable_obs_func, affordance_map, avoidance_map, rotation_map, velocity_map, gripper_map)
98 if avoidance_map is None:
99 avoidance_map = self._get_default_voxel_map('obstacle')
--> 100 object_centric = (not movable_obs_func()['name'] in EE_ALIAS)
101 execute_info = []
102 if affordance_map is not None:
103 # execute path in closed-loop

TypeError: 'str' object is not callable
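
For context on the trace: interfaces.py expects the first argument of execute() to be a callable observation getter, but here it received the plain string from the planner's execute("grasp the rubbish") call, which suggests the planner's execute was bound to the low-level interface rather than to the composer LMP. A purely hypothetical guard (not the repository's code) that makes the failure explicit:

# Hypothetical sanity check, not part of the repo: `movable` must be callable
# and return an observation dict with a 'name' key before execute() can use it.
def check_movable(movable_obs_func):
    if not callable(movable_obs_func):
        raise TypeError(
            f"expected a callable observation getter, got "
            f"{type(movable_obs_func).__name__}: {movable_obs_func!r}"
        )
    return movable_obs_func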

Count the success rate

Dear Authors,

Thanks for sharing this great work. I would like to know how to compute the success rate and the error breakdown by component, as reported in the paper.

Best,
Jian Ding
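
Not an official answer, but a rough sketch of how one could tally success over repeated episodes in the playground setup, assuming the playground's load_task / reset / get_object_names helpers; check_success below is a placeholder stub for whatever success signal your environment wrapper or RLBench task exposes, so treat it as an assumption rather than the repository's API.

import numpy as np
from rlbench import tasks
from utils import set_lmp_objects

def check_success(env):
    # Placeholder: replace with the actual success signal from your task
    # (e.g., the RLBench task's success condition); this stub always returns False.
    return False

n_episodes = 20
n_success = 0
env.load_task(tasks.PutRubbishInBin)
for _ in range(n_episodes):
    descriptions, obs = env.reset()
    set_lmp_objects(lmps, env.get_object_names())  # expose current object names to the LMPs
    instruction = np.random.choice(descriptions)
    voxposer_ui(instruction)
    n_success += int(check_success(env))
print(f'success rate: {n_success / n_episodes:.2f}')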

Transfer to another simulation

Has anyone tried to transfer VoxPoser to another simulator (like Gazebo) and used ROS to communicate? I think this is the first step toward deploying VoxPoser on our own embodiment. If someone is doing the same work, we could exchange ideas!

About getting the object position

Hello, I'm impressed with VoxPoser, and I have a question about how the object position for the affordance_map is obtained. In the open-source code, the object position comes from the simulation environment, not from a vision model, correct? I think in the real-world experiments it should come from the GPT-4 result. Is my thinking right?

The OpenAI API calling code is outdated.

The OpenAI API for calling GPT has gone through many versions. In my testing, the GPT-calling code in this repository does not run correctly with the latest openai library, nor with several earlier versions.

The problem mainly occurs in two places:

  1. openai.error

  2. openai.ChatCompletion

I am currently trying to migrate to the latest version of the OpenAI library. Good luck!!
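
For reference, openai.error and openai.ChatCompletion were removed in openai-python >= 1.0; a sketch of the replacement call pattern (the new client API, not the repository's wrapper with its caching and error handling) looks like this:

# openai-python >= 1.0 style call (sketch; the repo's wrapper adds caching on top)
from openai import OpenAI, OpenAIError

client = OpenAI()  # reads OPENAI_API_KEY from the environment
try:
    response = client.chat.completions.create(
        model='gpt-3.5-turbo',
        messages=[{'role': 'user', 'content': 'Say hello.'}],
        temperature=0,
    )
    print(response.choices[0].message.content)
except OpenAIError as e:
    print(f'API call failed: {e}')

The exception classes that used to live under openai.error (e.g. RateLimitError, APIConnectionError) are now exposed at the top level of the openai package.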

[Question]: How to update object names file to handle new RLBench tasks

Hi. I'm trying to test VoxPoser on a set of RLBench environments different from the ones currently exposed in playground.ipynb. Specifically, I'd like to know how to update task_object_names.json so that other RLBench tasks can be tested. What information do I have to put into the json file? For example, for the task basketball_in_hoop, I'm using the following entry in the table of object names:

"basketball_in_hoop": [["ball", "ball"], ["hoop", "basket_ball_hoop_visual"]],

Notice that the object name I'm using for the hoop is that of the visual, not the respondable (collider). The image below shows the scene hierarchy for the basketball_in_hoop task; note that the hoop has both a visual and a respondable, so I was wondering which name I should use in the json file.

[image: scene hierarchy for the basketball_in_hoop task]

Similarly, for other tasks I sometimes use the respondable as the object name and the task doesn't succeed; after changing to the visual, the task still fails, so I'm not sure whether it's an error on my end or in VoxPoser.

Thanks in advance.
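
For what it's worth, my reading of task_object_names.json (an assumption, not confirmed by the authors) is that each pair maps the name used in language queries/prompts to the exact object name in the RLBench/CoppeliaSim scene, shown here in Python-dict form for readability:

# My reading (an assumption): [name the LMPs use in queries, object name in the scene]
task_object_names = {
    "basketball_in_hoop": [
        ["ball", "ball"],
        ["hoop", "basket_ball_hoop_visual"],
    ],
}

Since the wrapper builds its per-object observations from what the simulator actually renders, the name that appears in the observations (often the _visual shape) is usually the one that works, which may be why the respondable name fails.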

Question on disappearing robot arm when running tasks

Hello, I'm impressed with VoxPoser and grateful for the open-source RLBench code. I'm currently exploring VoxPoser with additional RLBench tasks beyond those provided in the Jupyter notebook (and even with some of the notebook's own tasks). However, I've encountered a recurring issue where the robot arm disappears, leaving only the base. Have you experienced this? Thank you for your assistance.

Error: signal 11:

Every time execution reaches self.func = func, the following error is raised:
class IterableDynamicObservation:
    """acts like a list of DynamicObservation objects, initialized with a function that evaluates to a list"""
    def __init__(self, func):
        assert callable(func), 'func must be callable'
        self.func = func
        self._validate_func_output()

Tracking it down, the crash happens at:
pcd.points = o3d.utility.Vector3dVector(points[-1])

Error: signal 11:

/home/wang/CoppeliaSim/libcoppeliaSim.so.1(_Z11_segHandleri+0x30)[0x7f624d7d6ae0]
/lib/x86_64-linux-gnu/libc.so.6(+0x43090)[0x7f62f2740090]
QObject::~QObject: Timers cannot be stopped from another thread
QMutex: destroying locked mutex

If I test Vector3dVector in my own test code, it works normally.

The code related to real-world execution

Thank you for your incredible work!
The current code only covers simulation execution and does not include the VLM part. Is there any possibility of releasing the source code related to real-world execution?

Request for the SAPIEN simulation code

The simulation experiments in the paper were run in SAPIEN, but the released code is based on RLBench. Could you provide the SAPIEN simulation code?

RuntimeError: Handle Panda does not exist.

import os
import openai
from arguments import get_config
from interfaces import setup_LMP
from visualizers import ValueMapVisualizer
from envs.rlbench_env import VoxPoserRLBench
from utils import set_lmp_objects
import numpy as np
from rlbench import tasks

config = get_config('rlbench')
# uncomment this if you'd like to change the language model (e.g., for faster speed or lower cost)
# for lmp_name, cfg in config['lmp_config']['lmps'].items():
#     cfg['model'] = 'gpt-3.5-turbo'

# initialize env and voxposer ui
visualizer = ValueMapVisualizer(config['visualizer'])
env = VoxPoserRLBench(visualizer=visualizer)
lmps, lmp_env = setup_LMP(env, config, debug=False)
voxposer_ui = lmps['plan_ui']

result:

Jupyter environment detected. Enabling Open3D WebVisualizer.
[Open3D INFO] WebRTC GUI backend enabled.
[Open3D INFO] WebRTCWindowSystem: HTTP handshake server disabled.


---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[2], line 20
     14 # uncomment this if you'd like to change the language model (e.g., for faster speed or lower cost)
     15 # for lmp_name, cfg in config['lmp_config']['lmps'].items():
     16 #     cfg['model'] = 'gpt-3.5-turbo'
     17 
     18 # initialize env and voxposer ui
     19 visualizer = ValueMapVisualizer(config['visualizer'])
---> 20 env = VoxPoserRLBench(visualizer=visualizer)
     21 lmps, lmp_env = setup_LMP(env, config, debug=False)
     22 voxposer_ui = lmps['plan_ui']

File /data1/ckw/00robo/01llm/VoxPoser/src/envs/rlbench_env.py:52, in VoxPoserRLBench.__init__(self, visualizer)
     49 action_mode = CustomMoveArmThenGripper(arm_action_mode=EndEffectorPoseViaPlanning(),
     50                                 gripper_action_mode=Discrete())
     51 self.rlbench_env = Environment(action_mode)
---> 52 self.rlbench_env.launch()
     53 self.task = None
     55 self.workspace_bounds_min = np.array([self.rlbench_env._scene._workspace_minx, self.rlbench_env._scene._workspace_miny, self.rlbench_env._scene._workspace_minz])

File /data1/ckw/micromamba/envs/voxposer-env/lib/python3.9/site-packages/rlbench/environment.py:112, in Environment.launch(self)
    110     arm.set_position(panda_pos)
    111 else:
--> 112     arm, gripper = arm_class(), gripper_class()
    114 self._robot = Robot(arm, gripper)
    115 if self._randomize_every is None:

File /data1/ckw/micromamba/envs/voxposer-env/lib/python3.9/site-packages/pyrep/robots/arms/panda.py:7, in Panda.__init__(self, count)
      6 def __init__(self, count: int = 0):
----> 7     super().__init__(count, 'Panda', 7)

File /data1/ckw/micromamba/envs/voxposer-env/lib/python3.9/site-packages/pyrep/robots/arms/arm.py:25, in Arm.__init__(self, count, name, num_joints, base_name, max_velocity, max_acceleration, max_jerk)
     23 """Count is used for when we have multiple copies of arms"""
     24 joint_names = ['%s_joint%d' % (name, i+1) for i in range(num_joints)]
---> 25 super().__init__(count, name, joint_names, base_name)
     27 # Used for motion planning
     28 self.max_velocity = max_velocity

File /data1/ckw/micromamba/envs/voxposer-env/lib/python3.9/site-packages/pyrep/robots/robot_component.py:19, in RobotComponent.__init__(self, count, name, joint_names, base_name)
     16 def __init__(self, count: int, name: str, joint_names: List[str],
     17              base_name: str = None):
     18     suffix = '' if count == 0 else '#%d' % (count - 1)
---> 19     super().__init__(
     20         name + suffix if base_name is None else base_name + suffix)
     21     self._num_joints = len(joint_names)
     23     # Joint handles

File /data1/ckw/micromamba/envs/voxposer-env/lib/python3.9/site-packages/pyrep/objects/object.py:24, in Object.__init__(self, name_or_handle)
     22     self._handle = name_or_handle
     23 else:
---> 24     self._handle = sim.simGetObjectHandle(name_or_handle)
     25 assert_type = self._get_requested_type()
     26 actual = ObjectType(sim.simGetObjectType(self._handle))

File /data1/ckw/micromamba/envs/voxposer-env/lib/python3.9/site-packages/pyrep/backend/sim.py:94, in simGetObjectHandle(objectName)
     92 handle = lib.simGetObjectHandle(objectName.encode('ascii'))
     93 if handle < 0:
---> 94     raise RuntimeError('Handle %s does not exist.' % objectName)
     95 return handle

RuntimeError: Handle Panda does not exist.

Is the rotation map implemented?

I tested some prompts with rotation suggestions, and the executor throws an error saying vec2quat is not defined.
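
Not the repository's helper, but a generic sketch of what a vec2quat could look like: a quaternion (x, y, z, w) that rotates a reference axis onto a given direction. Whether this matches the semantics the prompts expect is an assumption.

import numpy as np
from scipy.spatial.transform import Rotation as R

def vec2quat(direction, reference=(0.0, 0.0, 1.0)):
    """Hypothetical helper: quaternion (x, y, z, w) rotating `reference` onto `direction`."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    ref = np.asarray(reference, dtype=float)
    ref /= np.linalg.norm(ref)
    axis = np.cross(ref, d)
    s = np.linalg.norm(axis)
    if s < 1e-8:  # parallel or anti-parallel vectors
        if np.dot(ref, d) > 0:
            return np.array([0.0, 0.0, 0.0, 1.0])  # identity rotation
        # 180-degree rotation about any axis orthogonal to ref
        ortho = np.array([1.0, 0.0, 0.0]) if abs(ref[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        axis = np.cross(ref, ortho)
        return R.from_rotvec(np.pi * axis / np.linalg.norm(axis)).as_quat()
    angle = np.arccos(np.clip(np.dot(ref, d), -1.0, 1.0))
    return R.from_rotvec(angle * axis / s).as_quat()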

Questions about online learning

Thank you for your amazing work! I was very inspired after reading the code, but I also have a question:

  1. For contact-rich tasks, the trajectories VoxPoser synthesizes zero-shot are not enough; you say you used an MLP for this, but I don't see that part of the code in the repository. Do you have any plans to open-source it, or is there a reference for it? I'm not very familiar with this part, especially the "online interactions". How can the trajectory be optimized through the forward and backward passes of an MLP?
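
I can't speak for the authors, but the usual reading of "online interactions" is: collect (state, action, next_state) transitions by interacting, fit an MLP dynamics model on them by backpropagation, and then re-plan or re-score candidate trajectories (e.g., with MPC) using that model; the trajectory itself is not optimized by backprop. A generic sketch of the model-fitting part, assuming flat state/action vectors and PyTorch (not the authors' code):

import torch
import torch.nn as nn

# Generic dynamics-model sketch: f(state, action) -> next_state, trained with MSE.
class DynamicsMLP(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def fit_dynamics(model, states, actions, next_states, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(states, actions), next_states)
        loss.backward()  # gradients update the model weights, not the trajectory
        opt.step()
    return model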

Planner finds the right path, but execution fails

I tried to replicate the experiments in your paper using RLBench.
I created a new task environment in RLBench and attempted to drive it with VoxPoser.
The planner successfully computed waypoints, but execution did not succeed.
Can you please help me identify where I may have made a mistake?

This is the terminal log:
[interfaces.py | 11:42:37] completed waypoint 1 (wp: [ 0.282 -0.012 1.469], actual: [ 0.279 -0.009 1.471], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.689)
[interfaces.py | 11:42:37] completed waypoint 2 (wp: [ 0.252 -0.015 1.449], actual: [ 0.279 -0.009 1.471], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.689)
[interfaces.py | 11:42:37] completed waypoint 3 (wp: [ 0.228 -0.017 1.429], actual: [ 0.279 -0.009 1.47 ], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.688)
[interfaces.py | 11:42:37] completed waypoint 4 (wp: [ 0.208 -0.019 1.409], actual: [ 0.279 -0.009 1.47 ], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.688)
[interfaces.py | 11:42:37] completed waypoint 5 (wp: [ 0.191 -0.02 1.388], actual: [ 0.279 -0.009 1.469], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.687)
[interfaces.py | 11:42:37] completed waypoint 6 (wp: [ 0.179 -0.02 1.368], actual: [ 0.279 -0.009 1.469], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.687)
[interfaces.py | 11:42:37] completed waypoint 7 (wp: [ 0.169 -0.021 1.348], actual: [ 0.28 -0.009 1.468], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.686)
[interfaces.py | 11:42:37] completed waypoint 8 (wp: [ 0.163 -0.021 1.328], actual: [ 0.28 -0.009 1.468], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.686)
[interfaces.py | 11:42:38] completed waypoint 9 (wp: [ 0.158 -0.021 1.308], actual: [ 0.28 -0.009 1.467], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.685)
[interfaces.py | 11:42:38] completed waypoint 10 (wp: [ 0.156 -0.021 1.287], actual: [ 0.28 -0.009 1.467], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.685)
[interfaces.py | 11:42:38] completed waypoint 11 (wp: [ 0.155 -0.02 1.257], actual: [ 0.28 -0.009 1.466], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.684)
[interfaces.py | 11:42:38] completed waypoint 12 (wp: [ 0.155 -0.02 1.237], actual: [ 0.28 -0.009 1.465], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.684)
[interfaces.py | 11:42:38] completed waypoint 13 (wp: [ 0.156 -0.02 1.217], actual: [ 0.28 -0.009 1.465], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.683)
[interfaces.py | 11:42:38] completed waypoint 14 (wp: [ 0.158 -0.02 1.196], actual: [ 0.28 -0.009 1.464], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.683)
[interfaces.py | 11:42:38] completed waypoint 15 (wp: [ 0.16 -0.02 1.176], actual: [ 0.28 -0.009 1.464], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.682)
[interfaces.py | 11:42:38] completed waypoint 16 (wp: [ 0.161 -0.02 1.156], actual: [ 0.28 -0.009 1.464], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.682)
[interfaces.py | 11:42:38] completed waypoint 17 (wp: [ 0.16 -0.019 1.136], actual: [ 0.28 -0.009 1.463], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.681)
[interfaces.py | 11:42:38] completed waypoint 18 (wp: [ 0.16 -0.02 1.116], actual: [ 0.28 -0.009 1.462], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.681)
[interfaces.py | 11:42:39] completed waypoint 19 (wp: [ 0.16 -0.02 1.095], actual: [ 0.281 -0.009 1.462], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.68)
[interfaces.py | 11:42:39] completed waypoint 20 (wp: [ 0.159 -0.02 1.075], actual: [ 0.281 -0.009 1.461], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.68)
[interfaces.py | 11:42:39] completed waypoint 21 (wp: [ 0.157 -0.021 1.054], actual: [ 0.281 -0.009 1.461], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.679)
[interfaces.py | 11:42:39] completed waypoint 22 (wp: [ 0.156 -0.021 1.033], actual: [ 0.281 -0.009 1.46 ], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.679)
[interfaces.py | 11:42:39] completed waypoint 23 (wp: [ 0.154 -0.02 1.011], actual: [ 0.281 -0.009 1.459], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.678)
[interfaces.py | 11:42:39] completed waypoint 24 (wp: [ 0.151 -0.02 0.989], actual: [ 0.281 -0.009 1.459], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.678)
[interfaces.py | 11:42:39] completed waypoint 25 (wp: [ 0.149 -0.02 0.969], actual: [ 0.281 -0.009 1.458], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.677)
[interfaces.py | 11:42:39] completed waypoint 26 (wp: [ 0.146 -0.021 0.95 ], actual: [ 0.281 -0.009 1.458], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.677)
[interfaces.py | 11:42:39] completed waypoint 27 (wp: [ 0.144 -0.022 0.932], actual: [ 0.281 -0.009 1.457], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.676)
[interfaces.py | 11:42:39] completed waypoint 28 (wp: [ 0.142 -0.023 0.916], actual: [ 0.282 -0.009 1.457], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.675)
[interfaces.py | 11:42:40] completed waypoint 29 (wp: [ 0.141 -0.023 0.903], actual: [ 0.282 -0.009 1.456], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.675)
[interfaces.py | 11:42:40] completed waypoint 30 (wp: [ 0.14 -0.025 0.893], actual: [ 0.282 -0.009 1.455], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.674)
[interfaces.py | 11:42:40] completed waypoint 31 (wp: [ 0.16 -0.02 0.792], actual: [ 0.282 -0.009 1.455], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.674)
[interfaces.py | 11:42:40] completed waypoint 32 (wp: [ 0.16 -0.02 0.792], actual: [ 0.282 -0.009 1.455], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.674)
[interfaces.py | 11:42:40] completed waypoint 33 (wp: [ 0.16 -0.02 0.792], actual: [ 0.282 -0.009 1.455], target: [ 0.16 -0.02 0.792], start: [ 0.282 -0.012 1.469], dist2target: 0.674)
[interfaces.py | 11:42:40] finished executing path via controller

OSError: [Errno 22] Invalid argument: 'visualizations/9:57:4.html'

Thank you for your great work. I used gpt-3.5-turbo to run an instruction, and it produced the following error:

(using cache) *** OpenAI API call took 0.00s ***
########################################

"composer" generated code

########################################

Query: grasp the rubbish.

movable = parse_query_obj('rubbish')
affordance_map = get_affordance_map('a point at the center of the rubbish')
gripper_map = get_gripper_map('open everywhere except 1cm around the rubbish')
execute(movable, affordance_map=affordance_map, gripper_map=gripper_map)
...
composer("back to default pose")
composer("move to the top of the bin")
composer("open gripper")

done

[output truncated]

OSError Traceback (most recent call last)
Cell In[5], line 3
1 instruction = np.random.choice(descriptions)
2 print(instruction)
----> 3 voxposer_ui(instruction)

File /media/test/DATA/VoxPoser/src/LMP.py:146, in LMP.__call__(self, query, **kwargs)
144 import pdb ; pdb.set_trace()
145 else:
--> 146 exec_safe(to_exec, gvars, lvars)
148 self.exec_hist += f'\n{to_log.strip()}'
150 if self._cfg['maintain_session']:

File /media/test/DATA/VoxPoser/src/LMP.py:189, in exec_safe(code_str, gvars, lvars)
187 except Exception as e:
188 print(f'Error executing code:\n{code_str}')
--> 189 raise e

File /media/test/DATA/VoxPoser/src/LMP.py:186, in exec_safe(code_str, gvars, lvars)
181 custom_gvars = merge_dicts([
182 gvars,
183 {'exec': empty_fn, 'eval': empty_fn}
184 ])
185 try:
...
1118 def _opener(self, name, flags, mode=0o666):
1119 # A stub for the opener argument to built-in open()
-> 1120 return self._accessor.open(self, flags, mode)

OSError: [Errno 22] Invalid argument: 'visualizations/9:57:4.html'

How can I solve this? Thank you!
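
For reference, the failing path visualizations/9:57:4.html contains colons, which are invalid in filenames on Windows-backed or exFAT/NTFS mounts. A possible workaround (a sketch, not the repository's code) is to build the timestamp without colons:

import os
from datetime import datetime

# Colon-free timestamp for the saved visualization (also valid on NTFS/exFAT mounts)
os.makedirs('visualizations', exist_ok=True)
save_path = os.path.join('visualizations', datetime.now().strftime('%Y-%m-%d_%H-%M-%S') + '.html')
print(save_path)  # e.g. visualizations/2024-05-01_09-57-04.html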

How to update task_object_names.json to handle new tasks?

I am very grateful to the author for sharing the code. I am very interested in this code and would like to ask the author how to test other tasks in VoxPoser, besides those provided in playground.ipynb. How do I update the object names corresponding to the tasks in the task_object_names.json? I look forward to your response.

Trying to use GPT-3.5 but failed.

Thanks for your great work. I want to try playground.ipynb; however, I don't have access to GPT-4, so I changed every 'gpt-4' in rlbench_config.yaml to 'gpt-3.5'. Now I get the error 'InvalidRequestError: The model gpt-3.5 does not exist'. Could you please help fix that?
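
'gpt-3.5' by itself is not a model name the API recognizes; the chat model is called 'gpt-3.5-turbo'. The playground notebook already contains a commented-out loop for exactly this; uncommented and pointed at the turbo model it reads:

config = get_config('rlbench')
# switch every LMP to gpt-3.5-turbo ('gpt-3.5' alone is not a valid model id)
for lmp_name, cfg in config['lmp_config']['lmps'].items():
    cfg['model'] = 'gpt-3.5-turbo'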

GPT-3.5 or GPT-4?

Thanks for your amazing work.
In my experiments, only the RLBench PutRubbishInBin task completed well with GPT-3.5; all other tasks failed. Is there a significant difference between using GPT-3.5 and GPT-4?

qt.qpa.plugin: Could not find the Qt platform plugin "wayland"

qt.qpa.plugin: Could not find the Qt platform plugin "wayland" in "/home/a123/Downloads/CoppeliaSim"
libGL error: MESA-LOADER: failed to open vmwgfx: /usr/lib/dri/vmwgfx_dri.so: cannot open shared object file: no such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: vmwgfx
libGL error: MESA-LOADER: failed to open swrast: /usr/lib/dri/swrast_dri.so: cannot open shared object file: no such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: swrast

How to solve this problem?
