
manipulathor's Introduction

Kiana Ehsani, Winson Han, Alvaro Herrasti, Eli VanderBilt, Luca Weihs, Eric Kolve, Aniruddha Kembhavi, Roozbeh Mottaghi

(Oral Presentation at CVPR 2021)

We present ManipulaTHOR, a framework that facilitates visual manipulation of objects using a robotic arm. Our framework is built upon a physics engine and enables realistic interactions with objects while navigating through scenes and performing tasks. Object manipulation is an established research domain within the robotics community and poses several challenges including avoiding collisions, grasping, and long-horizon planning. Our framework focuses primarily on manipulation in visually rich and complex scenes, joint manipulation and navigation planning, and generalization to unseen environments and objects; challenges that are often overlooked. The framework provides a comprehensive suite of sensory information and motor functions enabling development of robust manipulation agents.

This codebase is built on the AllenAct framework, and the majority of the core training algorithms and pipelines are borrowed from the AllenAct codebase.

Citation

If you find this project useful in your research, please consider citing:

   @inproceedings{ehsani2021manipulathor,
     title={ManipulaTHOR: A Framework for Visual Object Manipulation},
     author={Ehsani, Kiana and Han, Winson and Herrasti, Alvaro and VanderBilt, Eli and Weihs, Luca and Kolve, Eric and Kembhavi, Aniruddha and Mottaghi, Roozbeh},
     booktitle={CVPR},
     year={2021}
   }


💻 Installation

To begin, clone this repository locally

git clone https://github.com/ehsanik/manipulathor.git

Here's a quick summary of the most important files/directories in this repository:

  • manipulathor_utils/*.py - Helper functions and classes.
  • manipulathor_baselines/armpointnav_baselines
    • experiments/
      • ithor/armpointnav_*.py - Different baselines introduced in the paper. Each file in this folder corresponds to a row of a table in the paper.
      • *.py - The base configuration files which define experiment setup and hyperparameters for training.
    • models/*.py - A collection of Actor-Critic baseline models.
  • ithor_arm/ - A collection of Environments, Task Samplers and Task Definitions
    • ithor_arm_environment.py - The definition of the ManipulaTHOREnvironment that wraps the AI2THOR-based framework introduced in this work and enables an easy-to-use API.
    • ithor_arm_constants.py - Constants used to define the task and parameters of the environment. These include the step size taken by the agent, the unique id of the THOR build we use, etc.
    • ithor_arm_sensors.py - Sensors which provide observations to our agents during training. E.g. the RGBSensor obtains RGB images from the environment and returns them for use by the agent.
    • ithor_arm_tasks.py - Definition of the ArmPointNav task, the reward function, and the function for calculating goal achievement.
    • ithor_arm_task_samplers.py - Definition of the ArmPointNavTaskSampler sampler. Initializing the sampler, reading the JSON files from the dataset, and randomly choosing a task are all defined in this file.
    • ithor_arm_viz.py - Utility functions for visualizing and logging the outputs of the models.
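
The sensors in ithor_arm_sensors.py follow AllenAct's Sensor interface. As a rough illustration, a minimal sensor could look like the sketch below (the class name, uuid, and the task attribute it reads are hypothetical, not taken from this repository):

import gym
import numpy as np
from allenact.base_abstractions.sensor import Sensor

class PickedUpFlagSensor(Sensor):
    """Hypothetical sensor: emits 1.0 once the target object has been grasped."""

    def __init__(self, uuid: str = "pickedup_flag"):
        observation_space = gym.spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)
        super().__init__(uuid=uuid, observation_space=observation_space)

    def get_observation(self, env, task, *args, **kwargs):
        # Assumes (hypothetically) that the task tracks whether the pickup happened.
        picked_up = bool(getattr(task, "object_picked_up", False))
        return np.array([1.0 if picked_up else 0.0], dtype=np.float32)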

You can then install requirements by running

pip install -r requirements.txt

Python 3.6+ 🐍. Each of the actions supports typing within Python.

AI2-THOR 🧞. To ensure reproducible results, please install this version of AI2-THOR.

After installing the requirements, you should start the xserver by running this script in the background. Finally, you can start playing with the environment using our example jupyter notebook.
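
If you would rather poke at the environment from a plain Python shell, something along the following lines should work. This is only a sketch: it assumes a recent ai2thor release, and the pinned build used by this repository may expose slightly different action names or parameters.

from ai2thor.controller import Controller

# Start a kitchen scene with the ManipulaTHOR arm agent.
controller = Controller(
    agentMode="arm",
    scene="FloorPlan1",
    width=224,
    height=224,
    renderDepthImage=True,
)

# Move the wrist relative to the arm base, then attempt an abstract grasp.
event = controller.step(action="MoveArm", position=dict(x=0.0, y=0.5, z=0.3))
event = controller.step(action="PickupObject")  # succeeds only if an object is inside the grasper sphere
print(event.metadata["arm"]["heldObjects"])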

๐Ÿ“ ArmPointNav Task Description

ArmPointNav addresses the problem of visual object manipulation, where the task is to move an object between two locations in a scene. Operating in visually rich and complex environments, generalizing to unseen environments and objects, avoiding collisions with objects and structures in the scene, and visual planning to reach the destination are among the major challenges of this task. The example illustrates a sequence of actions taken by a virtual robot within the ManipulaTHOR environment: picking up a vase from the shelf and stacking it on a plate on the countertop.

📊 Dataset

To study the task of ArmPointNav, we present the ArmPointNav Dataset (APND). This consists of 30 kitchen scenes in AI2-THOR that include more than 150 object categories (69 interactable object categories) with a variety of shapes, sizes, and textures. We use 12 pickupable categories as our target objects. We use 20 scenes for the training set, and the remainder is evenly split between Val and Test. We train with 6 object categories and use the remaining ones to test our model in the Novel-Obj setting. For more information on the dataset and how to download it, refer to Dataset Details.

๐Ÿ–ผ๏ธ Sensory Observations

The types of sensors provided for this paper include:

  1. RGB images - having shape 224x224x3 and an FOV of 90 degrees.
  2. Depth maps - having shape 224x224 and an FOV of 90 degrees.
  3. Perfect egomotion - Agents know precisely where the object is located relative to the agent's arm, as well as relative to its goal location.
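
For intuition, the perfect-egomotion observations boil down to expressing world coordinates in the agent's reference frame. The helper below is a hypothetical sketch of that transform (not the repository's implementation); it assumes rotation only about the y (up) axis, and the sign convention may flip depending on the engine's handedness.

import math

def world_to_agent_relative(point, agent_position, agent_rotation_y):
    """Express a world-space point in the agent's frame (y-up, yaw-only)."""
    dx = point["x"] - agent_position["x"]
    dy = point["y"] - agent_position["y"]
    dz = point["z"] - agent_position["z"]
    theta = math.radians(agent_rotation_y)
    # Rotate the offset by the inverse of the agent's yaw.
    rx = math.cos(theta) * dx - math.sin(theta) * dz
    rz = math.sin(theta) * dx + math.cos(theta) * dz
    return {"x": rx, "y": dy, "z": rz}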

๐Ÿƒ Allowed Actions

A total of 13 actions are available to our agents; these include:

  1. Moving the agent
  • MoveAhead - Results in the agent moving ahead by 0.25m if doing so would not result in the agent colliding with something.

  • Rotate [Right/Left] - Results in the agent's body rotating 45 degrees in the desired direction.

  2. Moving the arm
  • Moving the wrist along axis [x, y, z] - Results in the arm moving along an axis (±x, ±y, ±z) by 0.05m.

  • Moving the height of the arm base [Up/Down] - Results in the base of the arm moving along the y-axis by 0.05m.

  3. Abstract Grasp
  • Picks up a target object. Only succeeds if the object is inside the arm grasper.
  4. Done Action
  • This action finishes an episode. The agent must issue a Done action when it reaches the goal; otherwise the episode is considered a failure.
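
Putting the list above together, the discrete action space can be summarized as follows (the action names are illustrative; the exact constants are defined in ithor_arm/ithor_arm_constants.py):

# 3 agent-motion + 6 wrist + 2 arm-base + grasp + done = 13 actions
ACTIONS = [
    "MoveAhead", "RotateRight", "RotateLeft",  # agent motion
    "MoveArmX+", "MoveArmX-",                  # wrist along x (0.05m steps)
    "MoveArmY+", "MoveArmY-",                  # wrist along y
    "MoveArmZ+", "MoveArmZ-",                  # wrist along z
    "MoveArmBaseUp", "MoveArmBaseDown",        # height of the arm base
    "PickupObject",                            # abstract grasp
    "Done",                                    # ends the episode
]
assert len(ACTIONS) == 13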

✨ Defining a New Task

To define a new task, redefine the reward, try a new model, or change the environment setup, check out our tutorial on defining a new task here.
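
For example, the reward definition is driven by a small dictionary of coefficients. The key names below are taken from the training logs shown further down this page; where you override them depends on which experiment configuration class you extend.

REWARD_CONFIG = {
    "step_penalty": -0.01,           # small cost per step
    "goal_success_reward": 10.0,     # object delivered to its goal
    "pickup_success_reward": 5.0,    # object grasped
    "failed_stop_reward": 0.0,       # issuing Done without reaching the goal
    "shaping_weight": 1.0,           # weight of distance-based reward shaping
    "failed_action_penalty": -0.03,  # e.g. collisions / failed moves
}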

๐Ÿ‹ Training An Agent

To run experiments, you first need to add the project directory to your PYTHONPATH (e.g., export PYTHONPATH=$PYTHONPATH:$PWD from the repository root). You can then train a model with a specific experiment setup by running one of the experiments below:

allenact manipulathor_baselines/armpointnav_baselines/experiments/ithor/<EXPERIMENT-NAME> -o experiment_output -s 1

Where <EXPERIMENT-NAME> can be one of the options below:

armpointnav_no_vision -- No Vision Baseline
armpointnav_disjoint_depth -- Disjoint Model Ablation
armpointnav_rgb -- Our RGB Experiment
armpointnav_rgbdepth -- Our RGBD Experiment
armpointnav_depth -- Our Depth Experiment

💪 Evaluating A Pre-Trained Agent

To evaluate a pre-trained model (for example, to reproduce the numbers in the paper), you can add -t test -c <WEIGHT-ADDRESS> to the end of the command you ran for training.

In order to reproduce the numbers in the paper, you need to download the pretrained models from here and extract them to pretrained_models. The full list of experiments and their corresponding trained weights can be found here.

allenact manipulathor_baselines/armpointnav_baselines/experiments/ithor/<EXPERIMENT-NAME> -o test_out -s 1 -t test -c <WEIGHT-ADDRESS>
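
A quick way to sanity-check an evaluation run is to load the metrics files it writes (a sketch only; the exact directory layout and JSON structure depend on the AllenAct version):

import glob
import json

# metrics_test*.json files are written under the output directory passed via -o.
for path in glob.glob("test_out/**/metrics_test*.json", recursive=True):
    with open(path) as f:
        data = json.load(f)
    print(path, "->", len(data), "entries")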

manipulathor's People

Contributors

ehsanik, lucaweihs, mattdeitke


manipulathor's Issues

Magic number is wrong: 542

Hi, really interesting work!

My machine: ubuntu20 + 2 nvidia 1080 GPUs + CUDA 11.2. I have run this script (https://github.com/allenai/allenact/blob/main/scripts/startx.py) to start x-display.

I got an exception when I was evaluating the pre-trained model (rgbdepth_armpointnav.pt). Below is the screenshot:

[screenshot omitted]

I wonder what that exception is?

I kept running while ignoring the exception that periodically popped up. I can still get the metrics_test*.json in experiment_output, but there is nothing in experiment_output/checkpoints (except empty folders) and the tf log file in experiment_output/tb contains nothing. Is that normal? Also, I am wondering if the code will generate some video clips or images during evaluation?

BTW, it would be great if there were a similar example.py (https://github.com/allenai/ai2thor-rearrangement/blob/main/example.py) to let people play around with the environment. :)

Thank you!
Fuyang Zhang

Headless GUI options for training and evaluation

Hi, I love your work and I am fascinated with this framework.
I'm trying to run your training code. I ran
python allenact/main.py armpointnav_depth -b projects/manipulathor_baselines/armpointnav_baselines/experiments/ithor -o test_out -s 1 --eval -c .../environment/manipulathor/pretrained_models/saved_checkpoints/depth_armpointnav.pt -m 1
The code then executed and rendered some Unity videos of the robot agent. After rendering about 6-8 videos, the code crashed; the problem was:
[07/09 09:34:05 ERROR:] Worker 0(0-9) encountered an exception:
Traceback (most recent call last):
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 394, in _task_sampling_loop_worker
    sp_vector_sampled_tasks.command(
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 1238, in command
    return [
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 1239, in <listcomp>
    g.send((command, data))
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 1051, in _task_sampling_loop_generator_fn
    raise e
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 995, in _task_sampling_loop_generator_fn
    result = getattr(current_task, function_name)()
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact/base_abstractions/task.py", line 67, in get_observations
    return self.sensor_suite.get_observations(env=self.env, task=self, **kwargs)
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact/base_abstractions/sensor.py", line 131, in get_observations
    return {
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact/base_abstractions/sensor.py", line 132, in <dictcomp>
    uuid: sensor.get_observation(env=env, task=task, **kwargs)  # type: ignore
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact_plugins/manipulathor_plugin/manipulathor_sensors.py", line 198, in get_observation
    relative_goal_obj = world_coords_to_agent_coords(
  File "/home/anvd2aic/PycharmProjects/environment/venv/src/allenact/allenact_plugins/manipulathor_plugin/arm_calculation_utils.py", line 136, in world_coords_to_agent_coords
    world_obj["position"], world_obj["rotation"]
TypeError: 'NoneType' object is not subscriptable [vector_sampled_tasks.py: 403]
[07/09 09:34:05 INFO:] Worker 0(0-9) closing. [vector_sampled_tasks.py: 411]
Process ForkServerProcess-1:1:
[07/09 09:34:05 ERROR:] [test worker 0] Encountered EOFError, exiting. [engine.py: 1970]
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 406, in _task_sampling_loop_worker
    raise e
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 394, in _task_sampling_loop_worker
    sp_vector_sampled_tasks.command(
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 1238, in command
    return [
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 1239, in <listcomp>
    g.send((command, data))
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 1051, in _task_sampling_loop_generator_fn
    raise e
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 995, in _task_sampling_loop_generator_fn
    result = getattr(current_task, function_name)()
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/base_abstractions/task.py", line 67, in get_observations
    return self.sensor_suite.get_observations(env=self.env, task=self, **kwargs)
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/base_abstractions/sensor.py", line 131, in get_observations
    return {
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/base_abstractions/sensor.py", line 132, in <dictcomp>
    uuid: sensor.get_observation(env=env, task=task, **kwargs)  # type: ignore
  File ".../PycharmProjects/environment/venv/src/allenact/allenact_plugins/manipulathor_plugin/manipulathor_sensors.py", line 198, in get_observation
    relative_goal_obj = world_coords_to_agent_coords(
  File ".../PycharmProjects/environment/venv/src/allenact/allenact_plugins/manipulathor_plugin/arm_calculation_utils.py", line 136, in world_coords_to_agent_coords
    world_obj["position"], world_obj["rotation"]
TypeError: 'NoneType' object is not subscriptable
[07/09 09:34:05 ERROR:]
Traceback (most recent call last):
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/engine.py", line 1941, in process_checkpoints
    eval_package = self.run_eval(
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/engine.py", line 1735, in run_eval
    num_paused = self.initialize_storage_and_viz(
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/engine.py", line 455, in initialize_storage_and_viz
    observations = self.vector_tasks.get_observations()
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 487, in get_observations
    return self.call(["get_observations"] * self.num_unpaused_tasks,)
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 758, in call
    results.extend(read_fn())
  File ".../PycharmProjects/environment/venv/src/allenact/allenact/algorithms/onpolicy_sync/vector_sampled_tasks.py", line 265, in read_with_timeout
    return read_fn()
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
    buf = self._recv(4)
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
    raise EOFError
EOFError [engine.py: 1973]
[07/09 09:34:05 INFO:] [test worker 0] Closing OnPolicyRLEngine.vector_tasks. [engine.py: 681]
Can you please help me resolve this issue? I'm also looking forward to a headless GUI option, since rendering these videos is quite exhausting. Is this option feasible? Thank you in advance!
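
For anyone hitting the same problem: newer ai2thor releases ship a headless CloudRendering platform that avoids the X server entirely. A minimal sketch, assuming such a release (it may not apply to the build pinned by this repository):

from ai2thor.controller import Controller
from ai2thor.platform import CloudRendering

# Headless rendering via Vulkan; no X server required.
controller = Controller(agentMode="arm", scene="FloorPlan1", platform=CloudRendering)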

Avoid collisions with the table

Hi, we are trying to implement object manipulation - a pickup and release for a series of different objects on a particular table - using the ManipulaTHOR arm.
The goal coordinates are given manually.
While attempting to pick up certain objects, the arm collides with the table and hence moves it.
We want the arm to plan in such a way that it doesn't collide with the table. How do we prevent this collision?

  1. Is there any kind of feedback that we can receive prior to the arm's motion, warning us of the possibility of a collision, so that we can iteratively rotate or modify the joints to avoid a collision with the table (see the sketch after this list)?
    or
  2. Any other alternative approach that we should consider to achieve this abstraction from collision avoidance?

Our main objective is to work on the pick and place of objects without worrying much about collision handling.
We made it work by manually handling the arm base height and the relative elbow rotation. However, we are looking to automate this task (integrate it into the agent itself) so that the arm does not collide with obstacles on its way to pick or release objects, and to generalize this over different situations.
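
A crude version of the feedback asked for in (1) can be approximated by probing: attempt a small arm move and treat a failed action as a collision warning. The sketch below assumes ai2thor's standard lastActionSuccess / errorMessage metadata and a recent arm API; action names may differ in older builds.

# Probe a small downward wrist move; back off if it fails.
event = controller.step(
    action="MoveArm",
    position=dict(x=0.0, y=-0.05, z=0.0),
    coordinateSpace="wrist",
)
if not event.metadata["lastActionSuccess"]:
    print("possible collision:", event.metadata["errorMessage"])
    # e.g. raise the arm base and retry from a different approach angle
    controller.step(action="MoveArmBase", y=0.8)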

Ai2thor version - 3.3.4

[video omitted: table_collision]

The X server sometimes works but sometimes stops.

Hi. Thanks for your work!

I followed your instructions and found that sometimes the X server works, and sometimes it doesn't. To be specific, the screen just freezes and doesn't move. This happens right after the initialization stage. The final console output is shown as follows:

08/13 23:26:33 INFO: Starting 0-th SingleProcessVectorSampledTasks generator with args {'mp_ctx': <multiprocessing.context.ForkServerContext object at 0x7f7c403d73d0>, 'scenes': ['FloorPlan16_physics', 'FloorPlan17_physics', 'FloorPlan18_physics', 'FloorPlan19_physics', 'FloorPlan20_physics'], 'env_args': {'gridSize': 0.25, 'width': 224, 'height': 224, 'visibilityDistance': 1.0, 'agentMode': 'arm', 'fieldOfView': 100, 'agentControllerType': 'mid-level', 'server_class': <class 'ai2thor.fifo_server.FifoServer'>, 'useMassThreshold': True, 'massThreshold': 10, 'autoSimulation': False, 'autoSyncTransforms': True, 'renderDepthImage': True, 'x_display': '0.1'}, 'max_steps': 200, 'sensors': [<ithor_arm.ithor_arm_sensors.DepthSensorThor object at 0x7f7cccede370>, <allenact_plugins.ithor_plugin.ithor_sensors.RGBSensorThor object at 0x7f7c403c7ac0>, <ithor_arm.ithor_arm_sensors.RelativeAgentArmToObjectSensor object at 0x7f7c403c7c40>, <ithor_arm.ithor_arm_sensors.RelativeObjectToGoalSensor object at 0x7f7c403c7d90>, <ithor_arm.ithor_arm_sensors.PickedUpObjSensor object at 0x7f7c403d70a0>], 'action_space': Discrete(13), 'seed': 506456969, 'deterministic_cudnn': False, 'rewards_config': {'step_penalty': -0.01, 'goal_success_reward': 10.0, 'pickup_success_reward': 5.0, 'failed_stop_reward': 0.0, 'shaping_weight': 1.0, 'failed_action_penalty': -0.03}, 'scene_period': 'manual', 'sampler_mode': 'train', 'cap_training': None} [vector_sampled_tasks.py: 975]

After this, no console information is given, and I confirmed that the entire system is not working. Do you know when this happens and how I can solve it?

When I terminated the process, the error was given as follows:

Traceback (most recent call last):
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/process.py", line 318, in _bootstrap
util._exit_function()
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/util.py", line 357, in _exit_function
p.join()
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/process.py", line 149, in join
res = self._popen.wait(timeout)
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/popen_forkserver.py", line 65, in poll
if not wait([self.sentinel], timeout):
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/home/im2/anaconda3/envs/gos/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt
Traceback (most recent call last):
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/process.py", line 318, in _bootstrap
util._exit_function()
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/util.py", line 357, in _exit_function
p.join()
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/process.py", line 149, in join
res = self._popen.wait(timeout)
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/popen_forkserver.py", line 65, in poll
if not wait([self.sentinel], timeout):
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/home/im2/anaconda3/envs/gos/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/im2/anaconda3/envs/gos/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/home/im2/anaconda3/envs/gos/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt

Lastly, I want to ask whether there are any upgrade plans for this framework. Compared to the allenact repository, this framework seems quite outdated (e.g., the ai2thor version is 0.0.1, but the current version is 5.0.0). I'd really appreciate it if this were taken into consideration.

Thank you.

Accessing and driving the agent arm joint-by-joint

Hi,
We are working on a particular problem that requires us to manipulate/drive the arm joint-by-joint or gain access to the IK plans generated for every manipulation. We are working with Ai2thor's Manipulathor for this project, and we found the following snippet in the paper - https://arxiv.org/pdf/2104.11213.pdf (ManipulaTHOR: A Framework for Visual Object Manipulation):
" The robotโ€™s arm rig has been designed to work with either forward or inverse kinematics (IK), meaning its motion can be driven joint-by-joint, or directly from the wrist, respectively. ".
Though the same paper later pointed out that the API only provides wrist control to the user, I feel the aforementioned snippet hints at some workaround that we can use to achieve joint control.
So, basically:

  1. How can we access ai2thor's default IK solver's solution for every API call like pick_object, release_object, and moveArm? (We expect each such solution to be a trajectory with coordinates for each of the arm's joints, similar to that of a classic IK solver.)
    and,
  2. Is it possible to control these joints directly, like building a custom IK solver and sending the solution generated by this solver to the arm for execution? If so, in what way can we access these joints and drive the arm joint-by-joint?

We are currently using Ai2Thor version 3.3.4.

fifo_server error

I am using AI2THOR 2.4. I tried the following but get an error:

python main.py -o experiment_output -s 1 -b projects/armpointnav_baselines/experiments/ithor/ armpointnav_rgb

05/05 23:15:29 INFO: Running with args Namespace(checkpoint=None, deterministic_agents=False, deterministic_cudnn=False, disable_config_saving=False, disable_tensorboard=False, experiment='armpointnav_rgb', experiment_base='projects/armpointnav_baselines/experiments/ithor/', extra_tag='', gp=None, log_level='info', max_sampler_processes_per_worker=None, mode='train', output_dir='experiment_output', restart_pipeline=False, seed=1, skip_checkpoints=0, test_date=None) [main.py: 269]
05/05 23:15:29 ERROR: Uncaught exception: [system.py: 139]
Traceback (most recent call last):
  File "main.py", line 317, in <module>
    main()
  File "main.py", line 273, in main
    cfg, srcs = load_config(args)
  File "main.py", line 233, in load_config
    raise e
  File "main.py", line 230, in load_config
    module = importlib.import_module(module_path)
  File "/home/kb/anaconda3/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/kb/CUPS_RL/manipulathor/projects/armpointnav_baselines/experiments/ithor/armpointnav_rgb.py", line 4, in <module>
    from plugins.ithor_arm_plugin.ithor_arm_constants import ENV_ARGS
  File "/home/kb/CUPS_RL/manipulathor/plugins/ithor_arm_plugin/ithor_arm_constants.py", line 3, in <module>
    import ai2thor.fifo_server
ModuleNotFoundError: No module named 'ai2thor.fifo_server'

Cannot set up training or evaluation

Hello, first of all, I want to tell you that I really enjoy and appreciate your project as well as what was presented in your paper. However, when running your code I ran into some trouble; it seems that:
allenact manipulathor_baselines/armpointnav_baselines/experiments/ithor/<EXPERIMENT-NAME> -o experiment_output -s 1
and
allenact manipulathor_baselines/armpointnav_baselines/experiments/ithor/<EXPERIMENT-NAME> -o test_out -s 1 -t test -c <WEIGHT-ADDRESS>
no longer work properly. Could you tell me how to fix this, please? Thank you in advance!
[screenshots omitted]

How does "isPickedUp" work?

Hello,

I have an issue where the arm in ManipulaTHOR does not output "isPickedUp" = True after I actually pick up an apple.

Now, I have tried to manually test how difficult it is for the arm to actually pick up an object in FloorPlan8 with hand radius = 0.1.
I manually moved the arm to the "Apple" by calling some navigation actions like "RotationLeft" and "MoveAhead", and extended the arm forward a little to get near the target object Apple. Then I called the action "PickupObject" to pick up the Apple; the Apple looks like it has been picked up by the arm, but it shows False when I check object["isPickedUp"].
Also, I tried to print out the object["position"], but it never changes after I pick this Apple up.

Could you please tell me the mechanism behind ["isPickedUp"]? How is it determined that the object is actually being held by the agent's hand?

Also, do you have any idea why object["isPickedUp"] is still False after I picked up the apple?

Looking forward to your reply, thanks a lot!

Before picked up, Apple["position"] = {'x': 0.9943692684173584, 'y': 0.9372280836105347, 'z': -1.1052862405776978}, Apple["objectId"] = Apple|+00.99|+00.94|-01.11

After picked up, Apple["position"] = {'x': 0.9943692684173584, 'y': 0.9372280836105347, 'z': -1.1052862405776978}, Apple["objectId"] = Apple|+00.99|+00.94|-01.11
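
For reference, the grasp state can be cross-checked in two places, as in the sketch below (it assumes a recent ai2thor arm build with a controller already created; heldObjects is the list visible in the arm metadata elsewhere on this page):

event = controller.step(action="PickupObject")
held = event.metadata["arm"]["heldObjects"]  # ids of objects currently in the grasper
apple = next(o for o in event.metadata["objects"] if o["objectType"] == "Apple")
print("heldObjects:", held)
print("isPickedUp:", apple["isPickedUp"], "position:", apple["position"])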

[video omitted: MoveArmBase8]

@ehsanik @Lucaweihs @mattdeitke

handSphereCenter in event.metadata["arm"] has the wrong position

handSphereCenter in event.metadata["arm"] has the same value as the position of robot_arm_1_jnt, but robot_arm_4_jnt seems to be the correct handSphereCenter. The position of robot_arm_4_jnt is also used in get_absolute_hand_state() in manipulathor/ithor_arm/ithor_arm_environment.py.


This is returned by event.metadata["arm"]:
{'joints': [{'name': 'robot_arm_1_jnt', 'position': {'x': -0.8999999761581421, 'y': 1.463499903678894, 'z': -0.3290000259876251}, 'rootRelativePosition': {'x': 5.960464477539063e-08, 'y': 0.0, 'z': 2.9802322387695312e-08}, 'rotation': {'x': -0.0, 'y': 1.0000001192092896, 'z': -0.0, 'w': 90.0}, 'rootRelativeRotation': {'x': 0.0, 'y': 1.0000001192092896, 'z': 0.0, 'w': 90.0}, 'localRotation': {'x': 0.0, 'y': 1.0000001192092896, 'z': 0.0, 'w': 90.0}}, {'name': 'robot_arm_2_jnt', 'position': {'x': -0.5834574103355408, 'y': 1.463499903678894, 'z': -0.3290000557899475}, 'rootRelativePosition': {'x': 5.960464477539063e-08, 'y': 0.0, 'z': 0.3165426254272461}, 'rotation': {'x': 0.5968809723854065, 'y': 0.5725464820861816, 'z': -0.5620710253715515, 'w': 123.60325622558594}, 'rootRelativeRotation': {'x': 0.6731582283973694, 'y': 0.189345121383667, 'z': 0.7148473262786865, 'w': 265.23699951171875}, 'localRotation': {'x': 0.6731582283973694, 'y': 0.189345121383667, 'z': 0.7148473262786865, 'w': 265.23699951171875}}, {'name': 'robot_arm_3_jnt', 'position': {'x': -0.5974782109260559, 'y': 1.1478955745697021, 'z': -0.348837673664093}, 'rootRelativePosition': {'x': 0.019837677478790283, 'y': -0.3156043291091919, 'z': 0.30252182483673096}, 'rotation': {'x': 0.30278900265693665, 'y': 0.010122903622686863, 'z': 0.953004002571106, 'w': 180.36859130859375}, 'rootRelativeRotation': {'x': 0.9530039429664612, 'y': 0.010123095475137234, 'z': -0.30278897285461426, 'w': 180.3684844970703}, 'localRotation': {'x': 0.8547145128250122, 'y': 0.5176569223403931, 'z': 0.03865963965654373, 'w': 180.31800842285156}}, {'name': 'robot_arm_4_jnt', 'position': {'x': -0.4145815968513489, 'y': 1.1546282768249512, 'z': -0.09006652235984802}, 'rootRelativePosition': {'x': -0.2389335334300995, 'y': -0.30887162685394287, 'z': 0.485418438911438}, 'rotation': {'x': 0.5722787380218506, 'y': 0.5873619318008423, 'z': 0.5722787380218506, 'w': 119.14329528808594}, 'rootRelativeRotation': {'x': -0.5722788572311401, 'y': 0.5873619318008423, 'z': 0.5722787976264954, 'w': 119.14328002929688}, 'localRotation': {'x': -0.14109723269939423, 'y': -0.7303881049156189, 'z': 0.6682999134063721, 'w': 140.92092895507812}}], 'heldObjects': [], 'pickupableObjects': [], 'handSphereCenter': {'x': -0.8999999761581421, 'y': 1.463499903678894, 'z': -0.3290000259876251}, 'handSphereRadius': 0.11999999731779099}
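
The mismatch can be checked directly against that metadata (a small sketch using the same keys, given an event returned by controller.step):

arm_meta = event.metadata["arm"]
joints = {j["name"]: j["position"] for j in arm_meta["joints"]}
print("handSphereCenter:", arm_meta["handSphereCenter"])
print("robot_arm_1_jnt: ", joints["robot_arm_1_jnt"])  # matches the reported handSphereCenter
print("robot_arm_4_jnt: ", joints["robot_arm_4_jnt"])  # appears to be the true hand sphere center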

Trajectory followed by the arm

Hi,

We are working on rearranging objects on a table. How do we get information about the trajectory followed between a pickup and a release of an object in ManipulaTHOR?

  1. We would like to get a list of points that belong to the trajectory that the arm follows
  2. Is it possible to integrate other trajectory planners with manipulathor?

The ai2thor version I'm using is 3.3.4.
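
In the absence of an exposed planner trajectory, one workaround is to subsample the motion yourself: issue the wrist move in small increments and record the wrist joint's world position after each step. A sketch, assuming a recent arm API where coordinateSpace="wrist" moves relative to the current wrist pose:

waypoints = []
for _ in range(10):
    event = controller.step(
        action="MoveArm",
        position=dict(x=0.0, y=0.0, z=0.05),  # 5cm increments toward the target
        coordinateSpace="wrist",
    )
    # The last entry in the joints list is the wrist (robot_arm_4_jnt).
    waypoints.append(event.metadata["arm"]["joints"][-1]["position"])
print(waypoints)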
