- Python 3.6/3.7
- PyTorch 1.8.0/1.9.0
- attrdict
- scikit-image
- PyBullet
- Open3D
- rospkg
- OpenCV, matplotlib
This project has been tested with Python 3.6/3.7 and PyTorch 1.8.0/1.9.0.
conda create -n <env_name> python=3.7
conda activate <env_name>
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c conda-forge # Make sure the cudatoolkit version is compatible with both your PyTorch version and your GPU driver.
conda install scikit-image
pip install attrdict opencv-python matplotlib
pip install --upgrade pybullet
pip install -U rospkg
pip install open3d
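Once everything is installed, a quick sanity check along these lines (a minimal sketch of ours, not a script shipped with the repository) confirms that the core dependencies import and that CUDA is visible to PyTorch:

```python
# Environment sanity check -- not part of the repository.
import torch
import pybullet
import open3d
import skimage
import cv2
from attrdict import AttrDict

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # should be True on a GPU machine
print("PyBullet API version:", pybullet.getAPIVersion())
print("Open3D:", open3d.__version__)
print("scikit-image:", skimage.__version__)
print("OpenCV:", cv2.__version__)
print("attrdict OK:", AttrDict({"a": 1}).a == 1)
```

If `torch.cuda.is_available()` returns False, recheck the cudatoolkit/driver pairing from the install step above.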
If you are working from the command line, add the project path to the PYTHONPATH environment variable before running the basic demo:
export PYTHONPATH=$PYTHONPATH:/your/path/to/visionBasedManipulation:/your/path/to/visionBasedManipulation/network
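If you prefer not to modify environment variables, the same effect can be achieved inside a script (a sketch; substitute your actual checkout path):

```python
# Prepend the project paths before importing project modules.
# Equivalent to the PYTHONPATH export above; adjust the paths to your checkout.
import sys
sys.path.insert(0, "/your/path/to/visionBasedManipulation")
sys.path.insert(0, "/your/path/to/visionBasedManipulation/network")
```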
If you are using VS Code, you can use the launch.json file in the .vscode folder to configure the project and run the demo directly.
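In case that file is missing from your checkout, a configuration along the following lines (a sketch, not the repository's actual launch.json) runs the demo with PYTHONPATH set:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "basicDemo",
            "type": "python",
            "request": "launch",
            "program": "${workspaceFolder}/demos/basicDemo.py",
            "console": "integratedTerminal",
            "env": {
                "PYTHONPATH": "${workspaceFolder}:${workspaceFolder}/network"
            }
        }
    ]
}
```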
Activate the virtual environment and run the basic demo.
conda activate <env_name>
python demos/basicDemo.py
You should see a simulation like this:
This demo adds a ROS wrapper around the basic demo: the RGB-D images and point clouds generated by the PyBullet simulation are published to ROS topics (see the subscriber sketch after the run commands below).
@TODO:
add tf publishing from PyBullet
conda activate <env_name>
python demos/basicROSDemo.py
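To verify that data is actually arriving, a minimal rospy subscriber like the one below can echo message metadata. The topic names are placeholders (the demo's actual names are not documented here); run `rostopic list` to find them:

```python
# Minimal listener for the published streams -- not part of the repository.
import rospy
from sensor_msgs.msg import Image, PointCloud2

def on_image(msg):
    rospy.loginfo("RGB image: %dx%d (%s)", msg.width, msg.height, msg.encoding)

def on_cloud(msg):
    rospy.loginfo("Point cloud with %d points", msg.width * msg.height)

rospy.init_node("demo_listener")
rospy.Subscriber("/camera/color/image_raw", Image, on_image)  # placeholder topic name
rospy.Subscriber("/camera/points", PointCloud2, on_cloud)     # placeholder topic name
rospy.spin()
```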
This demo adopts VGN (Volumetric Grasping Network) in our setup; it is a 6-DoF grasp prediction method that takes point clouds as input.
conda activate <env_name>
python demos/basicVGNDemo.py
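Since VGN consumes point clouds, the conversion from a simulated depth image to a cloud can be sketched with Open3D as below. The camera intrinsics and the constant depth image are stand-in values, not what basicVGNDemo.py actually uses:

```python
import numpy as np
import open3d as o3d

# Assumed pinhole intrinsics for a 640x480 camera -- stand-in values only.
intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 540.0, 540.0, 320.0, 240.0)

# Stand-in depth image in meters; in the demo this would come from PyBullet.
depth = o3d.geometry.Image(np.full((480, 640), 0.5, dtype=np.float32))

# Back-project the depth image into a 3D point cloud.
cloud = o3d.geometry.PointCloud.create_from_depth_image(
    depth, intrinsic, depth_scale=1.0, depth_trunc=2.0)
print(cloud)  # e.g. "PointCloud with N points."
```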