
PointNetGPD: Detecting Grasp Configurations from Point Sets

PointNetGPD (ICRA 2019, arXiv) is an end-to-end grasp evaluation model that addresses the challenging problem of localizing robot grasp configurations directly from point clouds.

PointNetGPD is lightweight and can directly process the 3D points that lie within the gripper for grasp evaluation. Taking the raw point cloud as input, our proposed grasp evaluation network can capture the complex geometric structure of the contact area between the gripper and the object even if the point cloud is very sparse.

To further improve our proposed model, we generate a larger-scale grasp dataset with 350k real point clouds and grasps based on the YCB object dataset for training.

grasp_pipeline

Video

Video for PointNetGPD

Before Install

  • All the code should be installed in the following directory:
mkdir -p $HOME/code/
cd $HOME/code/
  • Set environment variable PointNetGPD_FOLDER in your $HOME/.bashrc file.
export PointNetGPD_FOLDER=$HOME/code/PointNetGPD

Install all the requirements (Using a virtual environment is recommended)

  1. Install pcl-tools via sudo apt install pcl-tools.

  2. An example of creating a virtual environment: conda create -n pointnetgpd python=3.10 numpy ipython matplotlib opencv mayavi -c conda-forge

  3. Make sure your Python environment does not already contain packages named meshpy or dexnet.

  4. Install PyTorch: https://pytorch.org/get-started/locally/

  5. Clone this repository:

    cd $HOME/code
    git clone https://github.com/lianghongzhuo/PointNetGPD.git
  6. Install our requirements in requirements.txt

    cd $PointNetGPD_FOLDER
    pip install -r requirements.txt
  7. Install our modified meshpy (modified from the Berkeley Automation Lab meshpy)

    cd $PointNetGPD_FOLDER/meshpy
    python setup.py develop
  8. Install our modified dex-net (modified from the Berkeley Automation Lab dex-net)

    cd $PointNetGPD_FOLDER/dex-net
    python setup.py develop
  9. Adjust the gripper configuration to match your own gripper

    vim $PointNetGPD_FOLDER/dex-net/data/grippers/robotiq_85/params.json

    These parameters are used for dataset generation:

    "min_width":
    "force_limit":
    "max_width":
    "finger_radius":
    "max_depth":

    These parameters are used for grasp pose generation during experiments:

    "finger_width":
    "real_finger_width":
    "hand_height":
    "hand_height_two_finger_side":
    "hand_outer_diameter":
    "hand_depth":
    "real_hand_depth":
    "init_bite":
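
    As a concrete reference, such a params.json might look like the sketch below. The numbers are illustrative placeholders only (lengths presumably in meters), not the values shipped with the robotiq_85 config; check them against your own gripper's datasheet:

    ```json
    {
        "min_width": 0.0,
        "force_limit": 235.0,
        "max_width": 0.085,
        "finger_radius": 0.01,
        "max_depth": 0.03,
        "finger_width": 0.01,
        "real_finger_width": 0.01,
        "hand_height": 0.02,
        "hand_height_two_finger_side": 0.02,
        "hand_outer_diameter": 0.12,
        "hand_depth": 0.06,
        "real_hand_depth": 0.06,
        "init_bite": 0.01
    }
    ```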

Generated Grasp Dataset Download

Generate Your Own Grasp Dataset

  1. Download the YCB object set from the YCB Dataset. A command-line tool for downloading the YCB dataset is available at ycb-tools.
    cd $PointNetGPD_FOLDER/data
    git clone https://github.com/lianghongzhuo/ycb-tools
    cd ycb-tools
    python download_ycb_dataset.py rgbd_512
  2. Organize your dataset at $PointNetGPD_FOLDER/PointNetGPD/data. Every object should have its own folder, structured like this:
    ├── 002_master_chef_can
    │   ├── google_512k
    │   │   ├── nontextured.obj (generated by pcl-tools)
    │   │   ├── nontextured.ply
    │   │   └── nontextured.sdf (generated by SDFGen)
    │   └── rgbd
    │       ├── *.jpg
    │       ├── *.h5
    │       └── ...
    ├── 003_cracker_box
    ├── 004_sugar_box
    └── ...
    
  3. Install SDFGen from GitHub:
    cd $PointNetGPD_FOLDER
    git clone https://github.com/jeffmahler/SDFGen.git
    cd SDFGen && mkdir build && cd build && cmake .. && make
  4. Install Open3D
    pip install open3d
  5. Generate the nontextured.sdf and nontextured.obj files using pcl-tools and SDFGen by running:
    cd $PointNetGPD_FOLDER/dex-net/apps
    python read_file_sdf.py
  6. Generate dataset by running the code:
    cd $PointNetGPD_FOLDER/dex-net/apps
    python generate-dataset-canny.py [prefix]
    where [prefix] is optional; if given, it is prepended to the names of the generated files.

Visualization tools

  • Visualize grasps

    cd $PointNetGPD_FOLDER/dex-net/apps
    python read_grasps_from_file.py

    Note:

    • This script visualizes the grasps in the $PointNetGPD_FOLDER/PointNetGPD/data/ycb_grasp/ folder
  • Visualize object normals

    cd $PointNetGPD_FOLDER/dex-net/apps
    python Cal_norm.py

This script compares the normals calculated by meshpy and by the PCL library.

Training the network

  1. Prepare the data:

    cd $PointNetGPD_FOLDER/PointNetGPD/data

    Make sure you have the following files; the symlinks to the dataset directories need to be added by yourself:

    ├── google2cloud.csv  (Transform from google_ycb model to ycb_rgbd model)
    ├── google2cloud.pkl  (Transform from google_ycb model to ycb_rgbd model)
    └── ycb_grasp  (generated grasps)
    

    Generate point clouds from the RGB-D images; you may reduce the number of processes running in parallel if you share the host with others:

    cd $PointNetGPD_FOLDER/PointNetGPD
    python ycb_cloud_generate.py

    Note: the estimated running time on our dual Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz system (56 threads) is 36 hours. Please also remove objects that exceed the capacity of your gripper.
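    On a shared host, per-object parallelism can be throttled with a standard worker pool. A minimal sketch of that pattern follows; process_object is a hypothetical stand-in for the per-object cloud generation, not the actual API of ycb_cloud_generate.py:

    ```python
    from multiprocessing import Pool

    def process_object(name):
        # Hypothetical stand-in for generating the point clouds of one
        # YCB object; here it only tags the object name.
        return name + ":done"

    if __name__ == "__main__":
        objects = ["002_master_chef_can", "003_cracker_box", "004_sugar_box"]
        # Lower `processes` when sharing the host with others.
        with Pool(processes=2) as pool:
            results = pool.map(process_object, objects)
        print(results)
    ```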

  2. Run the experiments:

    cd $PointNetGPD_FOLDER/PointNetGPD

    Launch TensorBoard for monitoring

    tensorboard --log-dir ./assets/log --port 8080

    and run an experiment for 200 epochs

    python main_1v.py --epoch 200 --mode train --batch-size x (x>=16)
    

    File name and corresponding experiment:

    main_1v.py        --- 1-viewed point cloud, 2 class
    main_1v_mc.py     --- 1-viewed point cloud, 3 class
    main_1v_gpd.py    --- 1-viewed point cloud, GPD
    main_fullv.py     --- Full point cloud, 2 class
    main_fullv_mc.py  --- Full point cloud, 3 class
    main_fullv_gpd.py --- Full point cloud, GPD
    

    For GPD experiments, you may change the number of input channels by modifying input_chann in the experiment scripts (only 3 and 12 channels are available).

Using the trained network

  1. Get UR5 robot state:

    The goal of this step is to publish a ROS parameter that tells the environment whether the UR5 robot is at its home position.

    cd $PointNetGPD_FOLDER/dex-net/apps
    python get_ur5_robot_state.py
  2. Run the perception code: this script takes depth-camera data as input and outputs a set of good grasp candidates. All input and output are exchanged as ROS messages.

    cd $PointNetGPD_FOLDER/dex-net/apps
    python kinect2grasp.py
    arguments:
    -h, --help                 show this help message and exit
    --cuda                     use CUDA for network inference
    --gpu GPU                  set the GPU number
    --load-model LOAD_MODEL    set which model to load (overridden by --model_type; do not use this arg)
    --show_final_grasp         show the final grasp using mayavi; debug only, does not work with multiprocessing
    --tray_grasp               unfinished grasp type
    --using_mp                 use multiprocessing to sample grasps
    --model_type MODEL_TYPE    select a model type from the 3 existing models
    

Citation

If you found PointNetGPD useful in your research, please consider citing:

@inproceedings{liang2019pointnetgpd,
  title={{PointNetGPD}: Detecting Grasp Configurations from Point Sets},
  author={Liang, Hongzhuo and Ma, Xiaojian and Li, Shuang and G{\"o}rner, Michael and Tang, Song and Fang, Bin and Sun, Fuchun and Zhang, Jianwei},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2019}
}

Acknowledgement

pointnetgpd's People

Contributors

jeasinema · lianghongzhuo


pointnetgpd's Issues

Issue when running main_test.py

Hi,

I have tried to run the provided neural network using both kinect2grasp_python2.py and main_test.py, but in both scenarios I get an error stating:

AttributeError: 'DataParallel' object has no attribute 'src_device_obj'

I have tried to comment out everything related to GPUs in the code, but it doesn't change anything.

Any help would be appreciated.

Thanks

Error installing the python-pcl library

Hello Liang! When I install the python-pcl library, I get an error. Can you tell me how to solve it? The error is as follows:
setup.py: error: cannot find PCL, tried
pkg-config pcl_common-1.9
pkg-config pcl_common-1.8
pkg-config pcl_common-1.7
pkg-config pcl_common-1.6
pkg-config pcl_common

generate grasps too slowly

Hi, Liang,
I tried to generate 7,000 grasps for each object, but it works too slowly.
Would you mind sharing your grasp data (*.pickle and *.npy files) so that we can use it more conveniently?
Thank you very much.

bug when running python kinect2grasp_python2.py

Traceback (most recent call last):
File "kinect2grasp_python2.py", line 51, in
from main_test import test_network, model, args
File "/main_test.py", line 50, in
model = torch.load(args.load_model, map_location='cpu')
File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 358, in load
return _load(f, map_location, pickle_module)
File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 542, in _load
result = unpickler.load()
AttributeError: 'module' object has no attribute '_rebuild_parameter'

How can I fix this problem? Thanks.

MarkerArray of gripper_vis is not on the table, much higher

Hi~ When I run python kinect2grasp.py, after gpg generates grasps, it publishes them in rviz like this (I use a simulated Kinect rather than a real one):
grasp_view
I have no idea about this. I think /kinect2_ir_optical_frame and /table_top are correct, so the transform between the camera and the table is right. But for /table_top_points, I directly used /kinect_V2/depth/points, which comes from the kinect2 publisher and can be visualized in rviz as the picture shows.
Hoping for your help, thanks a lot.

The hyperparameters used in the simulation experiments

Excuse me, I've been trying to reproduce all of the experiment results in your PointNetGPD paper recently. When running main_1v_mc.py with all hyperparameters at their default values except batch-size (which I set to 32, 256, and 1024), all the test accuracies are lower than the result in your paper (79.45%) by more than 5 percentage points. I'm wondering what's wrong with my operation; maybe some hyperparameters need to be changed in this experiment? I'm sincerely looking forward to your guidance.

By the way, I'm also from a group in Tsinghua University(in EE Department). We have also bought all the facilities for this experiment, so I'll be especially grateful to you if we can keep in touch. My e-mail: [email protected]

Where to find nontextured.obj files

Hi,

I've downloaded the files from the YCB dataset, but it seems they don't come with a nontextured.obj file for each object. Is there a step for generating those?
For a given object, these are my current files: kinbody.xml nontextured.ply nontextured.stl textured.dae textured.mtl textured.obj texture_map.png

python3 read_file_sdf.py

Traceback (most recent call last):
File "read_file_sdf.py", line 62, in
generate_obj_from_ply(i+"/google_512k/nontextured.ply")
File "read_file_sdf.py", line 48, in generate_obj_from_ply
p = subprocess.Popen(["pcl_ply2obj", base + ".ply", base + ".obj"])
File "/usr/lib/python3.6/subprocess.py", line 729, in init
restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1364, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'pcl_ply2obj': 'pcl_ply2obj'

When I run this file, it shows this error; I don't know how to solve it. Thank you.

Get GPG grasps but no good grasps

Hi, Liang, I just used kinect2grasp.py with my simulated robot environment. I got GPG grasps like below (I removed the table points by z-axis distance in the world coordinate frame, and the object is the cracker box from the YCB dataset); they seem just fine.
screenshot_23
But in the terminal, after inputting the GPG grasps into the PointNet model, it outputs 'Got 0 good grasps, and 21 bad grasps'. I don't know what caused this.
Hoping for your help~ Thanks a lot

About the YCB dataset

@lianghongzhuo
When using the YCB dataset, we were not sure how to get the following files: ycb_meshes_google (YCB dataset) and ycb_rgbd (YCB dataset). We downloaded the generated .npy files, among the files in "berkeley_processed" and "google_64k"; however, it seems those files are not included. We wonder whether there are other files that should be downloaded, or whether we need to do some other work to generate them from the original files. Could you please give us some advice?
Also, would you tell us something about the hardware you used for this project? We are trying to reproduce this network on a notebook with a GeForce RTX 2060 GPU (6 GB of memory) and an Intel Core i5 CPU. Is that enough for this project?
We are looking forward to your kind advice!

python-pcl library installation error

Ubuntu 20.04, conda environment python 3.7

python setup.py build_ext -i

setup.py: error: cannot find PCL, tried
pkg-config pcl_common-1.9
pkg-config pcl_common-1.8
pkg-config pcl_common-1.7
pkg-config pcl_common-1.6
pkg-config pcl_common

grasp-pointnet installation issues

Hi @lianghongzhuo,
I experienced some problems when installing this repo:

  • The meshpy package depends on an older libboost version. boost/numpy.hpp from Boost.NumPy is deprecated and has been moved to boost/python/numpy.hpp. boost/python/numeric.hpp is also no longer available in (my) version 1.65.
    Can you please tell us which libboost version you use, or even update the dependencies?
  • The modified dex-net depends on Boost.NumPy as well, and there are numerous other import warnings when I run generate-dataset-canny.py. I saw that you removed several files from the original repository, e.g. requirements.txt. Can I safely ignore the warnings, or will this cause problems with the generated dataset? Below I post the relevant output.

Thank you very much!

nic@salocin:~/code$ /usr/bin/python3 /home/nic/code/grasp-pointnet/dex-net/apps/generate-dataset-canny.py
WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
Unable to import meshrender shared library! Rendering will not work. Likely due to missing Boost.Numpy
Boost.Numpy can be installed following the instructions in https://github.com/ndarray/Boost.NumPy
WARNING:root:Unable to import pylibfreenect2. Python-only Kinect driver may not work properly.
WARNING:root:Unable to import Primsense sensor modules! Likely due to missing OpenNI2.
WARNING:root:Failed to import ROS in phoxi_sensor.py. PhoXiSensor functionality unavailable.
WARNING:root:Unable to import generic sensor modules!.
WARNING:root:Failed to import gqcnn! Grasp2D functions will not be available.
WARNING:root:Failed to import OpenRAVE
WARNING:dexnet.grasping.grasp_sampler:Failed to import OpenRAVE
WARNING:dexnet.grasping.grasp_sampler:Failed to import rospy, you can't grasp now.
WARNING:dexnet.api:Failed to import DexNetVisualizer3D, visualization methods will be unavailable
All job done.
a worker of task 002_master_chef_can start
...

Problems about gripper

I notice that the gripper configuration file is named robotiq_85. Does that mean the configuration is for the Robotiq 2F-85 gripper? Since I am using a Robotiq 2F-85, I want to know whether I need to modify the configuration. Thx!

Visualization import errors in render_images.py

When I try to run render_images.py I get the following error.
I tried to import it in a Python shell as well; it gives the same error. I tried reinstalling the 'visualization' module and upgrading/downgrading it. No luck :(

File "render_images.py", line 19, in
from visualization import Visualizer2D as vis
File "/home/ahmad3/anaconda2/envs/gpd/lib/python2.7/site-packages/visualization/init.py", line 3, in
from .visualizer3d import Visualizer3D
File "/home/ahmad3/anaconda2/envs/gpd/lib/python2.7/site-packages/visualization/visualizer3d.py", line 9, in
from pyrender import Scene, Mesh, Viewer, Node, MetallicRoughnessMaterial, TextAlign
File "/home/ahmad3/anaconda2/envs/gpd/lib/python2.7/site-packages/pyrender/init.py", line 12, in
from .viewer import Viewer
File "/home/ahmad3/anaconda2/envs/gpd/lib/python2.7/site-packages/pyrender/viewer.py", line 36, in
class Viewer(pyglet.window.Window):
File "/home/ahmad3/anaconda2/envs/gpd/lib/python2.7/site-packages/pyglet/init.py", line 335, in getattr
import(import_name)
File "/home/ahmad3/anaconda2/envs/gpd/lib/python2.7/site-packages/pyglet/window/init.py", line 131, in
import pyglet.window.event
File "/home/ahmad3/anaconda2/envs/gpd/lib/python2.7/site-packages/pyglet/window/event.py", line 74
key.symbol_string(symbol), key.modifiers_string(modifiers)), file=self.file)
^
SyntaxError: invalid syntax

Problem about python generate-dataset-canny.py

Hi, @lianghongzhuo
I have a question about the dataset-generation code in generate-dataset-canny.py.
First, it seems the code hard-codes pool_size to 1; do we need to change this setting to the actual number of CPU cores?
Second, line 36 reads "p_set = [multiprocessing.Process(target=worker, args=(i, 100, 20, good_grasp)) for _ in range(50)]". I am confused why this line fills the pool list with the same args for all 50 processes. Do the 50 processes all do the same thing, or is there some random variable between them?
Thanks in advance, and looking forward to your reply!
Many thanks for your great work on robot grasping.

Can't generate grasps for 002_master_chef_can.

Hi, Liang,
I followed the instructions and put just one model, 002_master_chef_can, in ~/dataset/ycb_meshes_google/objects. Then I ran

$ python generate-dataset-canny.py

I got these warnings:

$ python generate-dataset-canny.py bit
WARNING:root:Failed to import geometry msgs in rigid_transformations.py.
WARNING:root:Failed to import ros dependencies in rigid_transforms.py
WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
WARNING:root:Unable to import CNN modules! Likely due to missing tensorflow.
WARNING:root:TensorFlow can be installed following the instructions in https://www.tensorflow.org/get_started/os_setup
WARNING:root:Unable to import pylibfreenect2. Python-only Kinect driver may not work properly.
WARNING:root:Failed to import ROS in Kinect2_sensor.py. Kinect will not be able to be used in bridged mode
WARNING:root:Unable to import Primsense sensor modules! Likely due to missing OpenNI2.
WARNING:root:Unable to import pyrealsense2.
WARNING:root:Failed to import ROS in ensenso_sensor.py. ROS functionality not available
WARNING:root:Failed to import ROS in phoxi_sensor.py. PhoXiSensor functionality unavailable.
WARNING:root:Unable to import generic sensor modules!.
WARNING:root:Unable to import weight sensor modules!
WARNING:root:Failed to import gqcnn! Grasp2D functions will not be available.
WARNING:root:Failed to import OpenRAVE
WARNING:dexnet.grasping.grasp_sampler:Failed to import OpenRAVE
WARNING:dexnet.grasping.grasp_sampler:Failed to import rospy, you can't grasp now.
All job done.
a worker of task 002_master_chef_can start

Then, it ran for several hours without ending and I had to stop it with Ctrl-C. The output repeated

...
INFO:dexnet.grasping.grasp_sampler:47/100 grasps found after iteration 2.
INFO:dexnet.grasping.grasp_sampler:Num surface: 36266
INFO:dexnet.grasping.grasp_sampler:31/100 grasps found after iteration 1.
INFO:dexnet.grasping.grasp_sampler:Num surface: 36266
...

Should all of these warnings be fixed? They require ROS and OpenRAVE, which depend on Python 2, but you said the code was written in Python 3. How can I resolve these problems?

Is the grasp score computed differently from the paper?

Hi, thanks for the great repo!
In your paper, if the friction coefficient is mu, then the score should be 1/mu. But in your implementation, I found that you directly add mu as part of the score.

score = level_score + refine_score * 0.01

According to the code, the level_score is basically the friction coefficient mu at the time the grasp training data is saved. Is my understanding right?
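
As a small sketch of the composition quoted above (my reading of the snippet, not a confirmed description of the authors' code): level_score is the coarse friction-level score and refine_score acts as a small tie-breaker.

```python
def combined_score(level_score, refine_score):
    # refine_score is scaled down so it only breaks ties between
    # grasps that share the same coarse level_score.
    return level_score + refine_score * 0.01

print(combined_score(2.0, 5.0))
```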

Generate dataset problem about python generate-dataset-canny.py [prefix]

Question about the autolab_core

Hello,

I've run into a problem when trying to run your generate-dataset-canny.py:

WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable

Have you ever met this kind of problem? I'm sure I followed the same steps as written in your README.

Thx a lot.

problem about params.json

Hi, I'm generating my own dataset, but some params in params.json confuse me.

  1. What does the param "finger_radius" mean? I can't find its definition in gpg. And what is the difference between "finger_width" and "real_finger_width"? Does "hand_height" mean the height of the finger?
  2. The robot hand I'm using is the RG2, and its depth changes with the width. For example, when the RG2 is fully open, the width is 10.5 cm and the depth (from fingertip to base) is 4.5 cm. But when it is closed, the depth can be 8.5 cm, much larger than before. I have no idea how to set the params for a situation like this.

Would you please give me some suggestions about my problems? Thx!

About your grasp dataset

Hi, Liang,
I have downloaded your PointNetGPD_grasps_dataset.zip and put it in ~/code/PointNetGPD/PointNetGPD/data. I also renamed PointNetGPD_grasps_dataset to ycb_grasp. But when I run python main_1v.py, I get the output below.
2021-06-02 20-21-02 screenshot
Did I do something wrong?
By the way, I didn't generate my own grasp dataset as written in the README.md, so I really want to know how I should use your generated dataset.

Thank you very much!

How to use your dataset?

I've downloaded your PointNetGPD_grasp_dataset, but I don't know which folder I should put it in. Could you tell me your training details?

Problem with kinect2grasp.py

@lianghongzhuo Firstly, thank you for your great paper and code!
When I tried to run kinect2grasp.py, I noticed that there is no definition of table_top. So I fixed the value of cam_top and used the point cloud of kinect2 directly to let the procedure continue.
However, I encountered an error just in front of the network, which is

Calculated padded input size per channel: (0). Kernel size: (1). Kernel size can't be greater than actual input size

The log showed that several grasps were actually sent into the network. Could you help me figure out why the error occurred?
Thanks in advance.

Python 2 or Python 3?

Does the project use Python 2 or Python 3?
Using the network at the end requires ROS, which only works with Python 2, so is everything before that also in Python 2?
However, in a Python 2 virtual environment I cannot import meshrender; it raises an error:
File "~/Desktop/Deep learning grasp detection code/PointGPDCode/venv/local/lib/python2.7/site-packages/pyglet/matrix.py", line 78
return matrix @ rmat
^
SyntaxError: invalid syntax
The custom @ operator is not supported in Python 2, which seems to hint that Python 3 should be used.
So which one should it be, Python 2 or Python 3?

When I run python read_grasps_from_file.py, there is no visualization

When I run python read_grasps_from_file.py to visualize grasps, no image appears, even though the output seems normal. How can I solve this problem? This is my output:

QFactoryLoader::QFactoryLoader() checking directory path "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platforms" ...
QFactoryLoader::QFactoryLoader() looking at "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platforms/libqeglfs.so"
Found metadata in lib /home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platforms/libqeglfs.so, metadata=
{
"IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
"MetaData": {
"Keys": [
"eglfs"
]
},
"archreq": 0,
"className": "QEglFSIntegrationPlugin",
"debug": false,
"version": 331520
}

Got keys from plugin meta data ("eglfs")
[analogous "looking at / Found metadata / Got keys" blocks follow for libqlinuxfb.so, libqminimal.so, libqminimalegl.so, libqoffscreen.so, libqvnc.so, libqwayland-egl.so, libqwayland-generic.so, libqwayland-xcomposite-egl.so, libqwayland-xcomposite-glx.so, libqwebgl.so, and libqxcb.so]
QFactoryLoader::QFactoryLoader() checking directory path "/home/xiaofeisong/anaconda3/envs/pointnetgpd/bin/platforms" ...
loaded library "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platforms/libqxcb.so"
loaded library "Xcursor"
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-xiaofeisong'
QFactoryLoader::QFactoryLoader() checking directory path "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platformthemes" ...
[metadata blocks for libqgtk3.so and libqxdgdesktopportal.so]
loaded library "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platformthemes/libqgtk3.so"
QFactoryLoader::QFactoryLoader() checking directory path "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platforminputcontexts" ...
[metadata blocks for libcomposeplatforminputcontextplugin.so and libibusplatforminputcontextplugin.so]
Found metadata in lib /home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platforminputcontexts/libibusplatforminputcontextplugin.so, metadata=
{
"IID": "org.qt-project.Qt.QPlatformInputContextFactoryInterface.5.1",
"MetaData": {
"Keys": [
"ibus"
]
},
"archreq": 0,
"className": "QIbusPlatformInputContextPlugin",
"debug": false,
"version": 331520
}

Got keys from plugin meta data ("ibus")
QFactoryLoader::QFactoryLoader() checking directory path "/home/xiaofeisong/anaconda3/envs/pointnetgpd/bin/platforminputcontexts" ...
loaded library "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platforminputcontexts/libcomposeplatforminputcontextplugin.so"
QFactoryLoader::QFactoryLoader() checking directory path "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/styles" ...
QFactoryLoader::QFactoryLoader() checking directory path "/home/xiaofeisong/anaconda3/envs/pointnetgpd/bin/styles" ...
QLibraryPrivate::unload succeeded on "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platforminputcontexts/libcomposeplatforminputcontextplugin.so"
QLibraryPrivate::unload succeeded on "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platformthemes/libqgtk3.so"
QLibraryPrivate::unload succeeded on "/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/PyQt5/Qt5/plugins/platforms/libqxcb.so"
QLibraryPrivate::unload succeeded on "Xcursor" (faked)
(pointnetgpd) xiaofeisong@LAPTOP-MPH47EO7:~/code/PointNetGPD/dex-net/apps$ python read_grasps_from_file.py
(same QFactoryLoader plugin scan, load, and unload output as the first run above, omitted)

python-pcl requires Python >= 3.5

Because the project needs ROS and Python 2, I tried to install python-pcl under Python 2, but I get the error: RuntimeError: Python version >= 3.5 required. How can I install python-pcl for Python 2? The python-pcl repository (https://github.com/strawlab/python-pcl) says "This release has been tested on Linux Ubuntu 18.04 with Python 2.7.6, 3.5.x".

Training PointNetGPD on a different data source than YCB

Hello,
I would like to train PointNetGPD on a new data source and I'm trying to figure out what exactly is needed. From reading the code (in particular PointGraspOneViewDataset), I'm currently assuming the following:

  • For each object I need one npy file that contains the point cloud from a specific view point in the form of a matrix of dims (N, 3) (how many viewpoints should I generate?)
  • For each object I need the grasps as an npy file in the form of a matrix of dims (N, 10). This contains (?): position x, y, z, approach vector x, y, z, roll around approach, width, level_score (what's the range?), refine_score (range?)
  • A pickle file called google2cloud.pkl that contains the transform between the point cloud and the mesh as a 4x4 matrix

Could you comment on my assumptions? Thanks!
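The assumed on-disk layout can be sketched as follows. Note that the file names, array shapes, and the 10-column grasp layout here are the asker's assumptions inferred from reading `PointGraspOneViewDataset`, not a confirmed specification:

```python
# Sketch of the assumed per-object files for a custom PointNetGPD dataset.
# Shapes and column order are assumptions, not a confirmed format.
import pickle

import numpy as np

# Per-object, per-view point cloud: (N, 3) array of xyz points.
cloud = np.random.rand(2048, 3).astype(np.float32)
np.save("obj0_view0.npy", cloud)

# Per-object grasps: (M, 10) array; assumed columns:
# [x, y, z, approach_x, approach_y, approach_z, roll, width, level_score, refine_score]
grasps = np.zeros((100, 10), dtype=np.float32)
np.save("obj0_grasps.npy", grasps)

# Mesh-to-cloud transforms: one 4x4 homogeneous matrix per object name.
with open("google2cloud.pkl", "wb") as f:
    pickle.dump({"obj0": np.eye(4)}, f)

print(np.load("obj0_view0.npy").shape)  # (2048, 3)
print(np.load("obj0_grasps.npy").shape)  # (100, 10)
```

If these assumptions hold, a new data source only needs to reproduce these three kinds of files per object.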

How much time does it take to generate the dataset?

Thanks for your great work!

I would like to train PointNetGPD. Data generation has been running for a long time and is not finished yet. Could you tell me roughly how long generating the dataset takes, for reference?

Grasps too few and unpredictable

PointnetGPD.zip

I am having trouble repeatedly generating enough viable grasps with the kinect2grasp.py script.
I am using the 3-class model and running inference on an Nvidia Quadro T1000 (4 GB).

I am not sure what I am doing wrong, but I am not able to sample enough grasps on the YCB objects, even very simple ones like the cracker box and the mustard bottle. The results are unpredictable: some objects get a few good grasps, some get grasps that are not very useful, and others get none at all.

I dug through the GPGSamplingPCL script and tried a small change where I set the dtheta in the grid search to rotate about the normal instead of the minor principal curvature axis. Surprisingly, that generates around 40-50 grasps on average for almost all YCB objects, but most of them are probably colliding with the object.

I tried removing a few checks here and there in the GPG sampler to pinpoint where the samples are being rejected, but couldn't find much. I am using the nonTextured.ply files provided by the YCB dataset and converting them to .pcd for ROS publishing. The point cloud looks fine and has enough points on it. The final GUI even shows quite a lot of sampled surface points in red, but very few or none of the actual grasps get sampled from them.

I am not sure if this description is enough to really tell what the issue is, so I have attached a few pictures from both 'rotation_about_normal' and 'rotation_about_pc'. I am hoping you can give me some idea of what I am doing wrong just by going through these pictures.

PS: Thanks in advance :)

transformation problem in dataset.py

Hello, I found that the transformation in your code is:
```
center = (np.dot(transform, np.array([center[0], center[1], center[2], 1])))[:3]
binormal = (np.dot(transform, np.array([binormal[0], binormal[1], binormal[2], 1])))[:3].reshape(3, 1)
approach = (np.dot(transform, np.array([approach[0], approach[1], approach[2], 1])))[:3].reshape(3, 1)
minor_normal = (np.dot(transform, np.array([minor_normal[0], minor_normal[1], minor_normal[2], 1])))[:3].reshape(3, 1)
```
I think it should be:
```
center = (np.dot(transform, np.array([center[0], center[1], center[2], 1])))[:3]
binormal = (np.dot(transform, np.array([binormal[0], binormal[1], binormal[2], 0])))[:3].reshape(3, 1)
approach = (np.dot(transform, np.array([approach[0], approach[1], approach[2], 0])))[:3].reshape(3, 1)
minor_normal = (np.dot(transform, np.array([minor_normal[0], minor_normal[1], minor_normal[2], 0])))[:3].reshape(3, 1)
```
Because binormal, approach, and minor_normal are direction vectors, not points, their homogeneous coordinate should be 0 so that the translation part of the transform is not applied to them.
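The distinction can be checked numerically: with a transform that includes a translation, a direction vector is only handled correctly when its homogeneous coordinate is 0. This is a small standalone numpy check, not code from the repository:

```python
import numpy as np

# Homogeneous transform: identity rotation plus a translation of (1, 2, 3).
transform = np.eye(4)
transform[:3, 3] = [1.0, 2.0, 3.0]

direction = np.array([0.0, 0.0, 1.0])

# Treating the direction as a point (w = 1) wrongly adds the translation.
as_point = np.dot(transform, np.append(direction, 1.0))[:3]
# Treating it as a vector (w = 0) applies only the rotational part.
as_vector = np.dot(transform, np.append(direction, 0.0))[:3]

print(as_point)   # translated: [1. 2. 4.]
print(as_vector)  # unchanged:  [0. 0. 1.]
```

With w = 1 the unit direction is shifted by the translation, so a transformed grasp frame would no longer be orthonormal; with w = 0 only the rotation acts on it.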

Installation in Anaconda

Hi

I am new to programming.
It's suggested to install in a virtual environment, does it mean Anaconda?

If so, I tried to install the dependencies with conda install --yes --file requirements.txt, but I got:

PackagesNotFoundError: The following packages are not available from current channels:

  • autolab-core
  • pyquaternion
  • nearpy
  • autolab-perception
  • trimesh
  • pyhull
  • visualization

I also tried pip install, but it seems the packages get installed into ~/.local/lib/python3.6/site-packages, not into the created conda env.

How to solve this?
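A generic way to check where packages will land (a standalone diagnostic, not project-specific) is to inspect the active interpreter and its package directory; if these do not point inside the conda env, the `pip` on the PATH belongs to a different Python:

```python
import sys
import sysconfig

# The interpreter of the currently active environment; inside an activated
# conda env this should point under the env's directory, not the system Python.
print(sys.executable)

# Where `python -m pip install <pkg>` will place packages for this interpreter.
site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)
```

Running `python -m pip install -r requirements.txt` with the env activated guarantees pip matches the active interpreter, which avoids the common cause of packages landing in `~/.local`.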

Thanks

Best Regards

pretrained results

@lianghongzhuo
I wrote a script based on your kinect2grasp.py to detect grasp poses from a point cloud captured by a camera.
The detection part works very well (I generated around 15 grasps for better visualization):
image

But the classification part is poor: the model classified all the generated grasps as bad grasps.
image

Do you have any comments on these results?

How to perform a fresh installation in a docker container

I'm trying to use the PointNetGPD in a docker container to have an isolated installation.

What is the required environment to have PointNetGPD working?
From the issues section I determined that the safest path is Ubuntu 18.04 with ROS Melodic, but so far I haven't managed to obtain a working environment. After following the instructions, I have problems when I try to use the trained network.

Here are the steps I performed:

I pulled a PyTorch docker image (pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime), installed the code in the $HOME/code path, followed the install requirement instructions, and downloaded the Generated Grasp Dataset. Afterwards I installed ros-melodic.
At that point, since I don't want to generate a new dataset, I jumped to the "Using the trained network" instructions and executed python get_ur5_robot_state.py.
It reported that the module rospy doesn't exist.
From there I installed a number of Python and ROS packages via pip and apt, and the current error I have is:

root@534bf88c2be0:~/code/PointNetGPD/dex-net/apps# python get_ur5_robot_state.py
Traceback (most recent call last):
  File "get_ur5_robot_state.py", line 29, in <module>
    group = moveit_commander.MoveGroupCommander("arm")
AttributeError: module 'moveit_commander' has no attribute 'MoveGroupCommander'

And I'm not able to move forward.

I also have problems with kinect2grasp.py.
I think the error is that I am missing some Python libraries:

root@534bf88c2be0:~/code/PointNetGPD/dex-net/apps# python kinect2grasp.py
WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
2021-02-06 18:57:55.095252: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:root:Unable to import pylibfreenect2. Python-only Kinect driver may not work properly.
WARNING:root:Failed to import ROS in Kinect2_sensor.py. Kinect will not be able to be used in bridged mode
WARNING:root:Unable to import Primsense sensor modules! Likely due to missing OpenNI2.
WARNING:root:Unable to import pyrealsense2.
WARNING:root:Failed to import ROS in ensenso_sensor.py. ROS functionality not available
WARNING:root:Failed to import ROS in phoxi_sensor.py. PhoXiSensor functionality unavailable.
WARNING:root:Unable to import generic sensor modules!.
/opt/conda/lib/python3.8/site-packages/traits/etsconfig/etsconfig.py:412: UserWarning: Environment variable "HOME" not set, setting home directory to /tmp
  warn(
Please install grasp msgs from https://github.com/TAMS-Group/gpd_grasp_msgs in your ROS workspace

I have a working Protonect from libfreenect and I installed pylibfreenect2, but it seems it is not detected (maybe a Python version problem?).

Any help on what I can do to get PointNetGPD working would be appreciated.

Can't import OpenNI2

@lianghongzhuo
Hi, I've tried to use https://github.com/occipital/OpenNI2 and https://github.com/occipital/OpenNI2/tree/kinect2 as well as pip to install OpenNI2, but the project still can't import the openni2 driver correctly. There is openni==2.3.0 in my pip list, but I don't know if it is the real OpenNI2. I would really like to know how you configured OpenNI2, thanks a lot.
Here is the output when I run kinect2grasp from the terminal; please ignore the registration failure at the end.

WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
2020-10-10 18:22:30.745644: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu
2020-10-10 18:22:30.745780: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu
2020-10-10 18:22:30.745796: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:root:Unable to import openni2 driver. Python-only Primesense driver may not work properly
WARNING:root:Failed to import ROS in phoxi_sensor.py. PhoXiSensor functionality unavailable.
WARNING:root:Failed to import gqcnn! Grasp2D functions will not be available.
WARNING:root:Failed to import OpenRAVE
WARNING:dexnet.grasping.grasp_sampler:Failed to import OpenRAVE
WARNING:dexnet.api:Failed to import DexNetVisualizer3D, visualization methods will be unavailable
Using default model file
/home/zhangteng/.local/lib/python2.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.parallel.data_parallel.DataParallel' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/zhangteng/.local/lib/python2.7/site-packages/torch/serialization.py:562: UnicodeWarning: Unicode unequal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
if original_source != current_source:
/home/zhangteng/.local/lib/python2.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/zhangteng/.local/lib/python2.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.pooling.MaxPool1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/zhangteng/.local/lib/python2.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/zhangteng/.local/lib/python2.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/zhangteng/.local/lib/python2.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.batchnorm.BatchNorm1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
load model /home/zhangteng/code/PointNetGPD/PointNetGPD/data/pointgpd_3class.model
Unable to register with master node [http://localhost:11311]: master may not be running yet. Will keep trying.
