Grasp Pose Detection (GPD)

Grasp Pose Detection (GPD) is a package to detect 6-DOF grasp poses (3-DOF position and 3-DOF orientation) for a 2-finger robot hand (e.g., a parallel jaw gripper) in 3D point clouds. GPD takes a point cloud as input and produces pose estimates of viable grasps as output. The main strengths of GPD are:

  • works for novel objects (no CAD models required for detection),
  • works in dense clutter, and
  • outputs 6-DOF grasp poses (enabling more than just top-down grasps).

UR5 demo

GPD consists of two main steps: sampling a large number of grasp candidates, and classifying these candidates as viable grasps or not.
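As a rough sketch, this two-step structure looks like the following in plain Python. Both the sampler and the classifier below are stand-in stubs for illustration only, not GPD's actual candidate generator or CNN:

```python
import random

def sample_candidates(cloud, num_samples):
    """Stage 1: draw sample points from the cloud and build a candidate
    grasp around each one (here the candidate is just the point itself)."""
    k = min(num_samples, len(cloud))
    return [{"position": p} for p in random.sample(cloud, k)]

def classify(candidate):
    """Stage 2: score a candidate. GPD uses a CNN; this stub simply
    accepts candidates above the table plane (z > 0)."""
    return 1.0 if candidate["position"][2] > 0 else 0.0

def detect_grasps(cloud, num_samples=100, threshold=0.5):
    candidates = sample_candidates(cloud, num_samples)
    return [c for c in candidates if classify(c) > threshold]

cloud = [(0.1, 0.0, 0.05), (0.2, 0.1, -0.02), (0.0, 0.3, 0.12)]
grasps = detect_grasps(cloud, num_samples=3)
```

In GPD itself, stage 1 generates grasp candidates geometrically around sampled points, and stage 2 scores a multi-channel image of each candidate with a neural network.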

Example Input and Output

The reference for this package is: Grasp Pose Detection in Point Clouds.

Table of Contents

  1. Requirements
  2. Installation
  3. Generate Grasps for a Point Cloud File
  4. Parameters
  5. Views
  6. Input Channels for Neural Network
  7. CNN Frameworks
  8. Network Training
  9. Grasp Image
  10. References
  11. Troubleshooting

1) Requirements

  1. PCL 1.9 or newer
  2. Eigen 3.0 or newer
  3. OpenCV 3.4 or newer

2) Installation

The following instructions have been tested on Ubuntu 16.04. Similar instructions should work for other Linux distributions.

  1. Install PCL and Eigen. If you have ROS Indigo or Kinetic installed, you should be good to go.

  2. Install OpenCV 3.4 (tutorial).

  3. Clone the repository into some folder:

    git clone https://github.com/atenpas/gpd
    
  4. Build the package:

    cd gpd
    mkdir build && cd build
    cmake ..
    make -j
    

You can optionally install GPD with sudo make install so that it can be used by other projects as a shared library.
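If you install GPD this way, a downstream project could consume it from CMake along these lines. This is only a sketch: the exported package and variable names below are assumptions, so check the config files that `sudo make install` actually places on your system:

```
# CMakeLists.txt of a hypothetical downstream project
find_package(GPD REQUIRED)   # package name is an assumption; verify the installed *Config.cmake

add_executable(my_grasp_app main.cpp)
target_include_directories(my_grasp_app PRIVATE ${GPD_INCLUDE_DIRS})
target_link_libraries(my_grasp_app ${GPD_LIBRARIES})
```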

If building the package does not work, try modifying the compiler flags (CMAKE_CXX_FLAGS) in CMakeLists.txt.

3) Generate Grasps for a Point Cloud File

Run GPD on a point cloud file (PCD or PLY):

./detect_grasps ../cfg/eigen_params.cfg ../tutorials/krylon.pcd

The output should look similar to the screenshot shown below. The window is the PCL viewer. You can press [q] to close the window and [h] to see a list of other commands.

Below is a visualization of the convention that GPD uses for the grasp pose (position and orientation) of a grasp. The grasp position is indicated by the orange cross and the orientation by the colored arrows.
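A 6-DOF pose like this is conveniently represented as a 4x4 homogeneous transform built from the grasp position and the three orientation axes. Here is a minimal sketch; the axis names (approach, binormal, axis) are illustrative labels, not necessarily GPD's exact API:

```python
def grasp_pose_matrix(position, approach, binormal, axis):
    """Build a 4x4 homogeneous transform: the three orientation axes
    form the rotation columns, the grasp position the translation."""
    r = [approach, binormal, axis]  # each a unit 3-vector
    return [
        [r[0][0], r[1][0], r[2][0], position[0]],
        [r[0][1], r[1][1], r[2][1], position[1]],
        [r[0][2], r[1][2], r[2][2], position[2]],
        [0.0,     0.0,     0.0,     1.0],
    ]

# identity orientation, grasp 0.4 m in front of the cloud origin
T = grasp_pose_matrix((0.4, 0.0, 0.2), (1, 0, 0), (0, 1, 0), (0, 0, 1))
```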

4) Parameters

Brief explanations of parameters are given in cfg/eigen_params.cfg.

The two parameters that you typically want to play with to improve the number of grasps found are workspace and num_samples. The first defines the volume of space in which to search for grasps as a cuboid of dimensions [minX, maxX, minY, maxY, minZ, maxZ], centered at the origin of the point cloud frame. The second is the number of samples that are drawn from the point cloud to detect grasps. You should set the workspace as small as possible and the number of samples as large as possible.

Most of the code is parallelized. To improve runtime, set num_threads to the number of (physical) CPU cores that your computer has available.
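For intuition, the workspace filter is an axis-aligned crop of the cloud: only points inside the cuboid survive and are eligible for sampling. A minimal sketch, not GPD's actual implementation:

```python
def filter_workspace(cloud, workspace):
    """Keep only the points inside the cuboid
    [minX, maxX, minY, maxY, minZ, maxZ]."""
    min_x, max_x, min_y, max_y, min_z, max_z = workspace
    return [(x, y, z) for (x, y, z) in cloud
            if min_x <= x <= max_x
            and min_y <= y <= max_y
            and min_z <= z <= max_z]

cloud = [(0.1, 0.2, 0.3), (2.0, 0.0, 0.0), (-0.5, 0.1, 0.4)]
inside = filter_workspace(cloud, [-1.0, 1.0, -1.0, 1.0, -1.0, 1.0])
```

Note that if the workspace does not match the frame of your point cloud, every point gets cropped away and no grasps can be found.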

5) Views

rviz screenshot

You can use this package with a single depth sensor or with two. The package comes with Caffe model files for both; you can find them in models/caffe/15channels. For a single sensor, use single_view_15_channels.caffemodel; for two depth sensors, use two_views_15_channels_[angle], where [angle] is the angle between the two sensor views, as illustrated in the picture below. In the two-views setting, register the two point clouds together before sending them to GPD.

Providing the camera position in the configuration file (*.cfg) is important, as it enables PCL to estimate the correct normal directions (normals should point toward the camera). Alternatively, when using the ROS wrapper, multiple camera positions can be provided.

rviz screenshot

To switch between one and two sensor views, change the parameter weight_file in your config file.
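For example, the relevant line in a *.cfg file might look like one of the following (paths are illustrative; substitute the actual [angle] suffix of your model file):

```
# single sensor view
weight_file = ../models/caffe/15channels/single_view_15_channels.caffemodel

# two sensor views
# weight_file = ../models/caffe/15channels/two_views_15_channels_[angle]
```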

6) Input Channels for Neural Network

The package comes with weight files for two different input representations for the neural network that decides whether a grasp is viable: 3 or 15 channels. The default is 15 channels; you can use 3 channels for better runtime at the cost of some grasp quality. For more details, please see the references below.

7) CNN Frameworks

GPD comes with a number of classifier frameworks that exploit different hardware and have different dependencies. Switching between frameworks requires running CMake with additional arguments. For example, to use the OpenVINO framework:

cmake .. -DUSE_OPENVINO=ON

You can use ccmake to see all available CMake options.

GPD supports the following three frameworks:

  1. OpenVino: installation instructions for open source version (CPUs, GPUs, FPGAs from Intel)
  2. Caffe (GPUs from Nvidia or CPUs)
  3. Custom LeNet implementation using the Eigen library (CPU)

Additional classifiers can be added by sub-classing the classifier interface.

OpenVINO

OpenVINO is recommended for speed. To use OpenVINO, you need to run the following command before compiling GPD.

export InferenceEngine_DIR=/path/to/dldt/inference-engine/build/

8) Network Training

To create training data with the C++ code, you need to install the OpenCV 3.4 contrib modules. Next, compile GPD with the BUILD_DATA_GENERATION flag enabled:

```
cd gpd
mkdir build && cd build
cmake .. -DBUILD_DATA_GENERATION=ON
make -j
```

There are five steps to train a network to predict grasp poses. First, we need to create grasp images.

./generate_data ../cfg/generate_data.cfg

You should modify generate_data.cfg according to your needs.

Next, you need to resize the created databases to train_offset and test_offset (see the terminal output of generate_data). For example, to resize the training set, run the following with size set to the value of train_offset.

cd pytorch
python reshape_hdf5.py pathToTrainingSet.h5 out.h5 size

The third step is to train a neural network. The easiest way to train the network is with the existing code, which requires the PyTorch framework. To train a network, use the following commands.

cd pytorch
python train_net3.py pathToTrainingSet.h5 pathToTestSet.h5 num_channels

The fourth step is to convert the model to the ONNX format.

python torch_to_onxx.py pathToPytorchModel.pwf pathToONNXModel.onnx num_channels

The last step is to convert the ONNX file to an OpenVINO compatible format: tutorial. This gives two files that can be loaded with GPD by modifying the weight_file and model_file parameters in a CFG file.

9) Grasp Image/Descriptor

Generate some grasp poses and their corresponding images/descriptors:

./test_grasp_image ../tutorials/krylon.pcd 3456 1 ../models/lenet/15channels/params/

For details on how the grasp image is created, check out our journal paper.

10) References

If you like this package and use it in your own work, please cite our journal paper [1]. If you're interested in the (shorter) conference version, check out [2].

[1] Andreas ten Pas, Marcus Gualtieri, Kate Saenko, and Robert Platt. Grasp Pose Detection in Point Clouds. The International Journal of Robotics Research, Vol 36, Issue 13-14, pp. 1455-1473. October 2017.

[2] Marcus Gualtieri, Andreas ten Pas, Kate Saenko, and Robert Platt. High precision grasp pose detection in dense clutter. IROS 2016, pp. 598-605.

11) Troubleshooting Tips

  1. Remove the cmake cache: rm CMakeCache.txt
  2. make clean
  3. Remove the build folder and rebuild.
  4. Update gcc and g++ to a version > 5.

gpd's People

Contributors

atenpas, jaymwong, sharronliu, tanay-bits

gpd's Issues

Getting Error after workspace filtering

Hey atenpas,
first of all, great work with your grasp detection; it works pretty nicely. Unfortunately, I was trying to run it with the Care-O-bot simulation in Gazebo, which creates a virtual point cloud instead of a real one.
After changing the subscribed topic accordingly, the algorithm produces the following output:

Processing cloud with: 710 points.
After workspace filtering: 0 points left.
[pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!
Estimating local reference frames ...
Error: No samples or no indices!

The point count can be higher (I tried the raw point cloud from the simulation with around 10k points), but the error still occurs.
What exactly does the workspace filtering do, and how can I prevent this error?

Kind regards, Dimitrij-M Holm

gpd doesn't compile, pthread library

CMakeFiles/cmTC_00a4c.dir/CheckSymbolExists.c.o:CheckSymbolExists.c:function main: error: undefined reference to 'pthread_create'
collect2: error: ld returned 1 exit status
CMakeFiles/cmTC_00a4c.dir/build.make:97: recipe for target 'cmTC_00a4c' failed
make[1]: *** [cmTC_00a4c] Error 1
make[1]: Leaving directory '/code/ext_ws/src/gpd/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_00a4c/fast' failed
make: *** [cmTC_00a4c/fast] Error 2

File /home/inwebit/code/ext_ws/src/gpd/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c:
/* */
#include <pthread.h>

int main(int argc, char** argv)
{
(void)argv;
#ifndef pthread_create
return ((int*)(&pthread_create))[argc];
#else
(void)argc;
return 0;
#endif
}

Determining if the function pthread_create exists in the pthreads failed with the following output:
Change Dir: /home/inwebit/code/ext_ws/src/gpd/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_f2b2d/fast"
/usr/bin/make -f CMakeFiles/cmTC_f2b2d.dir/build.make CMakeFiles/cmTC_f2b2d.dir/build
make[1]: Entering directory '/code/ext_ws/src/gpd/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_f2b2d.dir/CheckFunctionExists.c.o
/usr/lib/ccache/gcc -DCHECK_FUNCTION_EXISTS=pthread_create -o CMakeFiles/cmTC_f2b2d.dir/CheckFunctionExists.c.o -c /usr/share/cmake-3.5/Modules/CheckFunctionExists.c
Linking C executable cmTC_f2b2d
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_f2b2d.dir/link.txt --verbose=1
/usr/lib/ccache/gcc -DCHECK_FUNCTION_EXISTS=pthread_create CMakeFiles/cmTC_f2b2d.dir/CheckFunctionExists.c.o -o cmTC_f2b2d -rdynamic -lpthreads
/usr/bin/ld: error: cannot find -lpthreads
CMakeFiles/cmTC_f2b2d.dir/CheckFunctionExists.c.o:CheckFunctionExists.c:function main: error: undefined reference to 'pthread_create'
collect2: error: ld returned 1 exit status
CMakeFiles/cmTC_f2b2d.dir/build.make:97: recipe for target 'cmTC_f2b2d' failed
make[1]: *** [cmTC_f2b2d] Error 1
make[1]: Leaving directory '/code/ext_ws/src/gpd/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_f2b2d/fast' failed
make: *** [cmTC_f2b2d/fast] Error 2

Issues when multiple objects placed in the scene.

Hi,
I have two queries.
1. When multiple objects are placed in the scene, the grasp positions keep switching between the objects. Is there any way to get all grasps focused on a single object at a time?
The parameter values I used are listed below.

num_samples: 800
num_threads: 4
min_score_diff: 1400
min_aperture: 0.0
max_aperture: 0.085
num_selected: 80
min_inliers: 1

2. Is there any method to obtain the centroid (x, y, z) of an object's top surface using this package?

gpd catkin_make error

Hello,
when I compiled gpd with catkin_make, I got the errors shown below:

[ 36%] Building CXX object gpd/CMakeFiles/gpd_detect_grasps.dir/src/nodes/grasp_detection_node.cpp.o
In file included from /home/xwk/GPD/src/gpd/src/nodes/grasp_detection_node.cpp:1:0:
/home/xwk/GPD/src/gpd/src/nodes/../../../gpd/include/nodes/grasp_detection_node.h:56:30: fatal error: gpd/CloudIndexed.h: No such file or directory
#include <gpd/CloudIndexed.h>
^
compilation terminated.
make[2]: *** [gpd/CMakeFiles/gpd_detect_grasps.dir/src/nodes/grasp_detection_node.cpp.o] Error 1
make[1]: *** [gpd/CMakeFiles/gpd_detect_grasps.dir/all] Error 2
make: *** [all] Error 2

And I couldn't find any 'CloudIndexed.h' in the gpd directory.
How can I solve this? Thanks a lot!

Why different Caffe Model for Multiple Cameras?

I'm attempting a pick-and-place demo very similar to the one in your video, with a UR10 robot, a Robotiq gripper, and a time-of-flight sensor mounted on the gripper. I want a detailed view of the objects for good segmentation, so I want to move the arm around and combine several point clouds into a more detailed one. This wasn't shown in the video, but given the great detail of the point cloud, I'm assuming this is how you did it. This is fairly similar to using two separate depth cameras, which you described in the readme. However, I'm trying to figure out why you needed a different Caffe model for the two-camera setup. Assuming the data is properly transformed and merged, the point cloud data should be the same regardless of the camera perspectives, correct? I'm trying to figure out why I would be limited to taking only two snapshots at 90 or 53 degrees.

select_grasp.py fails with real RGBD camera

I modified select_grasp.py to accept the point cloud coming from my camera, in line 18:
cloud_sub = rospy.Subscriber('/sls_sense_wrist/depth_registered/points', PointCloud2, cloudCallback)
instead of /cloud_pcd, and also modified header.frame_id to the one my robot uses. But then it fails and terminates with the following trace:

[ERROR] [WallTime: 1504130573.516602] bad callback: <function cloudCallback at 0x7fb77275ea28>
Traceback (most recent call last):
File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/topics.py", line 711, in _invoke_callback
cb(msg)
File "src/forked/gpd/scripts/select_grasp.py", line 11, in cloudCallback
cloud.append([p[0], p[1], p[2]])
AttributeError: 'numpy.ndarray' object has no attribute 'append'

Traceback (most recent call last):
File "src/forked/gpd/scripts/select_grasp.py", line 32, in
C, _, _, _ = lstsq(A, X[:,2])
File "/usr/lib/python2.7/dist-packages/scipy/linalg/basic.py", line 507, in lstsq
a1,b1 = map(np.asarray_chkfinite, (a,b))
File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 595, in asarray_chkfinite
"array must not contain infs or NaNs")
ValueError: array must not contain infs or NaNs

Apart from this I'm following the steps described in tutorial 2. I'm using an Asus Xtion Pro Live mounted on a UR5 wrist.

Any help regarding how I could resolve this would be much appreciated!

How can I create training data

Can you share how to create a training dataset?
The BigBIRD dataset is very large and includes all angles of the objects. Do you need to apply it to all the data? How did you choose? Also, it uses the h5 file type instead of the pcd file type, which brings additional trouble.

Feature request: Delete old markers before publishing new ones.

If possible, something like this would be nice.

```
void GraspPlotter::drawGrasps(const std::vector<Grasp>& hands, const std::string& frame)
{
  // Mark all previously published markers for deletion.
  for (size_t i = 0; i < markers_.markers.size(); i++)
    markers_.markers[i].action = visualization_msgs::Marker::DELETE;

  rviz_pub_.publish(markers_);

  // Publish the new markers.
  markers_ = convertToVisualGraspMsg(hands, frame);
  rviz_pub_.publish(markers_);
}
```

Catkin_make fails

Hello,

I'm trying to compile gpd on ubuntu 14.04 running ROS indigo. I ran into the following compilation error:

-- +++ processing catkin package: 'gpd'
-- ==> add_subdirectory(gpd)
-- Using these message generators: gencpp;genlisp;genpy
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   system
--   filesystem
--   thread
--   date_time
--   iostreams
--   serialization
-- checking for module 'openni-dev'
--   package 'openni-dev' not found
-- checking for module 'openni-dev'
--   package 'openni-dev' not found
-- checking for module 'openni-dev'
--   package 'openni-dev' not found
-- looking for PCL_COMMON
-- looking for PCL_OCTREE
-- looking for PCL_IO
-- looking for PCL_KDTREE
-- looking for PCL_SEARCH
-- looking for PCL_SAMPLE_CONSENSUS
-- looking for PCL_FILTERS
-- looking for PCL_FEATURES
-- looking for PCL_KEYPOINTS
-- looking for PCL_GEOMETRY
-- looking for PCL_SEGMENTATION
-- looking for PCL_VISUALIZATION
-- looking for PCL_OUTOFCORE
-- looking for PCL_REGISTRATION
-- looking for PCL_RECOGNITION
-- looking for PCL_SURFACE
-- looking for PCL_PEOPLE
-- looking for PCL_TRACKING
-- looking for PCL_APPS
CAFFE_DIR: /home/correlllab/caffe/build
-- gpd: 6 messages, 0 services
-- +++ processing catkin package: 'gqcnn'
-- ==> add_subdirectory(gqcnn)
-- Using these message generators: gencpp;genlisp;genpy
-- gqcnn: 2 messages, 1 services
-- +++ processing catkin package: 'meshpy'
-- ==> add_subdirectory(meshpy)
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   python
-- +++ processing catkin package: 'move_it_learning'
-- ==> add_subdirectory(move_it_learning)
-- +++ processing catkin package: 'perception'
-- ==> add_subdirectory(perception)
-- Using these message generators: gencpp;genlisp;genpy
-- perception: 0 messages, 5 services
-- +++ processing catkin package: 'ur_driver'
-- ==> add_subdirectory(universal_robot/ur_driver)
-- +++ processing catkin package: 'ur_modern_driver'
-- ==> add_subdirectory(universal_robot/ur_modern_driver)
-- Using these message generators: gencpp;genlisp;genpy
-- +++ processing catkin package: 'ur10_moveit_config'
-- ==> add_subdirectory(universal_robot/ur10_moveit_config)
-- +++ processing catkin package: 'ur3_moveit_config'
-- ==> add_subdirectory(universal_robot/ur3_moveit_config)
-- +++ processing catkin package: 'ur5_moveit_config'
-- ==> add_subdirectory(universal_robot/ur5_moveit_config)
-- +++ processing catkin package: 'ur_kinematics'
-- ==> add_subdirectory(universal_robot/ur_kinematics)
-- Using these message generators: gencpp;genlisp;genpy
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   system
-- +++ processing catkin package: 'moveit_tutorials'
-- ==> add_subdirectory(moveit_tutorials)
-- Using these message generators: gencpp;genlisp;genpy
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   system
--   filesystem
--   date_time
--   thread
-- Eigen found (include: /usr/include/eigen3)
-- +++ processing catkin package: 'UR5_package'
-- ==> add_subdirectory(UR5_package)
-- Using these message generators: gencpp;genlisp;genpy
WARNING: Catkin package name "UR5_package" does not follow the naming conventions. It should start with a lower case letter and only contain lower case letters, digits, and underscores.
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
GENERATOR_LIB
    linked by target "gpd_classify_candidates" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
    linked by target "gpd_clustering" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
    linked by target "gpd_create_grasp_images" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
    linked by target "gpd_data_generator" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
    linked by target "gpd_detect_grasps" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
    linked by target "gpd_generate_candidates" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
    linked by target "gpd_grasp_detector" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
    linked by target "gpd_learning" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
    linked by target "gpd_learning" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
    linked by target "gpd_test_occlusion" in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
GENERATOR_LIB_INCLUDE_DIR
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd
   used as include directory in directory /home/correlllab/Jay_IS/UR5_ws/src/gpd

-- Configuring incomplete, errors occurred!
See also "/home/correlllab/Jay_IS/UR5_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/correlllab/Jay_IS/UR5_ws/build/CMakeFiles/CMakeError.log".
make: *** [cmake_check_build_system] Error 1
Invoking "make cmake_check_build_system" failed

Any ideas on how to fix them?

Thanks

[detect_grasps-1] process has died

I am a master's student at Kyoto University, Japan.
Thanks for your code.
I have built the code successfully and run tutorial0 successfully.
But when I try to launch tutorial1, the process dies.

I use a RealSense D435 as the depth camera; it generates a colored PointCloud2 with 640x480 points at 30 Hz.
I use ROS Kinetic.
I use Caffe, CPU_ONLY, with OpenCV 3.3.

I am not familiar with your code, so I need your help.
How should I fix this? Or should I change some parameters of the depth camera (too many points?)?

Thanks

process[detect_grasps-1]: started with pid [11863]
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0720 12:28:23.879094 11863 net.cpp:51] Initializing net from parameters:
name: "LeNet"
state {
phase: TEST
level: 0
}
layer {
name: "data"
type: "MemoryData"
top: "data"
top: "label"
memory_data_param {
batch_size: 100
channels: 15
height: 60
width: 60
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
convolution_param {
num_output: 20
kernel_size: 5
weight_filler {
type: "xavier"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
convolution_param {
num_output: 50
kernel_size: 5
weight_filler {
type: "xavier"
}
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool2"
top: "ip1"
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
inner_product_param {
num_output: 2
weight_filler {
type: "xavier"
}
}
}
I0720 12:28:23.879302 11863 layer_factory.hpp:77] Creating layer data
I0720 12:28:23.879349 11863 net.cpp:84] Creating Layer data
I0720 12:28:23.879359 11863 net.cpp:380] data -> data
I0720 12:28:23.879405 11863 net.cpp:380] data -> label
I0720 12:28:23.879451 11863 net.cpp:122] Setting up data
I0720 12:28:23.879465 11863 net.cpp:129] Top shape: 100 15 60 60 (5400000)
I0720 12:28:23.879470 11863 net.cpp:129] Top shape: 100 (100)
I0720 12:28:23.879473 11863 net.cpp:137] Memory required for data: 21600400
I0720 12:28:23.879482 11863 layer_factory.hpp:77] Creating layer conv1
I0720 12:28:23.879503 11863 net.cpp:84] Creating Layer conv1
I0720 12:28:23.879510 11863 net.cpp:406] conv1 <- data
I0720 12:28:23.879520 11863 net.cpp:380] conv1 -> conv1
I0720 12:28:23.879657 11863 net.cpp:122] Setting up conv1
I0720 12:28:23.879663 11863 net.cpp:129] Top shape: 100 20 56 56 (6272000)
I0720 12:28:23.879667 11863 net.cpp:137] Memory required for data: 46688400
I0720 12:28:23.879685 11863 layer_factory.hpp:77] Creating layer pool1
I0720 12:28:23.879691 11863 net.cpp:84] Creating Layer pool1
I0720 12:28:23.879695 11863 net.cpp:406] pool1 <- conv1
I0720 12:28:23.879700 11863 net.cpp:380] pool1 -> pool1
I0720 12:28:23.879722 11863 net.cpp:122] Setting up pool1
I0720 12:28:23.879727 11863 net.cpp:129] Top shape: 100 20 28 28 (1568000)
I0720 12:28:23.879730 11863 net.cpp:137] Memory required for data: 52960400
I0720 12:28:23.879734 11863 layer_factory.hpp:77] Creating layer conv2
I0720 12:28:23.879739 11863 net.cpp:84] Creating Layer conv2
I0720 12:28:23.879741 11863 net.cpp:406] conv2 <- pool1
I0720 12:28:23.879745 11863 net.cpp:380] conv2 -> conv2
I0720 12:28:23.879889 11863 net.cpp:122] Setting up conv2
I0720 12:28:23.879894 11863 net.cpp:129] Top shape: 100 50 24 24 (2880000)
I0720 12:28:23.879896 11863 net.cpp:137] Memory required for data: 64480400
I0720 12:28:23.879901 11863 layer_factory.hpp:77] Creating layer pool2
I0720 12:28:23.879905 11863 net.cpp:84] Creating Layer pool2
I0720 12:28:23.879909 11863 net.cpp:406] pool2 <- conv2
I0720 12:28:23.879914 11863 net.cpp:380] pool2 -> pool2
I0720 12:28:23.879918 11863 net.cpp:122] Setting up pool2
I0720 12:28:23.879922 11863 net.cpp:129] Top shape: 100 50 12 12 (720000)
I0720 12:28:23.879925 11863 net.cpp:137] Memory required for data: 67360400
I0720 12:28:23.879928 11863 layer_factory.hpp:77] Creating layer ip1
I0720 12:28:23.879933 11863 net.cpp:84] Creating Layer ip1
I0720 12:28:23.879936 11863 net.cpp:406] ip1 <- pool2
I0720 12:28:23.879940 11863 net.cpp:380] ip1 -> ip1
I0720 12:28:23.902318 11863 net.cpp:122] Setting up ip1
I0720 12:28:23.902357 11863 net.cpp:129] Top shape: 100 500 (50000)
I0720 12:28:23.902361 11863 net.cpp:137] Memory required for data: 67560400
I0720 12:28:23.902374 11863 layer_factory.hpp:77] Creating layer relu1
I0720 12:28:23.902382 11863 net.cpp:84] Creating Layer relu1
I0720 12:28:23.902386 11863 net.cpp:406] relu1 <- ip1
I0720 12:28:23.902392 11863 net.cpp:367] relu1 -> ip1 (in-place)
I0720 12:28:23.902417 11863 net.cpp:122] Setting up relu1
I0720 12:28:23.902421 11863 net.cpp:129] Top shape: 100 500 (50000)
I0720 12:28:23.902436 11863 net.cpp:137] Memory required for data: 67760400
I0720 12:28:23.902438 11863 layer_factory.hpp:77] Creating layer ip2
I0720 12:28:23.902443 11863 net.cpp:84] Creating Layer ip2
I0720 12:28:23.902464 11863 net.cpp:406] ip2 <- ip1
I0720 12:28:23.902468 11863 net.cpp:380] ip2 -> ip2
I0720 12:28:23.902484 11863 net.cpp:122] Setting up ip2
I0720 12:28:23.902488 11863 net.cpp:129] Top shape: 100 2 (200)
I0720 12:28:23.902492 11863 net.cpp:137] Memory required for data: 67761200
I0720 12:28:23.902496 11863 net.cpp:200] ip2 does not need backward computation.
I0720 12:28:23.902504 11863 net.cpp:200] relu1 does not need backward computation.
I0720 12:28:23.902508 11863 net.cpp:200] ip1 does not need backward computation.
I0720 12:28:23.902510 11863 net.cpp:200] pool2 does not need backward computation.
I0720 12:28:23.902514 11863 net.cpp:200] conv2 does not need backward computation.
I0720 12:28:23.902520 11863 net.cpp:200] pool1 does not need backward computation.
I0720 12:28:23.902523 11863 net.cpp:200] conv1 does not need backward computation.
I0720 12:28:23.902526 11863 net.cpp:200] data does not need backward computation.
I0720 12:28:23.902529 11863 net.cpp:242] This network produces output ip2
I0720 12:28:23.902534 11863 net.cpp:242] This network produces output label
I0720 12:28:23.902545 11863 net.cpp:255] Network initialization done.
I0720 12:28:23.910292 11863 net.cpp:744] Ignoring source layer label_data_1_split
I0720 12:28:23.911893 11863 net.cpp:744] Ignoring source layer ip2_ip2_0_split
I0720 12:28:23.911900 11863 net.cpp:744] Ignoring source layer accuracy
I0720 12:28:23.911901 11863 net.cpp:744] Ignoring source layer loss
[ INFO] [1532057303.926036328]: Waiting for point cloud to arrive ...
[ INFO] [1532057304.195380734]: Received cloud with 307200 points.
Processing cloud with: 307200 points.
Calculating surface normals ...
Using integral images for surface normals estimation ...
runtime (normals): 0.0737637
Reversing direction of normals that do not point to at least one camera ...
reversed 93044 normals
runtime (reverse normals): 0.00334309
After workspace filtering: 307169 points left.
After voxelization: 71237 points left.
Subsampled 100 at random uniformly.
Estimating local reference frames ...
Estimated 100 local reference frames in 0.0331791 sec.
Finding hand poses ...
Found 5 grasp candidate sets in 0.237061 sec.
====> HAND SEARCH TIME: 0.284676
[ INFO] [1532057304.623607567]: Generated 5 grasp candidate sets.
Creating grasp images for classifier input ...
time for computing 5 point neighborhoods with 0 threads: 0.0406995s
[detect_grasps-1] process has died [pid 11863, exit code -11, cmd /home/mastu2/catkin_ws/devel/lib/gpd/detect_grasps __name:=detect_grasps __log:=/home/mastu2/.ros/log/20bf3cba-8bcc-11e8-9b8d-e470b8cdc2dd/detect_grasps-1.log].
log file: /home/mastu2/.ros/log/20bf3cba-8bcc-11e8-9b8d-e470b8cdc2dd/detect_grasps-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete

Caffe_DIR in CMakeLists.txt

Hi,
nice work with the package.
I needed to change CAFFE_DIR to Caffe_DIR in the CMakeLists.txt in order to be able to link with Caffe.
Otherwise I get:
fatal error: caffe/layers/memory_data_layer.hpp: No such file or directory

I am using Ubuntu 14.04; not sure why the original does not work.

GraspConfigList to moveit_msgs/Grasp Message

Hi!
This is not a "pure" issue: I would like to ask how to use your work with the MoveIt "pick" action. For example, how can I translate a GraspConfigList into moveit_msgs/Grasp messages?
Many thanks!
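Not an official recipe, but the crux of such a conversion is orienting moveit_msgs/Grasp's grasp_pose from the three hand-frame vectors that gpd publishes (approach, binormal, axis). A ROS-free Python sketch of that step, with the column order treated as an assumption to verify against GraspConfig.msg:

```python
# Hypothetical sketch (not gpd code): the orientation part of a
# GraspConfig -> moveit_msgs/Grasp conversion. Assumes the hand frame's
# rotation matrix has columns [approach, binormal, axis]; verify this
# column order against GraspConfig.msg before relying on it.
import math

def hand_orientation(approach, binormal, axis):
    """Quaternion (x, y, z, w) for R = [approach | binormal | axis]."""
    # Build R row-wise: R[i][j] is component i of column j.
    cols = (approach, binormal, axis)
    R = [[cols[j][i] for j in range(3)] for i in range(3)]
    # Trace-based conversion; assumes w != 0 (rotation is not 180 degrees),
    # which keeps the sketch short.
    w = math.sqrt(max(0.0, 1.0 + R[0][0] + R[1][1] + R[2][2])) / 2.0
    x = (R[2][1] - R[1][2]) / (4.0 * w)
    y = (R[0][2] - R[2][0]) / (4.0 * w)
    z = (R[1][0] - R[0][1]) / (4.0 * w)
    return (x, y, z, w)

# A hand frame aligned with the world frame gives the identity quaternion.
q = hand_orientation((1, 0, 0), (0, 1, 0), (0, 0, 1))  # -> (0, 0, 0, 1)
```

grasp_pose.pose.position would come from the grasp's bottom point and the gripper opening from its width field; those names are likewise assumptions to check against the message definition.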

Running tutorial0.launch failed

When I run roslaunch gpd tutorial0.launch, it fails with the log below:
I0424 11:26:03.931042 32066 net.cpp:744] Ignoring source layer loss
Processing cloud with: 4467 points.
After workspace filtering: 4467 points left.
Subsampled 10 at random uniformly.
Calculating surface normals ...
camera: 0, #indices: 4467, #normals: 4467
runtime (normals): 0.112061
Reversing direction of normals that do not point to at least one camera ...
reversed 0 normals
runtime (reverse normals): 0.000105102
Estimating local reference frames ...
Estimated 10 local reference frames in 9.02019e-05 sec.
Finding hand poses ...
Found 7 grasp candidate sets in 0.0184582 sec.
====> HAND SEARCH TIME: 0.0204099
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 39
Current serial number in output stream: 40
[classify_grasp_candidates-2] process has died [pid 32066, exit code 1, cmd /home/ros/catkin_ws/devel/lib/gpd/classify_candidates __name:=classify_grasp_candidates __log:=/home/ros/.ros/log/35afa1f6-476f-11e8-99c2-00606e000b84/classify_grasp_candidates-2.log].
log file: /home/ros/.ros/log/35afa1f6-476f-11e8-99c2-00606e000b84/classify_grasp_candidates-2*.log

Has anyone encountered this issue, and how can it be solved?

Axis in last figure of tutorial 2 might be wrong

Hey,

I think the axes in the last image of tutorial2 are misleading. Usually RGB corresponds to the x, y, z axes, which would mean that approach, axis, and binormal correspond to x, y, z respectively. However, in the current implementation approach -> x, binormal -> y, and axis -> z. For the image to be correct, the axis should point upwards instead of downwards and be coloured blue, and the binormal should be painted green. I am just pointing this out because it gave me a lot of trouble to find the correct axes; I had to look in the gpg code to find out. On the other hand, very good job overall. I implemented this on a Fetch robot and it works like a charm; it can grasp almost any object on a table. Thank you!

Confused about some grasps

I run tutorial1.launch under ROS with a Kinect and visualize the grasps in RViz.
Some grasps always seem to grasp the "boundary" of the point cloud.
I am confused by this phenomenon and do not know how to resolve it.
Below is a picture I took under Ubuntu.


Memory Errors while using gpg

Hi,
I am a master's student at University of Pennsylvania.
I am exploring your package. I've installed gpg, along with the caffe version of gpd.
The gpd package tutorials work properly.

When I tried using the create_training_data launch files, it caused memory errors. On further investigation, I realized that class objects from gpg package were responsible.

For example, I tried a simple test file, say test1.cpp:

```cpp
// System
#include
// Custom
#include <gpg/point_list.h>

int main(int argc, char* argv[])
{
  PointList p(5, 1);
  return 0;
}
```

and modified the CMakeLists.txt as follows:

```cmake
add_executable(test1 src/tests/test1.cpp)
target_link_libraries(test1
  ${GENERATOR_LIB})
```

The above code causes segmentation faults. I checked that the constructor and destructor are called properly, and I use catkin build to compile the package. I believe this happens when the Eigen member variables are allocated memory (in the case of the above constructor call). If I create an object with the default constructor instead, e.g. PointList p;, which doesn't allocate any memory, the program exits properly.

The same happens for other objects like CloudCamera.
Also one more observation, same/similar test cases don't cause any problems with gpg.

I would like to request your help. Please correct me if I am making any mistakes. Thank you for your time.

Object detection?

Hi,
I have a question regarding object detection.
Do you perform object detection before applying grasp detection? If yes then could you tell me how?

gpd catkin_make error

I have installed caffe in '~/caffe', but caffe/caffe.hpp still cannot be found

In file included from /home/zhangky/gpd/src/gpd/src/gpd/caffe_classifier.cpp:1:0:
/home/zhangky/gpd/src/gpd/src/gpd/../../include/gpd/caffe_classifier.h:42:27: fatal error: caffe/caffe.hpp: No such file or directory
compilation terminated.
gpd/CMakeFiles/gpd_caffe_classifier.dir/build.make:62: recipe for target 'gpd/CMakeFiles/gpd_caffe_classifier.dir/src/gpd/caffe_classifier.cpp.o' failed
make[2]: *** [gpd/CMakeFiles/gpd_caffe_classifier.dir/src/gpd/caffe_classifier.cpp.o] Error 1
CMakeFiles/Makefile2:2184: recipe for target 'gpd/CMakeFiles/gpd_caffe_classifier.dir/all' failed

What should I do? Thanks a lot.

Plots appear blank for tutorial0.launch

Hey,

I managed to use this package with a point cloud from a camera and managed to see the top grasps in RViz. Great work! However, when I run for example tutorial0.launch and set the plotting params to true, I just get a blank white window for each plot. Why is this?

I am using the forward branch, and I also cannot see the file launch/openni2_15_channels.launch.

Clarification on what defines pose of hand

Hi,

I am trying to use the gpd package but am running into some confusion about how the pose of the hand is defined. I have spent a fair amount of time digging through the code but unfortunately I am still confused. Ultimately what I want to do is compute the grasp score (using the neural network) for a specific grasp defined by the pose of a transform located at the base of the hand. I am struggling to do this correctly.

It seems that the pose of a grasp is defined by its pose_ field (defined here). The orientation is specified in pose_.frame_, which makes sense. My understanding of the other fields is that

  • pose_.bottom_ is the base of the hand, centered.
  • pose_.top_ is between the fingertips, centered.

I think that these are expressed in world frame.

Now the full transform to the hand frame is defined by pose_.frame_ and also by the sample that is passed in during construction of the Grasp object. Then pose_.bottom_ etc. are defined relative to this transform (see code). Now my confusion is: what is this sample? What does it represent relative to the hand? I am further confused that, when you transform points into the hand frame to generate the images for the neural network, you apply the inverse of the transform with origin = sample and orientation = frame_. See here and here.

From my own plotting tools and inspecting the GraspConfig.msg messages that are published out it seems that sample can be pretty far from pose_.bottom_. So why does transforming the pointcloud with sample_ as the origin make sense?
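As a ROS-free illustration of the transform described in this issue (the inverse transform with origin = sample and orientation = frame_), not code taken from gpd:

```python
# Illustrative sketch, not gpd code: express a world-frame point in the hand
# frame via p_hand = R^T (p - sample), where the columns of R (here `frame`,
# a 3x3 nested list with frame[i][j] = component i of column j) are the hand
# axes (approach, binormal, axis) and `sample` is the frame's origin.
def to_hand_frame(point, sample, frame):
    d = [point[i] - sample[i] for i in range(3)]
    # R^T d: dot each column of `frame` (= each row of R^T) with d.
    return [sum(frame[i][j] * d[i] for i in range(3)) for j in range(3)]

# The sample itself maps to the hand-frame origin.
origin = to_hand_frame([1.0, 2.0, 3.0], [1.0, 2.0, 3.0],
                       [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # -> [0.0, 0.0, 0.0]
```

Under this reading, how far the sample lies from pose_.bottom_ only shifts where the image volume is centered, not the orientation of the hand frame.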

Catkin_make fails

Hey,

fatal error: caffe/caffe.hpp: No such file or directory
compilation terminated.

/home/aadityacr7/grasp_ws/src/gpd/src/gpd/../../include/gpd/caffe_classifier.h:42:27: fatal error: caffe/caffe.hpp: No such file or directory compilation terminated.

This seems to be the problem. I tried suggestions from other issues but was not able to figure it out.
caffe.hpp does exist, hence this seems to be a path problem.

Thank you in advance.
P.S. Sorry if it's a stupid question, I am an amateur.


Compiling with gcc 4.8

Was anyone with gcc 4.8 able to compile this package successfully? I was getting the following error on doing catkin_make:

...
Linking CXX shared library /devel/lib/libgpd_learning.so
lto1: internal compiler error: in splice_child_die, at dwarf2out.c:4706
Please submit a full bug report,
with preprocessed source if appropriate.
See file:///usr/share/doc/gcc-4.8/README.Bugs for instructions.
lto-wrapper: /usr/bin/c++ returned 1 exit status
/usr/bin/ld: lto-wrapper failed
collect2: error: ld returned 1 exit status
...

But when I switched to gcc 4.9, it compiled without any problems.

Training new model

Hello @atenpas ,
I recently started using GPD and the software works just awesome. I looked into the nodes for generating training data as well as grasp images, but did not find any node for creating a new model from the generated dataset. Is it released? I would like to know more about this.

catkin_make error 'undefined reference to...'

Following the instruction and catkin_make error occurs:
morgan@morgan-Z170X:~/cute_ws$ catkin_make
Base path: /home/morgan/cute_ws
Source space: /home/morgan/cute_ws/src
Build space: /home/morgan/cute_ws/build
Devel space: /home/morgan/cute_ws/devel
Install space: /home/morgan/cute_ws/install

Running command: "make cmake_check_build_system" in "/home/morgan/cute_ws/build"

Running command: "make -j8 -l8" in "/home/morgan/cute_ws/build"

...
[ 94%] Linking CXX executable /home/morgan/cute_ws/devel/lib/gpd/test_occlusion
[ 95%] Built target gpd_sequential_importance_sampling
[ 96%] Linking CXX executable /home/morgan/cute_ws/devel/lib/gpd/create_training_data
[ 96%] Linking CXX executable /home/morgan/cute_ws/devel/lib/gpd/detect_grasps
[ 96%] Linking CXX executable /home/morgan/cute_ws/devel/lib/gpd/classify_candidates
/home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to caffe::db::GetDB(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)' /home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to caffe::Datum::Datum()'
/home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to google::LogMessage::stream()' /home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to google::protobuf::MessageLite::SerializeToString(std::__cxx11::basic_string<char, std::char_traits, std::allocator >) const'
/home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to google::LogMessage::LogMessage(char const*, int)' /home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to caffe::CVMatToDatum(cv::Mat const&, caffe::Datum
)'
/home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to caffe::Datum::~Datum()' /home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to google::LogMessageFatal::~LogMessageFatal()'
/home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to google::LogMessage::~LogMessage()' /home/morgan/cute_ws/devel/lib/libgpd_data_generator.so: undefined reference to google::LogMessageFatal::LogMessageFatal(char const*, int)'
collect2: error: ld returned 1 exit status
gpd/CMakeFiles/gpd_create_training_data.dir/build.make:394: recipe for target '/home/morgan/cute_ws/devel/lib/gpd/create_training_data' failed
make[2]: *** [/home/morgan/cute_ws/devel/lib/gpd/create_training_data] Error 1
CMakeFiles/Makefile2:2568: recipe for target 'gpd/CMakeFiles/gpd_create_training_data.dir/all' failed
make[1]: *** [gpd/CMakeFiles/gpd_create_training_data.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/tmp/ccQCJ29Z.ltrans0.ltrans.o: In function main': <artificial>:(.text.startup+0x6f6): undefined reference to caffe::Caffe::Get()'
:(.text.startup+0x7ad): undefined reference to caffe::Net<float>::Net(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, caffe::Phase, int, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const*)' <artificial>:(.text.startup+0x7e7): undefined reference to caffe::Net::CopyTrainedLayersFrom(std::__cxx11::basic_string<char, std::char_traits, std::allocator >)'
:(.text.startup+0x8a2): undefined reference to caffe::Net<float>::layer_by_name(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const' <artificial>:(.text.startup+0xccc): undefined reference to caffe::Net::Forward(float*)'
:(.text.startup+0xdae): undefined reference to caffe::Net<float>::blob_by_name(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const' <artificial>:(.text.startup+0xe76): undefined reference to caffe::Blob::cpu_data() const'
:(.text.startup+0x1312): undefined reference to google::LogMessageFatal::LogMessageFatal(char const*, int, google::CheckOpString const&)' <artificial>:(.text.startup+0x131a): undefined reference to google::LogMessage::stream()'
:(.text.startup+0x1336): undefined reference to google::LogMessageFatal::~LogMessageFatal()' <artificial>:(.text.startup+0x163b): undefined reference to google::LogMessageFatal::LogMessageFatal(char const*, int, google::CheckOpString const&)'
:(.text.startup+0x1643): undefined reference to google::LogMessage::stream()' <artificial>:(.text.startup+0x165f): undefined reference to google::LogMessageFatal::~LogMessageFatal()'
:(.text.startup+0x18ce): undefined reference to google::LogMessageFatal::~LogMessageFatal()' <artificial>:(.text.startup+0x1a51): undefined reference to google::LogMessageFatal::~LogMessageFatal()'
/tmp/ccQCJ29Z.ltrans5.ltrans.o: In function caffe::Blob<float>::LegacyShape(int) const [clone .part.233] [clone .constprop.14]': <artificial>:(.text+0xa4a): undefined reference to google::LogMessageFatal::LogMessageFatal(char const*, int, google::CheckOpString const&)'
:(.text+0xa52): undefined reference to google::LogMessage::stream()' <artificial>:(.text+0xac8): undefined reference to google::LogMessageFatal::~LogMessageFatal()'
:(.text+0xc7b): undefined reference to google::LogMessageFatal::LogMessageFatal(char const*, int, google::CheckOpString const&)' <artificial>:(.text+0xc83): undefined reference to google::LogMessage::stream()'
:(.text+0xcf9): undefined reference to google::LogMessageFatal::~LogMessageFatal()' <artificial>:(.text+0xf98): undefined reference to google::LogMessageFatal::~LogMessageFatal()'
:(.text+0xfa0): undefined reference to google::LogMessageFatal::~LogMessageFatal()' /tmp/ccQCJ29Z.ltrans8.ltrans.o: In function std::__cxx11::basic_string<char, std::char_traits, std::allocator >* google::MakeCheckOpString<int, int>(int const&, int const&, char const*)':
:(.text+0x878): undefined reference to google::base::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)' <artificial>:(.text+0x88d): undefined reference to google::base::CheckOpMessageBuilder::ForVar2()'
:(.text+0x8a0): undefined reference to google::base::CheckOpMessageBuilder::NewString[abi:cxx11]()' <artificial>:(.text+0x8ab): undefined reference to google::base::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
<artificial>:(.text+0x8c2): undefined reference to google::base::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
collect2: error: ld returned 1 exit status
gpd/CMakeFiles/gpd_test_occlusion.dir/build.make:453: recipe for target '/home/morgan/cute_ws/devel/lib/gpd/test_occlusion' failed
make[2]: *** [/home/morgan/cute_ws/devel/lib/gpd/test_occlusion] Error 1
CMakeFiles/Makefile2:3265: recipe for target 'gpd/CMakeFiles/gpd_test_occlusion.dir/all' failed
make[1]: *** [gpd/CMakeFiles/gpd_test_occlusion.dir/all] Error 2
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::base::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::base::CheckOpMessageBuilder::~CheckOpMessageBuilder()' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Net::Net(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, caffe::Phase, int, std::vector<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::allocator<std::__cxx11::basic_string<char, std::char_traits, std::allocator > > > const*)'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::base::CheckOpMessageBuilder::ForVar2()' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Blob::cpu_data() const'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::LogMessage::stream()' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Net::CopyTrainedLayersFrom(std::__cxx11::basic_string<char, std::char_traits, std::allocator >)'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::LogMessageFatal::LogMessageFatal(char const*, int, google::CheckOpString const&)' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Caffe::Get()'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Net<float>::layer_by_name(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::LogMessageFatal::~LogMessageFatal()'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Net<float>::Forward(float*)' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::base::CheckOpMessageBuilder::NewStringabi:cxx11'
collect2: error: ld returned 1 exit status
gpd/CMakeFiles/gpd_detect_grasps.dir/build.make:440: recipe for target '/home/morgan/cute_ws/devel/lib/gpd/detect_grasps' failed
make[2]: *** [/home/morgan/cute_ws/devel/lib/gpd/detect_grasps] Error 1
CMakeFiles/Makefile2:2251: recipe for target 'gpd/CMakeFiles/gpd_detect_grasps.dir/all' failed
make[1]: *** [gpd/CMakeFiles/gpd_detect_grasps.dir/all] Error 2
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::base::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::base::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Net<float>::Net(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, caffe::Phase, int, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const*)' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::base::CheckOpMessageBuilder::ForVar2()'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Blob<float>::cpu_data() const' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::LogMessage::stream()'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Net<float>::CopyTrainedLayersFrom(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::LogMessageFatal::LogMessageFatal(char const*, int, google::CheckOpString const&)'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Caffe::Get()' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Net::layer_by_name(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) const'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::LogMessageFatal::~LogMessageFatal()' /home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to caffe::Net::Forward(float*)'
/home/morgan/cute_ws/devel/lib/libgpd_caffe_classifier.so: undefined reference to google::base::CheckOpMessageBuilder::NewString[abi:cxx11]()'
collect2: error: ld returned 1 exit status
gpd/CMakeFiles/gpd_classify_candidates.dir/build.make:461: recipe for target '/home/morgan/cute_ws/devel/lib/gpd/classify_candidates' failed
make[2]: *** [/home/morgan/cute_ws/devel/lib/gpd/classify_candidates] Error 1
CMakeFiles/Makefile2:3416: recipe for target 'gpd/CMakeFiles/gpd_classify_candidates.dir/all' failed
make[1]: *** [gpd/CMakeFiles/gpd_classify_candidates.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j8 -l8" failed
Thanks for your attention!

Reasoning behind using std_msgs wrappers?

Is there any particular reason why your project uses the std_msgs wrappers for data such as int64 instead of the primitives? The ROS docs have made it clear that these messages are for prototyping only and should be avoided. I've found it quite annoying to have to constantly convert in my project, and I thought about doing a PR with the changes.

cudnn error

I have installed the gpd package, but there is an error when I run roslaunch gpd tutorial0.launch:
I1212 20:48:11.585398 4397 layer_factory.hpp:77] Creating layer conv1
I1212 20:48:11.585474 4397 net.cpp:84] Creating Layer conv1
I1212 20:48:11.585495 4397 net.cpp:406] conv1 <- data
I1212 20:48:11.585522 4397 net.cpp:380] conv1 -> conv1
F1212 20:48:12.472157 4397 cudnn_conv_layer.cpp:53] Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR
*** Check failure stack trace: ***
[detect_grasps-1] process has died [pid 4397, exit code -6, cmd /home/customer/catkin_ws/devel/lib/gpd/detect_grasps __name:=detect_grasps __log:=/home/customer/.ros/log/f07d7768-df39-11e7-84f4-6cb3113f1d5b/detect_grasps-1.log].
log file: /home/customer/.ros/log/f07d7768-df39-11e7-84f4-6cb3113f1d5b/detect_grasps-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
How can I solve it? Thanks!

Transforming the value

Hello, I want to transform the values I get from gpd, like top/bottom, with respect to the world frame. How can I do that?

Doubts regarding camera positioning and orientation

Hi,
I am using a UR5 robot and a ZED camera mounted on a stand, inverted and facing downwards.
So the robot frame and the camera frame have different orientations.
According to camera_position in tutorial1.launch, we can provide only the position (x, y, z) of the camera, which I provided as [0.66, 0.18, 0.65] (in meters) from the robot base link (0, 0, 0).

So is it possible to work with the camera frame and robot frame in different orientations?

How is the score computed?

Hello,

Thank you for releasing this code, it is very nice ! However I wondered how the "score" of a grasp is computed. After a quick glance at the paper, I understand that the NN is trained on a binary classification problem, which means that it will output a 0-1 probability. Then, you mention a heuristic that prioritizes vertical grasps. Is the score outputted in say tutorial1 the output of this heuristic? I had a look at the code but it seems that you take the score directly out of the Caffe network.

Thanks again !
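For intuition only: a binary classifier's score is typically the positive-class output of its final layer, optionally pushed through a softmax. This is a generic two-class sketch, not gpd's actual scoring code, and the [negative, positive] output order is an assumption:

```python
# Generic sketch, not gpd code: probability of the "viable grasp" class
# from two-class logits. Assumes the final layer outputs
# [negative, positive] in that order.
import math

def positive_class_prob(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    return exps[1] / sum(exps)

p = positive_class_prob([0.0, 0.0])  # equal logits -> 0.5
```

If the published score is a raw network output rather than a softmax probability, it can lie outside [0, 1] while still ranking grasps consistently.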

to build "libgpd" common for ROS and ROS2

@atenpas, I'm doing some works to enable GPD for ROS2 which is the next generation ROS. After some investigation and prototyping, the approach I would propose is to have a common library "libgpd", plus the specific nodes for ROS and ROS2 respectively.

The changes are not big, which mainly include:

  • clean up "src/gpd" and make it ROS independent. Only two files affected (the constructor functions need rewrite to get ROS node parameters from nodehandle). Move ROS node parameters parsing into "src/nodes"
  • create "CMake" file to generate "libgpd" from "src/gpd"

What do you think? I can deliver a patch for this.

Tutorial0: Crashes on classify_grasp_candidates after generating grasp candidate

Hi,
After following the installation instructions, I tried running tutorial0 (roslaunch gpd tutorial0.launch), but I get an error about classify_grasp_candidates, as below. Any advice on fixing this?
I'm running ROS Indigo on Ubuntu 14.04.
Thanks!

[ INFO] [1500147078.647140787]: Generated 8 grasp candidate sets.
Creating grasp images for classifier input ...
time for computing 8 point neighborhoods with 4 threads: 0.00681678s
[classify_grasp_candidates-2] process has died [pid 5101, exit code -11

cmake error compiling grasp_pose_generator

Hi,

I ran into trouble when compiling gpg; see the screenshot for the error message. I successfully installed Caffe, but from the error message it looks like a PCL issue. I'm quite confused: should I build a stand-alone PCL package like pcl_ros in the current workspace and use it as a dependency for the gpg package, or would I be fine using ROS Indigo's built-in libpcl? Thanks in advance for the help.

Generate some strange grasps

@atenpas Hi, I run tutorial1.launch with ROS and a Kinect. I placed some objects on a tabletop, and gpd can generate grasps, but sometimes it generates some strange grasps (going directly through the tabletop). I am confused by this phenomenon and do not know how to resolve it.


Using the classifier with other grasp proposals

Hi,

Thank you for making the code available; I got it to work with our setup. I'm wondering if it's possible to use the NN classifier with grasp proposals generated by some other method; for example, from object segmentation and pre-defined grasps based on known object poses.

I tried to use pieces of code from gpd and got some to work, but I often run into a segmentation fault if the gripper is set too close to the point cloud.

catkin_make failed when I compiling GPD

Hi, Andreas ten Pas.
When I catkin_make my workspace, the build only reaches 10%,
and it cannot find the header 'caffe/caffe.hpp'.
I have read another questioner's experience and tried it;
however, it seems useless.
Could you kindly tell me how to solve it?
Thanks.


Package not compiling with catkin_make and catkin_build

I am not sure what the problem is, but when I try to compile using catkin_make or catkin build, at times the package does not get compiled at all. This creates a problem for me, as I am not able to compile the package in Docker. Any idea what causes this? How can it be solved?

-----------------------------------------------
Profile:                     default
Extending:             [env] /opt/ros/kinetic
Workspace:                   /catkin_ws
-----------------------------------------------
Source Space:       [exists] /catkin_ws/src
Log Space:         [missing] /catkin_ws/logs
Build Space:        [exists] /catkin_ws/build
Devel Space:        [exists] /catkin_ws/devel
Install Space:      [unused] /catkin_ws/install
DESTDIR:            [unused] None
-----------------------------------------------
Devel Space Layout:          linked
Install Space Layout:        None
-----------------------------------------------
Additional CMake Args:       None
Additional Make Args:        None
Additional catkin Make Args: None
Internal Make Job Server:    True
Cache Job Environments:      False
-----------------------------------------------
Whitelisted Packages:        None
Blacklisted Packages:        None
-----------------------------------------------
Workspace configuration appears valid.

NOTE: Forcing CMake to run for each package.
-----------------------------------------------
[build] No packages were found in the source space '/catkin_ws/src'
[build] No packages to be built.
[build] Package table is up to date.                                                                
Starting  >>> catkin_tools_prebuild                                                                 
Finished  <<< catkin_tools_prebuild                [ 1.1 seconds ]                                  
[build] Summary: All 1 packages succeeded!                                                          
[build]   Ignored:   None.                                                                          
[build]   Warnings:  None.                                                                          
[build]   Abandoned: None.                                                                          
[build]   Failed:    None.                                                                          
[build] Runtime: 1.1 seconds total.

Performance of Caffe vs CUDA?

Pretty self-explanatory: how does the Caffe version compare with the non-Caffe CUDA version in terms of runtime? I don't have Caffe installed, so I can't test it myself.

Grasp detection node returns incorrect gripper width aperture

Hello @atenpas ,
I have been working with GPD for the past few days. Awesome work for detecting antipodal grasp points. I have a couple of doubts. First, what is the exact difference between sequential importance sampling and normal sampling? (My observation was that sequential importance sampling creates a lot more grasps compared to normal sampling and also takes less time to classify them.)
Second, I got GPD running on my cropped object PCD, but the gripper aperture returned seems to be incorrect for the given object PCD. For example, if the width of the object is, say, 80 mm, the grasp detection node returns a width of, say, 30 mm every time. I have already modified hand_geometry.yaml with my gripper measurements but still do not get the correct gripper aperture. I need your assistance in debugging this issue.

Use without GPU

Hi!
And thank you for your wonderful work!
I followed the tutorials and I'm getting this error: Cannot use GPU in CPU-only Caffe: check mode.
I tried to change this value in lenet_solver_15_channels.prototxt
(solver_mode: CPU), but the error is still present.
Thank you!

Avoid specific orientations

Hey @atenpas.
Which parameters in the tutorial1.launch file could one change to avoid grasps where the binormal is in the vertical direction for instance?
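One option, independent of any launch-file parameter, is to post-filter the detected grasps; this is a hypothetical sketch, and the way the binormal is accessed here is an assumption rather than gpd's API:

```python
# Hypothetical post-filter, not gpd code: drop grasps whose binormal is
# close to the vertical axis. Each grasp is assumed to expose its unit
# binormal vector (e.g. read from a GraspConfig message).
def drop_vertical_binormal(grasps, up=(0.0, 0.0, 1.0), max_cos=0.7):
    kept = []
    for g in grasps:
        b = g["binormal"]
        # |cos(angle)| between binormal and up; 0.7 rejects within ~45 deg
        # of vertical (either sign).
        cos_angle = abs(sum(b[i] * up[i] for i in range(3)))
        if cos_angle < max_cos:
            kept.append(g)
    return kept
```

Tightening or loosening max_cos trades off how aggressively near-vertical binormals are rejected.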

How do I know the grasp speed?

Hi ten Pas,
After reading your paper "Grasp Pose Detection in Point Clouds", I still don't know: how fast is grasping in the real world?

camera_position

In the file tutorial1.launch, what is the camera_position's reference frame, and what does [0, 0, 0] mean? Also, when I change these parameters to [0.55, 1.0, -0.41, 0.03, -0.29, 1.0], I can't see any difference; why?
Before generating the grasps in point clouds, I used a program from the PCL library to remove the background from the object, but the effect is not very good. I want to know how you got the good results in your video. Would you share your code for separating the background from the object, and the UR5 grasping code?
How many cameras do you use in your video, and what are their positions?

Run GPD with CPU only?

Hi,

When switching to the valid grasps visualization in tutorial0, I get a caffe error.

Cannot use GPU in CPU-only Caffe: check mode.

I don't have access to a GPU. How do I go about this?
