
robond-perception-sensorstick's Introduction

Euclidean Clustering with ROS and PCL (Ex1 / Ex2)

A simple stick robot with an RGB-D camera attached to its head via a pan-tilt joint is placed in front of the table. For the detailed steps of how to carry out this exercise, please see the Clustering for Segmentation lesson in the RoboND classroom.

Here's a brief summary of how to get set up for the exercise:

  1. First of all, copy/move the sensor_stick package to the src/ directory of your active ROS workspace.

  2. Make sure you have all the dependencies resolved by using the rosdep install tool and running catkin_make:

$ cd ~/catkin_ws
$ rosdep install --from-paths src --ignore-src --rosdistro=kinetic -y
$ catkin_make
  3. Add the following lines to your .bashrc file:
export GAZEBO_MODEL_PATH=~/catkin_ws/src/sensor_stick/models

source ~/catkin_ws/devel/setup.bash
  4. Test the simulation setup by launching the Gazebo environment. The command below will open a Gazebo world along with an RViz window.
$ roslaunch sensor_stick robot_spawn.launch
  5. After RViz and Gazebo are online, cd to the location of segmentation.py and start it:
$ ./segmentation.py

To complete the exercise, build a perception pipeline that performs the following steps:

  1. Create a Python ROS node that subscribes to the /sensor_stick/point_cloud topic. Use the template.py file found under /sensor_stick/scripts/ to get started.

  2. Use your code from Exercise-1 to apply various filters and segment the table using RANSAC.

  3. Create publishers and topics to publish the segmented table and tabletop objects as separate point clouds.

  4. Apply Euclidean clustering to the tabletop objects (after table segmentation is successful).

  5. Create an XYZRGB point cloud such that each cluster obtained from the previous step has its own unique color.

  6. Finally, publish your colored cluster cloud on a separate clusters topic.

You will find the pcl_helper.py file under /sensor_stick/scripts. It contains various functions to help you build up your perception pipeline.
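
As an illustration only, here is a minimal sketch of what such a pipeline callback might look like, assuming the python-pcl bindings and the helpers in pcl_helper.py (ros_to_pcl, pcl_to_ros, XYZRGB_to_XYZ, rgb_to_float, get_color_list). The leaf size, passthrough limits and cluster thresholds are placeholder values you would tune yourself, and the publisher names are hypothetical ones you would create in your node's main section:

import pcl
from pcl_helper import *  # ros_to_pcl, pcl_to_ros, XYZRGB_to_XYZ, rgb_to_float, get_color_list

def pcl_callback(pcl_msg):
    # Convert the incoming ROS PointCloud2 message to a PCL cloud
    cloud = ros_to_pcl(pcl_msg)

    # Voxel grid downsampling (leaf size is an illustrative value)
    vox = cloud.make_voxel_grid_filter()
    vox.set_leaf_size(0.01, 0.01, 0.01)
    cloud_filtered = vox.filter()

    # PassThrough filter to keep only the region around the table top
    passthrough = cloud_filtered.make_passthrough_filter()
    passthrough.set_filter_field_name('z')
    passthrough.set_filter_limits(0.6, 1.1)
    cloud_filtered = passthrough.filter()

    # RANSAC plane segmentation: inliers are the table, outliers are the objects
    seg = cloud_filtered.make_segmenter()
    seg.set_model_type(pcl.SACMODEL_PLANE)
    seg.set_method_type(pcl.SAC_RANSAC)
    seg.set_distance_threshold(0.01)
    inliers, coefficients = seg.segment()
    cloud_table = cloud_filtered.extract(inliers, negative=False)
    cloud_objects = cloud_filtered.extract(inliers, negative=True)

    # Euclidean clustering on a spatial (XYZ-only) copy of the object cloud
    white_cloud = XYZRGB_to_XYZ(cloud_objects)
    tree = white_cloud.make_kdtree()
    ec = white_cloud.make_EuclideanClusterExtraction()
    ec.set_ClusterTolerance(0.05)
    ec.set_MinClusterSize(100)
    ec.set_MaxClusterSize(3000)
    ec.set_SearchMethod(tree)
    cluster_indices = ec.Extract()

    # Give every cluster its own color and rebuild a single XYZRGB cloud
    cluster_color = get_color_list(len(cluster_indices))
    colored_points = []
    for j, indices in enumerate(cluster_indices):
        for i in indices:
            colored_points.append([white_cloud[i][0], white_cloud[i][1],
                                   white_cloud[i][2], rgb_to_float(cluster_color[j])])
    cluster_cloud = pcl.PointCloud_PointXYZRGB()
    cluster_cloud.from_list(colored_points)

    # Publish the table, the objects, and the colored clusters
    # (publishers created in __main__; the names here are just examples)
    pcl_table_pub.publish(pcl_to_ros(cloud_table))
    pcl_objects_pub.publish(pcl_to_ros(cloud_objects))
    pcl_cluster_pub.publish(pcl_to_ros(cluster_cloud))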

Object Recognition with Python, ROS and PCL (Ex. 3)

In this exercise, you will continue building up your perception pipeline in ROS. Here you are provided with a very simple Gazebo world, where you can extract color and shape features from the objects that sat on the table in Exercise-1 and Exercise-2, in order to train a classifier to detect them.

Setup

  • If you completed Exercises 1 and 2 you will already have a sensor_stick folder in your ~/catkin_ws/src directory. You should replace that folder with the sensor_stick folder contained in this repository and add the Python script you wrote for Exercise-2 to the scripts directory.

  • If you do not already have a sensor_stick directory, first copy/move the sensor_stick folder to the ~/catkin_ws/src directory of your active ROS workspace.

  • Make sure you have all the dependencies resolved by using the rosdep install tool and running catkin_make:

$ cd ~/catkin_ws
$ rosdep install --from-paths src --ignore-src --rosdistro=kinetic -y
$ catkin_make
  • If it's not already there, add the following lines to your .bashrc file
export GAZEBO_MODEL_PATH=~/catkin_ws/src/sensor_stick/models
source ~/catkin_ws/devel/setup.bash

Preparing for training

Launch the training.launch file to bring up the Gazebo environment:

$ roslaunch sensor_stick training.launch

You should see an empty scene in Gazebo with only the sensor stick robot.

Capturing Features

Next, in a new terminal, run the capture_features.py script to capture and save features for each of the objects in the environment. This script spawns each object in random orientations (default 5 orientations per object) and computes features based on the point clouds resulting from each of the random orientations.

$ rosrun sensor_stick capture_features.py

The features will now be captured and you can watch the objects being spawned in Gazebo. It should take 5-10 seconds for each random orientation (depending on your machine's resources), so with 7 objects in total it takes a while to complete. When it finishes running, you should have a training_set.sav file.
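
For reference, the heart of capture_features.py is roughly the loop sketched below (a condensed, illustrative version based on the description above; spawn_model, delete_model, capture_sample and get_normals are helpers provided with the package, and models is the object list defined in the script):

import pickle
import numpy as np
from sensor_stick.features import compute_color_histograms, compute_normal_histograms

labeled_features = []
for model_name in models:                      # the objects defined in the script
    spawn_model(model_name)                    # place the model in Gazebo
    for i in range(5):                         # default: 5 random orientations per object
        sample_cloud = capture_sample()        # grab a point cloud snapshot
        chists = compute_color_histograms(sample_cloud)
        nhists = compute_normal_histograms(get_normals(sample_cloud))
        labeled_features.append([np.concatenate((chists, nhists)), model_name])
    delete_model()                             # remove it before spawning the next object

# Saved at the end as the training_set.sav file mentioned above
pickle.dump(labeled_features, open('training_set.sav', 'wb'))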

Training

Once your feature extraction has successfully completed, you're ready to train your model. First, however, if you don't already have them, you'll need to install the sklearn and scipy Python packages. You can install these using pip:

pip install sklearn scipy

After that, you're ready to run the train_svm.py script to train an SVM classifier on your labeled set of features.

$ rosrun sensor_stick train_svm.py
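
Under the hood, the training step amounts to loading the pickled features, fitting a classifier and saving it. The following is a minimal sketch of that idea with scikit-learn; the file names match the ones used in this exercise, but everything else (kernel choice, cross-validation, the layout of the saved dictionary) is illustrative rather than a description of the provided train_svm.py:

import pickle
import numpy as np
from sklearn import svm
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score

# Load the features captured in the previous step
training_set = pickle.load(open('training_set.sav', 'rb'))
X = np.array([sample[0] for sample in training_set])
encoder = LabelEncoder()
y = encoder.fit_transform([sample[1] for sample in training_set])

# Fit a linear SVM and sanity-check it with 5-fold cross-validation
clf = svm.SVC(kernel='linear')
print('CV accuracy: %.3f' % cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)

# Save the trained model for the recognition node to load; the exact
# dictionary layout of model.sav in the provided script may differ
pickle.dump({'classifier': clf, 'label_encoder': encoder}, open('model.sav', 'wb'))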

Note: If you run this exercise out of the box, your classifier will perform poorly because the functions compute_color_histograms() and compute_normal_histograms() (within features.py in /sensor_stick/src/sensor_stick) just generate random junk. Fix them so they produce meaningful features, then train your classifier!
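
As a hint only, one reasonable way to make compute_color_histograms() produce meaningful features is to bin each color channel, concatenate the bins and normalize the result; compute_normal_histograms() can follow the same pattern over the three components of the surface normals. The bin count and range below are illustrative choices:

import numpy as np
import sensor_msgs.point_cloud2 as pc2
from pcl_helper import float_to_rgb  # adjust the import to wherever pcl_helper lives in your tree

def compute_color_histograms(cloud, using_hsv=False):
    # Collect the color of every point, channel by channel
    channels = [[], [], []]
    for point in pc2.read_points(cloud, skip_nans=True):
        color = float_to_rgb(point[3])
        if using_hsv:
            # assumes the rgb_to_hsv() helper already present in features.py,
            # which returns a NumPy array of [h, s, v] in [0, 1]
            color = rgb_to_hsv(color) * 255
        for i in range(3):
            channels[i].append(color[i])

    # Bin each channel, concatenate, and normalize so overall scale doesn't matter
    hists = [np.histogram(c, bins=32, range=(0, 256))[0] for c in channels]
    features = np.concatenate(hists).astype(np.float64)
    return features / np.sum(features)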

Classifying Segmented Objects

If everything went well you now have a trained classifier and you're ready to do object recognition! First you have to build out your node for segmenting your point cloud. This is where you'll bring in your code from Exercises 1 and 2.

Make yourself a copy of the template.py file in the sensor_stick/scripts/ directory and call it something like object_recognition.py. Inside this file you'll find all the TODOs from Exercises 1 and 2, and you can simply copy and paste your code from the previous exercises in there.

The new code you need to add is listed under the Exercise-3 TODOs in the pcl_callback() function. You'll also need to add some new publishers for outputting your detected object clouds and label markers. For step-by-step instructions on what to add in these Exercise-3 TODOs, see the lesson in the classroom.
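
As a rough orientation for those Exercise-3 TODOs, the recognition part of pcl_callback() usually loops over the clusters, computes the same color and normal histogram features used during training, predicts a label, and publishes the results. In the sketch below, cluster_indices, white_cloud and cloud_objects are assumed to come from your Exercise-2 code, clf and the label encoder from model.sav loaded in the main section, and the publisher and helper names (object_markers_pub, detected_objects_pub, make_label, DetectedObject) follow the conventions used in the lesson:

detected_objects = []
detected_objects_labels = []
for index, pts_list in enumerate(cluster_indices):
    # Grab the points belonging to this cluster and convert them back to a ROS message
    pcl_cluster = cloud_objects.extract(pts_list)
    ros_cluster = pcl_to_ros(pcl_cluster)

    # Compute the same color + normal histogram features used in training
    # (use the same settings as in capture_features.py)
    chists = compute_color_histograms(ros_cluster)
    nhists = compute_normal_histograms(get_normals(ros_cluster))
    feature = np.concatenate((chists, nhists))

    # Predict the label and place a text marker just above the cluster in RViz
    prediction = clf.predict(feature.reshape(1, -1))
    label = encoder.inverse_transform(prediction)[0]
    detected_objects_labels.append(label)
    label_pos = list(white_cloud[pts_list[0]])
    label_pos[2] += 0.4
    object_markers_pub.publish(make_label(label, label_pos, index))

    # Collect the labeled cluster for the detected-objects topic
    do = DetectedObject()
    do.label = label
    do.cloud = ros_cluster
    detected_objects.append(do)

rospy.loginfo('Detected {} objects: {}'.format(len(detected_objects_labels), detected_objects_labels))
detected_objects_pub.publish(detected_objects)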

Object recognition

$ roslaunch sensor_stick robot_spawn.launch

In another terminal, make your script executable and run it:

$ chmod +x object_recognition.py
$ ./object_recognition.py
