This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.
This was an extremely educational project. I learned all about how to use ROS, the Unity simulator, how to gather training data, how to train an object detection model, and how to deploy this application of ML in the sim. The most challenging parts for me were around tuning the performance of the ROS environment such that it ran smoothly, finding a good computer to train the object detection model on, and gathering training data.
This project heavily utilizes ROS (Robot Operating System), a runtime environment that facilitates node management, inter-process communication, and message API generation.
The ROS nodes involved are:
- a waypoint loader (loads a set of static known waypoints in the area)
- a waypoint updater (dynamically sets the velocity associated with waypoints to achieve maneuvers like slowing down for a stop light)
- an object detection node (used to detect traffic lights)
- a waypoint follower (smoothly reaches target waypoint velocities specified by the waypoint loader and updater)
- a control node (attempts to minimize control error using PID controllers for steering and acceleration)
- a DBW (Drive By Wire) node which interfaces with the car hardware
- a simulator bridge (allows ROS to talk to the Unity3D simulator)
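The control node's steering and acceleration loops are classic PID controllers. A minimal sketch of such a controller is below; it is illustrative only (the gains and the `step` interface are hypothetical), and the real node wires outputs like this into DBW throttle/steer messages:

```python
class PID:
    """Minimal PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, min_out=float('-inf'), max_out=float('inf')):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min_out, self.max_out = min_out, max_out
        self.integral = 0.0
        self.last_error = None

    def step(self, error, dt):
        # Accumulate the integral term and estimate the derivative
        self.integral += error * dt
        derivative = 0.0 if self.last_error is None else (error - self.last_error) / dt
        self.last_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to actuator limits (e.g. throttle in [-1, 1])
        return max(self.min_out, min(self.max_out, out))

# Example: drive a 2 m/s velocity error toward zero at 50 Hz
pid = PID(kp=0.3, ki=0.1, kd=0.05, min_out=-1.0, max_out=1.0)
throttle = pid.step(error=2.0, dt=0.02)
```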
To learn about training an object detector, I read over the following blog posts and repos:
- blog post: https://medium.com/@anthony_sarkis/self-driving-cars-implementing-real-time-traffic-light-detection-and-classification-in-2017-7d9ae8df1c58
- the repo for that post: https://github.com/swirlingsand/deeper-traffic-lights/blob/master/object_detection_sim_run.ipynb

When it came to the practicalities of training the object detection model, the following repo really helped me avoid the gotchas: https://github.com/alex-lechner/Traffic-Light-Classification
I collected and labeled my own data using a combination of the simulator and labelImg.
Gather data from simulator:
- record unlabeled images from simulator (done)
- rosrun image_view image_saver _sec_per_frame:=1 image:=/image_color
- use labelimg to label images (done)
- downloaded from here https://github.com/tzutalin/labelImg
- write Python script to convert to TensorFlow training Examples
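labelImg saves annotations in Pascal VOC XML. A sketch of the first half of that conversion script, parsing one annotation file into boxes and labels, is below; the TFRecord-writing half uses TensorFlow's `tf.train.Example` API and is omitted, and the filename and label names here are made up for illustration:

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    """Parse a Pascal VOC annotation (as produced by labelImg) into
    (filename, [(label, xmin, ymin, xmax, ymax), ...])."""
    root = ET.fromstring(xml_text)
    filename = root.findtext('filename')
    boxes = []
    for obj in root.iter('object'):
        label = obj.findtext('name')
        bb = obj.find('bndbox')
        boxes.append((label,
                      int(bb.findtext('xmin')), int(bb.findtext('ymin')),
                      int(bb.findtext('xmax')), int(bb.findtext('ymax'))))
    return filename, boxes

# Example annotation (structure matches labelImg's VOC output)
sample = """<annotation>
  <filename>frame0001.jpg</filename>
  <object><name>red</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>50</xmax><ymax>90</ymax></bndbox>
  </object>
</annotation>"""
name, boxes = parse_voc_annotation(sample)
```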
Steps involved in training
- Configure the model at traffic_light/ssd_inception_v2_coco.config to use appropriate label map, training data set and validation data set
- Use the training script provided by https://github.com/tensorflow/models/tree/master/research/ to train the model
- Freeze the model so that it can be used for inference
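For reference, the parts of the pipeline config that step one touches look roughly like this; the paths and the class count are placeholders standing in for this project's label map and TFRecord files:

```
model {
  ssd {
    num_classes: 3  # e.g. red / yellow / green
    ...
  }
}
train_config {
  fine_tune_checkpoint: "ssd_inception_v2_coco/model.ckpt"  # pretrained COCO weights
  ...
}
train_input_reader {
  label_map_path: "traffic_light/label_map.pbtxt"
  tf_record_input_reader { input_path: "traffic_light/train.record" }
}
eval_input_reader {
  label_map_path: "traffic_light/label_map.pbtxt"
  tf_record_input_reader { input_path: "traffic_light/eval.record" }
}
```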
Gotchas: I attempted several times to train the SSD model on AWS with a GPU instance, but could never produce the magic incantation necessary to get it to run. After many wasted hours, I went back and trained the model on the Udacity sim with the GPU enabled.
- As indicated in Alex's Github repo, the TensorFlow object detection API is a good choice for detecting traffic lights. It offers a very declarative way to do transfer learning which saves on a ton of training time: https://github.com/tensorflow/models/tree/master/research/object_detection#tensorflow-object-detection-api
- Code from the object detection subproject was modified to do inference within ROS
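Inside the ROS detection node, the detector's raw output (class IDs and confidence scores) must be reduced to a single light state for the waypoint updater. A sketch of that post-processing step is below; the class-ID mapping, labels, and the 0.5 threshold are illustrative assumptions, not the project's exact values:

```python
# Hypothetical class-ID -> label mapping for the trained detector
LABELS = {1: 'red', 2: 'yellow', 3: 'green'}

def light_state(classes, scores, threshold=0.5):
    """Return the label of the highest-confidence detection above the
    threshold, or 'unknown' if nothing confident was detected."""
    best = None
    for cls, score in zip(classes, scores):
        if score >= threshold and (best is None or score > best[1]):
            best = (cls, score)
    return LABELS.get(best[0], 'unknown') if best else 'unknown'

# A confident red detection wins over a weak green one
state = light_state([1, 3], [0.91, 0.40])
```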
Please use one of the two installation options, either native or docker installation.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:
- 2 CPU
- 2 GB system memory
- 25 GB of free hard drive space
The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.
- Follow these instructions to install ROS
- ROS Kinetic if you have Ubuntu 16.04.
- ROS Indigo if you have Ubuntu 14.04.
- Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
- Download the Udacity Simulator.
Build the docker container
docker build . -t capstone
Run the docker container
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
To set up port forwarding, please refer to the instructions from term 2
- Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
- Install python dependencies
cd CarND-Capstone
pip install -r requirements.txt
- Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
- Run the simulator
- Download the training bag that was recorded on the Udacity self-driving car.
- Unzip the file
unzip traffic_light_bag_file.zip
- Play the bag file
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
- Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch
- Confirm that traffic light detection works on real life images