
Vehicle Detection Project

The goals / steps of this project are the following:

  • Perform a Histogram of Oriented Gradients (HOG) feature extraction on a labeled training set of images and train a Linear SVM classifier.
  • Optionally, you can also apply a color transform and append binned color features, as well as histograms of color, to your HOG feature vector.
  • Note: for those first two steps don't forget to normalize your features and randomize a selection for training and testing.
  • Implement a sliding-window technique and use your trained classifier to search for vehicles in images.
  • Run your pipeline on a video stream (start with the test_video.mp4 and later implement on full project_video.mp4) and create a heat map of recurring detections frame by frame to reject outliers and follow detected vehicles.
  • Estimate a bounding box for vehicles detected.

How I completed the pipeline

I completed this pipeline in three steps:

The first step: initial completion of the pipeline

    1. Load the training data using glob; the data includes car and non-car images.
    1. Show some example images.
    1. Use scikit-image to extract Histogram of Oriented Gradients (HOG) features. The documentation for this function can be found here, and a brief explanation of the algorithm and a tutorial can be found here. Then test it.
    1. Use np.histogram to get histograms of color, then test it.
    1. Get the spatially binned features, then test it. (Minimal sketches of these two feature helpers appear after this list.)
    1. Define single_img_features to get a single image's features; the feature types can be toggled via spatial_feat=True, hist_feat=True, hog_feat=True.
    1. Define a function extract_features to extract features from a list of images; it uses the single_img_features function defined before.
    1. Define the parameters used to train a Linear SVM classifier.
    1. Train the Linear SVM classifier.
    1. Implement a sliding window search.
    1. Explore a more efficient method for the sliding window approach, one that only has to extract the HOG features once. This method is defined in find_cars, in which we must use the same parameters as when training the Linear SVM classifier, such as cspace, hog_channel, spatial_feat, hist_feat, hog_feat, and others.
    1. Create a heat map and reject outliers.
    1. Estimate a bounding box for each detected vehicle using label(), and show the results.
    1. Finish the pipeline.
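For reference, here is a minimal sketch of the color-histogram and spatial-binning helpers described above; the names bin_spatial and color_hist follow the usual convention, and the exact versions live in the notebook:

import cv2
import numpy as np

def bin_spatial(img, size=(32, 32)):
    # Downsample the image and flatten it into a 1-D feature vector.
    return cv2.resize(img, size).ravel()

def color_hist(img, nbins=32, bins_range=(0, 256)):
    # Histogram each color channel separately, then concatenate.
    ch1 = np.histogram(img[:, :, 0], bins=nbins, range=bins_range)[0]
    ch2 = np.histogram(img[:, :, 1], bins=nbins, range=bins_range)[0]
    ch3 = np.histogram(img[:, :, 2], bins=nbins, range=bins_range)[0]
    return np.concatenate((ch1, ch2, ch3))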

The second step: Parameter Tuning

This is an iterative process:

    1. Test on the test images.
    1. Change the parameters, such as color_space, hog_channel, spatial_feat, hist_feat, hog_feat, and the multi-scale windows.
    1. Repeat, as in this pseudocode:

    if car detection is fine:
        go to the third step
    else:
        go back to the second step
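A minimal sketch of what this tuning loop might look like, assuming a hypothetical helper train_and_score that extracts features with the given parameters, trains the SVM, and returns its test accuracy (the parameter values are the ones tried in the tables below):

from itertools import product

# train_and_score is a hypothetical helper, not a function from the notebook.
best_acc, best_params = 0.0, None
for color_space, orient, pix_per_cell in product(['YUV', 'YCrCb'], [9, 10, 11], [8, 16]):
    acc = train_and_score(color_space=color_space, orient=orient,
                          pix_per_cell=pix_per_cell, cell_per_block=2,
                          hog_channel='ALL')
    if acc > best_acc:
        best_acc, best_params = acc, (color_space, orient, pix_per_cell)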

The third step:

    1. Run the pipeline on a video stream (both test_video.mp4 and project_video.mp4).

Rubric Points

Here I will consider the rubric points individually and describe how I addressed each point in my implementation.


Writeup / README

1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf.

You're reading it! I am submitting my writeup as markdown.

Histogram of Oriented Gradients (HOG)

1. Explain how (and identify where in your code) you extracted HOG features from the training images.

I used scikit-image to extract Histogram of Oriented Gradients (HOG) features. The documentation for this function can be found here, and a brief explanation of the algorithm and a tutorial can be found here. The code for this is:

def get_hog_features(img, orient=9, pix_per_cell=8, cell_per_block=2,
                     vis=False, feature_vec=True):
    """
    Return the HOG features and, if vis is True, a visualization image.
    """
    # Note: `visualise` is the keyword in older scikit-image versions;
    # newer versions spell it `visualize`.
    if vis:
        hog_features, hog_image = hog(img, orientations=orient,
                                      pixels_per_cell=(pix_per_cell, pix_per_cell),
                                      cells_per_block=(cell_per_block, cell_per_block),
                                      visualise=vis, feature_vector=feature_vec,
                                      transform_sqrt=False,
                                      block_norm="L2-Hys")
        return hog_features, hog_image
    else:
        hog_features = hog(img, orientations=orient,
                           pixels_per_cell=(pix_per_cell, pix_per_cell),
                           cells_per_block=(cell_per_block, cell_per_block),
                           visualise=vis, feature_vector=feature_vec,
                           transform_sqrt=False,
                           block_norm="L2-Hys")
        return hog_features
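A minimal usage sketch, assuming a single-channel image gray:

# Extract features plus a visualization image for one channel.
features, hog_image = get_hog_features(gray, orient=9, pix_per_cell=8,
                                       cell_per_block=2, vis=True)

# Extract only the flattened feature vector.
features = get_hog_features(gray, vis=False, feature_vec=True)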

I started by reading in all the vehicle and non-vehicle images. Here is an example of one of each of the vehicle and non-vehicle classes:

(example vehicle and non-vehicle images)

Here are examples of a HOG feature, a color-histogram feature, and a spatially binned feature (example images shown in the notebook).

2. Explain how you settled on your final choice of HOG parameters.

After trying many combinations of parameters, I chose the combination that gave the highest test accuracy with the SVM classifier:

| hog_feat | spatial_feat | hist_feat | color_space | orient | pix_per_cell | cell_per_block | hog_channel | Test Accuracy |
|----------|--------------|-----------|-------------|--------|--------------|----------------|-------------|---------------|
| True     | False        | False     | YUV         | 9      | 16           | 2              | ALL         | 0.9752        |
| True     | False        | False     | YUV         | 11     | 16           | 2              | ALL         | 0.9789        |
| True     | True         | True      | YUV         | 11     | 16           | 2              | ALL         | 0.9724        |
| True     | True         | True      | YUV         | 11     | 8            | 2              | ALL         | 0.9769        |
| True     | True         | True      | YUV         | 10     | 8            | 2              | ALL         | 0.9828        |
| True     | True         | True      | YCrCb       | 10     | 8            | 2              | ALL         | 0.9840        |
| True     | True         | True      | YCrCb       | 11     | 8            | 2              | ALL         | 0.9901        |

Finally, the chosen parameters are:

| hog_feat | spatial_feat | hist_feat | color_space | orient | pix_per_cell | cell_per_block | hog_channel | Test Accuracy |
|----------|--------------|-----------|-------------|--------|--------------|----------------|-------------|---------------|
| True     | True         | True      | YCrCb       | 11     | 8            | 2              | ALL         | 0.9901        |

3. Describe how (and identify where in your code) you trained a classifier using your selected HOG features (and color features if you used them).

  • I trained a linear SVM using HOG features, spatial features, and histogram features. The code is in vehicle_detection.ipynb.
  • Features are extracted in the function extract_features.
  • The data is split into training and test sets with a test fraction of 0.2.
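A minimal sketch of this training step, assuming car_features and notcar_features are the outputs of extract_features for each class:

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Stack car and non-car feature vectors and build the labels.
X = np.vstack((car_features, notcar_features)).astype(np.float64)
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))

# Normalize the features.
X_scaler = StandardScaler().fit(X)
scaled_X = X_scaler.transform(X)

# Randomized 80/20 train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    scaled_X, y, test_size=0.2, random_state=42)

svc = LinearSVC()
svc.fit(X_train, y_train)
print('Test accuracy:', svc.score(X_test, y_test))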

Sliding Window Search

1. Describe how (and identify where in your code) you implemented a sliding window search. How did you decide what scales to search and how much to overlap windows?

I do not know in advance what size a car will be, nor where it will appear in the image. From some tests on the test images I found that a car far from the camera appears small, so I first used sliding windows at two sizes:

  1. y from 350 to 500 with scaling factor 1 (example image)
  2. y from 400 to 656 with scaling factor 1.5 (example image)
  3. Combine the two sets of windows (example image)

Finally, I combined three sets of windows, as below:

    ystart = 300, ystop = 400, scale = 0.8
    ystart = 350, ystop = 500, scale = 1.0
    ystart = 350, ystop = 656, scale = 1.5
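A minimal sketch of looping find_cars over these three configurations; this mirrors the process_image function shown later, and find_cars and its parameters are defined in the notebook:

# The three (ystart, ystop, scale) window configurations listed above.
search_configs = [(300, 400, 0.8), (350, 500, 1.0), (350, 656, 1.5)]

all_boxes = []
for ystart, ystop, scale in search_configs:
    _, boxes = find_cars(img, ystart, ystop, scale, colorspace, hog_channel,
                         svc, X_scaler, orient, pix_per_cell, cell_per_block,
                         spatial_size, hist_bins, color_feat)
    all_boxes.extend(boxes)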

2. Show some examples of test images to demonstrate how your pipeline is working. What did you do to optimize the performance of your classifier?

The following images show how my pipeline works. To optimize the performance of my classifier:

  1. I ran many experiments and chose the best parameters.
  2. I used multi-scale windows, which improve performance.
  3. I used three kinds of features: HOG features, spatial features, and histogram features.

Video Implementation

1. Provide a link to your final video output.

Here's a link to my test video result.

Here's a link to my project video result.

2. Describe how (and identify where in your code) you implemented some kind of filter for false positives and some method for combining overlapping bounding boxes.

  1. I created a heatmap and then thresholded that map to identify vehicle positions. I then used scipy.ndimage.measurements.label() to identify individual blobs in the heatmap. I then assumed each blob corresponded to a vehicle. I constructed bounding boxes to cover the area of each blob detected.
    Here is an image showing this flow (example image).

and here is the final code to process a single image:

def process_image(img):
    boxes_list = []

    # First scale: smaller windows near the horizon.
    ystart = 380
    ystop = 500
    scale = 1
    out_img, boxes = find_cars(img, ystart, ystop, scale, colorspace, hog_channel,
                               svc, X_scaler,
                               orient, pix_per_cell, cell_per_block,
                               spatial_size, hist_bins, color_feat)
    boxes_list.append(boxes)

    # Second scale: larger windows closer to the camera.
    ystart = 400
    ystop = 656
    scale = 1.5
    out_img, boxes = find_cars(img, ystart, ystop, scale, colorspace, hog_channel,
                               svc, X_scaler,
                               orient, pix_per_cell, cell_per_block,
                               spatial_size, hist_bins, color_feat)
    boxes_list.append(boxes)

    # Flatten the per-scale box lists into one list.
    boxes_list = [item for sublist in boxes_list for item in sublist]

    # Make a heat map from all detections (note: use the flattened
    # boxes_list here, not just the last scale's boxes).
    heat_img = np.zeros_like(img[:, :, 0]).astype(np.float64)
    heat_img = add_heat(heat_img, boxes_list)
    heat_img = apply_threshold(heat_img, 2)

    # Use label() to find the distinct boxes.
    labels = label(heat_img)
    draw_img = draw_labeled_bboxes(np.copy(img), labels)
    return draw_img
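For reference, a minimal sketch of the add_heat and apply_threshold helpers used above, following the usual Udacity-lesson versions (the actual implementations are in the notebook):

def add_heat(heatmap, bbox_list):
    # Add 1 inside every detection box ((x1, y1), (x2, y2)).
    for box in bbox_list:
        heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
    return heatmap

def apply_threshold(heatmap, threshold):
    # Zero out pixels with too few overlapping detections.
    heatmap[heatmap <= threshold] = 0
    return heatmap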
  1. When processing the images in the videos, I accumulate heat maps over recent frames in def process_image(img, is_video), scaling the threshold with the number of accumulated maps:

from collections import deque

# Keep the heat maps of the most recent 4 frames.
heatmaps = deque(maxlen=4)
...
# inside process_image(img, is_video):
heat_img = add_heat(heat_img, boxes_list)
if is_video:
    global heatmaps
    heatmaps.append(heat_img)
    combined = sum(heatmaps)
    # The threshold grows with the number of accumulated heat maps;
    # the deque holds at most 4 frames, so the fallback of 5 is unreachable.
    threshold = {1: 1, 2: 2, 3: 2, 4: 3}.get(len(heatmaps), 5)
    heat_img = apply_threshold(combined, threshold)
else:
    heat_img = apply_threshold(heat_img, 1)
  1. Also, cars on the other side of the road do not need to be detected, because there is a green belt in the middle of the road, so I restrict the x search range:
    xstart = 500
    xstop = 1280
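A minimal sketch of running the pipeline over a video, assuming moviepy is installed (the exact cell is in the notebook):

from moviepy.editor import VideoFileClip

clip = VideoFileClip("project_video.mp4")
# fl_image applies the frame-processing function to every frame.
out_clip = clip.fl_image(lambda frame: process_image(frame, is_video=True))
out_clip.write_videofile("project_video_output.mp4", audio=False)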

Discussion

1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?

  1. I think we should use some filtering algorithm, such as Kalman filters, to connect objects across frames in videos.
  2. The training data may not be large enough. On the one hand, we can collect more data; on the other hand, we can combine traditional detection algorithms with the SVM.
