
Autonomous-Truck

An autonomous driving system built to drive a truck in a truck-driving simulator. It uses a TensorFlow implementation of NVIDIA's End-to-End Deep Learning for Self-Driving Cars. To run this program you must have TensorFlow-GPU 1.15 (TensorFlow 2.x is not supported); follow the TensorFlow installation guides for Windows 10 or Ubuntu.

Installation

Clone the repository and cd into the directory, then install the requirements and clone pyvjoy into Autonomous-Truck:

git clone https://github.com/greerviau/Autonomous-Truck.git && cd Autonomous-Truck
pip install -r requirements.txt
git clone https://github.com/tidzo/pyvjoy

Download and install vJoy from http://vjoystick.sourceforge.net/site/index.php/download-a-install/download
Navigate to the x86 folder inside your vJoy installation, copy vJoyInterface.dll, and paste it into the pyvjoy directory.


Usage

Game Settings

Make sure the game detects the gamepad.
Make sure the Controller subtype is set to Gamepad, joystick.
Use 1280x720 resolution in game.
Try to use the highest graphics settings possible while still being able to run the program effectively (this will take some fine-tuning).
With the gamepad plugged in, set the Steering axis to Joy X Axis.
Set the Acceleration and Brake axes to Joy RY Axis; this will be converted to Joy Y Rotation when using Keyboard + vJoy Device as input. Set the Acceleration axis mode to Centered and the Brake axis mode to Inverted and Centered. (These modes let the autopilot accelerate and brake; you do not have to use the Y axis for data collection.)
Bind Light Modes to L.
Bind Roof Camera to the controller B button and the P key.
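The Centered and Inverted-and-Centered modes above mean the autopilot drives both pedals through a single axis. A minimal sketch of that mapping (the axis range, center value, and function name are assumptions based on vJoy's usual 0..32768 span, not the project's code):

```python
# Illustrative sketch, not the project's code: combine throttle and
# brake into one centered axis. Axis range and center are assumptions.
AXIS_MIN, AXIS_MAX = 0, 32768
AXIS_CENTER = (AXIS_MIN + AXIS_MAX) // 2  # resting position of the axis

def pedals_to_axis(throttle: float, brake: float) -> int:
    """Map throttle and brake (each 0.0-1.0) onto one centered axis.

    Throttle pushes the axis above center, brake pulls it below, so a
    single Y-rotation axis can carry both pedals for the autopilot.
    """
    throttle = max(0.0, min(1.0, throttle))
    brake = max(0.0, min(1.0, brake))
    offset = (throttle - brake) * (AXIS_MAX - AXIS_CENTER)
    return int(round(AXIS_CENTER + offset))
```

With both pedals released the axis sits at center; full throttle or full brake pushes it to either extreme, which is what the Centered axis modes expect.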


Data Collection

Recording Data

For collecting data, make sure the input is set to Keyboard + XInput Gamepad 1 and the Controller subtype is set to Gamepad, joystick.
To collect data, run python3 collect_data.py <session>. Make sure to specify a different session for every execution.
While recording, use the B button to start a new recording; this will not create a new session but will instead split the recording into a new clip.

Use this feature to start recording new clips of desired data; it makes data cleaning easier. For example, press B before changing lanes and press B again after the lane change is finished. This creates three clips: one before the lane change, one of the lane change itself, and a final one that continues recording the rest of the drive. Then, during data cleaning, simply delete the clip of the lane change.
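The B-button workflow above amounts to splitting one recorded session at the frames where the button was pressed. An illustrative sketch (the names here are not from the project's source):

```python
def split_into_clips(frames, toggle_indices):
    """Split one recorded session into clips, starting a new clip at
    each frame index where the new-clip button was pressed."""
    clips, start = [], 0
    for i in sorted(toggle_indices):
        if start < i:
            clips.append(frames[start:i])
        start = i
    clips.append(frames[start:])
    return clips

# Pressing B at frames 100 and 150 around a lane change yields three
# clips; the middle one (the lane change itself) can then be deleted.
```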

Recording sessions will be saved to data/roof_cam/raw/<session>/<clips>.

Cleaning

If additional cleaning is required, run python3 clip_video.py raw/<session>/<clip>. While the video is playing, press q to set a keyframe. Once the video has finished playing, the program will split the video along the keyframes and save the results to data/roof_cam/raw/<session>/<clips>/<splits>.

Then simply move the clips that you want to keep to data/roof_cam/raw/<session> and discard the rest.


Preprocessing

Once the data has been cleaned, run python3 preprocess.py. This will preprocess all of the clips in data/roof_cam/raw. The subfolders of this directory must follow the structure <session>/<clips>, with the mp4 and csv files within.

This will save the preprocessed data to data/roof_cam/processed. The data will be divided into sessions, but the clips will be aggregated into X.npy and Y.csv.

There will also be a total aggregate of all sessions as X.npy and Y.csv
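The aggregation step can be pictured as stacking the per-clip arrays end to end. A minimal sketch, assuming each clip was preprocessed into a frames array and a matching labels array (variable names are illustrative, not the project's):

```python
import numpy as np

def aggregate_clips(clip_arrays, clip_labels):
    """Stack per-clip frame arrays and label arrays into one X and Y."""
    X = np.concatenate(clip_arrays, axis=0)
    Y = np.concatenate(clip_labels, axis=0)
    assert len(X) == len(Y), "every frame needs a label"
    return X, Y
```

The combined X would then be written with np.save and Y written out as a csv, matching the X.npy / Y.csv files the script produces.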


Train Steering Model

After preprocessing, open train_conv_net.py. Make sure to specify the SAVE_PATH for the model as well as the hyperparameters.
Run python3 train_conv_net.py to train the model.
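The NVIDIA end-to-end network this model is based on pushes a 66x200 input through five valid-padding convolutional layers before the dense layers. A quick sketch of how the feature map shrinks (layer sizes follow the NVIDIA paper; this repo's exact hyperparameters may differ):

```python
def conv_out(size, kernel, stride):
    """Output size of a valid-padding convolution along one dimension."""
    return (size - kernel) // stride + 1

def pilotnet_feature_shape(h=66, w=200):
    # (kernel, stride, filters) per conv layer, as in the NVIDIA paper:
    # three 5x5 stride-2 layers followed by two 3x3 stride-1 layers.
    layers = [(5, 2, 24), (5, 2, 36), (5, 2, 48), (3, 1, 64), (3, 1, 64)]
    for k, s, f in layers:
        h, w = conv_out(h, k, s), conv_out(w, k, s)
    return h, w, f  # feature map handed to the fully connected layers
```

For the paper's 66x200 input this ends at a 1x18x64 feature map (1152 values) going into the dense layers.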


Train Digit Recognition

To train the digit recognition model used for monitoring speed, run python3 train_digit_rec.py


Train Brake Prediction Model

To train the conv net for brake prediction, run python3 train_brake_net.py


Testing

Open your game and, in the gameplay settings, set your input to Keyboard + vJoy Device.
If vJoy is not detected, run python3 detect_vjoy_ingame.py while the game is open; it should then prompt you to use vJoy as a controller. As with the Xbox controller, make sure the Controller subtype is set to Gamepad, joystick.

In test_autopilot.py, specify the CONV_NET_MODEL directory for your saved model, and specify whether you want to record data from the test. Run python3 test_autopilot.py; to record data from the test, specify the session as a command-line argument, e.g. python3 test_autopilot.py sess_01. Data will be saved to data/roof_cam/raw_autonomous
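On the output side, the model's steering prediction has to be translated into a vJoy axis position for the game to read. A hypothetical sketch (the prediction range, axis range, and scaling are assumptions, not the project's code):

```python
# Illustrative: scale a steering prediction in [-1.0, 1.0] onto the
# vJoy X axis bound as the steering wheel. Range values are assumptions.
AXIS_MIN, AXIS_MAX = 0, 32768

def steering_to_axis(prediction: float) -> int:
    """Map a model output in [-1, 1] to a vJoy axis value, clamping
    anything outside the expected range."""
    prediction = max(-1.0, min(1.0, prediction))
    span = AXIS_MAX - AXIS_MIN
    return int(round((prediction + 1.0) / 2.0 * span + AXIS_MIN))
```

A prediction of 0 leaves the axis centered (wheel straight); -1 and +1 map to full left and full right lock.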

Once the program is running, open the game (two monitors make it easier to watch the program while testing). Get your truck onto the highway and up to a reasonable speed. Press B on your controller or P on your keyboard to engage the autopilot; if your button bindings are set up correctly, this will also switch to the roof camera. While the system is running you still have control over steering with the keyboard A and D keys.

LB and RB activate left and right lane changes, respectively.
You can also disengage the system with the keyboard W and S keys, which allows for disengagement on human throttle or brake.


Notes

  • For data collection and cleaning: removing data of lane changes and odd outliers drastically improves model performance. This system is essentially meant to be an advanced lane assist with additional features, so removing data that is not lane keeping is ideal.
  • For data collection and testing: using the same truck also improves performance. In my testing I bought the cheapest Peterbilt truck and used it for both data collection and testing, because different trucks have different roof heights, which changes the height of the roof camera. Alternatively, you could collect data from a large enough sample of trucks so that your model generalizes across varying roof camera heights. I attempted this and it did work; however, results are still better if you use the same truck for all of your training and testing.
  • Load on the truck seems to affect the performance of the system if it is not trained on a robust enough dataset. Essentially, since the system does not know whether the truck is under load, its predictions do not change accordingly; however, load may affect how quickly a miscalculation can be corrected. Based on testing, this effect is minuscule.
