
donkeycar's Introduction

Donkeycar: a python self driving library


Donkeycar is a minimalist and modular self-driving library for Python. It is developed for hobbyists and students with a focus on allowing fast experimentation and easy community contributions. It is being actively used at the high school and university level for learning and research. It offers a rich graphical interface and includes a simulator so you can experiment with self-driving even before you build a robot.


Use Donkeycar if you want to:

  • Build a robot and teach it to drive itself.
  • Experiment with autopilots, GPS, computer vision and neural networks.
  • Compete in self driving races like DIY Robocars, including online simulator races against competitors from around the world.
  • Participate in a vibrant online community learning cutting-edge technology and having fun doing it.

What do you need to know before starting? (TL;DR nothing)

Donkeycar is designed to be the 'Hello World' of autonomous driving; it is simple yet flexible and powerful. No specific prerequisite knowledge is required, but it helps if you have some knowledge of:

  • Python programming. You do not have to do any programming to use Donkeycar. The file that you edit to configure your car, myconfig.py, is a Python file. You mostly just uncomment the sections you want to change and edit them; you can avoid common mistakes if you know how Python comments and indentation work.
  • Raspberry Pi. The Raspberry Pi is the preferred on-board computer for a Donkeycar. It is helpful to have set up and used a Raspberry Pi, but it is not necessary. The Donkeycar documentation describes how to install the software on Raspberry Pi OS, but the specifics of installing Raspberry Pi OS with Raspberry Pi Imager and configuring the Raspberry Pi with raspi-config are left to the Raspberry Pi documentation, which is extensive and quite good. I would recommend setting up your Raspberry Pi using the Raspberry Pi documentation and then playing with it a little: use the browser to visit websites and watch YouTube videos, like this one taken at the very first outdoor race for a Donkeycar; use a text editor to write and save a file; open a terminal and learn how to navigate the file system (see below). If you are comfortable with the Raspberry Pi, then you won't have to learn it and Donkeycar at the same time.
  • The Linux command line shell. The command line shell is also often called the terminal. You will type commands into the terminal to install and start the Donkeycar software. The Donkeycar documentation describes how this works. It is also helpful to know how to navigate the file system and how to list, copy and delete files and directories/folders. You may also access your car remotely, so you will want to know how to enable and connect to WIFI and how to start an SSH terminal or VNC session from your host computer to get a command line on your car.
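The handful of shell commands below covers most of what the documentation assumes. This is a generic sketch run in a scratch directory, not a Donkeycar-specific procedure; the hostname in the commented ssh line is an example.

```shell
cd "$(mktemp -d)"                             # change into a scratch directory
mkdir -p mycar/data                           # create a directory (and any parents)
echo "CAMERA = 'PICAM'" > mycar/myconfig.py   # write a file
ls -l mycar                                   # list files with details
cp mycar/myconfig.py mycar/myconfig.bak       # copy a file
rm mycar/myconfig.bak                         # delete a file
# ssh pi@donkeypi.local                       # (example hostname) get a terminal on the car over WIFI
```
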

Get driving.

After building a Donkeycar and installing the Donkeycar software, you can choose your autopilot template, calibrate your car, and get driving!

Modify your car's behavior.

Donkeycar includes a number of pre-built templates that make it easy to get started by just changing configuration. The pre-built templates may be all you ever need, but if you want to go further you can change a template or make your own. A Donkeycar template is organized as a pipeline of software parts that run in order on each pass through the vehicle loop, reading inputs and writing outputs to the vehicle's software memory as they run. A typical car has parts that:

  • Get images from a camera. Donkeycar supports lots of different kinds of cameras, including 3D cameras and lidar.
  • Get position readings from a GPS receiver.
  • Get steering and throttle inputs from a game controller or RC controller. Donkeycar supports PS3, PS4, Xbox, WiiU, Nimbus and Logitech Bluetooth game controllers, and any game controller that works with the Raspberry Pi. Donkeycar also implements a web UI that allows any browser-compatible game controller to be connected, and offers an on-screen touch controller that works with phones.
  • Control the car's drivetrain motors for acceleration and steering. Donkeycar supports various drivetrains including the ESC/Steering-servo configuration that is common to most RC cars and Differential Drive configurations.
  • Save telemetry data such as camera images, steering and throttle inputs, lidar data, etc.
  • Drive the car on autopilot. Donkeycar supports three kinds of autopilots: a deep-learning autopilot, a GPS autopilot and a computer vision autopilot. The deep-learning autopilot supports TensorFlow, TensorFlow Lite and PyTorch, and many model architectures.

If there isn't a Donkeycar part that does what you want then write your own part and add it to a vehicle template.

# Define a vehicle to take and record pictures 10 times per second.

import time
from donkeycar import Vehicle
from donkeycar.parts.cv import CvCam
from donkeycar.parts.tub_v2 import TubWriter

V = Vehicle()

IMAGE_W = 160
IMAGE_H = 120
IMAGE_DEPTH = 3

# Add a camera part
cam = CvCam(image_w=IMAGE_W, image_h=IMAGE_H, image_d=IMAGE_DEPTH)
V.add(cam, outputs=['image'], threaded=True)

# Warm up the camera
while cam.run() is None:
    time.sleep(1)

# Add a tub part to record images
tub = TubWriter(path='./dat', inputs=['image'], types=['image_array'])
V.add(tub, inputs=['image'], outputs=['num_records'])

# Start the drive loop at 10 Hz
V.start(rate_hz=10)

See home page, docs or join the Discord server to learn more.


donkeycar's Issues

Create a maneuvers library.

Vehicles have common maneuvers they can use without input from the pilot. For example:

  1. Reverse 3 ft.
  2. Pass right (a fast maneuver that could be used to shortcut turns).

JSON decoder error on startup, fixed by restarting server

Occasionally when starting drive.py on the Pi, I would see the following error:

pi@raspberrypi:~/donkey $ python scripts/drive.py --remote http://172.20.10.5:8887
Detected running on rasberrypi. Only importing select modules.
Using TensorFlow backend.
center: 410
PiVideoStream loaded.. .warming camera
/usr/lib/python3/dist-packages/picamera/encoders.py:544: PiCameraResolutionRounded: frame size rounded up from 160x120 to 160x128
width, height, fwidth, fheight)))
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/home/pi/donkey/donkey/remotes.py", line 79, in update
    self.state['milliseconds'],)
  File "/home/pi/donkey/donkey/remotes.py", line 140, in decide
    data = json.loads(r.text)
  File "/usr/lib/python3.4/json/__init__.py", line 318, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.4/json/decoder.py", line 343, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.4/json/decoder.py", line 361, in raw_decode
    raise ValueError(errmsg("Expecting value", s, err.value)) from None
ValueError: Expecting value: line 1 column 1 (char 0)

I've not managed to get repro steps; I found that restarting the Docker container and then retrying the drive.py script with the same arguments worked.

Try normalizing the steering angle

I think it might help to normalize the steering angle. Maybe just divide by 90 before training - and multiply by 90 on the output from predict. Or pass -1 to 1 through the steering control.
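A minimal sketch of the suggestion above: scale degrees into [-1, 1] before training and scale back after predict. Function names here are illustrative, not part of the donkey codebase.

```python
# Hypothetical normalization helpers for steering angles.

def normalize_angle(angle_deg, max_deg=90.0):
    """Scale a steering angle in degrees to the range [-1, 1], clamped."""
    return max(-1.0, min(1.0, angle_deg / max_deg))

def denormalize_angle(angle_norm, max_deg=90.0):
    """Scale a model output in [-1, 1] back to degrees."""
    return angle_norm * max_deg
```

Passing -1 to 1 through the steering control (the alternative mentioned above) would simply skip the denormalize step on the way out.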

Timeout error with drive.py script, same script works in jupyter

When I run drive.py --remote, the connection to the server never completes. When I Ctrl-C, I see this traceback showing it's getting stuck when requests tries to make a connection.

If I open jupyter notebook and use the same drive script, it works.

Maybe this has something to do with cached IP addresses in bash?

^CTraceback (most recent call last):
  File "scripts/drive.py", line 57, in <module>
    car.start()
  File "/home/pi/donkey/donkey/vehicles.py", line 33, in start
    milliseconds)
  File "/home/pi/donkey/donkey/remotes.py", line 60, in decide
    'json': json.dumps(data)}) #hack to put json in file 
  File "/usr/lib/python3/dist-packages/requests/api.py", line 94, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 362, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 308, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.4/http/client.py", line 1090, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.4/http/client.py", line 1128, in _send_request
    self.endheaders(body)
  File "/usr/lib/python3.4/http/client.py", line 1086, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python3.4/http/client.py", line 924, in _send_output
    self.send(msg)
  File "/usr/lib/python3.4/http/client.py", line 859, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 154, in connect
    conn = self._new_conn()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 133, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 78, in create_connection
    sock.connect(sa)
KeyboardInterrupt

Limit ranges of angle and speed in car monitor.

Currently the user can make the angle and speed values go well beyond the range the car can implement. This makes them hard to correct once they are way out of range. This could be fixed by only increasing/decreasing a value while it is below the max / above the min.
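One way this could look in code: clamp every incoming control value so the monitor can never push angle or speed out of range. The limit constants here are illustrative placeholders, not real donkey config values.

```python
# Illustrative limits; a real car would take these from its config.
MIN_ANGLE, MAX_ANGLE = -45, 45
MIN_SPEED, MAX_SPEED = 0, 100

def clamp(value, lo, hi):
    """Keep value inside [lo, hi]."""
    return max(lo, min(hi, value))

def apply_controls(angle, speed):
    """Return the (angle, speed) pair the car is actually allowed to use."""
    return clamp(angle, MIN_ANGLE, MAX_ANGLE), clamp(speed, MIN_SPEED, MAX_SPEED)
```
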

Error starting drive.py

I've tested my motors, using the Adafruit example scripts, and everything is working.

When I get to the last step of starting the remote control, this is the response:

Traceback (most recent call last):
  File "scripts/drive.py", line 32, in <module>
    mythrottlecontroller = dk.actuators.PCA9685_Controller(cfg['throttle_actuator_channel'])
  File "/home/bob/donkey/donkey/actuators.py", line 34, in __init__
    self.pwm = Adafruit_PCA9685.PCA9685()
  File "/home/bob/donkey/env/lib/python3.4/site-packages/Adafruit_PCA9685/PCA9685.py", line 75, in __init__
    self.set_all_pwm(0, 0)
  File "/home/bob/donkey/env/lib/python3.4/site-packages/Adafruit_PCA9685/PCA9685.py", line 111, in set_all_pwm
    self._device.write8(ALL_LED_ON_L, on & 0xFF)
  File "/home/bob/donkey/env/lib/python3.4/site-packages/Adafruit_GPIO/I2C.py", line 114, in write8
    self._bus.write_byte_data(self._address, register, value)
  File "/home/bob/donkey/env/lib/python3.4/site-packages/Adafruit_PureIO/smbus.py", line 236, in write_byte_data
    self._device.write(data)
OSError: [Errno 5] Input/output error

Allow models to be trained off of multiple datasets.

It's currently impossible to train a model on several datasets. You can get around this by creating a single dataset out of the sessions you need and training the model on that bigger dataset. Training on datasets too big for memory creates some new problems:

  • How can you shuffle the data so that each batch is representative?
  • How do you create a generator that loops through the datasets?

Users should only need to list the datasets to use in the train.py or explore.py script.
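Both problems above can be sketched with an interleaving generator: index every record across all datasets, reshuffle the index each epoch, and yield fixed-size batches forever. Dataset loading is faked with in-memory lists here; a real version would read records lazily from disk.

```python
import random

def multi_dataset_generator(datasets, batch_size, seed=0):
    """Yield shuffled batches drawn from all datasets, looping forever."""
    rng = random.Random(seed)
    # Flat index of (dataset_number, record_number) pairs across all datasets.
    indices = [(d, i) for d, ds in enumerate(datasets) for i in range(len(ds))]
    while True:
        rng.shuffle(indices)                     # reshuffle each epoch
        for start in range(0, len(indices), batch_size):
            batch = indices[start:start + batch_size]
            yield [datasets[d][i] for d, i in batch]
```

Because only the index tuples are shuffled, memory use stays proportional to the number of records, not their size.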

Create VehicleState class to generalize the data record/retrieve functions.

At the next race (June 16th), several of us plan to implement lidar + odometry on Donkey2 cars. However, the current donkey software doesn't support adding additional inputs, since the record/decide functions are hardcoded (angle, steering, img_arr). A generalized state class would allow the cars to support data from additional sensors and allow flexible data retrieval (e.g. images from the last 10 frames).

This VehicleState class would act like a ROSBag and could save data like this:

state = VehicleState(save_to='/path/to/session/')
state.put('image', img_arr)
state.put('throttle', throttle)
state.put('angle', angle)

VehicleState saves data to disk depending on the type of data. 1- or 3-channel arrays (images) would be saved as jpg images. Single-value data (speed, throttle) would be saved to a csv file that contains a list of all values in the format:

key, value, time
throttle, .23, 12:32:32
angle, -.1, 12:32:33
image, /path/to/image, 12:32:33
throttle, .23, 12:32:35
angle, -.1, 12:32:36
image, /path/to/image, 12:32:37

VehicleState also saves recent data in memory in a first-in-first-out queue (ring queue). This will be used by pilots using recurrent networks that need several frames of data.

This is how you'd create a state that saves the last 4 values of each variable.

state = VehicleState(memory=4)

You could then retrieve the last variable values like this:

img_arr, throttle, angle = state.get(['image', 'throttle', 'angle'])

Since the variables are not being recorded at the same time the state class would need to interpolate the different data sets to create a tabular output that Keras / Tensorflow needs.
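The in-memory ring-queue part of the proposal maps naturally onto collections.deque with a maxlen. The class and method names below follow the proposal, but this is a sketch, not donkey code; disk persistence and interpolation are omitted.

```python
import time
from collections import deque

class VehicleState:
    """Keeps the last `memory` values of each variable in a ring queue."""

    def __init__(self, memory=4):
        self.memory = memory
        self.data = {}

    def put(self, key, value):
        # deque(maxlen=N) silently drops the oldest entry when full (FIFO).
        q = self.data.setdefault(key, deque(maxlen=self.memory))
        q.append((time.time(), value))

    def get(self, keys):
        """Return the most recent value recorded for each key."""
        return [self.data[k][-1][1] for k in keys]
```

A recurrent-network pilot would read the whole deque instead of just the last element.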

Standardize the command line conventions

--datasets - comma separated list of datasets
--sessions - comma separated list of sessions
--loops - how many times to try
--name - name of created model, dataset, or results

Create default Pi disk image.

This image would provide the default folder structure, settings and could optionally include keras/tensorflow and opencv.

Refactor server to control car instead of the monitor.

Currently the way a vehicle is controlled is by loading a webpage served by the vehicle's Raspberry Pi. This makes it impossible to control from far away because the Pi does not have a static IP address.

To fix this, a remote server can act as a proxy between the user and the vehicle. The remote server serves the page for the user controls and the vehicle constantly sends and receives updates from the server.

Code changes:

  • Move monitor code to the server

Allow RC control of car (for training or driving around)

As I understand it, RemoteClient already sends angle and throttle data when making the request to the Tornado server. I guess it would be feasible to get these values from an RC receiver (whose driver needs implementing) and pass them to the server.

Web: enable auto pilot switching.

Currently the only way to change autopilots is by restarting the server using different CLI variables. This is slow and doesn't facilitate quick iteration.

A faster way to test pilots would be to switch them from the control webpage. To do this, the following would need to be changed:

  • remotes.py - change pilots from a singleton to a dictionary, include model and description
  • monitor.html - add pilots control.
  • main.js - add ajax to update current pilot.

Question about class DifferentialDriveMixer

I would like to study the source code.
I have a question about the class DifferentialDriveMixer.
I am confused about the code below:

l_speed = ((self.left_motor.speed + throttle)/3 - angle/5)
r_speed = ((self.right_motor.speed + throttle)/3 + angle/5)

I want to know why '3' and '5' are used.
What do they stand for? Are they empirical values?

Create donkey-admin script to run from CL

This attempts to separate the core donkey library and users vehicle configurations.

When a user runs pip install donkeycar, the donkey-admin.py script will be added to the PATH. This will then allow the user to run commands like donkey-admin makecar mydonkey to create a folder ~/mydonkey that contains all the config files to run the car.

A command can be added during the pip process like this: http://python-packaging.readthedocs.io/en/latest/command-line-scripts.html
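The packaging hook described at that link is a console_scripts entry point, which puts a `donkey-admin` command on the PATH at pip install time. The module path and function name below are placeholders for illustration, not the actual donkey layout.

```python
# setup.py sketch: expose `donkey-admin` as a console command.
from setuptools import setup

setup(
    name='donkeycar',
    version='0.0.0',
    entry_points={
        'console_scripts': [
            # `donkey-admin makecar mydonkey` would dispatch to this function.
            'donkey-admin = donkeycar.management.admin:main',
        ],
    },
)
```
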

Docker instance does not update git repo when started.

People who use the docker image to start the server don't use an updated version of the git repo. The repo can be updated manually by running

bash start-server.sh -d

and then running the following inside the docker instance.

git pull origin master

but it would be better if this updated automatically, assuming we can keep the master branch free of conflicts. @yconst do you know how to do this?

Make demo to show how to calibrate your vehicle on startup.

Currently if a vehicle is turned on while a PWM throttle signal is being pulsed, the vehicle calibrates that signal as zero. Then, when the PWM throttle value goes to 0, the car will go in reverse.

The demo script should show how the vehicle should be initialized to ensure that it's calibrated.

Error while driving. Adafruit servo hat write error.

After driving for 10 minutes, the car stopped responding and updating images. This was the error on the Pi.

/usr/lib/python3/dist-packages/picamera/encoders.py:545: PiCameraResolutionRoun$
  width, height, fwidth, fheight)))
123
angle: -6   throttle: 46
remote client: {"angle": "-6", "throttle": "46"}
Traceback (most recent call last):
  File "demos/drive_pi.py", line 55, in <module>
    car.start()
  File "/home/pi/code/donkey/donkey/vehicles.py", line 39, in start
    self.steering_actuator.update(angle)
  File "/home/pi/code/donkey/donkey/actuators.py", line 73, in update
    self.pwm.set_pwm(self.channel, 0, pulse)
  File "/home/pi/code/donkey/env/lib/python3.4/site-packages/Adafruit_PCA9685/P$
    self._device.write8(LED0_ON_L+4*channel, on & 0xFF)
  File "/home/pi/code/donkey/env/lib/python3.4/site-packages/Adafruit_GPIO/I2C.$
    self._bus.write_byte_data(self._address, register, value)
  File "/home/pi/code/donkey/env/lib/python3.4/site-packages/Adafruit_PureIO/sm$
    self._device.write(data)
OSError: [Errno 5] Input/output error

Lapack and fortran packages

I noticed that the installation procedure stops because the LAPACK and Fortran packages are missing when running:

git clone https://github.com/wroscoe/donkey donkeycar
pip install -e donkeycar

You can get them with:

sudo apt-get install libblas-dev liblapack-dev
sudo apt-get install gfortran

Improve small screen UI of vehicle control page

Here are some initial thoughts on improvements that I think could be made here:

  • Switch to gyroscopic control of vehicle steering and throttle (instead of nipple.js), as this will free up significant screen real estate for other controls. The DeviceOrientationEvent API seems to have sufficient cross-platform browser support that it could be used reliably on most iOS and Android phones. More info here: https://developer.mozilla.org/en-US/docs/Web/API/DeviceOrientationEvent
  • figure out what the "dead zone" of control should be at the middle of the throttle and steering ranges to make it easy to get to zero throttle/zero steering angle without requiring too precise of a device position. i found I had to figure this out with trial and error on an actual device to get a good feel for it
  • optimize the layout for widescreen positioning (more intuitive to hold the phone horizontally when you are using the device position as a steering wheel)
  • add an easily accessible start/stop button to the screen to avoid the car running off when you set the phone down 😉
  • Take advantage of the extra on-screen real estate to include controls for:
    • Pilot mode selection
    • Model selection
    • Start/Stop recording toggle

Feature request - Servo control via PPM instead of I2C

There have been some comments about swapping in/out the stock RC controller, and since there are only 2 servo controls used (steering / throttle), an Arduino could do this. But since the Pi has only 1 hardware PWM, it would be more efficient to stream the servo control via a single PPM stream. This way an Arduino could decode the PPM and drive the servos. And with a single digital I/O flag/pin, the Arduino could read and use the stock RC receiver signals for training.

Write function to generate image variants

Image variants would be helpful to train more generalized driving models from a limited image set. The variants should include:

  • Horizontal flips. (also need to reverse the steering angle)
  • Gradient equalization.
  • Brightness
  • Saturation
  • Contrast
  • Vertical clipping (to avoid non essential cues when following lines)
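Two of the variants above sketched in pure numpy (no OpenCV), assuming HxWxC uint8 images. Note the horizontal flip must also negate the steering label so it still matches the mirrored image; the function names are illustrative.

```python
import numpy as np

def flip_horizontal(image, angle):
    """Mirror an HxWxC image left-right and reverse its steering angle."""
    return image[:, ::-1, :].copy(), -angle

def adjust_brightness(image, factor):
    """Scale pixel intensities, clipping to the valid uint8 range."""
    out = np.clip(image.astype(np.float32) * factor, 0, 255)
    return out.astype(np.uint8)
```

Saturation, contrast and gradient equalization would follow the same pattern: a pure function of (image, label) returning a new (image, label) pair, applied on the fly during training.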

start-server.sh failing from Pi3

I'm getting the following from command line:
(env)pi@raspberrypi:~/donkey $ sudo bash start-server.sh
start-server: Building Donkey server image...
Sending build context to Docker daemon 113.8 MB
Step 1/18 : FROM python:3
---> b6cc5d70bc28
Step 2/18 : RUN apt-get -y update
---> Running in 78d46290b93b
standard_init_linux.go:178: exec user process caused "exec format error"
The command '/bin/sh -c apt-get -y update' returned a non-zero code: 1
start-server: Running Donkey server container...
Unable to find image 'donkey:latest' locally
docker: Error response from daemon: repository donkey not found: does not exist or no pull access.
See 'docker run --help'.

Does anyone have an idea to fix this?
I'm running this on a Raspberry Pi 3, using the image from https://s3.amazonaws.com/donkey_resources/donkey.img.zip.
Docker was installed with: curl -sSL https://get.docker.com | sh

Update Donkey AMI

  • Include sample datasets (all + angle specific + track specific).
  • change bashrc to (load dir, env and clone master)
  • Publish

Refactor: Make drive loop easier to innovate by changing vehicle model.

"""
Proposed Refactor:
The current platform design does not leave room to change/innovate
the drive loop. This is an alternative way to define the drive loop
using modular components and shared variables. It borrows from
the design of Keras and ROS.
"""



#Local Car

    V = Vehicle()

    V.data = ['img',            # image from camera
              'c_angle',        # control angle (from user)
              'c_throttle',
              'c_drive_mode',
              'p_angle',
              'p_throttle',
              'a_angle',
              'a_throttle']

    V.add(WebMonitor(),
          output=['c_angle', 'c_throttle', 'c_drive_mode'])

    V.add(PiCamera(),
          output='img')

    V.add(CNN(),
          input=['img', 'a_angle', 'a_throttle'],
          output=['p_angle', 'p_throttle'])

    V.add(DriveLogic(),
          input=['c_angle', 'c_throttle', 'c_drive_mode',
                 'p_angle', 'p_throttle'],
          output=['a_angle', 'a_throttle'])

    V.add(SteeringActuator(),
          input='a_angle')

    V.add(ThrottleActuator(),
          input='a_throttle')

    V.add(Recorder(), input='*')




#Remote Car

    V = Vehicle()

    V.data = ['img',            # image from camera
              'c_angle',        # control angle (from user)
              'c_throttle',
              'c_drive_mode',
              'p_angle',
              'p_throttle',
              'a_angle',
              'a_throttle']

    V.add(RemoteMonitor(),
          output=['c_angle', 'c_throttle', 'c_drive_mode'])

    V.add(PiCamera(),
          output='img')

    V.add(RemoteLogic(),
          input=['img',
                 'c_angle', 'c_throttle', 'c_drive_mode',
                 'a_angle', 'a_throttle'],
          output=['a_angle', 'a_throttle'])

    V.add(SteeringActuator(),
          input='a_angle')

    V.add(ThrottleActuator(),
          input='a_throttle')

Implement odometry with hall effect sensor

The plan is to attempt an odometer setup inspired by this post.

Here's my initial take on how this might work:

  • Mount hall effect sensor to car. @adammconway suggested attempting to sense the motor directly, which would imply mounting the sensor in an axial orientation relative to the motor.
  • If we can't accurately sense the motor directly, use small magnet(s) mounted on the drive shaft or a gear in the drivetrain, and mount the sensor in an appropriate place given the location of the magnets.
  • Connect hall effect sensor signal pin to one of the Pi's GPIO pins that supports interrupt handling
  • Connect hall effect power and ground to the corresponding pins on the servo hat

Reading the sensor:

  • Configure the interrupt on the Pi with a callback function (aka ISR - interrupt service routine) that is called when the sensor signal pin changes state. The minimum here would be to read a single state change (RISING or FALLING), additional precision could be achieved by configuring the interrupt to track both rising and falling changes.
  • Inside the ISR, update variables for tracking state changes (those that will need to be read by the drive loop will need to be global):
    • (local) Timestamp of previous state change
    • (global) Time elapsed between previous state change and current state change
    • (global) Total count of state changes

Calculating distance and speed:

  • Distance traveled will be informed by the relationship between the point in the drive train that is being monitored by the hall effect sensor and the physical distance traveled by the car for 1 cycle (or one half cycle if we monitor both rising and falling edges) of the hall effect sensor signal. We'll need some constant to tie these two together (something like "feet per hall sensor state change"), which will be vehicle specific. Perhaps this is a new addition to the vehicle config file?
  • Velocity will be calculated using the distance above divided by the elapsed time that was recorded in the ISR. Calculating speed using the time elapsed from only the most recent sensor cycle would yield the most recent velocity, but we might find that averaging this over several cycles to yield an average velocity over a slightly longer time period would smooth out the data a bit. Let's start with the 1 cycle calculation and then we can adjust if needed.
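The ISR bookkeeping and the distance/speed math above can be sketched as follows, with the GPIO interrupt replaced by a plain method call so it can run anywhere; on a Pi, `on_tick` would be registered as the GPIO event callback. The distance-per-tick constant is vehicle specific, as noted above.

```python
import time

class Odometer:
    def __init__(self, meters_per_tick):
        self.meters_per_tick = meters_per_tick  # vehicle-specific constant
        self.ticks = 0          # (global) total count of state changes
        self._last = None       # (local) timestamp of previous state change
        self.elapsed = None     # (global) time between the last two changes

    def on_tick(self, now=None):
        """Call on each sensor state change (e.g. from a GPIO callback)."""
        now = time.time() if now is None else now
        if self._last is not None:
            self.elapsed = now - self._last
        self._last = now
        self.ticks += 1

    def distance(self):
        """Total distance traveled, in meters."""
        return self.ticks * self.meters_per_tick

    def velocity(self):
        """Speed over the most recent sensor cycle, in m/s."""
        if not self.elapsed:
            return 0.0
        return self.meters_per_tick / self.elapsed
```

Averaging `velocity()` over several cycles, as suggested above, would just mean keeping the last few `elapsed` values and dividing by their sum.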

Sending distance and velocity back to server:

  • TBD @wroscoe do you have any thoughts here?

@wroscoe @adammconway does the above approach sound reasonable to you? Do you have preferences on the units used for distance and velocity?

mydonkey/models folder missing

Hi everyone, I just cloned on Ubuntu 14, installed, and ran Docker. When I run bash start-server.sh I get:

.........
start-server: Running Donkey server container...
Loading modules for server.
Starting Donkey Server...
Using TensorFlow backend.
Traceback (most recent call last):
  File "/donkey/scripts/serve.py", line 12, in <module>
    w = dk.remotes.DonkeyPilotApplication()
  File "/donkey/donkey/remotes.py", line 175, in __init__
    self.pilots = ph.default_pilots()
  File "/donkey/donkey/pilots.py", line 84, in default_pilots
    pilot_list = self.pilots_from_models()
  File "/donkey/donkey/pilots.py", line 71, in pilots_from_models
    models_list = [f for f in os.scandir(self.models_path)]
FileNotFoundError: [Errno 2] No such file or directory: '/root/mydonkey/models'

Move session handling inside remote server.

Currently sessions are passed to the remote server on creation. This limits the server to handling only one session at a time and prevents switching.

  • updates required to scripts/serve.py & donkey/remotes.py

Implement categorical steering models.

Instead of using models to predict a single steering value, use a model that predicts the steering angle category. This will also give us the probability the angle is correct and will let us more easily combine models.

  • Pilot handler needs to load models.
  • Server needs to be able to give the model a picture and get a steering angle.
  • The training script needs to train the models.
  • Explore needs to work.
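The core of the proposal, binning a continuous steering value in [-1, 1] into N categories and mapping the predicted category back to an angle, can be sketched as below; the real model would output a probability per bin (e.g. via softmax), which is what gives the confidence estimate mentioned above. Function names are illustrative.

```python
import numpy as np

def angle_to_bin(angle, n_bins=15):
    """Map an angle in [-1, 1] to a one-hot category vector."""
    idx = int(round((angle + 1) / 2 * (n_bins - 1)))
    one_hot = np.zeros(n_bins)
    one_hot[idx] = 1.0
    return one_hot

def bin_to_angle(probs):
    """Map a probability vector back to an angle via its argmax bin."""
    n_bins = len(probs)
    return 2.0 * int(np.argmax(probs)) / (n_bins - 1) - 1.0
```

Combining models then becomes averaging their probability vectors before the argmax, which is awkward to do with single-value regression outputs.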

Implement single mixer class instead of separate throttle/steering classes

A single "mixer" class that handles command distribution to all actuators is more flexible than separate "throttle" and "steering" classes.
For instance, in differential or skid steering, one should be aware of both the throttle and the steering value to assign correct (PWM) values to the actuators.
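A minimal sketch of the idea: a differential-drive mixer where both motor commands depend on both throttle and steering, which separate throttle/steering classes cannot express. The mixing formula is a simple illustrative choice, not the one from the donkey codebase.

```python
class DifferentialMixer:
    """Turns one (throttle, angle) command into per-motor outputs."""

    def mix(self, throttle, angle):
        """Return (left, right) motor commands, each clamped to [-1, 1]."""
        left = throttle + angle     # steering adds to the inside wheel...
        right = throttle - angle    # ...and subtracts from the outside one
        clamp = lambda v: max(-1.0, min(1.0, v))
        return clamp(left), clamp(right)
```

An ESC/steering-servo car would use a different mixer subclass with the same interface, so the drive loop stays unchanged.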

Refactor imports?

Donkey is importing all modules in the donkey/ directory, which leads to a lot of unneeded module imports and breaks in many cases (such as running drive_pi.py, which requires envoy and Keras, even though they are not used). Can this be architected differently so that only the modules needed are imported?

Running Docker image on Ubuntu 14.04LTS fails

I am following the instructions from the google doc, and using the following instructions:

git clone http://github.com/wroscoe/donkey.git
cd donkey
sudo bash start-server.sh

The process fails with the following error.

~/donkey$ sudo bash start-server.sh
start-server: Running Donkey server container...
Loading modules for server.
Starting Donkey Server...
Using TensorFlow backend.
Traceback (most recent call last):
  File "/donkey/scripts/serve.py", line 12, in <module>
    w = dk.remotes.DonkeyPilotApplication()
  File "/donkey/donkey/remotes.py", line 175, in __init__
    self.pilots = ph.default_pilots()
  File "/donkey/donkey/pilots.py", line 84, in default_pilots
    pilot_list = self.pilots_from_models()
  File "/donkey/donkey/pilots.py", line 71, in pilots_from_models
    models_list = [f for f in os.scandir(self.models_path)]
FileNotFoundError: [Errno 2] No such file or directory: '/root/mydonkey/models'

I have had a search and could not find any clues, so am wondering whether I am being dense and this is something really simple or whether there is a genuine issue.

Many thanks.

Figure out root cause of spikes in control lag time

This was the main issue I experienced at the Feb 18th track day that prevented me from reliably driving the car around the track for training. It always manifests as an intermittent problem but I am also able to observe it at home, although less frequently than I saw it at the track day.

I'm running a local donkey server over wifi, so 4G latency is not a factor here. On wifi, I'm frequently seeing lag times spike above 1s, sometimes as long as 30 or more seconds. The clues that I've seen so far are:

  1. This seems to happen more frequently in areas of high network congestion (like the track day when everyone was running nearby).
  2. It happens on my home wifi network at least once per minute while driving the donkey car, to varying levels of severity.
  3. I tried an alternate router at home on a non-standard wifi channel, and was not able to reproduce the delays.
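To quantify how often the spikes occur, the console log can be post-processed. A small sketch (the `lag_spikes` helper is hypothetical, not part of donkey) that extracts the lag samples and flags outliers above a threshold:

```python
import re
import statistics

# Matches the "vehicle <> server: request lag: N" lines in the console log.
LAG_RE = re.compile(r"request lag: ([0-9.]+)")

def lag_spikes(log_text, threshold=0.5):
    """Return (median lag, samples above threshold) from console output."""
    lags = [float(m.group(1)) for m in LAG_RE.finditer(log_text)]
    median = statistics.median(lags) if lags else 0.0
    return median, [lag for lag in lags if lag > threshold]

log = """vehicle <> server: request lag: 0.0689
vehicle <> server: request lag: 0.759
vehicle <> server: request lag: 1.004"""
median, spikes = lag_spikes(log)  # spikes == [0.759, 1.004]
```

Running this over a full track-day log would give a spike rate to compare across routers and wifi channels, which would help confirm or rule out the congestion theory.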

Here's a sample console log from the pi that shows the spikes. Lag time of ~0.06 is about normal on my home network.

{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06897997856140137
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.07510542869567871
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06453394889831543
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.759141206741333
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.05977487564086914
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0692141056060791
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06003284454345703
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.16736602783203125
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06820440292358398
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0678567886352539
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.12179088592529297
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.05697226524353027
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0699162483215332
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06665158271789551
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.17603182792663574
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 1.0047976970672607
throttle update: 0.0
pulse: 370
angle: 0.0   throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0619354248046875
throttle update: 0.0
pulse: 370

Refactor Recorder to be Session and SessionsFactory

One of the most common and laborious tasks in building a self-driving car is saving and accessing data for different trial sessions. Currently this goes through the FileRecorder, which is not a clear abstraction for representing access to that data.

A much cleaner way would be to use Session objects, with use cases like:

sfactory = SessionFactory('~/sessions')
session = sfactory.new('port')
for r in records:
    session.record(r['img'], r['angle'])

X, Y = session.array()
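The proposed API could be sketched roughly as follows (hypothetical implementation: the class names come from this issue, but the in-memory storage, NumPy dependency, and method bodies are assumptions, not existing donkey code):

```python
import os
import numpy as np

class Session:
    """Records (image, angle) pairs for one trial and exposes them as arrays."""
    def __init__(self, path):
        self.path = path
        os.makedirs(path, exist_ok=True)  # session gets its own directory
        self._imgs, self._angles = [], []

    def record(self, img, angle):
        self._imgs.append(img)
        self._angles.append(angle)

    def array(self):
        """Return (X, Y) training arrays for this session."""
        return np.array(self._imgs), np.array(self._angles)

class SessionFactory:
    """Creates named Session directories under a base path."""
    def __init__(self, base):
        self.base = os.path.expanduser(base)

    def new(self, name):
        return Session(os.path.join(self.base, name))
```

This keeps session naming and storage layout in the factory, so FileRecorder's file-handling details stop leaking into training code.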
