
rpg_public_dronet's Introduction

DroNet: Learning to fly by driving

This repository contains the code used to train and evaluate DroNet, a convolutional neural network that can safely drive a drone along the streets of a city.

Citing

If you use DroNet in an academic context, please cite the following publication:

Paper: DroNet: Learning to Fly by Driving

Video: YouTube

@article{Loquercio_2018,
	doi = {10.1109/lra.2018.2795643},
	year = 2018,
	author = {Antonio Loquercio and Ana Isabel Maqueda and Carlos R. Del Blanco and Davide Scaramuzza},
	title = {Dronet: Learning to Fly by Driving},
	journal = {{IEEE} Robotics and Automation Letters}
}

Introduction

Because flying a drone in an urban environment is dangerous, collecting training data with a drone itself is impractical. For that reason, DroNet learns how to fly by imitating the behavior of manned vehicles that are already integrated into such an environment. It produces a steering angle and a collision probability for the current input image captured by a forward-looking camera. These high-level predictions are then translated into control commands so that the drone keeps navigating while avoiding obstacles.
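Conceptually, turning the two network outputs into velocity commands can be as simple as slowing the drone down as the predicted collision probability rises and steering with the predicted angle. The sketch below is illustrative only (names and constants are made up); the actual control logic lives in the drone_control package.

# Illustrative sketch: map DroNet's two outputs to smoothed velocity commands.
def outputs_to_commands(steering, coll_prob, prev_forward=0.0, prev_yaw=0.0,
                        max_forward_speed=0.3, alpha=0.7):
    forward = (1.0 - coll_prob) * max_forward_speed   # slow down when a collision looks likely
    yaw = steering                                    # use the predicted steering as a yaw command
    # Low-pass filter both commands so the flight stays smooth.
    forward = alpha * prev_forward + (1.0 - alpha) * forward
    yaw = alpha * prev_yaw + (1.0 - alpha) * yaw
    return forward, yaw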

Model

DroNet has been designed as a forked CNN that predicts, from a single 200×200 gray-scale frame, a steering angle and a collision probability. The shared part of the architecture is a fast ResNet-8 with 3 residual blocks, followed by dropout and a ReLU non-linearity. The network then branches into two separate fully-connected layers, one for steering prediction and one for collision probability. See cnn_models.py for more details.

architecture
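For orientation, the sketch below shows what such a two-headed network looks like in Keras. Layer sizes and details here are simplified and illustrative; the real architecture is defined in cnn_models.py.

# Simplified, illustrative sketch of a forked ResNet-8-style network in Keras.
from keras.layers import Input, Conv2D, MaxPooling2D, Activation, Flatten, Dropout, Dense, add
from keras.models import Model

def sketch_dronet(img_height=200, img_width=200, img_channels=1):
    x_in = Input(shape=(img_height, img_width, img_channels))
    x = Conv2D(32, (5, 5), strides=(2, 2), padding='same')(x_in)
    x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)

    # Three residual blocks (batch normalization omitted for brevity).
    for filters in (32, 64, 128):
        shortcut = Conv2D(filters, (1, 1), strides=(2, 2), padding='same')(x)
        y = Activation('relu')(x)
        y = Conv2D(filters, (3, 3), strides=(2, 2), padding='same')(y)
        y = Activation('relu')(y)
        y = Conv2D(filters, (3, 3), padding='same')(y)
        x = add([shortcut, y])

    x = Flatten()(x)
    x = Activation('relu')(x)
    x = Dropout(0.5)(x)

    # Two separate heads: steering regression and collision classification.
    steering = Dense(1)(x)
    collision = Dense(1, activation='sigmoid')(x)
    return Model(inputs=[x_in], outputs=[steering, collision])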

Data

To learn steering angles, we use the publicly available Udacity dataset, which provides several hours of video recorded from a car. To learn the probability of collision, we additionally recorded an outdoor dataset by riding a bicycle through the streets of our city.

dataset

Running the code

Software requirements

This code has been tested on Ubuntu 14.04 with Python 3.4.

Dependencies:

  • TensorFlow 1.5.0
  • Keras 2.1.4 (Make sure that the Keras version is correct!)
  • NumPy 1.12.1
  • OpenCV 3.1.0
  • scikit-learn 0.18.1
  • Python gflags
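For reference, one possible way to install the Python dependencies with pip is shown below; this assumes the pinned versions are still available for your platform (otherwise install the closest matching versions):

pip install tensorflow==1.5.0 keras==2.1.4 numpy==1.12.1 scikit-learn==0.18.1 opencv-python python-gflags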

Data preparation

Steering data

Once the Udacity dataset is downloaded, extract the contents from each rosbag/experiment. To do so, follow the instructions from udacity-driving-reader to dump images + CSV. After extraction, you will only need the center/ folder and the interpolated.csv file from each experiment to create the steering dataset. To prepare the data in the format required by our implementation, follow these steps:

  1. Rename the center/ folder to images/.
  2. Process the interpolated.csv file to extract data corresponding only to the central camera. It is important to sync images with the corresponding steering values by matching their timestamps; you can use the script time_stamp_matching.py to accomplish this step (the general idea is sketched below).
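For illustration only, the core of such timestamp matching might look like the nearest-neighbour lookup below; this is a sketch of the idea, not the repository's actual time_stamp_matching.py. The matched steering rows are then saved as sync_steering.txt next to the images/ folder.

# Illustrative sketch: for each image timestamp, find the closest steering record.
import numpy as np

def match_nearest(image_stamps, steering_stamps):
    # Assumes steering_stamps is sorted in ascending order.
    image_stamps = np.asarray(image_stamps)
    steering_stamps = np.asarray(steering_stamps)
    idx = np.searchsorted(steering_stamps, image_stamps)
    idx = np.clip(idx, 1, len(steering_stamps) - 1)
    left = steering_stamps[idx - 1]
    right = steering_stamps[idx]
    # Step back one index wherever the left neighbour is closer in time.
    idx -= (image_stamps - left) < (right - image_stamps)
    return idx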

The final structure of the steering dataset should look like this:

training/
    HMB_1_3900/*
        images/
        sync_steering.txt
    HMB_2/
    HMB_4/
    HMB_5/
    HMB_6/
validation/
    HMB_1_501/*
testing/
    HMB_3/

*Since Udacity dataset does not provide validation experiments, we split HMB_1 so that the first 3900 samples are used for training and the rest for validation.

Collision data

The collision dataset can be directly downloaded from here and is ready to use as is. Its structure is as follows:

training/
    DSCN2561/
        images/
        labels.txt
    DSCN2562/
    ...
    GOPR0387/
validation/
    DSCN2682/
    GOPR0227/
testing/
    DSCN2571/
    GOPR0200/
    ...
    GOPR0386/

Finally, merge the training/, validation/ and testing/ directories from both datasets, and use them to train and evaluate DroNet.

Training DroNet

  1. Train DroNet from scratch:
python cnn.py [flags]

Use [flags] to set batch size, number of epochs, dataset directories, etc. Check common_flags.py to see the description of each flag, and the default values we used for our experiments.
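These flags are defined with python-gflags; the sketch below shows what such definitions typically look like (the names and defaults here are illustrative, the authoritative ones are in common_flags.py):

# Illustrative python-gflags definitions; see common_flags.py for the real ones.
import sys
import gflags

FLAGS = gflags.FLAGS
gflags.DEFINE_integer('batch_size', 32, 'Batch size used for training and evaluation')
gflags.DEFINE_integer('epochs', 100, 'Number of training epochs')
gflags.DEFINE_string('train_dir', '../training', 'Directory containing the training experiments')
gflags.DEFINE_string('experiment_rootdir', './model', 'Directory for logs, models and saved flags')

if __name__ == '__main__':
    FLAGS(sys.argv)            # parse the command-line flags
    print(FLAGS.batch_size)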

Example:

python cnn.py --experiment_rootdir='./model/test_1' --train_dir='../training' --val_dir='../validation' --batch_size=16 --epochs=150 --log_rate=25 
  2. Fine-tune DroNet by loading a pre-trained model:
python cnn.py --restore_model=True --experiment_rootdir='./model' --weights_fname='model_weights.h5'

The pre-trained model must be in the directory you indicate in --experiment_rootdir.
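Conceptually, restoring the model amounts to loading the HDF5 weights found in that directory; a minimal sketch (not the exact code in cnn.py):

import os

def restore_weights(model, experiment_rootdir, weights_fname='model_weights.h5'):
    # Load pre-trained Keras weights stored inside the experiment directory.
    weights_path = os.path.join(experiment_rootdir, weights_fname)
    model.load_weights(weights_path)
    return model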

Evaluating DroNet

We evaluate our model on the testing data from each dataset.

python evaluation.py [flags]

Example:

python evaluation.py --experiment_rootdir='./model' --weights_fname='model_weights.h5' --test_dir='../testing' 

Download our trained model

A folder containing the trained model that we used in our real-world experiments can be downloaded here: best_model. To evaluate it, use the following command:

python evaluation.py --experiment_rootdir=PATH_TO_FOLDER --weights_fname=best_weights.h5 --test_dir='../testing' 

Test DroNet on a real robot

In the folder drone_control you can find instructions on how to use DroNet to fly a Bebop drone.

rpg_public_dronet's Issues

Unable to download the Udacity dataset

Hi!
Thanks for your amazing work!
I have a non-technical issue: I can't download the dataset from Udacity. That repository has been archived by the owner on Nov 26, 2021 and is now read-only. Do you have any other link to download the dataset?
Waiting for your reply!
Best Wishes!

can not download the Udacity dataset

Hi!
Thanks for your amazing work!
I have a non-technical issue: I can't download the dataset from Udacity. Maybe the dataset is too old, and there are barely any seeds left to download it. Do you have any other link to download the dataset?
Waiting for your reply!
Best Wishes!
Aaron

Evaluate testing data

Hi everyone!

I have an issue when I try to evaluate the testing data set via the command line:

python evaluation.py --experiment_rootdir=./dronet --weights_fname='best_weights.h5' --test_dir='./testing'

I get the following error:

Using TensorFlow backend.
Traceback (most recent call last):
File "evaluation.py", line 281, in
main(sys.argv)
File "evaluation.py", line 277, in main
_main()
File "evaluation.py", line 183, in _main
batch_size = FLAGS.batch_size)
File "/home/edgar/rpg_public_dronet/utils.py", line 34, in flow_from_directory
follow_links=follow_links)
File "/home/edgar/rpg_public_dronet/utils.py", line 105, in init
self._decode_experiment_dir(subpath)
File "/home/edgar/rpg_public_dronet/utils.py", line 131, in _decode_experiment_dir
delimiter=',', skiprows=1)
File "/home/edgar/.local/lib/python2.7/site-packages/numpy/lib/npyio.py", line 858, in loadtxt
fh = iter(open(fname, 'U'))
IOError: [Errno 2] No such file or directory: './testing/DSCN2571/sync_steering.txt'

When I try to evaluate a dataset with only steering data parameters (with the steering dataset) it works fine but fails when the script tries to get the labels for collision probability:

Using TensorFlow backend.
Found 4401 images belonging to 1 experiments.
./dronet/best_weights.h5
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Loaded model from ./dronet/best_weights.h5
137/137 [==============================] - 54s
EVA = 0.0
RMSE = 0.0794238299131
Written file ./dronet/constant_regression.json
EVA = -1.00365029169
RMSE = 0.112437359102
Written file ./dronet/random_regression.json
EVA = 0.10682195425
RMSE = 0.0805458426476
Written file ./dronet/test_regression.json
Written file ./dronet/predicted_and_real_steerings.json
/home/edgar/.local/lib/python2.7/site-packages/numpy/lib/function_base.py:1110: RuntimeWarning: Mean of empty slice.
avg = a.mean(axis)
/home/edgar/.local/lib/python2.7/site-packages/numpy/core/_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
('Average accuracy = ', nan)
('Precision = ', 0.0)
('Recall = ', 0.0)
('F1-score = ', 0.0)
Traceback (most recent call last):
File "evaluation.py", line 281, in
main(sys.argv)
File "evaluation.py", line 277, in main
_main()
File "evaluation.py", line 261, in _main
evaluate_classification(pred_prob, pred, real_labels, abs_fname)
File "evaluation.py", line 163, in evaluate_classification
dictionary = {"ave_accuracy": ave_accuracy.tolist(), "precision": precision.tolist(),
AttributeError: 'float' object has no attribute 'tolist'

I think that it is because one part of the dataset is for steering angle prediction (udacity-dataset with sync_steering.txt) and another for collision estimation (collision data with labels.txt). Nevertheless, I didn't find a solution for testing both.

Am I doing something wrong? Do I need to define a special configuration for testing both?

Thanks in advance for your response.

Greetings!

ValueError: too many values to unpack (expected 3)

(py3) neu105@TitanX:~/test/dronet/rpg_public_dronet-master$ python evaluation.py --experiment_rootdir='./model' --weights_fname='model_weights.h5' --test_dir='/home/neu105/test/dronet/collision_dataset/testing'
Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.7.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.7.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.7.5 locally
Found 6855 images belonging to 9 experiments.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:02:00.0
Total memory: 11.90GiB
Free memory: 11.75GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:02:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:02:00.0)
Loaded model from ./model/model_weights.h5
Traceback (most recent call last):
File "evaluation.py", line 276, in
main(sys.argv)
File "evaluation.py", line 272, in main
_main()
File "evaluation.py", line 203, in _main
model, test_generator, nb_batches, verbose = 1)
File "/home/neu105/test/dronet/rpg_public_dronet-master/utils.py", line 254, in compute_predictions_and_gt
generator_output = next(generator)
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/site-packages/keras/preprocessing/image.py", line 835, in next
return self.next(*args, **kwargs)
File "/home/neu105/test/dronet/rpg_public_dronet-master/utils.py", line 172, in next
self.index_generator)
ValueError: too many values to unpack (expected 3)

evaluation on the other dataset of udacity

Thank you for your great work.
Currently, I am working based on your work. However, when I validated the model using the HMB_2 dataset (since it's quite difficult to download the HMB_3 dataset), the results are not as good as in the paper.
I used the data_preprocessing script to pre-process the Udacity dataset and copied the result to the validation folder.

This is the output of the validation:
EVA = -0.5063827037811279
RMSE = 0.3688988983631134
Written file /Autonomous_drone/rpg_public_dronet/model/dronet_model/dronet/test_regression.json
EVA = -1.0062613594628216
RMSE = 0.42530794406649747
Written file /Autonomous_drone/rpg_public_dronet/model/dronet_model/dronet/random_regression.json
EVA = 0.0
RMSE = 0.30026811361312866
Written file /Autonomous_drone/rpg_public_dronet/model/dronet_model/dronet/constant_regression.json
Written file /Autonomous_drone/rpg_public_dronet/model/dronet_model/dronet/predicted_and_real_steerings.json
Average accuracy = 0.8614232209737828
Precision = 0.10869565217391304
Recall = 0.10869565217391304
F1-score = 0.11904761904761904
Written file /Autonomous_drone/rpg_public_dronet/model/dronet_model/dronet/test_classification.json
Average accuracy = 0.5056179775280899
Precision = 0.07196969696969698
Recall = 0.07196969696969698
F1-score = 0.12582781456953643

Could you give me any advice to improve the accuracy?
Thank you in advance!

Resource not found (how to recover)

Resource not found: The following package was not found in : bebop_driver
ROS path [0]=/opt/ros/melodic/share/ros
ROS path [1]=/opt/ros/melodic/share
The traceback for the exception was written to the log file

Unknown CMake command "quad_cmake_setup"

Thanks so much for publishing the control code, it's saved me quite a bit of time and I appreciate how straightforward it is.

I encountered this error when building dronet_control:

CMake Error at /home/eridgd/bebop_ws/src/dronet/dronet_control/CMakeLists.txt:6 (quad_cmake_setup): Unknown CMake command "quad_cmake_setup"

This seems to be due to a missing non-standard dependency. If I comment out this line in CMakeLists.txt, it does then seem to build. Can I safely ignore this?

Question: why randomly transform images during test evaluation?

Hi,
More of a question than an issue at the moment, to try to understand...

I run the evaluation.py with the test set and the best trained model, and am able to reproduce the results as per table 1 in the paper.

However, when running evaluation.py on the test set, this would eventually lead to invoking your next function, which then invokes the _get_batches_of_transformed_samples, which gets test images to build a batch. But it also does a random transformation of each image (see *** below in code snip).

I would think that you would want to do evaluation of performance on the actual images and not on images that are getting randomly transformed. Can you explain why you apply random transform on test set images and why not just do random transform during training?

Thanks.

def _get_batches_of_transformed_samples(self, index_array):
    """
    Public function to fetch next batch.
    # Returns
        The next batch of images and labels.
    """
    .. snip

    # Build batch of image data
    for i, j in enumerate(index_array):
        fname = self.filenames[j]
        x = img_utils.load_img(os.path.join(self.directory, fname),
                grayscale=grayscale,
                crop_size=self.crop_size,
                target_size=self.target_size)

        x = self.image_data_generator.random_transform(x)    # ***** why do this during test evaluation?
        x = self.image_data_generator.standardize(x)
        batch_x[i] = x

    ..snip

Tello drone?

Can we make this work on the Tello? A lot of people would love to try it.

There is no Python 3.4 with Keras 2.1.4 and TensorFlow 1.0.1, so I installed Python 3.5 with Keras 2.1.4 and TensorFlow 1.0.1

There is no Python 3.4 with Keras 2.1.4 and TensorFlow 1.0.1, so I installed Python 3.5 with Keras 2.1.4 and TensorFlow 1.0.1:

@TitanX:~$ conda search tensorflow
Loading channels: done
Name                       Version                   Build  Channel        
tensorflow                 0.10.0rc0           np111py27_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py27_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py27_0  defaults       
tensorflow                 0.10.0rc0           np111py34_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py34_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py34_0  defaults       
tensorflow                 0.10.0rc0           np111py35_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py35_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py35_0  defaults       
tensorflow                 1.0.1               np112py27_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py27_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py27_0  defaults       
tensorflow                 1.0.1               np112py35_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py35_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py35_0  defaults       
tensorflow                 1.0.1               np112py36_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py36_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py36_0  defaults       
tensorflow                 1.1.0               np111py27_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 1.1.0               np111py27_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
keras                      2.1.2                    py35_0  defaults       
keras                      2.1.2                    py36_0  defaults       
keras                      2.1.3                    py27_0  defaults       
keras                      2.1.3                    py35_0  defaults       
keras                      2.1.3                    py36_0  defaults       
keras                      2.1.4                    py27_0  defaults       
keras                      2.1.4                    py35_0  defaults       
keras                      2.1.4                    py36_0  defaults       
keras                      2.1.5                    py27_0  defaults 

There is no Python 3.4 with Keras 2.1.4 and TensorFlow 1.0.1, so I installed Python 3.5 with Keras 2.1.4 and TensorFlow 1.0.1. Then the following error occurs:

/home/anaconda2/envs/dronet/bin/python -u /home/pytest/dronet/rpg_public_dronet-master1/cnn.py --experiment_rootdir='./model/test_1' --train_dir='/home/datafile/dronet_data/collision_dataset/training' --val_dir='/home/datafile/dronet_data/collision_dataset/validation' --batch_size=16 --epochs=150 --log_rate=25
Using TensorFlow backend.
Found 63169 images belonging to 132 experiments.
Found 1035 images belonging to 3 experiments.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 200, 200, 1)  0                                            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 100, 100, 32) 832         input_1[0][0]                    
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 49, 49, 32)   0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 49, 49, 32)   128         max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 49, 49, 32)   0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 25, 25, 32)   9248        activation_1[0][0]               
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 25, 25, 32)   128         conv2d_2[0][0]                   
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 25, 25, 32)   0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 25, 25, 32)   1056        max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 25, 25, 32)   9248        activation_2[0][0]               
__________________________________________________________________________________________________
add_1 (Add)                     (None, 25, 25, 32)   0           conv2d_4[0][0]                   
                                                                 conv2d_3[0][0]                   
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 25, 25, 32)   128         add_1[0][0]                      
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 25, 25, 32)   0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 13, 13, 64)   18496       activation_3[0][0]               
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 13, 13, 64)   256         conv2d_5[0][0]                   
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 13, 13, 64)   0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 13, 13, 64)   2112        add_1[0][0]                      
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 13, 13, 64)   36928       activation_4[0][0]               
__________________________________________________________________________________________________
add_2 (Add)                     (None, 13, 13, 64)   0           conv2d_7[0][0]                   
                                                                 conv2d_6[0][0]                   
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 13, 13, 64)   256         add_2[0][0]                      
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 13, 13, 64)   0           batch_normalization_5[0][0]      
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 7, 7, 128)    73856       activation_5[0][0]               
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 7, 7, 128)    512         conv2d_8[0][0]                   
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 7, 7, 128)    0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 7, 7, 128)    8320        add_2[0][0]                      
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 7, 7, 128)    147584      activation_6[0][0]               
__________________________________________________________________________________________________
add_3 (Add)                     (None, 7, 7, 128)    0           conv2d_10[0][0]                  
                                                                 conv2d_9[0][0]                   
__________________________________________________________________________________________________
flatten_1 (Flatten)             (None, 6272)         0           add_3[0][0]                      
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 6272)         0           flatten_1[0][0]                  
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 6272)         0           activation_7[0][0]               
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 1)            6273        dropout_1[0][0]                  
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 1)            6273        dropout_1[0][0]                  
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 1)            0           dense_2[0][0]                    
==================================================================================================
Total params: 321,634
Trainable params: 320,930
Non-trainable params: 704
__________________________________________________________________________________________________
None
configure_output_dir: not storing the git diff, probably because you're not in a git repo
Logging data to ./model/test_1/log.txt
/home/anaconda2/envs/dronet/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py:91: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Epoch 1/150
1.0
0.0
Traceback (most recent call last):
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/utils/data_utils.py", line 564, in get
    inputs = self.queue.get(block=True).get()
  File "/home/anaconda2/envs/dronet/lib/python3.5/multiprocessing/pool.py", line 644, in get
    raise self._value
  File "/home/anaconda2/envs/dronet/lib/python3.5/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/utils/data_utils.py", line 390, in get_index
    return _SHARED_SEQUENCES[uid][i]
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/preprocessing/image.py", line 799, in __getitem__
    return self._get_batches_of_transformed_samples(index_array)
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/preprocessing/image.py", line 845, in _get_batches_of_transformed_samples
    raise NotImplementedError
NotImplementedError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/pytest/dronet/rpg_public_dronet-master1/cnn.py", line 176, in <module>
    main(sys.argv)
  File "/home/pytest/dronet/rpg_public_dronet-master1/cnn.py", line 172, in main
    _main()
  File "/home/pytest/dronet/rpg_public_dronet-master1/cnn.py", line 161, in _main
    trainModel(train_generator, val_generator, model, initial_epoch)
  File "/home/pytest/dronet/rpg_public_dronet-master1/cnn.py", line 89, in trainModel
    initial_epoch=initial_epoch)
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/engine/training.py", line 2212, in fit_generator
    generator_output = next(output_generator)
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/utils/data_utils.py", line 570, in get
    six.raise_from(StopIteration(e), e)
  File "<string>", line 2, in raise_from
StopIteration

Integration with Drone

Can you guys provide some documentation for deploying these models on a real drone, along with the flight controller hardware and firmware? That would be great. Thank you.

about conda envs

Hello, I'm very interested in your code, but it's always hard to configure the Python environment. My advice is to package the environment required by the code into an Anaconda environment, just like fast.ai, so that computers with different configurations can download and configure it easily.

In evaluation.py the program errors with nan and 'float' object has no attribute 'tolist'

In my experiment, I notice the value evas is always NaN, and rmse is a float32 value, so when "evas.tolist()" and "rmse.tolist()" are executed, the result is:
EVA = nan
RMSE = 0.15523529052734375
dictionary = {"evas": evas.tolist(), "rmse": rmse.tolist(), "highest_errors": highest_errors.tolist()}
AttributeError: 'float' object has no attribute 'tolist'

My Python version is 3.6. Do you know why? Thanks for your help.
PS: in testing/HMB_3/sync_steering.txt all the values are 0.000000000000000000e+00, whether angle, speed, or altitude.

result problem

Hi,

I have tested your results. Unfortunately, I think there is a problem when running the evaluation.

Could you check it?

(env) stmoon:~/Test/dronet$ python evaluation.py --experiment_rootdir=model --weights_fname=__best_weights.h5 --test_dir='./collision_dataset/testing' 
Using TensorFlow backend.
Found 1576 images belonging to 8 experiments.
Impossible to find weight path. Returning untrained model
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
50/50 [==============================] - 57s   
/Users/stmoon/Test/dronet/env/lib/python3.5/site-packages/numpy/core/fromnumeric.py:2889: RuntimeWarning: Mean of empty slice.
  out=out, **kwargs)
/Users/stmoon/Test/dronet/env/lib/python3.5/site-packages/numpy/core/_methods.py:80: RuntimeWarning: invalid value encountered in true_divide
  ret = ret.dtype.type(ret / rcount)
/Users/stmoon/Test/dronet/env/lib/python3.5/site-packages/numpy/core/_methods.py:135: RuntimeWarning: Degrees of freedom <= 0 for slice
  keepdims=keepdims)
/Users/stmoon/Test/dronet/env/lib/python3.5/site-packages/numpy/core/_methods.py:105: RuntimeWarning: invalid value encountered in true_divide
  arrmean, rcount, out=arrmean, casting='unsafe', subok=False)
/Users/stmoon/Test/dronet/env/lib/python3.5/site-packages/numpy/core/_methods.py:127: RuntimeWarning: invalid value encountered in true_divide
  ret = ret.dtype.type(ret / rcount)
/Users/stmoon/Test/dronet/env/lib/python3.5/site-packages/numpy/core/fromnumeric.py:3126: RuntimeWarning: Degrees of freedom <= 0 for slice
  **kwargs)
EVA = nan
RMSE = nan
Written file model/test_regression.json
EVA = nan
RMSE = nan
Written file model/constant_regression.json
/Users/stmoon/Test/dronet/env/lib/python3.5/site-packages/numpy/core/_methods.py:127: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
EVA = nan
/Users/stmoon/Test/dronet/env/lib/python3.5/site-packages/numpy/core/_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
RMSE = nan
Written file model/random_regression.json
Written file model/predicted_and_real_steerings.json
Average accuracy =  0.508248730964
Precision =  0.212534059946
Recall =  0.212534059946
F1-score =  0.287028518859
Written file model/random_classification.json
Average accuracy =  0.747461928934
Precision =  0.277227722772
Recall =  0.277227722772
F1-score =  0.123348017621
Written file model/test_classification.json
Written file model/predicted_and_real_labels.json

Full collision dataset available? (without the collision-gap)

Hello Antonio,

first of all, thanks for your work. Great idea and very interesting project.
I've trained my own DroNet model in a simulator environment and would like to test it on the real data now. My problem here is that the datasets you provide all have a gap right before the collision. But the behaviour in these frames is exactly what interests me the most, so I can't really test my model properly. Do you by any chance still have the datasets including the missing frames? It would be a great help for my work.

Thanks and best regards,
Moritz

single image test

Hello.
Thanks for code.
I want to test a single image to get the collision probability.
The weird thing is that a usual grayscale image is (1, 200, 200).
However, when calling "outs = model.predict_on_batch(cv_image[None])", your model requires "input_1 to have 4 dimensions".
Could you point out which part I should look into? I do not have much experience with Keras; I know TensorFlow a little bit.
Thank you.
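For what it's worth, a common fix here (assuming a 200×200 grayscale array) is to add explicit batch and channel dimensions so the input becomes (1, 200, 200, 1); a minimal sketch:

import numpy as np

gray = np.zeros((200, 200), dtype=np.float32)   # stand-in for a loaded grayscale frame
batch = gray[np.newaxis, :, :, np.newaxis]      # shape (1, 200, 200, 1)
# outs = model.predict_on_batch(batch)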

test_classification

Written file C:\Users\donghok\Desktop\dronet_model\rpg_public_dronet-master2\dronet_model\constant_regression.json
Written file C:\Users\donghok\Desktop\dronet_model\rpg_public_dronet-master2\dronet_model\predicted_and_real_steerings.json
Average accuracy = 0.9530456852791879
Precision = 0.8951841359773371
Recall = 0.8951841359773371
F1-score = 0.8951841359773371
Written file C:\Users\donghok\Desktop\dronet_model\rpg_public_dronet-master2\dronet_model\test_classification.json
Average accuracy = 0.5253807106598984
Precision = 0.2348993288590604
Recall = 0.2348993288590604
F1-score = 0.31876138433515483
Written file C:\Users\donghok\Desktop\dronet_model\rpg_public_dronet-master2\dronet_model\random_classification.json
Written file C:\Users\donghok\Desktop\dronet_model\rpg_public_dronet-master2\dronet_model\predicted_and_real_labels.json

The test classification is at 0.5. What does that mean? I mean, real_steering looks fine.
Is the collision accuracy really 0.5?

Perception and control block connections for RotorS simulations

Hi Antonio,

Thank you @antonilo very much for sharing your great work out. It's very helpful.

I wonder if you can help me with running simulations of your code in RotorS? I installed the necessary pieces following the instructions in drone_control/dronet until step 6. In step 6, instead of launching the real Bebop, I launched a Bebop 2 in the RotorS environment and verified the connections, then I launched the perception and control blocks with roslaunch dronet_perception dronet_launch.launch and roslaunch dronet_control deep_navigation.launch. The rqt_graph looks as follows
Screenshot from 2019-04-03 11-28-40

I tried to follow the "Connect the Perception and Control Block" instructions. I don't have the bebop/takeoff rostopic, and the drone does not respond to rostopic pub --once /bebop/state_change std_msgs/Bool "data: true". rqt gives a blank window when I try to use your recommended GUI approach.

I wonder, did I miss any step needed to make the simulation work? Thanks a lot for any advice.

Regards,
Jie

How does match_idx = match_idx[:,0] not work

Hi,
I am using time_stamp_matching.py to generate sync_steering.txt,
but the error occurs when the program executes "match_idx = match_idx[:,0]".
The error is:

File "F:/rpg_public_dronet-master/data_preprocessing/time_stamp_matching.py", line 69, in
match_idx = match_idx[:,0]
IndexError: too many indices for array

If I do not execute this line, the program runs, but the number of images is much smaller than the number of lines in sync_steering.txt.
How does this happen?

Quick test using the "best_model" error

Hi all,

I cloned the repository and was trying to do a quick test by using the provided "best_model". When I was running $ python evaluation.py --experiment_rootdir=./dronet --weights_fname='best_weights.h5' --test_dir='./collision_dataset/testing', I got

Using TensorFlow backend.
Traceback (most recent call last):
  File "evaluation.py", line 276, in <module>
    main(sys.argv)
  File "evaluation.py", line 272, in main
    _main()
  File "evaluation.py", line 180, in _main
    batch_size = FLAGS.batch_size)
  File "C:\Users\jwang\OneDrive\Desktop\DroNet\rpg_public_dronet\utils.py", line 34, in flow_from_directory
    follow_links=follow_links)
  File "C:\Users\jwang\OneDrive\Desktop\DroNet\rpg_public_dronet\utils.py", line 88, in __init__
    for subdir in sorted(os.listdir(directory)):
FileNotFoundError: [WinError 3] The system cannot find the path specified: "'./collision_dataset/testing'"

I was wondering, did I misunderstand anything? What extra steps do I need to do? Thank you very much for any help.

Regards,
Jie

Problem about roslaunch full_perception_launch.launch

I am new to this, and when I run roslaunch full_perception_launch.launch I get the following:
myx@ubuntu:~/bebop_ws/dronet/dronet_perception/launch$ roslaunch full_perception_launch.launch
... logging to /home/myx/.ros/log/09365ad0-cd56-11e8-b9d6-000c29545061/roslaunch-ubuntu-5346.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

Traceback (most recent call last):
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/init.py", line 307, in main
p.start()
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/parent.py", line 268, in start
self._start_infrastructure()
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/parent.py", line 217, in _start_infrastructure
self._load_config()
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/parent.py", line 132, in _load_config
roslaunch_strs=self.roslaunch_strs, verbose=self.verbose)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/config.py", line 451, in load_config_default
loader.load(f, config, verbose=verbose)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 746, in load
self._load_launch(launch, ros_config, is_core=core, filename=filename, argv=argv, verbose=verbose)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 718, in _load_launch
self._recurse_load(ros_config, launch.childNodes, self.root_context, None, is_core, verbose)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 654, in _recurse_load
n = self._node_tag(tag, context, ros_config, default_machine, verbose=verbose)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 95, in call
return f(*args, **kwds)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 406, in _node_tag
self._param_tag(t, param_ns, ros_config, force_local=True, verbose=verbose)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 95, in call
return f(*args, **kwds)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 256, in _param_tag
vals = self.opt_attrs(tag, context, ('value', 'textfile', 'binfile', 'command'))
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 202, in opt_attrs
return [self.resolve_args(tag_value(tag,a), context) for a in attrs]
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 183, in resolve_args
return substitution_args.resolve_args(args, context=context.resolve_dict, resolve_anon=self.resolve_anon)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 316, in resolve_args
resolved = _resolve_args(resolved, context, resolve_anon, commands)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 329, in _resolve_args
resolved = commands[command](resolved, a, args, context)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 142, in _find
source_path_to_packages=source_path_to_packages)
File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 188, in _find_executable
full_path = _get_executable_path(rp.get_path(args[0]), path)
File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 203, in get_path
raise ResourceNotFound(name, ros_paths=self._ros_paths)
ResourceNotFound: dronet_perception
ROS path [0]=/opt/ros/indigo/share/ros
ROS path [1]=/home/myx/bebop_ws/src/bebop_autonomy/bebop_autonomy
ROS path [2]=/home/myx/bebop_ws/src/bebop_autonomy/bebop_description
ROS path [3]=/home/myx/bebop_ws/src/bebop_autonomy/bebop_msgs
ROS path [4]=/home/myx/bebop_ws/src/bebop_autonomy/bebop_driver
ROS path [5]=/home/myx/bebop_ws/src/bebop_autonomy/bebop_tools
ROS path [6]=/home/myx/rgbdslam_catkin_ws/src
ROS path [7]=/opt/ros/indigo/share
ROS path [8]=/opt/ros/indigo/stacks
Is there any way to solve this problem? Thanks!

How to run the evaluation.py?

Hi, all
I have organized my training, testing, and validation datasets and used time_stamp_matching.py to get the sync_steering.txt files.
The following was the file structure (screenshots omitted). The collision datasets have images/ and labels.txt; the steering datasets have images/ and sync_steering.txt:

  1. Root directory: (screenshot)
  2. Training: (screenshot)
  3. Testing: (screenshot)
  4. Validation: (screenshot)

When I run python evaluation.py, I get the error shown in the screenshot.

Could anyone help me? I would appreciate it.

Cheers.

configure_output_dir: not storing the git diff, probably because you're not in a git repo

$ python cnn.py --experiment_rootdir='/home/neu105/datafile/test/dronet/rpg_public_dronet-master/model/test_1' --train_dir='/home/neu105/datafile/test/dronet/collision_dataset/training' --val_dir='/home/neu105/datafile/test/dronet/collision_dataset/validation' --batch_size=16 --epochs=150 --log_rate=25


Total params: 321,634
Trainable params: 320,930
Non-trainable params: 704


None
configure_output_dir: not storing the git diff, probably because you're not in a git repo
Logging data to /home/neu105/datafile/test/dronet/rpg_public_dronet-master/model/test_1/log.txt
/home/neu105/anaconda2/envs/py3/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py:91: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
1.0
0.0
Epoch 1/150
Traceback (most recent call last):
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/site-packages/keras/utils/data_utils.py", line 555, in get
inputs = self.queue.get(block=True).get()
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/multiprocessing/pool.py", line 644, in get
raise self._value
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/site-packages/keras/utils/data_utils.py", line 392, in get_index
return _SHARED_SEQUENCES[uid][i]
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/site-packages/keras/preprocessing/image.py", line 800, in getitem
return self._get_batches_of_transformed_samples(index_array)
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/site-packages/keras/preprocessing/image.py", line 846, in _get_batches_of_transformed_samples
raise NotImplementedError
NotImplementedError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "cnn.py", line 176, in
main(sys.argv)
File "cnn.py", line 172, in main
_main()
File "cnn.py", line 161, in _main
trainModel(train_generator, val_generator, model, initial_epoch)
File "cnn.py", line 89, in trainModel
initial_epoch=initial_epoch)
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/site-packages/keras/engine/training.py", line 2145, in fit_generator
generator_output = next(output_generator)
File "/home/neu105/anaconda2/envs/py3/lib/python3.5/site-packages/keras/utils/data_utils.py", line 561, in get
six.raise_from(StopIteration(e), e)
File "", line 3, in raise_from
StopIteration

cannot find 'collision_dataset/testing/DSCN2571/sync_steering.txt'

(py3) n5@TitanX:~/test/dronet/rpg_public_dronet-master$ python evaluation.py --experiment_rootdir='./model' --weights_fname='model_weights.h5' --test_dir='/home/neu105/test/dronet/collision_dataset/testing'
Using TensorFlow backend.
Traceback (most recent call last):
File "evaluation.py", line 276, in
main(sys.argv)
File "evaluation.py", line 272, in main
_main()
File "evaluation.py", line 180, in _main
batch_size = FLAGS.batch_size)
File "/home/n5/test/dronet/rpg_public_dronet-master/utils.py", line 34, in flow_from_directory
follow_links=follow_links)
File "/home/n5/test/dronet/rpg_public_dronet-master/utils.py", line 105, in init
self._decode_experiment_dir(subpath)
File "/home/n5/test/dronet/rpg_public_dronet-master/utils.py", line 131, in _decode_experiment_dir
delimiter=',', skiprows=1)
File "/home/n5/anaconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 896, in loadtxt
fh = iter(open(fname, 'U'))
IOError: [Errno 2] No such file or directory: '/home/neu105/test/dronet/collision_dataset/testing/DSCN2571/sync_steering.txt'

How can I get the prediction of a single image?

I just want to see the prediction for a single image using the following code:

# Model reconstruction from JSON file
with open('model_struct.json', 'r') as f:
    model = model_from_json(f.read())

# Load weights into the new model
model.load_weights('best_weights.h5')

# Load the image file
parser = argparse.ArgumentParser(description='Image to test')
parser.add_argument('image', type=str, help='The image file to check')
args = parser.parse_args()

img = load_img(args.image)

# Predict
prediction = model.predict(img)

But it raises the following error:
ValueError: Error when checking : expected input_1 to have 4 dimensions, but got array with shape (720, 960, 3)
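A rough sketch of preprocessing that avoids this error is shown below: resize to 200×200, convert to grayscale, rescale to [0, 1], and add batch and channel dimensions. The exact cropping and normalization used by the repository's img_utils may differ, so treat this as an illustration only.

import cv2
import numpy as np
from keras.models import model_from_json

# Rebuild the model and load the published weights.
with open('model_struct.json', 'r') as f:
    model = model_from_json(f.read())
model.load_weights('best_weights.h5')

def preprocess(path, target_size=(200, 200)):
    # Grayscale, resize, rescale to [0, 1], and add batch/channel dimensions.
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img = cv2.resize(img, target_size)
    img = img.astype(np.float32) / 255.0
    return img[np.newaxis, :, :, np.newaxis]        # shape (1, 200, 200, 1)

# 'frame.jpg' is a placeholder path for whatever image you want to test.
steer, coll = model.predict(preprocess('frame.jpg'))
print('steering:', float(steer[0][0]), 'collision prob:', float(coll[0][0]))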

An issue when roslaunch dronet_launch.launch

Hi,
My name is Dong, nice to write to you. Well, I got a problem with dronet_launch.launch; to be exact, it is a problem with the utils.py file.
Here are the outcomes:
File "/home/leo/fubuki_ws/src/dronet/dronet_perception/src/Dronet/utils.py", line 14, in callback_img
image_type = data.encoding()
AttributeError: 'cv2.VideoCapture' object has no attribute 'encoding'
[dronet_perception-2] process has died [pid 15319, exit code 1, cmd /home/leo/fubuki_ws/src/dronet/dronet_perception/nodes/dronet_node.py cnn_predictions:=/cnn_out/predictions state_change:=bebop/state_change camera:=bebop/image_raw __name:=dronet_perception __log:=/home/leo/.ros/log/663c4db6-d7c9-11e8-9bed-7c67a289d2da/dronet_perception-2.log].
log file: /home/leo/.ros/log/663c4db6-d7c9-11e8-9bed-7c67a289d2da/dronet_perception-2*.log

Now here is the problem: I run this on my laptop, not on the drone. However, if I use cv2.imread() to import one picture and edit some lines like this:

imgpath = "/home/leo/photo1/1479425441182877835.jpg"

bridge = CvBridge()

def callback_img(data, target_size, crop_size, rootpath, save_img):
    '''
    try:
        image_type = data.encoding()
        img = bridge.imgmsg_to_cv2(data, image_type)
    except CvBridgeError, e:
        print e
    '''
    img = cv2.imread(imgpath)
    img = cv2.resize(img, target_size)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img = central_image_crop(img, crop_size[0], crop_size[1])

Then it works with this picture. I guess this is because of some version issue, but my opencv-python is 3.1.0, the same as stated in your readme.md. Could you please tell me what's wrong with this "encoding" in utils.py?
Thank you!

collision probability prediction

I downloaded your best model weights, the collision probability dataset, and the steering angle dataset.
Then I tested the model using your dataset.
But the model predicts a collision probability of 1 for almost the whole dataset.
I guess the best model is not well trained.
Can you check the model and dataset?

My test code:

from os.path import *
import cv2
from keras import backend as K
import utils
from common_flags import FLAGS
import tensorflow as tf
import keras
from keras.backend.tensorflow_backend import set_session
from PIL import Image
import numpy as np
import time

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.log_device_placement = True

sess = tf.Session(config=config)
set_session(sess)

json_path = "path/original_model/model_struct.json"
weights_path = "path/original_model/model_weights.h5"
loaded_json = open(json_path, 'r')
loaded_model = loaded_json.read()

model = keras.models.model_from_json(loaded_model)
model.load_weights(weights_path)

count = 1

while(True):
    # AirSim camera data get
    if count < 10:
        fname = 'C:/Users/user/Desktop/collision_dataset/training/DSCN2561/images/frame_0000' + str(count) + '.jpg'
    elif count < 100:
        fname = 'C:/Users/user/Desktop/collision_dataset/training/DSCN2561/images/frame_000' + str(count) + '.jpg'
    elif count < 1000:
        fname = 'C:/Users/user/Desktop/collision_dataset/training/DSCN2561/images/frame_00' + str(count) + '.jpg'
    cam_data = cv2.imread(fname)
    count += 1
    a = time.time()
    img = cv2.cvtColor(cam_data, cv2.COLOR_RGB2GRAY)
    img = np.resize(img, (200, 200))
    img_np = img.reshape((1, 200, 200, 1))
    outs1 = model.predict(img_np)
    print(outs1)
result

DroNet in the indoor environments

@antonilo Hi Antonilo,
I was reading your work, "DroNet: Learning to Fly by Driving", and trying to predict the steering angles and collision probabilities of indoor images. However, when I try to fly the UAV in AirSim, the UAV always gets stuck at a corner.

In addition, I was using the following code to predict a single image. The model and the best weights I used were from https://github.com/uzh-rpg/rpg_public_dronet. I tested the single image Himax_Dataset-master/test_2/frame_26.pgm from https://github.com/pulp-platform/pulp-dronet/tree/master/dataset.

It works really well on outdoor images, but it is hard to get correct results for indoor scenery.
I saw the video at https://www.youtube.com/watch?v=ow7aw9H4BcA&feature=youtu.be, in which you tested DroNet in your lab. Can it steer at the corners of your lab?

from PIL import Image
import utils
import numpy as np
import cv2


model = utils.jsonToModel("model/model_struct.json")
model.load_weights("model/best_weights.h5")
model.summary()


def central_image_crop(img, crop_width=150, crop_heigth=150):

    half_the_width = int(img.shape[1] / 2)
    img = img[img.shape[0] - crop_heigth: img.shape[0],
          half_the_width - int(crop_width / 2):
          half_the_width + int(crop_width / 2)]
    return img


def read_img():
    img = np.array(Image.open("./model/Himax_Dataset-master/test_2/frame_26.pgm"))  
    cv2.imshow('', img)
    cv2.waitKey(0)
    img_central = central_image_crop(img, 200, 200) 
    img_01 = np.asarray(img_central, dtype=np.float32) * np.float32(1.0/255.0) 
    img_3d = np.expand_dims(img_01, axis=0)  
    im_4d = img_3d[:, :, :, np.newaxis]    
    return im_4d



im = read_img()    
outs = model.predict([im])
steer, coll = outs[0][0], outs[1][0]
print("Steer angle= ", steer)
print("Collision prob= ", coll)

using ROS package with python 3

Hi,

In the README.md file inside the root directory, you specify python 3.4 as a requisite. I have used python 3 and successfully used the provided model+weights.
Following that, I moved on to the ROS implementation you provided. In the file drone_control/dronet/dronet_perception/src/Dronet/utils.py there is a clear requirement to use python 2 (the syntax will not work on python 3).
Can you confirm that you used python 3 to run the model + ROS thread? If this is indeed a mistake and python 2 should be used, maybe it should be updated in one of the readme files.

thanks!

Missing outdoor.yaml

Two of the launch files refer to a bebop_driver config file outdoor.yaml that is not currently in the repo:

<arg name="config_file" default="$(find bebop_driver)/config/outdoor.yaml" />

<arg name="config_file" default="$(find bebop_driver)/config/outdoor.yaml" />

If I change this to defaults.yaml it launches, but I would be curious what outdoor-specific settings you are using.

roslaunch bebop_launch.launch

Hi Antonio,

Thank you @antonilo very much for sharing your great work out.

I installed the necessary pieces following the instructions in "drone_control/dronet" until step 6, but at this step I encountered some problems.

When I run "roslaunch bebop_launch.launch" as a test, I get some issues: I can't find the cpp file containing these two nodelet nodes from "bebop_launch.launch":

<node pkg="nodelet" type="nodelet" name="bebop_nodelet"
          args="load bebop_driver/BebopDriverNodelet bebop_nodelet_manager">

How can I solve this problem?

Could you please specify the training strategy

Hi guys!
What a great job you did!
I am trying to change the structure of your model and need to retrain the network from scratch, so I would like to learn from your training strategy. I have read your paper, but I couldn't find how you obtained the final model. Could you please specify the training strategy: number of training epochs, when to change the learning rate, and so on?
Looking forward to your reply!
Best wishes!
yours, Wang

Evaluation results on the steering task

Hi @antonilo,
I have run the evaluation with your best model. However, I got an RMSE of 0.28 on the test set, which is different from the 0.109 RMSE reported in your paper.

To be more specific, I wonder if I have tested on a test set different from yours. In fact, for the steering task, I downloaded the test set as the HMB_3 release bag from this torrent (https://github.com/udacity/self-driving-car/tree/master/datasets/CH2#ch2_001). This test set has 5614 images and also includes the steering ground truth. Is it the same as the test set reported in the paper? Or could there be other problems or mismatches?

Thank you for your consideration.

image

building the project

Hi Antonio,

We're following your instructions, and when building dronet_control, we get the following:

yoni@yoni-ThinkPad-E470:~/bebop_ws/dronet/dronet_control$ catkin build --this

Profile: default
Extending: [cached] /opt/ros/kinetic
Workspace: /home/yoni/bebop_ws

Source Space: [exists] /home/yoni/bebop_ws/src
Log Space: [exists] /home/yoni/bebop_ws/logs
Build Space: [exists] /home/yoni/bebop_ws/build
Devel Space: [exists] /home/yoni/bebop_ws/devel
Install Space: [unused] /home/yoni/bebop_ws/install
DESTDIR: [unused] None

Devel Space Layout: linked
Install Space Layout: None

Additional CMake Args: None
Additional Make Args: None
Additional catkin Make Args: None
Internal Make Job Server: True
Cache Job Environments: False

Whitelisted Packages: None
Blacklisted Packages: None

Workspace configuration appears valid.

[build] Found '5' packages in 0.0 seconds.
[build] Given package 'dronet_control' is not in the workspace

Is /dronet/dronet_control supposed to be a package inside the main workspace src folder?

Thanks, Yoni

Running evaluation.py gives an error

When running evaluation.py it gives me this error:

from constants import TEST_PHASE
ImportError: cannot import name 'TEST_PHASE'

What do I do to troubleshoot this?

About 501 images from HMB_1 for validation experiment

Hi! I'm at the step right before training.
According to your instructions, we need to split HMB_1, and 501 images of it are used for validation. So I wonder how to select these 501 images. Should I just select the last 501 images or use a script to randomly select them?
