Behavioral cloning project from the Udacity Self-Driving Car course
Lake Track: demo video (YouTube Link)
Behavioral Cloning Project
The goals / steps of this project are the following:

* Use the simulator to collect data of good driving behavior
* Build a convolutional neural network that predicts steering angles from camera images
* Train and validate the model with a training and validation set
* Test that the model successfully drives around the track without leaving the road
My project includes the following files:

* model.py containing the script to create and train the model
* drive.py for driving the car in autonomous mode
* utils.py containing the preprocessing operations (e.g. resizing, augmentation)
* model.h5 containing a trained convolutional neural network
* writeup_report.md summarizing the results

Using the Udacity-provided simulator and my drive.py file, the car can be driven autonomously around the track by executing:

```sh
python drive.py model.h5
```
The model.py file contains the code for training and saving the convolutional neural network. The file shows the pipeline I used for training and validating the model, and it contains comments that explain how the code works.
The network is based on the NVIDIA model, which has been proven to work in this problem domain. The only change is an added dropout layer.
The figure shows a block diagram of our training system. Images are fed into a CNN that computes a proposed steering command. The proposed command is compared to the desired command for that image, and the weights of the CNN are adjusted to bring the CNN output closer to the desired output. Unlike the original NVIDIA pipeline, we apply different preprocessing operations rather than random shift and rotation; the preprocessing steps are explained later.
After training, the network is able to generate steering commands from the video images of a single center camera.
Layer (type) | Output Shape | Params | Connected to |
---|---|---|---|
lambda_1 (Lambda) | (None, 66, 200, 3) | 0 | lambda_input_1 |
convolution2d_1 | (None, 31, 98, 24) | 1824 | lambda_1 |
convolution2d_2 | (None, 14, 47, 36) | 21636 | convolution2d_1 |
convolution2d_3 | (None, 5, 22, 48) | 43248 | convolution2d_2 |
convolution2d_4 | (None, 3, 20, 64) | 27712 | convolution2d_3 |
convolution2d_5 | (None, 1, 18, 64) | 36928 | convolution2d_4 |
dropout_1 (Dropout) | (None, 1, 18, 64) | 0 | convolution2d_5 |
flatten_1 (Flatten) | (None, 1152) | 0 | dropout_1 |
dense_1 (Dense) | (None, 100) | 115300 | flatten_1 |
dense_2 (Dense) | (None, 50) | 5050 | dense_1 |
dense_3 (Dense) | (None, 10) | 510 | dense_2 |
dense_4 (Dense) | (None, 1) | 11 | dense_3 |
Total params: 252,219
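The architecture above can be sketched in Keras as follows. This is a minimal reconstruction from the layer table, not the exact code in model.py: the normalization formula inside the lambda layer, the ELU activations, and the 0.5 dropout rate are assumptions; the kernel sizes and strides are chosen to reproduce the output shapes and parameter counts listed above.

```python
# Sketch of the NVIDIA-style network described in the table above
# (TensorFlow/Keras assumed). Normalization, ELU activations, and the
# dropout rate are illustrative assumptions, not taken from model.py.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

def build_model(drop_rate=0.5):
    model = Sequential([
        # Normalize pixel values to [-1, 1] (assumed formula)
        Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),
        Conv2D(24, (5, 5), strides=(2, 2), activation='elu'),  # -> (31, 98, 24)
        Conv2D(36, (5, 5), strides=(2, 2), activation='elu'),  # -> (14, 47, 36)
        Conv2D(48, (5, 5), strides=(2, 2), activation='elu'),  # -> (5, 22, 48)
        Conv2D(64, (3, 3), activation='elu'),                  # -> (3, 20, 64)
        Conv2D(64, (3, 3), activation='elu'),                  # -> (1, 18, 64)
        Dropout(drop_rate),     # the one change relative to the NVIDIA model
        Flatten(),              # -> 1152
        Dense(100, activation='elu'),
        Dense(50, activation='elu'),
        Dense(10, activation='elu'),
        Dense(1),               # single steering-angle output
    ])
    return model
```

With these shapes, the parameter count matches the 252,219 total in the table.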
The model contains a dropout layer in order to reduce overfitting (model.py line 67).
The model was trained and validated on different data sets to ensure that the model was not overfitting. We also use data augmentation to prevent overfitting.
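The train/validation split can be sketched as below. This is an illustrative example, assuming scikit-learn; the 20% validation fraction and the stand-in arrays are not taken from model.py.

```python
# Hedged sketch of splitting the data into training and validation sets
# (scikit-learn assumed; the 20% split fraction is illustrative).
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-ins for image paths (X) and steering angles (y)
X = np.arange(100).reshape(-1, 1)
y = np.zeros(100)

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=0)
```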
The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track.
The model used an Adam optimizer, so the learning rate was not tuned manually (model.py line 84).
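The compile step can be sketched as follows. The tiny stand-in model and the mean-squared-error loss are illustrative assumptions (MSE is the usual regression loss for steering angles), not quoted from model.py.

```python
# Illustrative compile step (Keras assumed): Adam adapts its step sizes,
# so no manual learning-rate tuning is needed. The one-layer model here
# is a stand-in for the network defined in model.py.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([Dense(1, input_shape=(3,))])
model.compile(loss='mse', optimizer=Adam())
```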
Training data was chosen to keep the vehicle driving on the road. I used a combination of center-lane driving, recovering from the left and right sides of the road, and data collected from the other track in the simulator.
One important point: if you collect only center-lane images, the car will likely not know how to steer back to the center once it starts drifting off the road. To handle this problem, we need to collect recovery data.
The approach I used here is to record data only while the car is driving from the side of the road back toward the center line.
When you drive with the keyboard, the steering angle is zero most of the time; when you drive with the mouse, it mostly takes non-zero values.
I am not sure whether this affects the result.
utils.py includes the preprocessing and data augmentation functions. We feed the model with an image and its corresponding steering angle.
These are the preprocessing steps:
We use this function before both training and inference.
To get more data and prevent overfitting, we use data augmentation. We apply several different methods and use all of them to train the model.
These are the data that I am using for training.
I used flipping for the last one because the track mostly turns left. To handle this problem, I added flipped images to balance the data.
You could apply flipping at a different step; I don't think it affects the result much.
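The flip augmentation can be sketched as below (NumPy assumed; the 50% flip probability and the function name are illustrative). The key point is that mirroring the image also mirrors the turn, so the steering angle's sign must be inverted to match.

```python
# Hedged sketch of horizontal-flip augmentation (NumPy assumed).
# Flipping the image mirrors the turn direction, so the steering
# angle is negated to keep image and label consistent.
import numpy as np

def random_flip(image, steering_angle, rng=np.random):
    if rng.rand() < 0.5:
        image = np.fliplr(image)          # mirror the image left-right
        steering_angle = -steering_angle  # invert steering to match
    return image, steering_angle
```

Adding these mirrored pairs counteracts the track's left-turn bias without collecting more data.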
The model can drive the course without leaving the road or hitting the sides.