# **Behavioral Cloning**

## Writeup Template

### You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer.

---

**Behavioral Cloning Project**

The goals / steps of this project are the following:
* Use the simulator to collect data of good driving behavior
* Build a convolutional neural network in Keras that predicts steering angles from images
* Train and validate the model with a training and validation set
* Test that the model successfully drives around track one without leaving the road
* Summarize the results with a written report


[//]: # (Image References)

[image1]: ./examples/placeholder.png "Model Visualization"
[image2]: ./examples/placeholder.png "Grayscaling"
[image3]: ./examples/placeholder_small.png "Recovery Image"
[image4]: ./examples/placeholder_small.png "Recovery Image"
[image5]: ./examples/placeholder_small.png "Recovery Image"
[image6]: ./examples/placeholder_small.png "Normal Image"
[image7]: ./examples/placeholder_small.png "Flipped Image"

## Rubric Points

### Here I will consider the [rubric points](https://review.udacity.com/#!/rubrics/432/view) individually and describe how I addressed each point in my implementation.

---

### Files Submitted & Code Quality

#### 1. Submission includes all required files and can be used to run the simulator in autonomous mode

My project includes the following files:
* model.py containing the script to create and train the model
* drive.py for driving the car in autonomous mode
* model.h5 containing a trained convolutional neural network
* writeup_report.md or writeup_report.pdf summarizing the results

#### 2. Submission includes functional code
Using the Udacity-provided simulator and my drive.py file, the car can be driven autonomously around the track by executing
```sh
python drive.py model.h5
```

#### 3. Submission code is usable and readable

The model.py file contains the code for training and saving the convolutional neural network. The file shows the pipeline I used for training and validating the model, and it contains comments to explain how the code works.
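The first stage of such a pipeline is reading the simulator's driving log. The sketch below is not the code from model.py itself, but a minimal illustration of that loading step, assuming the standard simulator log layout (center, left, and right image paths in the first three columns, steering angle in the fourth); the file names and helper name are illustrative.

```python
import csv
import io

def load_samples(log_file):
    # Parse each log row into (center image path, steering angle).
    # Columns assumed: center, left, right, steering, throttle, brake, speed.
    samples = []
    for row in csv.reader(log_file):
        center_path = row[0].strip()
        steering = float(row[3])
        samples.append((center_path, steering))
    return samples

# Example with an in-memory log standing in for driving_log.csv:
fake_log = io.StringIO(
    "IMG/center_1.jpg,IMG/left_1.jpg,IMG/right_1.jpg,0.05,0.9,0.0,30.1\n"
    "IMG/center_2.jpg,IMG/left_2.jpg,IMG/right_2.jpg,-0.10,0.9,0.0,29.8\n"
)
samples = load_samples(fake_log)
print(samples)  # [('IMG/center_1.jpg', 0.05), ('IMG/center_2.jpg', -0.1)]
```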

### Model Architecture and Training Strategy

#### 1. An appropriate model architecture has been employed

My model consists of a convolutional neural network with 3x3 filter sizes and depths between 32 and 128 (model.py lines 18-24).

The model includes ReLU layers to introduce nonlinearity (code line 20), and the data is normalized in the model using a Keras lambda layer (code line 18).
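A Keras lambda layer used for this kind of in-model normalization typically applies a fixed pixel-wise transform such as `Lambda(lambda x: x / 255.0 - 0.5)`. The exact transform in model.py is not shown here; as a sketch, the equivalent arithmetic in NumPy is:

```python
import numpy as np

def normalize(pixels):
    # Map pixel values from [0, 255] to [-0.5, 0.5], the transform a
    # Keras Lambda layer commonly applies for this project (assumed,
    # not taken from model.py).
    return pixels / 255.0 - 0.5

img = np.array([0.0, 127.5, 255.0])
print(normalize(img))  # -0.5, 0.0, 0.5
```

Centering the inputs around zero like this tends to make gradient descent better conditioned than raw [0, 255] pixel values.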

#### 2. Attempts to reduce overfitting in the model

The model contains dropout layers in order to reduce overfitting (model.py line 21).
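Keras applies dropout only at training time. The mechanism can be illustrated with inverted dropout in NumPy; this is an illustration of the technique, not code from model.py:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, rate=0.5):
    # Inverted dropout: zero a random fraction `rate` of activations
    # and scale the survivors by 1/(1-rate) so the expected activation
    # is unchanged. Keras' Dropout layer behaves like this during
    # training and acts as the identity at inference time.
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

out = dropout(np.ones(8))
print(out)  # each entry is either 0.0 or 2.0
```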

The model was trained and validated on different data sets to ensure that the model was not overfitting (code lines 10-16). The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track.

#### 3. Model parameter tuning

The model used an Adam optimizer, so the learning rate was not tuned manually (model.py line 25).

#### 4. Appropriate training data

Training data was chosen to keep the vehicle driving on the road. I used a combination of center lane driving, recovering from the left and right sides of the road ...

For details about how I created the training data, see the next section.

### Model Architecture and Training Strategy

#### 1. Solution Design Approach

The overall strategy for deriving a model architecture was to ...

My first step was to use a convolutional neural network model similar to the ... I thought this model might be appropriate because ...

In order to gauge how well the model was working, I split my image and steering angle data into a training and validation set. I found that my first model had a low mean squared error on the training set but a high mean squared error on the validation set. This implied that the model was overfitting.

To combat the overfitting, I modified the model so that ...

Then I ...

The final step was to run the simulator to see how well the car was driving around track one. There were a few spots where the vehicle fell off the track ... To improve the driving behavior in these cases, I ...

At the end of the process, the vehicle is able to drive autonomously around the track without leaving the road.

#### 2. Final Model Architecture

The final model architecture (model.py lines 18-24) consisted of a convolutional neural network with the following layers and layer sizes ...

Here is a visualization of the architecture (note: visualizing the architecture is optional according to the project rubric):

![alt text][image1]

#### 3. Creation of the Training Set & Training Process

To capture good driving behavior, I first recorded two laps on track one using center lane driving. Here is an example image of center lane driving:

![alt text][image2]

I then recorded the vehicle recovering from the left and right sides of the road back to center so that the vehicle would learn to .... These images show what a recovery looks like starting from ...:

![alt text][image3]
![alt text][image4]
![alt text][image5]

Then I repeated this process on track two in order to get more data points.

To augment the data set, I also flipped images and angles, thinking that this would ... For example, here is an image that has then been flipped:

![alt text][image6]
![alt text][image7]
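Flipping a frame horizontally and negating its steering angle is the usual way this augmentation is done, since it balances out any left-turn bias in the track. A sketch of that step (the helper name is illustrative):

```python
import numpy as np

def flip_sample(image, angle):
    # Mirror the image left-right and negate the steering angle so the
    # flipped frame remains a consistent (image, label) pair.
    return np.fliplr(image), -angle

image = np.array([[[10], [20], [30]]])  # a 1x3 "image" with one channel
flipped, flipped_angle = flip_sample(image, 0.25)
print(flipped[0, :, 0])  # [30 20 10]
print(flipped_angle)     # -0.25
```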

Etc ....

After the collection process, I had X number of data points. I then preprocessed this data by ...

I finally randomly shuffled the data set and put Y% of the data into a validation set.
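The shuffle-and-split step can be sketched as follows; the 20% holdout below is an illustrative value, not necessarily the Y% used here:

```python
import random

def shuffle_and_split(samples, validation_fraction=0.2, seed=0):
    # Shuffle the samples, then hold out a fraction for validation.
    # The default fraction is illustrative, not the project's value.
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_val = int(len(samples) * validation_fraction)
    return samples[n_val:], samples[:n_val]

train, val = shuffle_and_split(range(100))
print(len(train), len(val))  # 80 20
```

Shuffling before splitting matters here because the simulator log is ordered in time, so an unshuffled split would put whole stretches of track only in one set.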

I used this training data for training the model. The validation set helped determine if the model was overfitting or underfitting. The ideal number of epochs was Z, as evidenced by ... I used an Adam optimizer so that manually tuning the learning rate wasn't necessary.