
Commit 0c6002d

Initial commit (0 parents)

File tree: 2,671 files changed, +983 -0 lines
.DS_Store (6 KB, binary file)

README.md

Lines changed: 124 additions & 0 deletions

# CarND Behavioral Cloning Project

The goals/steps of this project are the following:

* Use the simulator to collect data of good driving behavior
* Build a convolutional neural network in Keras that predicts steering angles from images
* Train and validate the model with a training and validation set
* Test that the model successfully drives around track one without leaving the road
* Summarize the results with a written report
### Files Submitted & Code Quality

#### 1. Submission includes all required files and can be used to run the simulator in autonomous mode

My project includes the following files:

* [model.py](model.py) containing the script to create and train the model
* [drive.py](drive.py) for driving the car in autonomous mode
* [model.h5](model-004.h5) containing a trained convolutional neural network (this model.h5 is model-004.h5, one of many model-xxx.h5 files produced by tweaking the model parameters and re-running ```python model.py```)
* [output text file 1 from running ```python model.py``` on an AWS EC2 GPU instance](output-text-file1)
* [output text file 2 from running ```python model.py```](output-text-file2)
* [output text file 3 from running ```python model.py```](output-text-file3)
* [video.py](video.py) for converting the image files to video
* [README.md](README.md) summarizing the results

#### 2. Submission includes functional code

Using the Udacity-provided simulator and my drive.py file, the car can be driven autonomously around the track by executing:

```python drive.py model.h5```

#### 3. Submission code is usable and readable

The [model.py](model.py) file contains the code for training and saving the convolutional neural network. The file shows the pipeline I used for training and validating the model, and it contains comments to explain how the code works.
### Model Architecture and Training Strategy

#### 1. An appropriate model architecture has been employed

[NVIDIA's End-to-End Deep Learning Model for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) is used. It consists of a convolutional neural network with 5x5 and 3x3 filter sizes and depths between 24 and 64 ([model.py](model.py) code lines 46-75).

The data is normalized in the model using a Keras lambda layer ([model.py](model.py) code line 67).

The model includes ELU layers to introduce nonlinearity ([model.py](model.py) code lines 70-80).
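
As a cross-check of this description and the layer table further down, here is a minimal Keras sketch of such a network. It is reconstructed from this README, not copied from model.py; the normalization formula and dropout rate are assumptions.

```python
# Minimal sketch of the NVIDIA-style network described in this README
# (reconstructed from the layer table below, not copied from model.py).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

INPUT_SHAPE = (66, 200, 3)  # matches the lambda_1 output shape in the table

model = Sequential([
    Lambda(lambda x: x / 127.5 - 1.0, input_shape=INPUT_SHAPE),  # normalization formula is an assumption
    Conv2D(24, (5, 5), strides=(2, 2), activation='elu'),
    Conv2D(36, (5, 5), strides=(2, 2), activation='elu'),
    Conv2D(48, (5, 5), strides=(2, 2), activation='elu'),
    Conv2D(64, (3, 3), activation='elu'),
    Conv2D(64, (3, 3), activation='elu'),
    Dropout(0.5),   # dropout rate is an assumption; the table only shows the layer
    Flatten(),
    Dense(100, activation='elu'),
    Dense(50, activation='elu'),
    Dense(10, activation='elu'),
    Dense(1),       # single steering-angle output
])
model.summary()   # output shapes and parameter counts match the table below
```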

#### 2. Attempts to reduce overfitting in the model

The model contains dropout layers in order to reduce overfitting ([model.py](model.py) code line 76).

The model was trained and validated on different data sets to ensure that the model was not overfitting ([model.py](model.py) code line 142).

The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track.
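
The model-000.h5 through model-027.h5 files in this commit suggest one saved model per epoch. Below is a minimal sketch of validating on a held-out set while checkpointing every epoch; it assumes a compiled `model` and numpy arrays `X_train`, `y_train`, `X_valid`, `y_valid`, and the epoch count and batch size are guesses, not values taken from model.py.

```python
# Sketch only: validate on a separate set and checkpoint every epoch.
# The filename pattern mirrors the model-000.h5 ... model-027.h5 files in this
# commit; epoch count, batch size, and variable names are assumptions.
from tensorflow.keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('model-{epoch:03d}.h5',
                             monitor='val_loss',
                             save_best_only=False)  # keep every epoch

model.fit(X_train, y_train,
          validation_data=(X_valid, y_valid),
          epochs=28,        # 28 checkpoint files exist in this repo
          batch_size=40,
          callbacks=[checkpoint])
```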

#### 3. Model parameter tuning

The model used an Adam optimizer with a learning rate of 0.0001 ([model.py](model.py) code line 110 and line 148).
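
A minimal sketch of the optimizer setup implied above, assuming the `model` from the architecture sketch earlier; the MSE loss matches the mean-squared-error evaluation discussed below, but the exact call is not copied from model.py.

```python
# Sketch only: Adam at the 0.0001 learning rate stated above, with MSE loss.
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=1.0e-4), loss='mse')
```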
#### 4. Appropriate training data

I used [Udacity's SDC-ND Sample Training Data](https://d17h27t6h515a5.cloudfront.net/topher/2016/December/584f6edd_data/data.zip) for training [the model](model.py).
### Model Architecture and Training Strategy

#### 1. Solution Design Approach

I started off with a simple CNN model.

In order to gauge how well the model was working, I split my image and steering angle data into a training and validation set. I found that my first model had a low mean squared error on the training set but a high mean squared error on the validation set. This implied that the model was overfitting.

To combat the overfitting, I modified the model, adding dropout so that it generalizes better.
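
A minimal sketch of such a split, assuming the standard driving_log.csv layout of the Udacity sample data (extracted to ./data, header row intact) and the 90%/10% split mentioned below; the file path, column names, and helper names are assumptions.

```python
# Sketch only: load the sample driving log and hold out 10% for validation.
import pandas as pd
from sklearn.model_selection import train_test_split

log = pd.read_csv('data/driving_log.csv')

X = log[['center', 'left', 'right']].values  # image file paths
y = log['steering'].values                   # steering angles

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.1, random_state=0)     # 90% train / 10% validation
```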

The final step was to run the simulator to see how well the car was driving around track one. There were a few spots where the vehicle fell off the track, slammed into the bridge and stopped, or hit a tree and stopped. To improve the driving behavior in these cases, I implemented [NVIDIA's End-to-End Deep Learning Model for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/).

The final behavior came after lots of trial and error: tweaking the training set (from 80% to 90%) and the validation set (from 20% to 10%), and changing the batch size and learning rate.
#### 2. Final Model Architecture

Finally, [NVIDIA's End-to-End Deep Learning Model for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) was used.

The final model architecture consists of a convolutional neural network with 5x5 and 3x3 filter sizes and depths between 24 and 64 ([model.py](model.py) code lines 46-75).

The data is normalized in the model using a Keras lambda layer ([model.py](model.py) code line 67).

The model includes ELU layers to introduce nonlinearity ([model.py](model.py) code lines 70-80).

The model contains dropout layers in order to reduce overfitting ([model.py](model.py) code line 76).

The model was trained and validated on different data sets to ensure that the model was not overfitting ([model.py](model.py) code line 142).

Here is a visualization of the architecture (note: visualizing the architecture is optional according to the project rubric):

| Layer (type) | Output Shape | Param # | Connected to |
|--------------|--------------|---------|--------------|
| lambda_1 (Lambda) | (None, 66, 200, 3) | 0 | lambda_input_1[0][0] |
| convolution2d_1 (Convolution2D) | (None, 31, 98, 24) | 1824 | lambda_1[0][0] |
| convolution2d_2 (Convolution2D) | (None, 14, 47, 36) | 21636 | convolution2d_1[0][0] |
| convolution2d_3 (Convolution2D) | (None, 5, 22, 48) | 43248 | convolution2d_2[0][0] |
| convolution2d_4 (Convolution2D) | (None, 3, 20, 64) | 27712 | convolution2d_3[0][0] |
| convolution2d_5 (Convolution2D) | (None, 1, 18, 64) | 36928 | convolution2d_4[0][0] |
| dropout_1 (Dropout) | (None, 1, 18, 64) | 0 | convolution2d_5[0][0] |
| flatten_1 (Flatten) | (None, 1152) | 0 | dropout_1[0][0] |
| dense_1 (Dense) | (None, 100) | 115300 | flatten_1[0][0] |
| dense_2 (Dense) | (None, 50) | 5050 | dense_1[0][0] |
| dense_3 (Dense) | (None, 10) | 510 | dense_2[0][0] |
| dense_4 (Dense) | (None, 1) | 11 | dense_3[0][0] |
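
Summing the Param # column, the network has 252,219 trainable parameters in total.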
#### 3. Creation of the Training Set & Training Process

#### 4. Final Video

For recording or saving the images for the video in folder [rn1](rn1):

```python drive.py model-004.h5 rn1```

For taking the recorded or saved images and making the video rn1.mp4 at 60 frames per second (the default):

```python video.py rn1``` outputs [video (YouTube)](https://youtu.be/gvwRCXzHGGs) / [video in the repo](Video-60fps.mp4)

For taking the recorded or saved images and making the video rn1.mp4 at 40 frames per second:

```python video.py rn1 --fps 40``` outputs [video (YouTube)](https://youtu.be/lEZAF99rWQI) / [video in the repo](Video-40fps.mp4)

Video-40fps.mp4 (8 MB, binary file)

Video-60fps.mp4 (7.14 MB, binary file)

drive.py

Lines changed: 142 additions & 0 deletions
#parsing command line arguments
import argparse
#decoding camera images
import base64
#for frame timestamp saving
from datetime import datetime
#reading and writing files
import os
#high level file operations
import shutil
#matrix math
import numpy as np
#real-time server
import socketio
#concurrent networking
import eventlet
#web server gateway interface
import eventlet.wsgi
#image manipulation
from PIL import Image
#web framework
from flask import Flask
#input output
from io import BytesIO

#load our saved model
from keras.models import load_model

#helper class
import utils

#initialize our server
sio = socketio.Server()
#our flask (web) app
app = Flask(__name__)
#init our model and image array as empty
model = None
prev_image_array = None

#set min/max speed for our autonomous car
MAX_SPEED = 25
MIN_SPEED = 10

#and a speed limit
speed_limit = MAX_SPEED

#registering event handler for the server
@sio.on('telemetry')
def telemetry(sid, data):
    if data:
        # The current steering angle of the car
        steering_angle = float(data["steering_angle"])
        # The current throttle of the car, how hard to push the pedal
        throttle = float(data["throttle"])
        # The current speed of the car
        speed = float(data["speed"])
        # The current image from the center camera of the car
        image = Image.open(BytesIO(base64.b64decode(data["image"])))
        try:
            image2 = np.asarray(image)         # from PIL image to numpy array
            image2 = utils.preprocess(image2)  # apply the preprocessing
            #image = np.array([image2])        # the model expects a 4D array
            image2 = image2[None, :, :, :]

            # predict the steering angle for the image
            steering_angle = float(model.predict(image2, batch_size=1))
            # lower the throttle as the speed increases
            # if the speed is above the current speed limit, we are on a downhill.
            # make sure we slow down first and then go back to the original max speed.
            global speed_limit
            if speed > speed_limit:
                speed_limit = MIN_SPEED  # slow down
            else:
                speed_limit = MAX_SPEED
            throttle = 1.0 - steering_angle**2 - (speed/speed_limit)**2

            print('{} {} {}'.format(steering_angle, throttle, speed))
            send_control(steering_angle, throttle)
        except Exception as e:
            print(e)

        # save frame
        if args.image_folder != '':
            timestamp = datetime.utcnow().strftime('%Y_%m_%d_%H_%M_%S_%f')[:-3]
            image_filename = os.path.join(args.image_folder, timestamp)
            image.save('{}.jpg'.format(image_filename))
    else:
        sio.emit('manual', data={}, skip_sid=True)


@sio.on('connect')
def connect(sid, environ):
    print("connect ", sid)
    send_control(0, 0)


def send_control(steering_angle, throttle):
    sio.emit(
        "steer",
        data={
            'steering_angle': steering_angle.__str__(),
            'throttle': throttle.__str__()
        },
        skip_sid=True)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Remote Driving')
    parser.add_argument(
        'model',
        type=str,
        help='Path to model h5 file. Model should be on the same path.'
    )
    parser.add_argument(
        'image_folder',
        type=str,
        nargs='?',
        default='',
        help='Path to image folder. This is where the images from the run will be saved.'
    )
    args = parser.parse_args()

    #load model
    model = load_model(args.model)

    if args.image_folder != '':
        print("Creating image folder at {}".format(args.image_folder))
        if not os.path.exists(args.image_folder):
            os.makedirs(args.image_folder)
        else:
            shutil.rmtree(args.image_folder)
            os.makedirs(args.image_folder)
        print("RECORDING THIS RUN ...")
    else:
        print("NOT RECORDING THIS RUN ...")

    # wrap Flask application with engineio's middleware
    app = socketio.Middleware(sio, app)

    # deploy as an eventlet WSGI server
    eventlet.wsgi.server(eventlet.listen(('', 4567)), app)

model-000.h5 (2.93 MB, binary file)
model-001.h5 (2.93 MB, binary file)
model-002.h5 (2.93 MB, binary file)
model-003.h5 (2.93 MB, binary file)
model-004.h5 (2.93 MB, binary file)
model-005.h5 (2.93 MB, binary file)
model-006.h5 (2.93 MB, binary file)
model-007.h5 (2.93 MB, binary file)
model-008.h5 (2.93 MB, binary file)
model-009.h5 (2.93 MB, binary file)
model-010.h5 (2.93 MB, binary file)
model-011.h5 (2.93 MB, binary file)
model-012.h5 (2.93 MB, binary file)
model-013.h5 (2.93 MB, binary file)
model-014.h5 (2.93 MB, binary file)
model-015.h5 (2.93 MB, binary file)
model-016.h5 (2.93 MB, binary file)
model-017.h5 (2.93 MB, binary file)
model-018.h5 (2.93 MB, binary file)
model-019.h5 (2.93 MB, binary file)
model-020.h5 (2.93 MB, binary file)
model-021.h5 (2.93 MB, binary file)
model-022.h5 (2.93 MB, binary file)
model-023.h5 (2.93 MB, binary file)
model-024.h5 (2.93 MB, binary file)
model-025.h5 (2.93 MB, binary file)
model-026.h5 (2.93 MB, binary file)
model-027.h5 (2.93 MB, binary file)
