
Commit 3dd96c4

update README.md
1 parent 29cfca3 commit 3dd96c4

19 files changed (+230, -207 lines)

README.md

+54, -14
@@ -1,17 +1,57 @@
-
[//]: # (Image References)
-[real0000]: ./examples/real0000.png
+[i3738]: ./examples/frame_003738.jpg
+[aloss]: ./examples/TotLoss.png
+
+# Traffic Light Detection
+
+Implemented with the TensorFlow Object Detection API.
+
+Tested on the LaRA dataset.
+
+Model inference example:
+
+![alt-text][i3738]
+
+Check out the rendered video on
+[YouTube](https://youtu.be/BcPy9m__bY4) or
+[BaiduPan](https://pan.baidu.com/s/1slwWdBJ)
+
+
+## LaRA Traffic Lights Recognition (TLR) Public Benchmarks
+
+On-board vehicle acquisition in a dense urban environment:

-# Traffic Light Detection and Classification with TensorFlow Object Detection API
+- 11179 frames (8 min 49 sec, @25 FPS)
+- 640×480 (RGB, 8 bits)
+- Paris (France)

-The project is forked from https://github.com/coldKnight/TrafficLight_Detection-TensorFlowAPI.git
+Links:

-A brief introduction to the project is available [here](https://medium.com/@Vatsal410/traffic-light-detection-tensorflow-api-c75fdbadac62)
+- [Dataset download](http://s150102174.onlinehome.fr/Lara/files/Lara_UrbanSeq1_JPG.zip)

+- [Ground truth labels](http://s150102174.onlinehome.fr/Lara/files/Lara_UrbanSeq1_GroundTruth_GT.txt)

-### Get the dataset
+- [Detailed dataset description](http://www.lara.prd.fr/benchmarks/trafficlightsrecognition)

-[Drive location](https://drive.google.com/file/d/0B-Eiyn-CUQtxdUZWMkFfQzdObUE/view?usp=sharing)
+To make TFRecord files for TensorFlow training, read [lara/README.md](lara/README.md)
+
+
+## Performance
+
+Results of an informal test on 592 unseen images:
+
+- Model = SSD MobileNet, pre-trained on COCO
+- Inference time per image = 9 ms
+- Green light AP@0.5IOU = 0.385
+- Red light AP@0.5IOU = 0.725
+- Yellow light AP@0.5IOU = 0.385
+- Precision mAP@0.5IOU = 0.620
+
+*Running on a Tesla P40 GPU*
+
+Training total loss:
+
+![alt-text][aloss]


### Get the tensorflow models lib
@@ -32,9 +72,8 @@ Download the required model tar.gz files and untar them into `models/` directory

`python data_conversion.py --input_yaml lara/annotations_test.yaml --output_path lara/test.record`

-## Using Faster-RCNN / Inception SSD v2 / MobileNet SSD v1 model

-#### Training, Evaluating, and Tensorboarding
+### Training, Evaluating, and Tensorboarding

`sh train.sh <faster_rcnn | ssd_inception | ssd_mobilenet>`
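The `data_conversion.py` step above turns a YAML annotation file into a TFRecord. The script itself is not part of this diff; a minimal sketch of such a converter, built on the Object Detection API's `dataset_util` helpers, could look like the block below. The YAML layout (`filename` plus `annotations` entries with `class`, `xmin`, `ymin`, `x_width`, `y_height`) and the label map are assumptions, not the repository's documented schema.

```python
# Sketch only: assumes each YAML entry looks like
#   {filename: ..., annotations: [{class: Green, xmin: .., ymin: .., x_width: .., y_height: ..}, ...]}
# and that images are the 640x480 LaRA frames.
import yaml
import tensorflow as tf
from object_detection.utils import dataset_util

LABEL_MAP = {'Red': 1, 'Yellow': 2, 'Green': 3}  # must agree with the label_map .pbtxt

def create_tf_example(entry, width=640, height=480):
    with tf.gfile.GFile(entry['filename'], 'rb') as f:
        encoded_jpg = f.read()
    xmins, xmaxs, ymins, ymaxs, classes_text, classes = [], [], [], [], [], []
    for box in entry['annotations']:
        xmins.append(float(box['xmin']) / width)
        xmaxs.append(float(box['xmin'] + box['x_width']) / width)
        ymins.append(float(box['ymin']) / height)
        ymaxs.append(float(box['ymin'] + box['y_height']) / height)
        classes_text.append(box['class'].encode('utf8'))
        classes.append(LABEL_MAP[box['class']])
    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(entry['filename'].encode('utf8')),
        'image/source_id': dataset_util.bytes_feature(entry['filename'].encode('utf8')),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(b'jpg'),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))

writer = tf.python_io.TFRecordWriter('lara/test.record')
for entry in yaml.safe_load(open('lara/annotations_test.yaml')):
    writer.write(create_tf_example(entry).SerializeToString())
writer.close()
```

Whatever the exact schema, the class ids written here must match the label map referenced by the training pipeline config.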

@@ -44,13 +83,14 @@ Download the required model tar.gz files and untar them into `models/` directory

*Note: you'd better not run training and evaluation together, because they will use up GPU memory.*

-#### Saving Weights for Inference
+
+### Saving Weights for Inference

`sh freeze.sh <faster_rcnn | ssd_inception | ssd_mobilenet> <model checkpoint version num>`
----


-**Inference results can be viewed using the TrafficLightDetection-Inference.ipynb or .html files.**
+### Infer Results, Visualize, and Make Video
+Using `TrafficLightDetection-Inference.ipynb`
+

-### Camera Image and Model's Detection Sample
-![alt-text][real0000]
+###
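The notebook's diff below is not rendered, but the core inference it runs against the frozen graph is standard TF 1.x Object Detection API usage. A minimal sketch, assuming a graph exported by `freeze.sh` (the `PATH_TO_GRAPH` below is a placeholder, not the script's actual output path):

```python
# Minimal sketch: run a frozen detector graph over one LaRA frame.
# PATH_TO_GRAPH is an assumed location; freeze.sh's real output path may differ.
import numpy as np
import tensorflow as tf
from PIL import Image

PATH_TO_GRAPH = 'frozen_models/ssd_mobilenet/frozen_inference_graph.pb'

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=detection_graph) as sess:
    image = np.asarray(Image.open('examples/frame_003738.jpg'))
    boxes, scores, classes = sess.run(
        [detection_graph.get_tensor_by_name('detection_boxes:0'),
         detection_graph.get_tensor_by_name('detection_scores:0'),
         detection_graph.get_tensor_by_name('detection_classes:0')],
        feed_dict={detection_graph.get_tensor_by_name('image_tensor:0'): image[None, ...]})
    # Boxes are [ymin, xmin, ymax, xmax] in normalized coordinates; keep confident ones.
    keep = scores[0] > 0.5
    print(list(zip(classes[0][keep].astype(int), scores[0][keep], boxes[0][keep])))
```

The per-frame visualizations can then be stitched into a video, for example with OpenCV's `VideoWriter` or ffmpeg.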

TrafficLightDetection-Inference.ipynb

+152, -193 (large diff not rendered)

examples/TotLoss.png (59 KB)
examples/frame_000890.jpg (45.1 KB)
examples/frame_000961.jpg (45 KB)
examples/frame_003738.jpg (40.2 KB)
examples/left0000.jpg (-315 KB)
examples/left0003.jpg (-72.8 KB)
examples/left0011.jpg (-73.3 KB)
examples/left0027.jpg (-73.1 KB)
examples/left0140.jpg (-315 KB)
examples/left0701.jpg (-352 KB)
examples/real0000.png (-333 KB)
examples/real0140.png (-324 KB)
examples/real0701.png (-346 KB)
examples/sim0003.png (-138 KB)
examples/sim0011.png (-138 KB)
examples/sim0027.png (-144 KB)

lara/README.md

+24
@@ -0,0 +1,24 @@
+# LaRA Traffic Lights Recognition (TLR) public benchmarks
+
+On-board vehicle acquisition in a dense urban environment:
+
+- 11179 frames (8 min 49 sec, @25 FPS)
+- 640×480 (RGB, 8 bits)
+- Paris (France)
+
+Links:
+
+- [Dataset download](http://s150102174.onlinehome.fr/Lara/files/Lara_UrbanSeq1_JPG.zip)
+
+- [Ground truth labels](http://s150102174.onlinehome.fr/Lara/files/Lara_UrbanSeq1_GroundTruth_GT.txt)
+
+- [Detailed dataset description](http://www.lara.prd.fr/benchmarks/trafficlightsrecognition)
+
+
+## Make TFRecord files for TensorFlow model training
+
+`cd` to `./lara`
+
+Manually preprocess `Lara_UrbanSeq1_GroundTruth_GT.txt` into `ground_truth.txt`
+
+Run `python to_annotations.py` to split the data into training and test sets and generate annotation files in YAML format.
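For illustration, a split-and-emit-YAML script in the spirit of `to_annotations.py` might look like the sketch below. Since the actual script and the preprocessed `ground_truth.txt` format are not shown in this commit, the assumed column layout (`frame x1 y1 x2 y2 label`), the frame filename pattern, the output names, and the 90/10 split are all assumptions.

```python
# Illustrative only: the ground_truth.txt layout (frame x1 y1 x2 y2 label per line),
# the frame filename pattern, and the 90/10 split are assumptions.
import random
from collections import defaultdict
import yaml

boxes_per_frame = defaultdict(list)
with open('ground_truth.txt') as f:
    for line in f:
        frame, x1, y1, x2, y2, label = line.split()[:6]
        boxes_per_frame[int(frame)].append({
            'class': label,
            'xmin': int(x1), 'ymin': int(y1),
            'x_width': int(x2) - int(x1), 'y_height': int(y2) - int(y1),
        })

entries = [{'filename': 'Lara_UrbanSeq1_JPG/frame_%06d.jpg' % frame, 'annotations': anns}
           for frame, anns in sorted(boxes_per_frame.items())]

random.seed(0)
random.shuffle(entries)
split = int(0.9 * len(entries))
with open('annotations_train.yaml', 'w') as f:
    yaml.safe_dump(entries[:split], f)
with open('annotations_test.yaml', 'w') as f:
    yaml.safe_dump(entries[split:], f)
```

The resulting `annotations_test.yaml` is what the `data_conversion.py` command in the main README consumes.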
