
pothole-detection

Pothole detection with Keras and TensorFlow

Implementation of the paper "Pothole Detection Using Location-Aware Convolutional Neural Networks".

Note:

The public experimental code was modified many times over the course of the experiments, which has left a lot of redundancy; it was not designed in accordance with the principles and norms of software engineering.

Project structure

  • dataset
  • preprocessing
  • models
  • result
  • src
  • train.py
  • test.py

Requirements

  • Tensorflow
  • Keras
  • Pothole dataset:
    The dataset (each image is 800 × 600 in JPG format) was first released for the Data Science Hackathon, a computer vision challenge sponsored by IBM, the Machine Intelligence Institute of Africa, and Cortex Logic, which took place in Johannesburg in September 2017.
    You can download the dataset from Training & Test Data and Python Notebooks.

    The original images (each image is 3680 × 2760 in JPG format) were first released in the papers:
    [1] S. Nienaber, M.J. Booysen, R.S. Kroon, "Detecting potholes using simple image processing techniques and real-world footage", SATC, July 2015, Pretoria, South Africa.
    [2] S. Nienaber, R.S. Kroon, M.J. Booysen, "A Comparison of Low-Cost Monocular Vision Techniques for Pothole Distance Estimation", IEEE CIVTS, December 2015, Cape Town, South Africa.
    You can download the original images from here (extraction code: va7g).

Usage

Step 1: The re-organized dataset is created by a simple preprocessing operation.

1. Because the file names of the two datasets are not the same, the matching pictures have to be found and renamed consistently with the help of a similarity image finder tool;
2. Run preprocessing/b3org_to_roi.py to crop the road images from the original images and create the new labels;
3. Hold out 800 training images as a validation set;
4. Resize the road images to 352 × 244 to meet the LCNN model's input requirement;
5. Run preprocessing/s6create_heatmap.py to create the heatmap ground truth (see the sketch after this list);
6. Run preprocessing/create_patch.py to create the patch dataset.
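As an illustration of item 5, the following is a minimal sketch of how a heatmap ground truth could be generated for a road image; the helper name make_heatmap, the (cx, cy) centre label format, and the Gaussian rendering are assumptions for illustration, not the actual contents of s6create_heatmap.py.

```python
# Minimal sketch of heatmap ground-truth generation (step 5 above).
# Assumes each pothole label is a (cx, cy) centre in road-image coordinates;
# the actual label format used by s6create_heatmap.py may differ.
import numpy as np

def make_heatmap(centres, width=352, height=244, sigma=8.0):
    """Place a 2-D Gaussian blob at each pothole centre on a zero canvas."""
    heatmap = np.zeros((height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    for cx, cy in centres:
        blob = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, blob)  # overlapping potholes keep the peak value
    return heatmap

# Example: two potholes in a 352 x 244 road image
hm = make_heatmap([(100, 120), (250, 60)])
```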

Or: the re-organized dataset can be downloaded directly from here.

Step 2: Training the LCNN model

1. Download the pre-trained weights (ImageNet classification) and put them under $PRJ_ROOT; the default backbone is resnet50.
2. If you want to point the code at your own copy of the dataset, change the XXX path in the trainlcnn.py file;
3. Train: run trainlcnn.py. By default, trained networks are saved under $PRJ_ROOT/models.
Note: during training, the best model is the one with the lowest error on the validation set.
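The sketch below only illustrates the workflow described in this step (ImageNet-pretrained ResNet50 backbone, checkpointing the lowest validation error under models/); the heatmap head, loss, and file names are assumptions, not the repository's trainlcnn.py.

```python
# Sketch of the training workflow in step 2, not the repository's trainlcnn.py.
# Only the ResNet50 ImageNet backbone and the "keep the best model on the
# validation set" behaviour come from the README; the head and loss are assumed.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import ModelCheckpoint

backbone = ResNet50(weights="imagenet", include_top=False,
                    input_shape=(244, 352, 3))
x = layers.Conv2D(1, 1, activation="sigmoid")(backbone.output)  # assumed heatmap head
model = models.Model(backbone.input, x)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Keep only the weights with the lowest validation error under models/
checkpoint = ModelCheckpoint("models/lcnn_best.h5",
                             monitor="val_loss", save_best_only=True)
# model.fit(train_images, train_heatmaps,
#           validation_data=(val_images, val_heatmaps),
#           epochs=50, batch_size=8, callbacks=[checkpoint])
```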

Step 3: Training the PCNN model

Source code coming soon.
Note: during training, the best model is the one with the lowest error on the validation set.

Step 4: Testing

1. You can change the dataset path and other parameters in testwhole.py;
2. Run testwhole.py;
3. If Debug=True is set in the code, the results will be written under $PRJ_ROOT/result.
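Since testwhole.py is not reproduced in this README, the following is a minimal inference sketch under the same assumptions as the training sketch above; the checkpoint name, test-image path, and output format are placeholders, not the script's actual parameters.

```python
# Minimal inference sketch; testwhole.py's actual parameters and outputs may differ.
import os
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

Debug = True  # mirrors the Debug flag mentioned in step 4

model = load_model("models/lcnn_best.h5")            # assumed checkpoint name
img = image.load_img("dataset/test/0001.jpg",        # assumed test-image path
                     target_size=(244, 352))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
heatmap = model.predict(x)[0, ..., 0]

if Debug:
    # Write the predicted heatmap under result/ as described in step 4
    os.makedirs("result", exist_ok=True)
    np.save("result/0001_heatmap.npy", heatmap)
```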

The whole source code is coming soon.