This project was developed for my undergraduate thesis in AI & Computer Science at the University of Edinburgh; the full report is linked in this repository. The headline result: ResNet152 was the best-performing model, achieving 92.8% AUC, 87.28% sensitivity, and 98.33% specificity on a standardised, shuffled dataset with a 2 s window size.
(The section between the green and red bars represents a fall)
- Compile the JSON recording chunks into their corresponding recording objects
- Parse these recording objects into sliding windows
- Standardise the dataset
- Randomly shuffle the dataset samples
- Split the dataset into train/validation/test sets
- Label the data by applying the sigmoid function to the average per-sample label in each sliding window, where 1 represents a fall and 0 no fall (a sketch of the full pipeline follows this list)
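A minimal sketch of this pipeline, assuming the JSON chunks have already been compiled into recording objects carrying a `(samples, channels)` float array and a per-sample 0/1 label array; the 50 Hz sampling rate (so a 2 s window is 100 samples), the 50% overlap, and the 70/15/15 split are illustrative assumptions, not values from the report:

```python
# Sketch of the windowing, standardisation, shuffling, splitting, and
# labelling steps. Field names, rates, and ratios are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split

WINDOW = 100  # 2 s at an assumed 50 Hz sampling rate
STRIDE = 50   # assumed 50% overlap between consecutive windows

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_windows(recordings):
    xs, ys = [], []
    for rec in recordings:  # hypothetical recording objects
        data, labels = rec["data"], rec["labels"]
        for start in range(0, len(data) - WINDOW + 1, STRIDE):
            xs.append(data[start:start + WINDOW])
            # Window label = sigmoid of the mean per-sample label,
            # as described above (1 = fall, 0 = no fall).
            ys.append(sigmoid(labels[start:start + WINDOW].mean()))
    return np.stack(xs), np.array(ys)

X, y = make_windows(recordings)  # "recordings" built from the JSON chunks

# Standardise each sensor channel across the dataset.
X = (X - X.mean(axis=(0, 1))) / X.std(axis=(0, 1))

# Shuffle, then split into train/validation/test (70/15/15 assumed).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)
```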
I first trained and tested some baseline models (K-Nearest Neighbours, Gaussian Naive Bayes, simple neural networks, etc.) to get an insight into how simple models performed on my data. They all averaged out at around 70% AUC; the best performer was K-Nearest Neighbours with k = 3, at 72.17% test AUC.
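A hedged sketch of that baseline, reusing the hypothetical arrays from the pipeline sketch above; flattening each window into a feature vector and rounding the soft window labels to binary targets are my assumptions:

```python
# KNN baseline with k = 3, evaluated by test AUC.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

knn = KNeighborsClassifier(n_neighbors=3)
# Flatten each (samples, channels) window into one feature vector and
# binarise the soft sigmoid labels (assumption).
knn.fit(X_train.reshape(len(X_train), -1), y_train.round())

# Score the predicted probability of the "fall" class on the test set.
probs = knn.predict_proba(X_test.reshape(len(X_test), -1))[:, 1]
print("test AUC:", roc_auc_score(y_test.round(), probs))
```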
After this I trained LSTM and ResNet deep learning models on the dataset, varying the window size and tuning hyperparameters. As summarised above, ResNet152 on the standardised, shuffled dataset with a 2 s window size performed best (92.8% AUC, 87.28% sensitivity, 98.33% specificity).
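The report's exact architecture adaptation isn't reproduced here; a common way to feed `(samples, channels)` sensor windows to a 2-D ResNet, assumed below, is to treat each window as a one-channel image and swap the first convolution accordingly. A minimal training-loop sketch under that assumption:

```python
# Hedged sketch of training ResNet152 on 2 s windows with a binary
# fall / no-fall target. The one-channel input adaptation, learning
# rate, epoch count, and DataLoader are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet152

model = resnet152(num_classes=1)  # single logit for fall / no fall
# Accept a one-channel "image" instead of three RGB channels (assumption).
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

model.train()
for epoch in range(10):
    for windows, labels in train_loader:  # hypothetical DataLoader of windows
        # windows: (batch, samples, channels) -> (batch, 1, samples, channels)
        logits = model(windows.float().unsqueeze(1)).squeeze(1)
        loss = criterion(logits, labels.float())
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
```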
I exported my PyTorch model to `.tflite` using the conversion chain PyTorch -> ONNX -> TensorFlow -> TFLite. This conversion lets the model run locally, in the background, on a user's mobile phone.
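A minimal sketch of that chain using `torch.onnx`, the `onnx-tf` backend, and TensorFlow's TFLite converter; the file names and dummy input shape are illustrative assumptions:

```python
# PyTorch -> ONNX -> TensorFlow SavedModel -> TFLite.
import torch
import onnx
from onnx_tf.backend import prepare
import tensorflow as tf

# 1. PyTorch -> ONNX: trace the trained model with a dummy window
#    (batch, 1, samples, channels; the shape is an assumption).
dummy = torch.randn(1, 1, 100, 6)
torch.onnx.export(model, dummy, "fall_detector.onnx")

# 2. ONNX -> TensorFlow SavedModel via the onnx-tf backend.
tf_rep = prepare(onnx.load("fall_detector.onnx"))
tf_rep.export_graph("fall_detector_tf")

# 3. TensorFlow SavedModel -> TFLite flatbuffer for on-device inference.
converter = tf.lite.TFLiteConverter.from_saved_model("fall_detector_tf")
with open("fall_detector.tflite", "wb") as f:
    f.write(converter.convert())
```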