Hi guys,
I'm looking for help with the following problem.
I am trying to retrain the official MobileDet model for the Edge TPU to detect 5 hand gestures, using 440 annotated images for training and 160 for evaluation. The non-annotated dataset is available here: https://lttm.dei.unipd.it/downloads/gesture/#kinect_leap.
I am using the following Google Colab notebook to speed up training on a GPU and to compile the trained TFLite model for the Edge TPU: https://colab.research.google.com/github/Namburger/edgetpu-ssdlite-mobiledet-retrain/blob/master/ssdlite_mobiledet_transfer_learning_cat_vs_dog.ipynb
I trained for 25,000 steps (it took 6 hours!).
I created the proper train and test .tfrecord files and double-checked with tfrecord-viewer that they are fine (I slightly modified the pipeline config in the above project to point to my .tfrecord and label map files instead of the default ones). The training and the compilation go well, and I can load the model onto the USB Coral Edge TPU.
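For reference, the input-reader sections of the pipeline config end up looking roughly like this; this is a generic sketch, not my exact file, and the paths are placeholders:

```
model {
  ssd {
    num_classes: 5  # one class per hand gesture
    ...
  }
}
train_input_reader {
  label_map_path: "path/to/gesture_label_map.pbtxt"  # placeholder path
  tf_record_input_reader {
    input_path: "path/to/train.tfrecord"             # placeholder path
  }
}
eval_input_reader {
  label_map_path: "path/to/gesture_label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "path/to/test.tfrecord"
  }
}
```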
However, when evaluating this model with a live stream from the camera, it either detects nothing or, when it does detect something with a reasonable confidence score, the detections are garbage more often than not! The curious thing is that, judging from TensorBoard, the model being trained could already detect the gestures successfully after just a few initial training steps.
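For context, a minimal pycoral detection loop over camera frames looks roughly like the sketch below; this is a generic example, not my exact evaluation script (the model filename, camera index, and score threshold are placeholders):

```python
import cv2
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# Load the compiled model onto the Edge TPU (placeholder filename).
interpreter = make_interpreter('ssdlite_mobiledet_gestures_edgetpu.tflite')
interpreter.allocate_tensors()
width, height = common.input_size(interpreter)

cap = cv2.VideoCapture(0)  # placeholder camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The model expects RGB input at its training resolution (e.g. 320x320),
    # while OpenCV delivers BGR frames, so convert and resize first.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    common.set_input(interpreter, cv2.resize(rgb, (width, height)))
    interpreter.invoke()
    for obj in detect.get_objects(interpreter, score_threshold=0.5):
        print(obj.id, obj.score, obj.bbox)
cap.release()
```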
Some people on YouTube got a pretty good model starting from the official MobileNet (not MobileDet) with just a few images and 2,000 steps. So what am I doing wrong? Below is the pipeline configuration I used.