I see a clear problem with your Coral TPU performance: a 219.85 ms inference time is far slower than expected for a Coral TPU, which should typically achieve 5-10 ms inference speeds (1).

The main issue I notice in your configuration is that you're using the wrong model path. You have (2):

model:
  path: /cpu_model.tflite

This is the CPU model, not the EdgeTPU model. The Edge TPU detector uses a TensorFlow Lite model at /edgetpu_model.tflite by default (3). Remove the model path specification entirely to use that default, or explicitly set it to:

model:
  path: /edgetpu_model.tflite
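
For reference, here is a minimal sketch of how the detector and model sections fit together. The detector name coral and device: usb are assumptions about your setup (adjust the device value if you have a PCIe/M.2 accelerator):

detectors:
  coral:            # detector name is arbitrary; "coral" is just an example
    type: edgetpu
    device: usb     # assumption: USB Coral; PCIe/M.2 variants use a different device value

model:
  path: /edgetpu_model.tflite   # optional; this is already the EdgeTPU default

With the EdgeTPU model actually loaded, inference speed should drop into the 5-10 ms range mentioned above.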
