Different models for Coral #20124
Replies: 1 comment
-
Coral devices are specifically designed to accelerate TensorFlow Lite models that have been quantized to 8-bit integers and compiled with Google's Edge TPU compiler. YOLO models can indeed be more accurate, but they are usually implemented in frameworks like PyTorch or Darknet and rely on 32-bit floating-point operations. To run a YOLO model on a Coral, you would need to convert it to TensorFlow/TensorFlow Lite format, quantize it to INT8, and ensure all of its operations are supported by the Edge TPU - a process that often requires significant model modifications and may not preserve the original model's accuracy. Many YOLO architectures also contain operations or layer types that aren't supported by the Edge TPU's specialized instruction set, making direct deployment impossible without substantial reworking of the model architecture.

Finally, most of the freely available TensorFlow and YOLO models are trained on the same COCO dataset, which does not include images specific to security cameras.

If you're looking for more accuracy, you might consider newer hardware, like a Hailo or MemryX. Support for the Hailo-8/8L already exists in Frigate 0.16, and support for the MemryX MX3 is coming in Frigate 0.17. And while it's not a separate piece of dedicated hardware, an Intel iGPU can also run many YOLO variants very well.
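For intuition about the INT8 quantization step mentioned above: TFLite-style post-training quantization maps each float tensor onto 8-bit integers using a scale and zero point, and any precision lost in that rounding is part of why accuracy may not be preserved. A minimal sketch of that affine scheme in plain Python (the function names and sample values are illustrative only, not a Frigate or TFLite API):

```python
def quantize(values, num_bits=8):
    """Affine (asymmetric) quantization of floats to unsigned integers.

    Maps the observed [min, max] of the data onto [0, 2**num_bits - 1],
    the same general scheme uint8 TFLite tensors use.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid zero scale for constant data
    zero_point = round(qmin - lo / scale)
    # Round to the nearest step, then clamp into the representable range.
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from quantized integers."""
    return [(x - zero_point) * scale for x in q]

vals = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zero_point = quantize(vals)
recovered = dequantize(q, scale, zero_point)
```

Each recovered value differs from the original by at most one quantization step (`scale`), which is harmless per-tensor but compounds across the dozens of layers in a YOLO model.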
-
Is anyone using a different model for their Coral, and if so, which are you using?
I've seen some discussions about people using different models, but I'm interested to see if there's anything better than the default model that Frigate ships with for the Coral.
I'm interested in better accuracy, and I've seen that some YOLO models are slightly better, but those discussions seem to be about Frigate 0.13.
Example here: #11548
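For reference, anyone who does swap in a custom Coral model points Frigate at it via the `model` section of the config. A sketch, where the paths and input dimensions are assumptions for illustration and should match whatever your compiled model actually expects:

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb

model:
  # Hypothetical paths; use the location of your own compiled model.
  path: /config/model/custom_model_edgetpu.tflite
  labelmap_path: /config/model/labelmap.txt
  input_tensor: nhwc
  width: 320
  height: 320
```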