[Detector Support]: ONNX and OpenVINO #20121
-
The difference comes down to how Frigate routes detection workloads and which backend is actually doing the inference:
Directly configuring the OpenVINO detector is the most efficient and recommended setup for object detection in Frigate on Intel UHD Graphics 630.
-
The onnx detector runs on CPU because of a bug that prevents OpenVINO from being used. In general, it's recommended to just use the OpenVINO detector.
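For reference, a minimal sketch of a Frigate config using the OpenVINO detector directly. This is an assumption-laden example, not a verified config for this system: the device name, model path, model type, and dimensions are placeholders to adapt to your own model and the Frigate docs for your version.

```yaml
# Sketch only — adjust device, model path, model_type, and dimensions for your setup.
detectors:
  ov:
    type: openvino
    device: GPU          # Intel iGPU; AUTO is also accepted

model:
  # Placeholder values for a YOLOv9 ONNX export; point this at your actual model file.
  model_type: yolo-generic
  path: /config/model_cache/yolov9-t.onnx
  width: 320
  height: 320
  input_tensor: nchw
```

With this in place, inference runs on the Intel GPU via OpenVINO, which matches the low (1-2%) CPU usage reported above.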
-
I'm trying to get this running with the onnx detector. I managed to get it running, but it's not using my Nvidia GPU and is horrible on CPU. Is there a trick to getting it to use the GPU? It shows up in the system stats; it's just not being used.
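One thing worth checking, offered as an assumption about this setup rather than a confirmed diagnosis: the onnx detector can only use an Nvidia GPU when the container image ships the Nvidia ONNX Runtime execution providers (the `-tensorrt` image variant) and the GPU is actually passed through to the container. A docker-compose sketch; verify the image tag against the Frigate docs for your version:

```yaml
# Sketch only — image tag and GPU passthrough to verify for your install.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt  # variant with Nvidia providers
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

If the standard image is used instead, ONNX Runtime silently falls back to CPU even though the GPU appears in the system stats.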
-
Describe the problem you are having
From the documentation, it appears that the ONNX detector will use OpenVINO when detected. I've configured my system to use the YOLOv9 model, but I've noticed a big difference in CPU usage between ONNX and OpenVINO. With OpenVINO it averages only 1-2% CPU usage, whereas with ONNX it spikes over 300% at times. I'm running Intel UHD Graphics 630.
What is the difference between using the openvino detector vs the onnx detector (which uses OpenVINO anyway)?
Version
0.16.1-e664cb2
Frigate config file
docker-compose file or Docker CLI command
Relevant Frigate log output
Install method
Docker Compose
Object Detector
OpenVINO
Screenshots of the Frigate UI's System metrics pages
Any other information that may be helpful
No response