[Detector Support]: How do I know if onnx is correctly using TensorRT? #20006
Replies: 2 comments 3 replies
-
From your logs, you can see that the ONNX detector is loading successfully.

The fact that you're using the D-FINE model is important to note. According to the documentation, D-FINE models are not supported with ROCm (1), and there's a warning that D-FINE models currently only run on OpenVINO in CPU mode, with GPUs failing to compile the model (2).

For ONNX with Nvidia GPUs, the documentation states that Nvidia GPUs will automatically be detected and used with the ONNX detector in the `-tensorrt` image.

Your system metrics showing GPU usage (45.3% detector GPU usage) suggest the GPU is being utilized for inference, which indicates the TensorRT execution provider is likely active rather than falling back to CPU-only execution.

Have another question? Just tag @inkeep.
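For a concrete check outside of Frigate, a minimal sketch with the onnxruntime Python API can list which execution providers are built in and which ones a session actually registers. The model path below is a placeholder, not Frigate's real model location:

```python
# Minimal sketch: list the execution providers ONNX Runtime was built with,
# then see which ones a session actually registers for a given model.
import onnxruntime as ort

available = ort.get_available_providers()
print("Available providers:", available)
# On the -tensorrt image you would expect TensorrtExecutionProvider and
# CUDAExecutionProvider to appear in this list.

# Request providers in priority order, skipping any that are not available.
requested = [
    p for p in ("TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider")
    if p in available
]
session = ort.InferenceSession("model.onnx", providers=requested)  # placeholder model path
print("Session providers:", session.get_providers())
```

If `TensorrtExecutionProvider` shows up in the session's provider list, TensorRT is registered for that session; otherwise inference is running on the CUDA or CPU provider.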
-
If you run nvidia-smi on the host and you see the detector process listed, then it is using the GPU. The CUDAExecutionProvider is always used, as it is better in many ways for Frigate's usage.
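A rough sketch of that host-side check, assuming a Frigate process is identifiable by name in the nvidia-smi process table (the exact process name is an assumption and may differ on your system):

```python
# Rough sketch: run nvidia-smi on the Docker host and look for Frigate's
# detector process in the process table. Matching on the substring "frigate"
# is an assumption; adjust it to whatever name appears on your system.
import subprocess

output = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
print(output)

if "frigate" in output.lower():
    print("A Frigate process is visible on the GPU.")
else:
    print("No Frigate process found in the nvidia-smi process table.")
```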
-
Describe the problem you are having
The docs mention that the `-tensorrt` image will automatically detect TensorRT. However, how do I know if it's really using the TensorRT execution provider and not only the CUDA execution provider? Is there any way to check? Should I be seeing something in the logs?
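For reference, one way to see which execution provider actually runs the graph is to raise ONNX Runtime's log verbosity when a session is created. This is a hedged sketch, not something Frigate exposes directly; it assumes the `-tensorrt` image where both GPU providers are available, and the model path is a placeholder:

```python
# Hedged sketch: with verbose logging, session creation logs how the graph is
# partitioned across execution providers, so you can see whether nodes are
# assigned to TensorrtExecutionProvider or fall back to CUDA/CPU.
import onnxruntime as ort

opts = ort.SessionOptions()
opts.log_severity_level = 0  # 0 = VERBOSE

session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    sess_options=opts,
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)
```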
Version
0.16-0
Frigate config file
docker-compose file or Docker CLI command
N/A
Relevant Frigate log output
Install method
Proxmox via TTeck Script
Object Detector
Other
Screenshots of the Frigate UI's System metrics pages
Any other information that may be helpful
No response