[Detector Support]: Warning log about onnxruntime #20519
Replies: 1 comment
The warning message you're seeing is normal and not a cause for concern (1). It appears when using ONNX with TensorRT and indicates that some memory copy operations are being added to the computation graph, which might have a minor performance impact but doesn't affect functionality (1).

Your configuration is correct. When using the `-tensorrt` Frigate image with ONNX, the Nvidia GPU will automatically be detected and TensorRT will be used as a detector when a supported ONNX model is configured (2). The fact that you're seeing GPU usage in your system metrics (as shown in your screenshots) confirms that the GPU is being utilized for inference, which indicates the TensorRT execution provider is active (1). A minimal config sketch follows below.

Regarding using both your Intel iGPU and Nvidia GPU simultaneously: multiple detectors can be defined assuming GPU resources are available (2). You could potentially configure both OpenVINO and ONNX detectors, but note that multiple detector types cannot be mixed for object detection (for example, OpenVINO and Coral EdgeTPU cannot be used for object detection at the same time) (3). This limitation applies to the primary object detection task, though it doesn't affect using hardware to accelerate other tasks such as semantic search (3).
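For reference, a minimal sketch of the setup described above, assuming a YOLO-NAS model exported to ONNX; the model path, input dimensions, and labelmap below are placeholders to adapt, not defaults:

```yaml
# Sketch of a Frigate config using the ONNX detector (assumes the -tensorrt image).
detectors:
  onnx:
    type: onnx
  # A second detector of the same type could be defined if GPU resources allow, e.g.:
  # onnx_1:
  #   type: onnx

model:
  model_type: yolonas          # assumption: a YOLO-NAS model you exported yourself
  width: 320                   # must match the exported model's input size
  height: 320
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/yolo_nas_s.onnx        # placeholder path to your ONNX model
  labelmap_path: /labelmap/coco-80.txt # placeholder labelmap
```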
📚 Sources:

Have another question? Just tag @inkeep.