[Detector Support]: ONNX pthread_setaffinity_np Errors in Log #20782
Replies: 5 comments 4 replies
Your model configuration is incomplete for proper GPU utilization. Based on the documentation for YOLO-NAS models with ONNX, you need to ensure your configuration matches the model requirements(2)(3):

```yaml
detectors:
  onnx_0:
    type: onnx

model:
  model_type: yolonas
  width: 320
  height: 320
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```

The ONNX detector will automatically detect and use your NVIDIA GPU. Regarding GPU metrics not showing in the UI: the System metrics page shows detector inference metrics separately from ffmpeg hardware acceleration. Your GPU usage for object detection would appear in the "Detector" section rather than per-camera CPU usage, which primarily reflects ffmpeg processing overhead.

📚 Sources

Have another question? Just tag @inkeep.
@inkeep
Your configuration is correct and follows the documentation for NVIDIA GPUs with ONNX(1):

```yaml
detectors:
  onnx_0:
    type: onnx

model:
  model_type: yolonas
  width: 320
  height: 320
  input_tensor: nchw
  input_pixel_format: bgr
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```

Regarding the GPU metrics in the UI: the System metrics page shows detector inference separately from per-camera processing. Your GPU usage for object detection appears in the detector statistics (inference time), not in the per-camera CPU usage section. The per-camera CPU usage primarily reflects ffmpeg's video processing overhead, which is separate from the detector's GPU-accelerated inference(2).

The pthread_setaffinity_np errors are cosmetic and won't affect performance. If you want to suppress them, you could explicitly set the number of threads in your detector configuration, though this is optional since the detector is already working correctly(1).

📚 Sources

Have another question? Just tag @inkeep.
Thanks.
So everything looks correct? My server has been humming along a bit louder than usual when running Frigate over the past year.
Sent with [Proton Mail](https://pr.tn/ref/VRWE11731E0G) secure email.
…On Monday, November 3rd, 2025 at 4:06 PM, Josh Hawkins ***@***.***> wrote:
The AI wasn't quite right there.
It's safe to ignore, but see this related discussion about the pthread_setaffinity_np logs: [#15351](#15351)
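For background on why those log lines are safe to ignore: ONNX Runtime tries to pin its worker threads to specific CPU cores, and `pthread_setaffinity_np` commonly fails with `EINVAL` when a requested core isn't in the set the container is allowed to use. A minimal sketch of the same mechanism using Python's stdlib wrapper around the Linux affinity calls (Linux-only; CPU number 9999 is an arbitrary out-of-range value, not anything Frigate uses):

```python
import errno
import os

# CPUs this process is currently allowed to run on (e.g. as limited
# by a container's cpuset). Thread pinning must stay within this set.
allowed = os.sched_getaffinity(0)
print("allowed CPUs:", sorted(allowed))

# Asking to pin to a CPU outside the allowed/online set fails with
# EINVAL -- the same errno behind the pthread_setaffinity_np log
# lines. The process keeps running; only the pinning request fails.
try:
    os.sched_setaffinity(0, {9999})  # 9999: an arbitrary nonexistent CPU
except OSError as exc:
    print("pinning failed with errno", exc.errno, "== EINVAL:",
          exc.errno == errno.EINVAL)

# The original affinity is untouched, so scheduling is unaffected.
assert os.sched_getaffinity(0) == allowed
```

The threads fall back to the kernel's normal scheduling within the allowed set, which is why inference still works and performance is unaffected.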
OK, I'll give that a try. The only thing that has changed from the previous setup I ran for about a year is the version and the switch from the tensorrt detector to onnx, so I didn't know if that had anything to do with it.
Sent from [Proton Mail](https://proton.me/mail/home) for Android.
…-------- Original Message --------
On Monday, 11/03/25 at 18:56 Josh Hawkins ***@***.***> wrote:
Your metrics screenshot indicates you have a number of skipped detections. Your GPU is also running near or at its limit. I would suggest [tuning your motion detection](https://docs.frigate.video/configuration/motion_detection/).
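The motion-tuning suggestion above maps to the `motion` section of the Frigate config (see the linked documentation). A sketch of the main knobs, assuming the option names from current Frigate docs; the values shown are illustrative starting points, not recommendations — tune per camera from the defaults:

```yaml
motion:
  # How much a pixel must change to count as motion; raise to make
  # motion detection less sensitive (fewer detector invocations)
  threshold: 30
  # Minimum size of a changed region treated as motion; raise to
  # ignore small movement like leaves or shadows
  contour_area: 10
  # Helps motion detection in low-contrast scenes
  improve_contrast: true
```

Reducing spurious motion directly reduces detector load, which should help with the skipped detections and the GPU running near its limit.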
Describe the problem you are having
With the update to v0.16, I see that we need to switch from the tensorrt detector to onnx. I have updated my config and loaded the model to the docker volume. It runs fine and seems to be detecting pretty well, but I notice when I start the container, the console output includes red errors related to pthread_setaffinity_np.
Version
0.16.2
Frigate config file
docker-compose file or Docker CLI command
Relevant Frigate log output
Install method
Home Assistant Add-on
Object Detector
Coral
Screenshots of the Frigate UI's System metrics pages
Any other information that may be helpful
Also, in taking the screenshot above, I notice that there is CPU usage for each camera, but I do not see GPU there. Is that normal if I am using my Nvidia GPU and ONNX?