[Detector Support]: Detection hardware utilization split between iGPU and dGPU #20893
Replies: 2 comments 2 replies
The ability to specify a GPU for enrichments has been implemented for the next version of Frigate. You can offload ffmpeg decoding to another device by using manual ffmpeg arguments. @inkeep can demonstrate.
You can offload ffmpeg video decoding to a specific device in Frigate by setting manual ffmpeg arguments for each camera input. In your Frigate configuration, inside the ffmpeg section for a camera, you can use the input_args and hwaccel_args fields to set hardware acceleration and specify device paths. For Intel hardware, for example, you can use the preset-intel-qsv-h264 or preset-vaapi presets for hardware acceleration (1)(2). To control which device is used, you can run Frigate in Docker and map only the desired device (such as /dev/dri/renderD129) into the container (1). You can also set custom ffmpeg arguments to specify the device node directly if needed.

Example for mapping a specific device in docker-compose:

```yaml
services:
  frigate:
    ...
    devices:
      - /dev/dri/renderD129:/dev/dri/renderD129
```

And for manual ffmpeg arguments, you can use:

```yaml
ffmpeg:
  inputs:
    - path: rtsp://...
      input_args: -hwaccel vaapi -hwaccel_device /dev/dri/renderD129 -hwaccel_output_format yuv420p
      roles:
        - detect
```

For more advanced setups (such as distributing load manually across multiple GPUs), this approach is supported, but you must be familiar with ffmpeg and hardware device management (4)(3). For enrichments (semantic search, face recognition, etc.), the ability to specify which GPU is used will be supported in the next version of Frigate (5). Currently, the GPU used for enrichments is detected automatically based on the Frigate Docker image and hardware present (5).
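Before mapping a device into the container, it helps to confirm which render node belongs to which GPU. A quick sketch (the node names renderD128/renderD129 come from this thread and may differ on other hosts):

```shell
# Each entry under /dev/dri/by-path is a symlink named after the PCI address of
# the GPU that owns the node, e.g. pci-0000:00:02.0-render -> ../renderD128
# (an Intel iGPU typically sits at 00:02.0). Comparing these addresses against
# `lspci` output shows which renderD* node is the iGPU and which is the dGPU.
if [ -d /dev/dri/by-path ]; then
  dri_info=$(ls -l /dev/dri/by-path/)
else
  dri_info="no /dev/dri/by-path on this machine"
fi
echo "$dri_info"
```

If libva-utils is installed, `vainfo --display drm --device /dev/dri/renderD129` can additionally confirm which driver and GPU a given node resolves to.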
Have another question? Just tag @inkeep.
Describe the problem you are having
Running Frigate in TrueNAS SCALE 25.10 on an old Dell Precision machine with a 1245v6 CPU (with iGPU) and an Arc Pro A40 dGPU. I have "GPU" set under detector in the config file and I had good inference times. Then I switched to YOLOv9s 640x640 and my inference time skyrocketed from less than 10 ms to about 70 ms. I decided to run the following in two separate shells side by side so I could compare:
intel_gpu_top -d drm:/dev/dri/renderD128 (the iGPU)
intel_gpu_top -d drm:/dev/dri/renderD129 (the dGPU)
frigate.detector is processing on both devices, with minimal load on renderD129 but heavy intermittent load on renderD128
ffmpeg seems to be running on renderD129 only (which is what I want)
I guess I have more than one question related to this.
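One possibility worth checking on the detector side (an assumption to verify, not something confirmed in this thread): OpenVINO enumerates multiple GPUs as GPU.0, GPU.1, and so on, so if Frigate passes the detector's device string through to OpenVINO, pinning the detector to a single GPU might look like:

```yaml
detectors:
  ov:
    type: openvino
    # "GPU.1" is a hypothetical index: GPU.0/GPU.1 ordering comes from
    # OpenVINO's own device enumeration and may not match the
    # renderD128/renderD129 order on this machine.
    device: GPU.1
```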
Version
0.16.2
Frigate config file
docker-compose file or Docker CLI command
Unable to acquire (TrueNAS app, unmodified)
Relevant Frigate log output
Install method
Docker Compose
Object Detector
OpenVino
Screenshots of the Frigate UI's System metrics pages
Any other information that may be helpful
I was using the large size for the enrichments but changed to small to see if it made a difference; if it did, it was maybe 3 ms at most.