[HW Accel Support]: TWO NVIDIA GTX #19036
Unanswered · CurseStaff asked this question in Hardware Acceleration Support
Replies: 1 comment
- should be doing this, many users have done similar to run on multiple GPUs
Describe the problem you are having
Hi Frigate team,
I’m requesting help after reading all the threads about multiple NVIDIA GPU usage in Frigate. After many, many tests, the result is always the same:
🚫 Only one GPU is used.
My setup:
- GPU 0: GTX 1080 Ti
- GPU 1: RTX 3060 Ti
- Frigate 0.16.0 in Docker Compose, Coral object detector
❌ Problem:
My GTX 1080 Ti (GPU 0) gets fully overloaded with decoding, causing major video artifacts.
So I successfully switched to the RTX 3060 Ti (GPU 1) using:
```yaml
environment:
  NVIDIA_VISIBLE_DEVICES: "1"
```
This works — Frigate now uses GPU 1 — but it's quickly saturated too with decoding.
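For completeness, here is a minimal sketch of the same environment block with both GPUs exposed at once (assuming the NVIDIA Container Toolkit runtime; the capabilities line is my addition and may not be strictly required):

```yaml
environment:
  # make both GPUs visible inside the container (indices as reported by nvidia-smi)
  NVIDIA_VISIBLE_DEVICES: "0,1"
  # request the video (NVDEC/NVENC) capability in addition to compute
  NVIDIA_DRIVER_CAPABILITIES: "compute,video,utility"
```

Exposing both devices this way still leaves the question of which GPU actually decodes each stream, which is the part I can't get working.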
Goal:
How can I distribute the decoding load between both GPUs?
What I’ve tried (unsuccessfully):
1. Docker Compose reserving both GPUs:
```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: ['1', '0']
          capabilities: [gpu]
```
2. Frigate config with hwaccel_device:
At camera level:
```yaml
cameras:
  maison_tgbt:
    ffmpeg:
      inputs:
        - path: rtsp://...
          roles:
            - detect
      hwaccel_args: preset-nvidia-h264
      global_args: -hide_banner -loglevel warning -threads 2 -hwaccel_device 1
```
At global level:
```yaml
ffmpeg:
  global_args: -hide_banner -loglevel warning -threads 2 -hwaccel_device 1
  hwaccel_args: preset-nvidia-h264
```
3. go2rtc custom ffmpeg commands:
```yaml
go2rtc:
  ffmpeg:
    cuda0: "-fflags nobuffer -flags low_delay -timeout 5000000 -user_agent go2rtc/ffmpeg -hwaccel_device 0 -rtsp_transport tcp -i {input}"
    cuda1: "-fflags nobuffer -flags low_delay -timeout 5000000 -user_agent go2rtc/ffmpeg -hwaccel_device 1 -rtsp_transport tcp -i {input}"
  streams:
    maison_tgbt:
      - rtsp://admin:[email protected]:554/Streaming/Channels/001#input=cuda1#video=h264#raw=-fpsmax 10 -gpu 1#hardware=cuda
```
Despite all these attempts, decoding still uses only one GPU at a time, and the -hwaccel_device flag doesn’t seem to have any effect.
Is there any supported way to manually assign decoding to different GPUs, or to make Frigate balance the decoding load automatically?
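To make the question concrete, this is the kind of split I am hoping is possible: a sketch only, with a hypothetical second camera name and hand-written hwaccel args in place of the preset (I have not confirmed Frigate accepts exactly this):

```yaml
cameras:
  maison_tgbt:
    ffmpeg:
      # decode this camera on GPU 0 (hand-written args instead of preset-nvidia-h264)
      hwaccel_args: -hwaccel cuda -hwaccel_device 0 -hwaccel_output_format cuda
      inputs:
        - path: rtsp://...
          roles:
            - detect
  maison_garage:   # hypothetical second camera
    ffmpeg:
      # decode this camera on GPU 1
      hwaccel_args: -hwaccel cuda -hwaccel_device 1 -hwaccel_output_format cuda
      inputs:
        - path: rtsp://...
          roles:
            - detect
```

If per-camera assignment like this is not the intended mechanism, any pointer to the supported approach would be appreciated.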
Thanks for your help guys, and for the amazing project. 🙏
Version
0.16.0-63f9689
Frigate config file
docker-compose file or Docker CLI command
Relevant Frigate log output
Relevant go2rtc log output
FFprobe output from your camera
Install method
Docker Compose
Object Detector
Coral
Network connection
Wired
Camera make and model
Hikvision, Reolink
Screenshots of the Frigate UI's System metrics pages
Any other information that may be helpful
No response