[Config Support]: Config Validation Error with ONNX Detector on Nvidia/amd64 #20439
-
Describe the problem you are having:
Hello, I'm running into a persistent configuration validation error when trying to enable an ONNX detector for my Nvidia GPU, and I'm hoping someone can spot what I'm missing.

The Problem:

What I've Tried:
I have used multiple methods (cat, echo, nano) to create the config file. I'm at a dead end. It feels like the config is correct according to the documentation, but the validator disagrees. Could this be a bug, or am I missing something obvious? Thank you for any help you can provide.

Version:
0.16.1-e664cb2

Frigate config file:
```yaml
mqtt:
  enabled: False

detectors:
  nvidia:
    type: onnx
    device: 0

ffmpeg:
  hwaccel_args: preset-nvidia-h264

cameras:
  camera_1:
    ffmpeg:
      inputs:
        - path: rtsp://...
          roles:
            - detect
            - record
    detect:
      width: 640
      height: 480
    record:
      enabled: True
      retain:
        days: 7
  camera_2:
    ffmpeg:
      inputs:
        - path: rtsp://...
          roles:
            - detect
            - record
    detect:
      width: 640
      height: 480
    record:
      enabled: True
      retain:
        days: 7
```

Relevant Frigate log output:
*************************************************************
*** Your config file is not valid! ***
*** Please check the docs at ***
*** https://docs.frigate.video/configuration/ ***
*************************************************************
*************************************************************
*** Config Validation Errors ***
*************************************************************
Could not determine exact line number: 'onnx'
Message : Input should be a valid string
*************************************************************
*** End Config Validation Errors ***
*************************************************************
[INFO] Service Frigate exited with code 1 (by signal 0)

Relevant go2rtc log output:
NA

Frigate stats:
No response

Operating system:
Debian

Install method:
Docker Compose

docker-compose file or Docker CLI command:
```yaml
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: 1g
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config:/config
      - ./storage:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "5011:5000"
```

Object Detector:
CPU (no coral)

Screenshots of the Frigate UI's System metrics pages:
No response

Any other information that may be helpful:
No response
Replies: 1 comment 1 reply
-
The issue is with your Docker image and device configuration. For NVIDIA GPU support with ONNX, you need to use the -tensorrt Frigate image. Your current configuration uses the standard stable image, which doesn't include the necessary TensorRT libraries for GPU acceleration.

Docker Image Fix:
Change your docker-compose to use the TensorRT image:
```yaml
image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
```

Device Configuration:
The device parameter should be a string, not an integer. Update your detector configuration:
```yaml
detectors:
  nvidia:
    type: onnx
    device: '0'
```

Missing Model Configuration:
You're also missing the model configuration section. For ONNX detectors, you need to configure the model, for example:
```yaml
model:
  model_type: yolonas
  width: 320
  height: 320
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```

Why This Happens:
The …