[Detector Support]: Updated to 0.16.0 and completed the conversion from NVIDIA TensorRT to ONNX, but CUDA is not working #19551
-
-
Based on your logs, CUDA initialization is failing with "CUDA_ERROR_UNKNOWN: unknown error" for both FFmpeg and ONNX Runtime. This is a common issue when there is a mismatch between the CUDA runtime version in the container and your host driver.

Key Issues Identified

Configuration issues: your configuration has some problems:
detectors:
  onnx:
    type: onnx
    device: '0'  # add this line

model:
  model_type: yolox
  width: 416  # should match the imgsize set during model export
  height: 416  # should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float_denorm  # note: float_denorm, not float
  path: /config/model_cache/yolox_x.onnx
  labelmap_path: /labelmap/coco-80.txt

Recommended Solutions
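The config points above can be sanity-checked mechanically. This is a minimal sketch, not part of Frigate: `check_model_config` is a hypothetical helper, and the dict simply mirrors the model section from this thread.

```python
# Minimal sketch: verify that the Frigate model config matches the dimensions
# the ONNX model was exported with. check_model_config is a hypothetical
# helper for illustration, not a Frigate API.

def check_model_config(model_cfg: dict, exported_imgsize: int) -> list[str]:
    """Return a list of problems found in the model config."""
    problems = []
    if model_cfg.get("width") != exported_imgsize:
        problems.append(
            f"width {model_cfg.get('width')} != export imgsize {exported_imgsize}"
        )
    if model_cfg.get("height") != exported_imgsize:
        problems.append(
            f"height {model_cfg.get('height')} != export imgsize {exported_imgsize}"
        )
    if model_cfg.get("model_type") == "yolox" and model_cfg.get("input_dtype") != "float_denorm":
        problems.append("yolox models need input_dtype: float_denorm, not float")
    return problems

# Values copied from the config in this thread
config = {
    "model_type": "yolox",
    "width": 416,
    "height": 416,
    "input_tensor": "nchw",
    "input_dtype": "float_denorm",
    "path": "/config/model_cache/yolox_x.onnx",
}

print(check_model_config(config, exported_imgsize=416))  # [] when everything matches
```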
The error pattern you're seeing is consistent with CUDA initialization failures, which typically occur when the container can't properly access the GPU hardware or there are driver/runtime mismatches.
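To narrow down which layer is failing, the checks below may help. These are assumptions about a typical setup: the container name `frigate` and the CUDA base image tag are examples, and the last command assumes `onnxruntime` is importable inside the container (as it is in the TensorRT image variant).

```shell
# On the host: confirm the NVIDIA driver is loaded and note its CUDA version
nvidia-smi

# Confirm the container runtime can reach the GPU at all
# (image tag is only an example; any CUDA base image works)
docker run --rm --gpus=all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Inside the Frigate container: check that the ONNX Runtime CUDA provider is visible
docker exec -it frigate python3 -c \
  "import onnxruntime; print(onnxruntime.get_available_providers())"
```

If the second command fails, the problem is in the Docker/driver layer (e.g. a missing `--gpus=all` or the NVIDIA container toolkit), not in Frigate or ONNX.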
-
You are missing --gpus=all
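For docker compose users, the equivalent of passing `--gpus=all` to `docker run` is a `deploy.resources` reservation. A sketch, assuming a service named `frigate` and the image tag discussed in this thread:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.16.0-tensorrt
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all  # or a specific number, e.g. 1
              capabilities: [gpu]
```

Without this (or `--gpus=all`), the container cannot see the GPU, which produces exactly the CUDA_ERROR_UNKNOWN initialization failures described above.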
-
I am also (after a few attempts) going to give the NVIDIA detector a go to see how it performs vs. the Coral TPU. In the docs/release notes you say this:
However, there is no ONNX image, so that would make people assume you use ghcr.io/blakeblackshear/frigate:0.16.0 ... However, in this thread it says you should use ghcr.io/blakeblackshear/frigate:0.16.0-tensorrt. Is that right?
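To make the tag question concrete: based on this thread, ONNX detectors on an NVIDIA GPU use the `-tensorrt` image variant (there is no separate ONNX-specific tag), so the pull would look like:

```shell
# The -tensorrt variant bundles the GPU runtime support;
# the plain 0.16.0 tag does not include it
docker pull ghcr.io/blakeblackshear/frigate:0.16.0-tensorrt
```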