[Detector Support]: Python segmentation fault on NVIDIA Tesla P4 #20592
The segmentation fault you're experiencing with the NVIDIA Tesla P4 appears to be related to compute capability limitations. The TensorRT detector has minimum hardware requirements that may not be met by your GPU(1). For NVIDIA GPU support, the documentation specifies that TensorRT requires a Compute Capability of 5.0 or greater, which generally corresponds to Maxwell-era GPUs or newer(1). The Tesla P4 is a Pascal GPU with Compute Capability 6.1, so it technically meets this requirement, but older Tesla cards can still run into compatibility issues.
Based on similar segmentation faults reported with TensorRT configurations, the problem usually comes down to a model configuration mismatch or a hardware incompatibility(4). Since the same configuration works on an RTX 3070, the Tesla P4 may simply not be fully supported by the current TensorRT implementation. Because you're using the ONNX detector with a Frigate+ model, I recommend first trying the default Frigate image, which should fall back to a detector compatible with your hardware.
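As a diagnostic step, a minimal sketch (not the reporter's actual config) is to temporarily pin Frigate to the CPU detector, which confirms the rest of the setup starts cleanly before debugging the GPU detector path:

```yaml
# Minimal sketch: temporarily run object detection on the CPU.
# 'cpu1' is an arbitrary detector name; 'type: cpu' is a standard
# Frigate detector type and requires no GPU at all.
detectors:
  cpu1:
    type: cpu
```

If Frigate starts with the CPU detector, the crash is isolated to the GPU detector path rather than the broader configuration.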
Have another question? Just tag @inkeep.
Describe the problem you are having
When trying to use the ONNX detector with a Frigate+ YOLOv9 model on an NVIDIA Tesla P4, I get a segmentation fault error; see the logs.
Version
0.16.2
Frigate config file
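The actual config file was not captured when this page was saved. As a hedged sketch only, an ONNX detector paired with a Frigate+ model is typically declared along these lines (the model ID is a placeholder, not the reporter's real Frigate+ reference):

```yaml
# Sketch: ONNX detector with a Frigate+ model.
# <your_model_id> is a placeholder for a real Frigate+ model ID.
detectors:
  onnx:
    type: onnx
model:
  path: plus://<your_model_id>
```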
docker-compose file or Docker CLI command
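The compose file was also not captured. As a minimal, hedged sketch of NVIDIA GPU passthrough for Frigate under Docker Compose (the image tag and service layout are assumptions, and required volumes, ports, and other Frigate settings are omitted for brevity):

```yaml
# Hedged sketch of GPU passthrough only; volumes, ports, and other
# required Frigate settings are omitted. The -tensorrt tag is the
# NVIDIA-specific build assumed here.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.16.2-tensorrt
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```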
Relevant Frigate log output
Install method
Docker Compose
Object Detector
TensorRT
Screenshots of the Frigate UI's System metrics pages
Not available because Frigate crashes at startup.
Any other information that may be helpful
The same config does work on an NVIDIA RTX 3070, so maybe there's a compute capability limitation?