[HW Accel Support]: Trying to generate models for TensorRT #11322
Unanswered
b-rad15
asked this question in
Hardware Acceleration Support
Replies: 1 comment
-
This is usually due to missing libraries on the host or an incorrectly installed NVIDIA Container Toolkit.
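One way to narrow down which side is at fault is to try loading the library directly inside the container. The sketch below is a hypothetical diagnostic, not part of Frigate; it uses the library name from the report below and reproduces the same kind of loader error that model generation would hit if the library is missing or not on the search path:

```python
import ctypes

def try_load(libname: str):
    """Attempt to dlopen a shared library; return None on success,
    or the dynamic loader's error message on failure."""
    try:
        ctypes.CDLL(libname)
        return None
    except OSError as exc:
        return str(exc)

# Run inside the Frigate container: on a broken setup this should print
# the same "cannot open shared object file" style error that TensorRT
# model generation reports.
err = try_load("libcudnn_cnn_infer.so.8")
print("loaded OK" if err is None else f"load failed: {err}")
```

If this fails inside the container but the file exists on disk, the problem is usually the loader search path rather than a missing file.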
0 replies
-
Describe the problem you are having
While trying to set up Frigate with TensorRT inside podman, I run into the issue below: a library is reported missing while generating the models. /usr/local/lib/python3.9/dist-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8 exists, and symlinking it next to libcuda.so.1 doesn't help. Is there something I was supposed to install on the host to provide this? The host has a GTX 1070 with a CDI setup that works for workloads in other containers and is detected by nvidia-smi inside the Frigate container. #5015 seems like a similar issue but was fixed a while ago. The image I'm using is ghcr.io/blakeblackshear/frigate:stable-tensorrt.
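Since the file exists but symlinking next to libcuda.so.1 doesn't help, it may be worth checking whether the dynamic loader can resolve it at all. The sketch below is an assumption-laden diagnostic (the pip directory layout and the LD_LIBRARY_PATH remark describe a typical pip-installed cuDNN, not something confirmed from this image):

```python
import ctypes.util
import os

# find_library consults the system loader configuration (e.g. the
# ldconfig cache on Linux); a pip-installed copy under site-packages is
# normally NOT on that default search path, so it only resolves if
# LD_LIBRARY_PATH or an ld.so.conf entry points at it.
resolved = ctypes.util.find_library("cudnn_cnn_infer")
print("loader resolves libcudnn_cnn_infer:", resolved or "NOT FOUND")

# Path taken from this report (assumed pip layout inside the container).
pip_dir = "/usr/local/lib/python3.9/dist-packages/nvidia/cudnn/lib"
print("pip copy directory present:", os.path.isdir(pip_dir))
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH", "<unset>"))
```

If the loader reports NOT FOUND while the pip directory exists, adding that directory to LD_LIBRARY_PATH for the model-generation step is a plausible thing to try.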
Version
Docker Image ID is 3fff86f943b6 (UI does not load to check there)
Frigate config file
docker-compose file or Docker CLI command
Relevant log output
FFprobe output from your camera
Operating system
Other Linux
Install method
Docker Compose
Network connection
Mixed
Camera make and model
N/A
Any other information that may be helpful
No response