Conversation

@githubnemo
Collaborator

gptqmodel requires information about the compute capability of the system. By default it inspects the output of `nvidia-smi`, but since there is no compute hardware on the Docker image builder instance, we have to hard-code the compute capability.

Since our CI runners use NVIDIA L4 GPUs, which have a compute capability of 8.9 (according to https://developer.nvidia.com/cuda/gpus), we're using that.

In the future it might be worth extending this so that people using this Docker image get a gptqmodel version that supports higher compute capabilities as well.
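For illustration, the detect-or-fall-back behavior described above could be sketched like this. This is a hypothetical helper, not gptqmodel's actual code; the `--query-gpu=compute_cap` flag is an `nvidia-smi` option available on recent drivers, and 8.9 is the NVIDIA L4 value from the PR text:

```python
import shutil
import subprocess

# Assumption from the PR: the CI runners use NVIDIA L4 GPUs (compute capability 8.9).
DEFAULT_CC = "8.9"

def detect_compute_capability(default: str = DEFAULT_CC) -> str:
    """Return the GPU compute capability reported by nvidia-smi, or a
    hard-coded default when no GPU/driver is present (e.g. on a Docker
    image builder instance)."""
    if shutil.which("nvidia-smi") is None:
        return default
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
            capture_output=True, text=True, check=True, timeout=10,
        )
        caps = out.stdout.split()
        return caps[0] if caps else default
    except (subprocess.SubprocessError, OSError):
        return default
```

On a builder host without `nvidia-smi`, this simply returns the hard-coded `"8.9"`, which is the behavior the Docker image relies on.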

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@ydshieh ydshieh left a comment

LGTM, thanks

@githubnemo githubnemo merged commit 2cd96ed into huggingface:main Feb 3, 2026
4 of 12 checks passed
