Currently, when trying to export a quantized model, LLaMA Factory requires `auto_gptq` to be installed. As per the project's page:

The problem this causes for me is that I can't build AutoGPTQ with the latest PyTorch in a CUDA 12.8 environment; the build fails with a compilation error. On the other hand, I am able to build GPTQModel in that environment. Also, it seems LLaMA Factory tries to import `auto_gptq` regardless of the quantization method chosen in the config.
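
For what it's worth, a lazy/conditional import along these lines would avoid the hard dependency when GPTQ isn't actually selected. This is only a rough sketch, not LLaMA Factory's actual code: the `load_gptq_backend` helper and the `quant_method` argument are made-up names, and I'm assuming the backends are importable as `gptqmodel` and `auto_gptq`.

```python
import importlib
import importlib.util


def load_gptq_backend(quant_method: str):
    """Lazily import a GPTQ backend only when a GPTQ export is requested.

    Hypothetical helper: `quant_method` would come from the export config.
    """
    if quant_method != "gptq":
        # Other quantization methods don't need auto_gptq at all, so nothing
        # is imported and a missing/unbuildable package is not a problem.
        return None

    # Prefer GPTQModel if it is installed, fall back to AutoGPTQ otherwise.
    for module_name in ("gptqmodel", "auto_gptq"):
        if importlib.util.find_spec(module_name) is not None:
            return importlib.import_module(module_name)

    raise ImportError(
        "GPTQ export requested, but neither gptqmodel nor auto_gptq is installed."
    )
```

With a guard like this, an environment where AutoGPTQ fails to build (e.g. CUDA 12.8 with the latest PyTorch) could still export via GPTQModel, or skip the import entirely for other quantization methods.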
Replies: 1 comment

same, can't build AutoGPTQ