Describe the bug
I created a model with `QuantizeConfig(bits=4, group_size=128, quant_method="qqq", format="qqq")` and can successfully load it with the library's built-in loader:

```python
GPTQModel.load(self_made_qqq_model)
```
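For reproducibility, here is a minimal sketch of the quantize/save flow that produces such a checkpoint (the base model id, calibration slice, and output path are placeholders, and the API usage follows the GPTQModel README rather than my exact script):

```python
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

base_model = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder base model
quant_path = "self_made_qqq_model"               # output checkpoint directory

# Small calibration slice for illustration; real runs should use more samples.
calibration_dataset = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(256))["text"]

# QQQ quantization config from the report above.
quant_config = QuantizeConfig(bits=4, group_size=128, quant_method="qqq", format="qqq")

model = GPTQModel.load(base_model, quant_config)
model.quantize(calibration_dataset)  # run calibration and quantize weights
model.save(quant_path)               # writes the QQQ-formatted checkpoint
```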
However, I do not manage to load the checkpoint in vLLM. Even when loading through your library directly,

```python
GPTQModel.load(self_made_qqq_model, backend="vllm")
```

I get:

```
BACKEND.VLLM backend only supports FORMAT.GPTQ: actual = qqq
```
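The message appears to come from a backend/format compatibility check inside the loader. Purely for illustration (a hypothetical sketch, not the actual GPTQModel source; the enum import paths are assumptions), a guard along these lines would produce it:

```python
# Hypothetical sketch, not the actual GPTQModel source.
# Import paths for the enums are assumptions for illustration.
from gptqmodel import BACKEND
from gptqmodel.quantization import FORMAT

def check_backend_format(backend: BACKEND, fmt: FORMAT) -> None:
    # The vLLM backend currently only accepts GPTQ-formatted checkpoints,
    # so a QQQ-formatted model trips this guard.
    if backend == BACKEND.VLLM and fmt != FORMAT.GPTQ:
        raise ValueError(
            f"BACKEND.VLLM backend only supports FORMAT.GPTQ: actual = {fmt}"
        )
```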
@Qubitium was this explicitly tested? https://github.com/ModelCloud/GPTQModel?tab=readme-ov-file#quantization-support
For reference: Support was added in #1402
My version of GPTQModel is at the latest commit `ca9d634db6a933cfae2c2d8e8be3fe78f76b802d` (at the time of opening this issue).
@jmkuebler Thanks for the bug report. Yes, vLLM loading of QQQ checkpoints that use the GPTQModel config has not been added yet, but we will do it soon.
We plan to add one or two more quantization algorithms to GPTQModel very soon and will add the appropriate vLLM loading hooks at the same time.