[AWQ][Qwen3 VL] Add qwen3-vl-30b-a3b-Instruct-example #1947
Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Summary of Changes

Hello @JartX, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request adds a comprehensive example for quantizing the Qwen3-VL-30B-A3B-Instruct model with AWQ.
Signed-off-by: JartX <[email protected]>
Force-pushed from 226c0e2 to 8c3f4b5
Code Review

This pull request adds a new example script for quantizing the Qwen3-VL-30B-A3B-Instruct model using AWQ. The script is a good addition, but I've found a few issues in the AWQ configuration that would prevent it from working correctly. Most critically, the `ignore` list is too restrictive and disables smoothing altogether. Additionally, the `mappings` are not correct for a Mixture-of-Experts model. I've also included some suggestions to improve code style and efficiency. Please review the detailed comments.
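
For reviewers following along, here is a minimal sketch of what an MoE-aware AWQ recipe could look like with llm-compressor's `AWQModifier` and `AWQMapping`. The module-name regexes for the vision tower, MoE router, and expert projections are assumptions based on typical Qwen module layouts, not the exact patterns from this PR; verify them against `model.named_modules()` before use.

```python
from llmcompressor.modifiers.awq import AWQMapping, AWQModifier

# Sketch of an MoE-aware AWQ recipe (module-name regexes are assumptions --
# check them against `model.named_modules()` for Qwen3-VL-30B-A3B-Instruct).
recipe = AWQModifier(
    # Keep smoothing enabled for the language model; only skip modules that
    # should stay unquantized (lm_head, vision tower, MoE router/gate).
    ignore=[
        "lm_head",
        "re:.*visual.*",    # vision tower (assumed module name)
        "re:.*mlp.gate$",   # MoE router (assumed module name)
    ],
    # Map each smoothing anchor to the layers it balances, covering the
    # per-expert projections of the MoE block rather than a single dense MLP.
    mappings=[
        AWQMapping(
            "re:.*input_layernorm",
            ["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"],
        ),
        AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
        AWQMapping(
            "re:.*post_attention_layernorm",
            ["re:.*mlp.experts.*.gate_proj", "re:.*mlp.experts.*.up_proj"],
        ),
        AWQMapping(
            "re:.*mlp.experts.*.up_proj",
            ["re:.*mlp.experts.*.down_proj"],
        ),
    ],
    scheme="W4A16",  # placeholder; the PR's published model targets an 8-bit scheme
    targets=["Linear"],
)
```

The point of the mappings is that the smoothing scales computed at each norm or projection must be balanced into every expert's projections; if they only reference a dense MLP layout, the MoE layers are silently left out of the AWQ search.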
Signed-off-by: JartX <[email protected]>
Signed-off-by: JartX <[email protected]>
You can follow the thread here:
#1939 (comment)

After confirming with @dsikka that the issue when quantizing the model was a dataset problem, and that the flickr30k dataset/processing you were trying hits the same failure (which comes specifically from transformers, suggesting a mismatch between the flickr30k processing and what the Qwen3 VL model expects), I finished a functional model using a sequential pipeline rather than a data-free one, as you requested. As promised, I'm publishing both the quantization script and the uploaded model here: https://huggingface.co/jart25/Qwen3-VL-30B-A3B-Instruct-AWQ-8bit
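
For anyone reproducing the flow described above, here is a rough, hedged sketch of a calibrated (non-data-free) AWQ run: load the model and processor, build a small multimodal calibration set through the model's own processor and chat template so the inputs match what transformers expects, and pass everything to `oneshot`. The auto class, dataset choice, sample counts, and preprocessing are illustrative assumptions (the thread notes that naive flickr30k processing hit a transformers mismatch for this model), not the exact script that was published.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForImageTextToText, AutoProcessor

from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

MODEL_ID = "Qwen/Qwen3-VL-30B-A3B-Instruct"

# Assumption: the checkpoint loads via the generic image-text-to-text auto class;
# the real script may import the dedicated Qwen3 VL model class instead.
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)

NUM_CALIBRATION_SAMPLES = 256
MAX_SEQUENCE_LENGTH = 2048

# Illustrative calibration set; swap in whatever dataset/processing actually
# works for Qwen3 VL (the thread reports a mismatch with naive flickr30k use).
ds = load_dataset("lmms-lab/flickr30k", split=f"test[:{NUM_CALIBRATION_SAMPLES}]")

def preprocess(example):
    # Route every sample through the model's own chat template and processor
    # so the resulting tensors match what transformers expects.
    messages = [{
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
    text = processor.apply_chat_template(messages, add_generation_prompt=True)
    return processor(
        text=[text],
        images=[example["image"]],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )

ds = ds.map(preprocess, remove_columns=ds.column_names)

def data_collator(batch):
    # Calibration runs one sample at a time.
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}

# Minimal recipe; see the fuller MoE-aware mapping sketch earlier in the thread.
recipe = AWQModifier(ignore=["lm_head"], scheme="W4A16", targets=["Linear"])

# With calibration data present, llm-compressor runs its layer-by-layer
# sequential pipeline rather than the data-free path.
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    data_collator=data_collator,
)

SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-AWQ"
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)
```

The design point mirrored here is the calibration-driven sequential pipeline: AWQ's smoothing needs per-layer activation statistics, which is presumably why the calibrated path was requested over a data-free run for this model.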