Resolve mappings for Cohere2ForCausalLM, Glm4MoeForCausalLM, SeedOssForCausalLM, and Ernie4_5_MoeForCausalLM #1926
Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Summary of Changes

Hello @toncao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request expands the compatibility of the AWQ (Activation-aware Weight Quantization) modifier by integrating several new large language models. By adding these models to the existing mapping configurations, the PR ensures that they can leverage established quantization strategies, which is crucial for optimizing model performance and efficiency.
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
Code Review
This pull request adds AWQ mappings for several new models. The changes are mostly correct, but I've identified a potential issue with the mappings for the Mixture-of-Experts (MoE) models: `Glm4MoeForCausalLM` and `Ernie4_5_MoeForCausalLM` are mapped to `_default_mappings`, but they should likely use `_moe_default_mappings` to correctly handle their expert layers, similar to other MoE models in this file. I've provided a suggestion to fix this. The other mappings seem appropriate.
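For context, here is a minimal, self-contained sketch of the mapping mechanism under discussion. The `AWQMapping` dataclass and the `_default_mappings` / `_moe_default_mappings` / `AWQ_MAPPING_REGISTRY` names follow `src/llmcompressor/modifiers/awq/mappings.py`, but the regex patterns and registry entries below are illustrative assumptions, not the file's verbatim contents:

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class AWQMapping:
    smooth_layer: str          # layer whose output activations are smoothed
    balance_layers: List[str]  # layers whose weights absorb the inverse scales


# Dense architectures: regexes resolve against the usual decoder-layer names.
_default_mappings: List[AWQMapping] = [
    AWQMapping("re:.*input_layernorm", ["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"]),
    AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
    AWQMapping("re:.*post_attention_layernorm", ["re:.*gate_proj", "re:.*up_proj"]),
    AWQMapping("re:.*up_proj", ["re:.*down_proj"]),
]

# MoE architectures: the MLP mappings point at the per-expert projections.
_moe_default_mappings: List[AWQMapping] = [
    AWQMapping("re:.*input_layernorm", ["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"]),
    AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
    AWQMapping(
        "re:.*post_attention_layernorm",
        ["re:.*mlp.experts.*.gate_proj", "re:.*mlp.experts.*.up_proj"],
    ),
    AWQMapping("re:.*up_proj", ["re:.*down_proj"]),
]

# Architectures resolve to a mapping list by model class name; the review
# question is simply which list the two MoE models below should point at.
AWQ_MAPPING_REGISTRY: Dict[str, List[AWQMapping]] = {
    "Cohere2ForCausalLM": _default_mappings,
    "SeedOssForCausalLM": _default_mappings,
    "Glm4MoeForCausalLM": _moe_default_mappings,
    "Ernie4_5_MoeForCausalLM": _moe_default_mappings,
}
```

Each entry pairs a "smooth" layer with the downstream layers that compensate for its scaling; the regexes are matched against fully qualified module names.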
Hi @toncao! Thanks for the PR! Have you tested and validated that these mappings are correct for these models?
Hi @toncao, thanks for the contribution! Have you tried these? Gemini's change request looks appropriate to me.
Hi @kylesayrs and @brian-dellabetta, I am more than happy to make the PR! And yes, those are the mappings that I used for my quantized models, e.g., cpatonn/GLM-4.5-Air-AWQ-4bit, cpatonn/Seed-OSS-36B-Instruct-AWQ-4bit, and cpatonn/ERNIE-4.5-21B-A3B-Thinking-AWQ-4bit. GLM 4.5 does have `mlp`, `shared_experts`, and `experts` layers, all of which the default mapping's patterns also match (see the sketch below). Please let me know if this doesn't make sense.
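To make that concrete, a quick standalone check (the module paths below are hypothetical examples in the GLM-4.5 naming style, not dumped from the actual model):

```python
import re

# Hypothetical GLM-4.5-style module paths: the dense MLP, the shared
# experts, and each routed expert all end in "gate_proj".
paths = [
    "model.layers.0.mlp.gate_proj",
    "model.layers.3.mlp.shared_experts.gate_proj",
    "model.layers.3.mlp.experts.17.gate_proj",
]

# llm-compressor mapping patterns are written as "re:<pattern>";
# strip the prefix and match against each module path.
pattern = "re:.*gate_proj"
regex = re.compile(pattern.removeprefix("re:"))

for path in paths:
    # All three paths match, so a generic ".*gate_proj" mapping already
    # reaches the shared-expert and per-expert projections.
    print(path, bool(regex.fullmatch(path)))
```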
Ah, yes that's true even with the …
Thank you for the heads up. I am more than happy to contribute anything.
…orCausalLM, and Ernie4_5_MoeForCausalLM (vllm-project#1926)

This PR resolves mappings for the Cohere2ForCausalLM, Glm4MoeForCausalLM, SeedOssForCausalLM, and Ernie4_5_MoeForCausalLM models.

TEST PLAN:
Local `make test` results:

```
======================================================================== short test summary info =========================================================================
FAILED tests/llmcompressor/modeling/test_calib_deepseek_v3.py::test_calib_deepseekv3_module - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 56.00 MiB. GPU 0 has a total capacity of 23.57 GiB of which 14.12 MiB is free. Including non-PyTorch mem...
FAILED tests/llmcompressor/utils/test_helpers.py::test_disable_cache[MllamaForConditionalGeneration-meta-llama/Llama-3.2-11B-Vision-Instruct] - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 23.57 GiB of which 54.12 MiB is free. Including non-PyTorch mem...
FAILED tests/lmeval/test_lmeval.py::TestLMEval::test_lm_eval[None] - TypeError: argument should be a str or an os.PathLike object where __fspath__ returns a str, not 'NoneType'
=========================================================== 3 failed, 242 passed, 4 skipped in 80.83s (0:01:20) ===========================================================
```

Co-authored-by: toncao <[email protected]>
Co-authored-by: Brian Dellabetta <[email protected]>
Signed-off-by: LeiZhang <[email protected]>