
Conversation

toncao
Contributor

@toncao commented Oct 14, 2025

This PR adds AWQ mappings for the Cohere2ForCausalLM, Glm4MoeForCausalLM, SeedOssForCausalLM, and Ernie4_5_MoeForCausalLM models.

TEST PLAN:
Local make test results:

======================================================================== short test summary info =========================================================================
FAILED tests/llmcompressor/modeling/test_calib_deepseek_v3.py::test_calib_deepseekv3_module - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 56.00 MiB. GPU 0 has a total capacity of 23.57 GiB of which 14.12 MiB is free. Including non-PyTorch mem...
FAILED tests/llmcompressor/utils/test_helpers.py::test_disable_cache[MllamaForConditionalGeneration-meta-llama/Llama-3.2-11B-Vision-Instruct] - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 23.57 GiB of which 54.12 MiB is free. Including non-PyTorch mem...
FAILED tests/lmeval/test_lmeval.py::TestLMEval::test_lm_eval[None] - TypeError: argument should be a str or an os.PathLike object where __fspath__ returns a str, not 'NoneType'
=========================================================== 3 failed, 242 passed, 4 skipped in 80.83s (0:01:20) ===========================================================


👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

Contributor

Summary of Changes

Hello @toncao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request expands the compatibility of the AWQ (Activation-aware Weight Quantization) modifier by integrating several new large language models. By adding these models to the existing mapping configurations, the PR ensures that they can leverage established quantization strategies, which is crucial for optimizing model performance and efficiency. A minimal sketch of this kind of mapping registration follows the highlights below.

Highlights

  • New Model Mappings: Added AWQ mappings for Cohere2VisionForConditionalGeneration to utilize existing Cohere mapping configurations.
  • New Model Mappings: Included AWQ mappings for Llama4ForConditionalGeneration using the default mapping strategy.
  • New Model Mappings: Integrated AWQ mappings for Glm4MoeForCausalLM with the default mapping configurations.
  • New Model Mappings: Added AWQ mappings for SeedOssForCausalLM using the default mapping strategy.
  • New Model Mappings: Incorporated AWQ mappings for Ernie4_5_MoeForCausalLM with the default mapping configurations.
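
In spirit, the change is small: it points each new architecture's class name at one of the mapping lists that already exist for the AWQ modifier. Below is a minimal, self-contained sketch of that idea; the registry and list names (AWQ_MAPPING_REGISTRY, _default_mappings) and the mapping contents are assumptions for illustration based on this thread, not a copy of the actual file.

```python
from dataclasses import dataclass
from typing import Dict, List

# Sketch of an AWQ mapping: a "smoothing" layer and the downstream layers
# whose inputs it feeds ("balance" layers), written as "re:" regex targets.
@dataclass
class AWQMapping:
    smooth_layer: str
    balance_layers: List[str]

# Illustrative default mappings (structure only, not the real file contents).
_default_mappings: List[AWQMapping] = [
    AWQMapping(
        "re:.*input_layernorm$",
        ["re:.*q_proj$", "re:.*k_proj$", "re:.*v_proj$"],
    ),
    AWQMapping(
        "re:.*post_attention_layernorm$",
        ["re:.*gate_proj$", "re:.*up_proj$"],
    ),
]

# The PR essentially adds entries like these, keyed by model class name,
# so the AWQ modifier can resolve mappings for the new architectures.
AWQ_MAPPING_REGISTRY: Dict[str, List[AWQMapping]] = {
    "Glm4MoeForCausalLM": _default_mappings,
    "SeedOssForCausalLM": _default_mappings,
    "Ernie4_5_MoeForCausalLM": _default_mappings,
}
```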

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request adds AWQ mappings for several new models. The changes are mostly correct, but I've identified a potential issue with the mappings for the Mixture-of-Experts (MoE) models. Glm4MoeForCausalLM and Ernie4_5_MoeForCausalLM are mapped to _default_mappings, but they should likely use _moe_default_mappings to correctly handle their expert layers, similar to other MoE models in this file. I've provided a suggestion to fix this. The other mappings seem appropriate.

@kylesayrs
Collaborator

Hi @toncao!

Thanks for the PR! Have you tested and validated that these mappings are correct for these models?

@dsikka added the awq label (For any issue / PR related to AWQ support) Oct 14, 2025
Collaborator

@brian-dellabetta left a comment


Hi @toncao, thanks for the contribution! Have you tried these? Gemini's change request looks appropriate to me.

@toncao
Contributor Author

toncao commented Oct 14, 2025

Hi @kylesayrs and @brian-dellabetta, I am more than happy to make the PR!

And yes, these are the mappings I used for my quantized models, e.g. cpatonn/GLM-4.5-Air-AWQ-4bit, cpatonn/Seed-OSS-36B-Instruct-AWQ-4bit, and cpatonn/ERNIE-4.5-21B-A3B-Thinking-AWQ-4bit.

GLM 4.5 does have mlp, shared_experts, and experts layers, and the default mapping matches all of them: e.g., re:.*gate_proj$ conveniently covers the modules that re:.*mlp.gate_proj$, re:.*mlp.shared_experts.gate_proj$, and re:.*mlp.experts.*.gate_proj$ would target in their respective layers (see the quick regex check below).

Please let me know if this doesn't make sense.
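
As a quick, self-contained check of that claim (the module names below are representative GLM-style names based on the comment above, not read from the actual checkpoint):

```python
import re

# The default AWQ target from the discussion, without the "re:" prefix that
# llm-compressor uses to flag regex targets.
default_pattern = re.compile(r".*gate_proj$")

# Representative GLM-4.5-style module names: dense MLP, shared experts,
# and routed experts (illustrative, not dumped from the real model).
module_names = [
    "model.layers.0.mlp.gate_proj",
    "model.layers.3.mlp.shared_experts.gate_proj",
    "model.layers.3.mlp.experts.17.gate_proj",
    "model.layers.3.mlp.experts.17.up_proj",  # should NOT match
]

for name in module_names:
    print(name, "->", "match" if default_pattern.match(name) else "no match")

# The single broad pattern matches all three gate_proj variants, which is
# why the default mappings already cover GLM-style MoE layers.
```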

@brian-dellabetta
Collaborator

Ah, yes, that's true even with the _moe_default_mappings in that file. Both mappings will resolve the same way in a Qwen MoE model. In that case, we should be fine to merge this in. I can leave a note about this in a future PR; I will be looking at some updates to AWQModifier this month.
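
For anyone who wants to verify this kind of pattern resolution on a real architecture without running a full quantization pass, one option is to instantiate a trimmed, randomly initialized copy of a MoE model from its config and grep its module names. A hedged sketch follows; the model id, the config attribute names, and the second pattern are assumptions chosen to keep the example small.

```python
import re
from transformers import AutoConfig, AutoModelForCausalLM

# Any MoE checkpoint with a Hub config would do; Qwen1.5-MoE is used here
# only as an example of a Qwen-style MoE layout.
config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")

# Shrink the model so it can be built cheaply with random weights. These
# attribute names are assumptions and can differ between architectures.
config.num_hidden_layers = 1
if hasattr(config, "num_experts"):
    config.num_experts = 4

model = AutoModelForCausalLM.from_config(config)

broad = re.compile(r".*gate_proj$")                           # default-style target
experts_only = re.compile(r".*mlp\.experts\..*\.gate_proj$")  # expert-specific target

broad_hits = {name for name, _ in model.named_modules() if broad.match(name)}
expert_hits = {name for name, _ in model.named_modules() if experts_only.match(name)}

print(f"broad pattern matched {len(broad_hits)} modules")
print(f"expert-specific pattern matched {len(expert_hits)} modules")
print("expert hits are a subset of broad hits:", expert_hits <= broad_hits)
```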

@brian-dellabetta added the ready label (When a PR is ready for review) Oct 14, 2025
@toncao
Contributor Author

toncao commented Oct 15, 2025

Thank you for the heads-up. I am more than happy to contribute anything.

@brian-dellabetta enabled auto-merge (squash) October 16, 2025 20:28
@brian-dellabetta merged commit e6d8ad8 into vllm-project:main Oct 16, 2025
9 checks passed
zhanglei1172 pushed a commit to zhanglei1172/llm-compressor that referenced this pull request Oct 17, 2025
…orCausalLM, and Ernie4_5_MoeForCausalLM (vllm-project#1926)

This repo is to resolve mappings for Cohere2ForCausalLM,
Glm4MoeForCausalLM, SeedOssForCausalLM, and Ernie4_5_MoeForCausalLM
models.

TEST PLAN:
Local make test results:
```
======================================================================== short test summary info =========================================================================
FAILED tests/llmcompressor/modeling/test_calib_deepseek_v3.py::test_calib_deepseekv3_module - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 56.00 MiB. GPU 0 has a total capacity of 23.57 GiB of which 14.12 MiB is free. Including non-PyTorch mem...
FAILED tests/llmcompressor/utils/test_helpers.py::test_disable_cache[MllamaForConditionalGeneration-meta-llama/Llama-3.2-11B-Vision-Instruct] - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 23.57 GiB of which 54.12 MiB is free. Including non-PyTorch mem...
FAILED tests/lmeval/test_lmeval.py::TestLMEval::test_lm_eval[None] - TypeError: argument should be a str or an os.PathLike object where __fspath__ returns a str, not 'NoneType'
=========================================================== 3 failed, 242 passed, 4 skipped in 80.83s (0:01:20) ===========================================================
```

Co-authored-by: toncao <[email protected]>
Co-authored-by: Brian Dellabetta <[email protected]>
Signed-off-by: LeiZhang <[email protected]>