
AWQ Modifier Support #1177

Open · brian-dellabetta wants to merge 25 commits into main from bdellabe/awq-modifier-v3
Conversation

brian-dellabetta (Collaborator) commented Feb 19, 2025

SUMMARY:
Addition of AWQModifier, based on AutoAWQ implementation.

Should be reviewed/merged in conjunction with neuralmagic/compressed-tensors#269

Replaces #181 and #824 (hence v3)
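
For reviewers, here is a minimal usage sketch of the new modifier (assuming it is exposed under `llmcompressor.modifiers.awq` and composed with the existing `oneshot` entrypoint; exact import paths, argument names, and defaults may differ from what lands in this PR):

```python
# Hypothetical usage sketch -- import paths and arguments are assumptions,
# not necessarily the final API introduced by this PR.
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = [
    # Resolve activation-aware scales before quantization (no weight clipping,
    # per the note below).
    AWQModifier(ignore=["lm_head"]),
    # W4A16, group size 128 -- the "Group 128" setting from Table 4 of the paper.
    QuantizationModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"]),
]

oneshot(
    model="meta-llama/Llama-2-7b-hf",
    dataset="open_platypus",          # any calibration dataset oneshot supports
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
)
```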

TEST PLAN:
Some unit tests are included, but since this was mostly a port from AutoAWQ, we validated the code by reproducing the evaluation metrics in Table 4 of the paper. We achieve the following wikitext PPL scores:

Llama-2 7B Group 128:

  1. Paper: 5.60
  2. AutoAWQ: 5.615
  3. This implementation: 5.612
  4. We match what the paper reports for plain RTN: 5.73
  5. We get reasonable results for channel-wise quantization: 6.788. AutoAWQ errors out for this configuration (setting "q_group_size": -1 in the quant_config; see the config sketch below), and the paper does not report channel-wise results.

Llama-2 13B Group 128:

  1. We match the results of AutoAWQ and the results shown in the paper: 4.97
  2. We match what the paper reports for plain RTN: 4.984

NOTE: We are excluding the clipping logic in this implementation. If we want to add it, it should be a separate modifier: the two are mutually exclusive, and the AWQ data model doesn't align well with clipping. That might explain the slight deviation between the results reported in the paper and those of our implementation.
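
For reference, the AutoAWQ baseline we compared against was configured roughly like the following sketch (field names follow AutoAWQ's documented `quant_config`; exact values other than the group size are assumptions). The commented-out channel-wise variant is the one that errors out in AutoAWQ:

```python
# Sketch of the AutoAWQ baseline used for the comparison above.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-hf"

# Group-128 setting, matching Table 4 of the paper.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
# Channel-wise variant -- this is the configuration AutoAWQ errors out on:
# quant_config = {"zero_point": True, "q_group_size": -1, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
```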


👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

@brian-dellabetta force-pushed the bdellabe/awq-modifier-v3 branch 2 times, most recently from 9273ef3 to 28f8bca on February 20, 2025
@brian-dellabetta changed the title from "Bdellabe/awq modifier v3" to "Bdellabe/Rtuli awq modifier v3" on Mar 10, 2025
@brian-dellabetta marked this pull request as ready for review on March 10, 2025
Comment on lines 48 to 50
# TODO this should only be added if v_proj/o_proj shapes match up
# should we check during validation and skip if this is not the case?
AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
brian-dellabetta (Collaborator, Author):
This is the one TODO. The logic in AutoAWQ is to only add this mapping if the shapes line up correctly (logic here). This is the case for the Llama 2 models I've been testing on, but not all of the tiny Llama models. Any suggestion on how best to handle both cases?
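
One possible approach, sketched below: perform the shape check when resolving mappings and skip the v_proj → o_proj pair (with a warning) when it fails, rather than erroring. The helper name and wiring here are hypothetical, for illustration only:

```python
import torch

def _v_o_shapes_compatible(v_proj: torch.nn.Linear, o_proj: torch.nn.Linear) -> bool:
    """Hypothetical helper: the v_proj -> o_proj mapping is only valid when
    v_proj's output features feed o_proj's input features directly (GQA models
    repeat KV heads, so v_proj can be narrower than o_proj's input)."""
    # nn.Linear.weight has shape (out_features, in_features)
    return v_proj.weight.shape[0] == o_proj.weight.shape[1]

# During mapping resolution (pseudocode wiring):
# if not _v_o_shapes_compatible(v_proj, o_proj):
#     logger.warning("Skipping v_proj -> o_proj AWQ mapping: incompatible shapes")
#     continue
```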

brian-dellabetta (Collaborator, Author) commented Mar 11, 2025:

PPL is 5.607 for Llama-2 7B with this mapping included, 5.614 when it isn't.

dsikka (Collaborator) left a comment:

Should we add evals comparing to GPTQ?

brian-dellabetta (Collaborator, Author) commented:
Using the latest commit at this time, I am getting the following results via lm-eval (an example invocation is sketched after the results).

deepseek-ai/DeepSeek-R1-Distill-Llama-8B:
 dense:
   # gsm8k: flexible-extract, strict-match
   gsm8k: .6619, .6490
   wikitext ppl: 15.4498
 awq+quant sym:
   gsm8k: .6376, .6217
   wikitext ppl: 18.8623
 quant sym:
   gsm8k: .6732, .6543
   wikitext ppl: 16.7398
meta-llama/Llama-2-7b-hf:
 dense:
   gsm8k: .1342, .1342
   wikitext ppl: 8.7587
 awq+quant sym:
   gsm8k: .1024, .1001
   wikitext ppl: 9.194
 quant sym:
   gsm8k: .1183, .1152
   wikitext ppl: 9.311
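
For reproducibility, the numbers above can be regenerated with lm-evaluation-harness along these lines (a sketch; the local model path, dtype, and batch size are assumptions):

```python
# Sketch of the lm-eval invocation behind the gsm8k / wikitext numbers above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=./DeepSeek-R1-Distill-Llama-8B-awq-w4a16,dtype=auto",
    tasks=["gsm8k", "wikitext"],
    batch_size=8,
)
print(results["results"])
```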

@dsikka changed the title from "Bdellabe/Rtuli awq modifier v3" to "AWQ Modifier Support" on Mar 25, 2025
@brian-dellabetta force-pushed the bdellabe/awq-modifier-v3 branch from 0df0b38 to 6ee0010 on March 26, 2025