
Commit 148efda

[MXFp4] Add E2E Example Script for Llama3 (#2042)
# Summary
- Requires: vllm-project/compressed-tensors#509
- Adds a script to generate an MXFP4-quantized model
- This feature is currently experimental, as support has not yet landed in or been tested with vLLM

# Testing
Sample Model:
- nm-testing/Meta-Llama-3-8B-Instruct-MXFP4

Sample Generation (Transformers):
```bash
========== SAMPLE GENERATION ==============
<|begin_of_text|>Hello my name is Sophia and I am a 3rd year student at the University of California, Berkeley. I am a double major in Linguistics and Psychology, with a minor in Education. I am very interested in the way that language and culture interact, and I believe that education is the key to creating a more just and equitable society. I am a native speaker of English, and I have also studied Spanish, French, and Mandarin Chinese. I am very interested in the way that language can be used to bring
==========================================
```

Sample Config:
```json
"quantization_config": {
  "config_groups": {
    "group_0": {
      "format": "mxfp4-pack-quantized",
      "input_activations": {
        "actorder": null,
        "block_structure": null,
        "dynamic": true,
        "group_size": 32,
        "num_bits": 4,
        "observer": null,
        "observer_kwargs": {},
        "scale_dtype": "torch.uint8",
        "strategy": "group",
        "symmetric": true,
        "type": "float",
        "zp_dtype": null
      },
      "output_activations": null,
      "targets": ["Linear"],
      "weights": {
        "actorder": null,
        "block_structure": null,
        "dynamic": false,
        "group_size": 32,
        "num_bits": 4,
        "observer": "minmax",
        "observer_kwargs": {},
        "scale_dtype": "torch.uint8",
        "strategy": "group",
        "symmetric": true,
        "type": "float",
        "zp_dtype": null
      }
    }
  },
  "format": "mxfp4-pack-quantized"
}
```

---------

Signed-off-by: Dipika Sikka <[email protected]>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
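The config above encodes the core of the MX format: each group of 32 values shares a single power-of-two (E8M0) scale stored as a uint8, and each element is an FP4 (E2M1) value. Below is a minimal, illustrative sketch of that scheme, not the compressed-tensors implementation; the E2M1 grid and the scale rule here are assumptions taken from the OCP MX spec, and the helper name is made up:

```python
import torch

# Positive E2M1 (FP4) magnitudes per the OCP MX spec.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])


def fake_quantize_mxfp4_group(x: torch.Tensor) -> torch.Tensor:
    """Fake-quantize one group of 32 values with a shared power-of-two scale."""
    assert x.numel() == 32
    amax = x.abs().max()
    if amax == 0:
        return x.clone()
    # Shared E8M0 scale: align the group max with E2M1's largest magnitude
    # (6.0 = 1.5 * 2**2, hence the -2).
    scale = 2.0 ** (torch.floor(torch.log2(amax)).item() - 2)
    # Snap each scaled magnitude to the nearest representable FP4 value,
    # then restore sign and scale.
    idx = ((x / scale).abs().unsqueeze(-1) - FP4_GRID).abs().argmin(dim=-1)
    return FP4_GRID[idx] * x.sign() * scale


group = torch.randn(32)
dequant = fake_quantize_mxfp4_group(group)
print(f"max abs error: {(group - dequant).abs().max():.4f}")
```

Per the config, activations use this grouping dynamically at runtime (`"dynamic": true`), while weights are quantized once offline using the minmax observer.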
1 parent 560bb9c commit 148efda

File tree

2 files changed: +38 −0 lines changed

experimental/README.md

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
```markdown
# Experimental Features

This folder aims to highlight features that are works in progress, or that are supported in LLM Compressor and/or Compressed-Tensors but lack full support in downstream libraries like vLLM.
```

experimental/mxfp4/llama3_mxfp4.py

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.utils import dispatch_for_generation

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"

# Load model.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Configure the quantization algorithm and scheme.
# In this case, we:
# * quantize the weights to fp4 with a group size of 32 via PTQ
recipe = QuantizationModifier(targets="Linear", scheme="MXFP4", ignore=["lm_head"])

# Apply quantization.
oneshot(model=model, recipe=recipe)

print("\n\n")
print("========== SAMPLE GENERATION ==============")
dispatch_for_generation(model)
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to(
    model.device
)
output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0]))
print("==========================================\n\n")


# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-MXFP4"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```
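After saving, a quick sanity check can confirm the scheme was serialized as expected. A minimal sketch; the keys read here are the ones shown in the sample config above:

```python
import json
import os

# Inspect the serialized quantization config written by save_pretrained.
with open(os.path.join(SAVE_DIR, "config.json")) as f:
    config = json.load(f)

quant_config = config["quantization_config"]
print(quant_config["format"])  # expected: mxfp4-pack-quantized
print(quant_config["config_groups"]["group_0"]["weights"]["group_size"])  # expected: 32
```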
