
TypeError: missing a required argument: 'input' during Multi-GPU Quantization of Qwen2.5-VL using AWQ #1571


Description

@24kHandsome1201

Bug Report

Title: TypeError: missing a required argument: 'input' during Multi-GPU Quantization of Qwen2.5-VL using AWQ.


Description:
While performing multi-GPU quantization using the AWQ modifier on the Qwen2.5-VL model, the following error occurs:

TypeError: missing a required argument: 'input'

The error happens when running the custom script for model calibration after setting up the necessary configurations and dataset.


Steps to Reproduce:

  1. Download the code from the 0619 release.

  2. Navigate to the examples/multimodal_vision directory.

  3. Install the package using:

    pip install -e .
  4. Run the custom script provided (as shown below) to start the quantization process.

import base64
from io import BytesIO

import torch
from datasets import load_dataset
from awq.utils.qwen_vl_utils import process_vision_info
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration


from llmcompressor.transformers import oneshot
from llmcompressor.utils import dispatch_for_generation
from llmcompressor.modifiers.awq import AWQModifier

# Load model.
model_id = "/Qwen2.5-VL-7B-Instruct/"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = "test[:512]"
NUM_CALIBRATION_SAMPLES = 16
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)

# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
    # preprocess
    buffered = BytesIO()
    example["image"].save(buffered, format="PNG")
    encoded_image = base64.b64encode(buffered.getvalue())
    encoded_image_text = encoded_image.decode("utf-8")
    base64_qwen = f"data:image;base64,{encoded_image_text}"
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": base64_qwen},
                {"type": "text", "text": "What does the image show?"},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)

    # tokenize
    return processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )


ds = ds.map(preprocess_and_tokenize, remove_columns=ds.column_names)


# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}


# Recipe
recipe = [
    AWQModifier(
        targets=["Linear"],
        scheme="W4A16",
        group_size=128,
        ignore=[
            "lm_head",
            "re:.*visual.blocks.*mlp.down_proj",
            "re:visual.*",
            "re:visual.blocks.*",
            "re:model.visual.*",
            "re:.*cross_attn.*",
        ],
        offload_cache=False
    )
]

# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    sequential_targets=["Qwen2_5_VLDecoderLayer"],
)

# Confirm generations of the quantized model look sane.
print("========== SAMPLE GENERATION ==============")
dispatch_for_generation(model)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "http://images.cocodataset.org/train2017/000000231895.jpg",
            },
            {"type": "text", "text": "Please describe the animal in this image\n"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[prompt],
    images=image_inputs,
    videos=video_inputs,
    padding=False,
    max_length=MAX_SEQUENCE_LENGTH,
    truncation=True,
    return_tensors="pt",
).to("cuda")
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
print("==========================================")


# Save to disk compressed.
SAVE_DIR = model_id.rstrip("/").split("/")[-1] + "-W4A16-G128"
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)

Expected Result:

  • The script should execute the quantization of the model on multiple GPUs without errors.
  • The model should generate a correct output after quantization.

Actual Result:

  • The following error occurs:

    TypeError: missing a required argument: 'input'
    

    This happens during the calibration phase while using the AWQ modifier for quantization.


Environment:

  • Operating System: Linux

  • GPU: NVIDIA L40S, 3 GPUs

  • PyTorch Version: not specified

  • CUDA Version: not specified

  • Library Versions:

    • llmcompressor
    • AWQ
    • transformers
    • torch

Logs:

  • The full stack trace is provided in the original error message above.
  • Here are relevant lines from the logs:
TypeError: missing a required argument: 'input'
...

Possible Cause:

  • The error seems to be related to a required 'input' argument that is missing during the forward pass of a certain layer. This could be because that layer does not receive its input, or because of how the layers are processed across multiple GPUs (a minimal illustration follows).
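
    The message "missing a required argument: 'input'" matches the error that Python's inspect.Signature.bind raises when a required parameter is not supplied. A minimal, hedged illustration (not the library's actual code; the forward signature below is hypothetical):

    import inspect

    # Hypothetical forward signature with a required positional 'input'.
    def forward(input, attention_mask=None):
        ...

    sig = inspect.signature(forward)
    try:
        # Re-bind captured kwargs without the positional activation tensor.
        sig.bind(attention_mask=None)
    except TypeError as err:
        print(err)  # -> missing a required argument: 'input'

    If the calibration pipeline captures a layer's inputs and later re-binds them against the layer's forward signature, then losing the activation tensor along the way (for example while cached arguments are moved between devices) would produce exactly this message.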

Suggestions:

  • Verify that the layer causing the error is loaded correctly and that its input is actually passed to it during multi-GPU execution (a debugging sketch follows this list).
  • Check the model's layer definitions and ensure that all required arguments are supplied during multi-GPU quantization.
  • Investigate whether the issue stems from incorrect use or configuration of the AWQModifier in the multi-GPU setup.
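
A rough debugging sketch, assuming the run can be repeated: register forward pre-hooks on the model's Linear layers to log any module that is called without a recognizable input during calibration. log_missing_input is a hypothetical helper written for this issue, not part of llmcompressor; with_kwargs=True requires PyTorch 2.0+.

import torch

def log_missing_input(name):
    def hook(module, args, kwargs):
        # Flag calls that carry neither a positional tensor nor a
        # recognizable keyword input.
        if len(args) == 0 and "input" not in kwargs and "hidden_states" not in kwargs:
            print(f"[debug] {name} called without a positional input")
    return hook

handles = [
    module.register_forward_pre_hook(log_missing_input(name), with_kwargs=True)
    for name, module in model.named_modules()
    if isinstance(module, torch.nn.Linear)
]

# ... run oneshot(...) as above, then remove the hooks:
for handle in handles:
    handle.remove()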
