
Conversation

@nirda7 (Contributor) commented Jan 13, 2025

This PR adds support for FP8 quantization and inference on Intel Gaudi (HPU) using INC (Intel Neural Compressor).
Currently, quantization is validated only on Llama models.

Running Inference in FP8 with INC:
Specify the quantization method "inc" and the KV cache dtype "fp8_inc" as parameters to the LLM object.
You must also set the environment variable "QUANT_CONFIG" to point to a JSON config file (https://docs.habana.ai/en/latest/PyTorch/Inference_on_PyTorch/Quantization/Inference_Using_FP8.html#supported-json-config-file-options) in QUANTIZE mode. Make sure there are measurement or scale files in the folder specified as the "dump_stats_path" in the JSON config file (if none exist, scale files are generated during the inference run from the measurement files).
At the end of the run, the model executor's shutdown method must be called.
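
For illustration, a minimal sketch of such a run (the model name, config path, and the exact shutdown call are assumptions based on this description and the linked docs, not a verbatim recipe):

import os
from vllm import LLM, SamplingParams

# Assumption: QUANT_CONFIG points to an INC JSON config in QUANTIZE mode whose
# "dump_stats_path" folder already contains measurement/scale files.
os.environ.setdefault("QUANT_CONFIG", "/path/to/inc_quant_config.json")

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # quantization is currently validated on Llama models
    quantization="inc",                        # FP8 weight quantization via Intel Neural Compressor
    kv_cache_dtype="fp8_inc",                  # FP8 KV cache handled through INC
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
for out in outputs:
    print(out.outputs[0].text)

# Per the note above, the model executor's shutdown method must be called at the
# end of the run; the exact attribute path may differ between vLLM versions.
llm.llm_engine.model_executor.shutdown()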

More information on vLLM quantization with INC is available in the documentation added in this PR: https://github.com/vllm-project/vllm/blob/main/docs/source/features/quantization/inc.md

This PR also adds a new flag, "weights_load_device", which allows loading the model's (unquantized) weights onto a different device than the one the model will run on. If not provided, the existing behavior is kept and the device specified in the device config is used.
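
A hedged sketch of how the new flag might be used (exposing "weights_load_device" as a keyword argument of the LLM object is an assumption based on the flag name above; the exact spelling and plumbing may differ):

from vllm import LLM

# Assumption: unquantized BF16 weights are first loaded on the CPU, while the model
# itself runs on the device from the device config (HPU in this PR's use case).
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    quantization="inc",
    kv_cache_dtype="fp8_inc",
    weights_load_device="cpu",
)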

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@mergify mergify bot added the documentation Improvements or additions to documentation label Jan 13, 2025
@mgoin mgoin self-requested a review January 13, 2025 19:12
@mergify mergify bot added the ci/build label Jan 22, 2025
@nirda7 nirda7 force-pushed the dev/hpu_fp8 branch 3 times, most recently from 5c04292 to ab1c832 on January 27, 2025 00:12
@nirda7 nirda7 force-pushed the dev/hpu_fp8 branch 2 times, most recently from d1662df to 8a2ce5f on February 6, 2025 15:13
@nirda7 nirda7 force-pushed the dev/hpu_fp8 branch 2 times, most recently from 58e7d72 to 0c1f134 on February 16, 2025 10:09
@mergify

mergify bot commented Feb 19, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @nirda7.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Feb 19, 2025
@mergify mergify bot removed the needs-rebase label Feb 19, 2025
@nirda7 nirda7 requested a review from zhouyu5 February 24, 2025 14:35
@nirda7 nirda7 force-pushed the dev/hpu_fp8 branch 2 times, most recently from 67cd337 to a6feb86 on March 4, 2025 10:31
- [BitBLAS](bitblas.md)
- [GGUF](gguf.md)
- [GPTQModel](gptqmodel.md)
- [Inc](inc.md)
Member:

From the doc, seems like it should be INC

Suggested change
- [Inc](inc.md)
- [INC](inc.md)

Contributor:

done


BlockSize = Literal[1, 8, 16, 32, 64, 128]
-CacheDType = Literal["auto", "fp8", "fp8_e4m3", "fp8_e5m2"]
+CacheDType = Literal["auto", "fp8", "fp8_e4m3", "fp8_e5m2", "fp8_inc"]
Member:

Based on the comment here, can we remove the new cache dtype now? #12010 (comment)

Contributor:

Since the HPU worker for v1 has been moved to a plugin and v0 will be deprecated soon, we want to make the mapping from "fp8_inc" to "fp8_e4m3" more visible.


Alternatively, do you think we could make the mapping above conditional, like:

STR_DTYPE_TO_TORCH_DTYPE = {
    "half": torch.half,
    "bfloat16": torch.bfloat16,
    "float": torch.float,
    "fp8": torch.uint8,
    "fp8_e4m3": torch.uint8 if not current_platform.is_support_fp8_e4m3() else torch.float8_e4m3fn
    "fp8_e5m2": torch.uint8,
    "int8": torch.int8,
}
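
For illustration only, a lookup-time variant of this idea might look like the sketch below (the helper name and the platform-capability flag are assumptions, not existing vLLM APIs):

import torch

# Hypothetical helper: resolve a kv-cache dtype string to a torch dtype at lookup
# time instead of baking platform checks into the static table.
def resolve_kv_cache_dtype(name: str, supports_fp8_e4m3: bool) -> torch.dtype:
    if name == "fp8_inc":
        # On Gaudi, "fp8_inc" is intended to behave like fp8_e4m3 (scales managed by INC).
        return torch.float8_e4m3fn
    if name in ("fp8", "fp8_e4m3"):
        return torch.float8_e4m3fn if supports_fp8_e4m3 else torch.uint8
    if name == "fp8_e5m2":
        return torch.uint8
    return {"half": torch.half, "bfloat16": torch.bfloat16, "float": torch.float}[name]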

Member:

It's okay, let's just keep fp8_inc then

Member:

Can you please add a header to this file explaining its purpose? It is rather confusing otherwise and this is a good place to define how this quant method works

model = initialize_model(vllm_config=vllm_config,
                         model_config=model_config)

logger.info("Loading weights on %s ...", load_device)
Member:

Make this debug

Contributor:

done

Comment on lines 150 to +152
# GGUF doesn't have config file
-if model_config.quantization == "gguf":
-    return quant_cls.from_config({})
+if model_config.quantization in ("gguf", "inc"):
+    return quant_cls()
Member:

Is this valid for gguf?

@mergify

mergify bot commented Jul 2, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @nirda7.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 2, 2025
@mergify mergify bot removed the needs-rebase label Jul 8, 2025
@mergify

mergify bot commented Jul 14, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @nirda7.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 14, 2025
@xuechendi (Contributor):

@mgoin, please help review the PR again; we have updated the code and resolved most of your comments.
There is one comment where we need your opinion:
#12010 (comment)

@mgoin (Member) left a comment:

LGTM, thanks for the iterations. Please resolve the merge conflict


@mergify mergify bot removed the needs-rebase label Jul 15, 2025
@xuechendi (Contributor):

@robertgshaw2-redhat @simon-mo @WoosukKwon, we have received one approval from Michael.
Could you check the PR as well and help add the "ready" tag if possible? Thanks so much.

@mgoin mgoin added quantization ready ONLY add when PR is ready to merge/full CI is needed labels Jul 16, 2025
@xuechendi (Contributor):

@mgoin, thanks for the review; all CI checks have passed.

@mgoin mgoin merged commit 01513a3 into vllm-project:main Jul 16, 2025
86 checks passed
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
… INC (Intel Neural Compressor) (vllm-project#12010)

Signed-off-by: Nir David <[email protected]>
Signed-off-by: Uri Livne <[email protected]>
Co-authored-by: Uri Livne <[email protected]>
Signed-off-by: x22x22 <[email protected]>
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
… INC (Intel Neural Compressor) (vllm-project#12010)

Signed-off-by: Nir David <[email protected]>
Signed-off-by: Uri Livne <[email protected]>
Co-authored-by: Uri Livne <[email protected]>
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
… INC (Intel Neural Compressor) (vllm-project#12010)

Signed-off-by: Nir David <[email protected]>
Signed-off-by: Uri Livne <[email protected]>
Co-authored-by: Uri Livne <[email protected]>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
… INC (Intel Neural Compressor) (vllm-project#12010)

Signed-off-by: Nir David <[email protected]>
Signed-off-by: Uri Livne <[email protected]>
Co-authored-by: Uri Livne <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
… INC (Intel Neural Compressor) (vllm-project#12010)

Signed-off-by: Nir David <[email protected]>
Signed-off-by: Uri Livne <[email protected]>
Co-authored-by: Uri Livne <[email protected]>
Signed-off-by: Paul Pak <[email protected]>
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
… INC (Intel Neural Compressor) (vllm-project#12010)

Signed-off-by: Nir David <[email protected]>
Signed-off-by: Uri Livne <[email protected]>
Co-authored-by: Uri Livne <[email protected]>
Signed-off-by: Diego-Castan <[email protected]>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 27, 2025
… INC (Intel Neural Compressor) (vllm-project#12010)

Signed-off-by: Nir David <[email protected]>
Signed-off-by: Uri Livne <[email protected]>
Co-authored-by: Uri Livne <[email protected]>

Labels

ci/build, documentation (Improvements or additions to documentation), ready (ONLY add when PR is ready to merge/full CI is needed)

8 participants