Fix optimum.quanto quantization call in cache_utils #34606
base: main
Conversation
@@ -813,7 +813,8 @@ def _quantize(self, tensor, axis):
    if is_optimum_quanto_available():
        from optimum.quanto import quantize_weight

        qtensor = quantize_weight(tensor, self.qtype, axis, self.q_group_size)
Can you have a look @zucchini-nlp, since you made this change in the earlier PR? Looking at the optimum-quanto source code, quantize_weight does require passing scale.
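For context, here is a rough sketch of the kind of call being described, assuming optimum-quanto's MaxOptimizer can be used to derive the scale and shift that the scale-requiring quantize_weight expects; the exact signatures are assumptions to verify against the installed release:

```python
import torch
from optimum.quanto import MaxOptimizer, qint4, quantize_weight

tensor = torch.randn(64, 64)   # stand-in for the cached key/value tensor
axis, group_size = 0, 64       # stand-ins for self.axis_key / self.q_group_size

# Assumption: MaxOptimizer derives the per-group scale and shift (zero-point)
# that newer quantize_weight signatures expect before group_size.
scale, shift = MaxOptimizer()(tensor, qint4, axis, group_size)
qtensor = quantize_weight(tensor, qint4, axis, scale, shift, group_size)
```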
Now that I look at it, I think it is related to the optimum-quanto version. Pre v0.2.4 there was no scale, but after v0.2.4 we have to pass scale before the group size.
@SunMarc if that is correct, we probably need to check the version as well. That seems like a lot of checks, but since the old quanto should be removed in the next v4.47 release, it could be a workaround. WDYT?
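A hypothetical sketch of the version gate being suggested here; the 0.2.5 boundary and the older positional group_size signature are assumptions rather than confirmed API:

```python
from importlib.metadata import version

from packaging.version import parse


def quantize_with_optimum_quanto(tensor, qtype, axis, group_size, optimizer):
    # Hypothetical helper: dispatch on the installed optimum-quanto release.
    from optimum.quanto import quantize_weight

    if parse(version("optimum-quanto")) >= parse("0.2.5"):
        # Assumption: newer releases expect precomputed scale/shift before group_size.
        scale, shift = optimizer(tensor, qtype, axis, group_size)
        return quantize_weight(tensor, qtype, axis, scale, shift, group_size)
    # Assumption: older releases accepted group_size directly after axis.
    return quantize_weight(tensor, qtype, axis, group_size)
```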
Sounds good to me! Also, what was the issue with the prior implementation? Was it just to simplify the code a bit?
You mean the previous PR I merged? It only made the code compatible with optimum-quanto v0.2.4, but I forgot there could be older versions. As for why optimum-quanto keeps changing its code, I have no idea, but it would be nice if they didn't change it so drastically anymore 😅
I guess you'll have more info about future maintenance plans for quanto :)
@w3rew thanks for opening the PR, can you please update it with the suggested changes?
Sure! Will look into it shortly.
cc @dacorvo for visibility
@zucchini-nlp the optimum-quanto 0.2.5 release was synchronized with the switch from quanto to optimum-quanto in transformers at the beginning of October, and the code in cache_utils.py was correct. It is your pull request to align with 0.2.4 (a version that was never supported by transformers) that was actually incorrect.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
What does this PR do?
Fixes the call to optimum.quanto.quantize_weight in QuantoQuantizedCache, which currently lacks the scale and shift parameters and thus fails. This was introduced by cac4a48 when migrating to optimum.quanto, I think.
Before submitting
Did you read the contributor guideline, Pull Request section?
Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
@SunMarc