Changing __repr__ in torchao to show quantized Linear #34202
Conversation
Thanks for figuring out the issue @MekkCyber! Left a few comments.
cc @SunMarc for review! Thank you!
LGTM! Thanks for fixing this! Just a nit. Also, rebase the PR to fix the CI.
LGTM, quick q on perf!
@@ -46,6 +45,25 @@ def find_parent(model, name):
    return parent


def _quantization_type(weight):
Do we want to put this on lru_cache? Or is it smart enough to be fast?
I would think it's smart enough to be fast, but I will run a benchmark to verify that.
We can merge in the meantime 🤗
What does this PR do?
When a model is quantized with TorchAO and then loaded, its Linear layers are expected to be represented differently from standard Linear layers. This PR changes the representation of those Linear layers to match the format used in TorchAO's implementation: https://github.com/pytorch/ao/blob/main/torchao/quantization/quant_api.py
Before:
Linear(in_features=4096, out_features=4096, bias=False)
After:
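The post-PR output is omitted above. To illustrate the general idea only, here is a self-contained sketch of a `__repr__` that surfaces the weight's quantization info; the class names and repr format are hypothetical, not the actual torchao/transformers implementation:

```python
class FakeQuantizedWeight:
    """Hypothetical stand-in for a torchao quantized tensor subclass."""

    def __repr__(self):
        # Illustrative description only, not torchao's real repr.
        return "AffineQuantizedTensor(shape=(4, 8), dtype=int8)"


class Linear:
    """Minimal stand-in for nn.Linear showing the repr change."""

    def __init__(self, in_features, out_features, weight):
        self.in_features = in_features
        self.out_features = out_features
        self.weight = weight

    def __repr__(self):
        # Embed the weight's own repr so quantization details show up,
        # instead of only in_features/out_features/bias.
        return (
            f"Linear(in_features={self.in_features}, "
            f"out_features={self.out_features}, weight={self.weight!r})"
        )


print(Linear(8, 4, FakeQuantizedWeight()))
```

With a plain weight this collapses back to the usual fields, while a quantized weight makes its tensor-subclass visible at a glance when printing the model.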
Who can review?
cc @SunMarc