IPEX transformers upgrade to 4.55 #1467
base: main
Conversation
@echarlaix @IlyasMoutawwakil, please help review, thanks!
from optimum.intel.utils.import_utils import is_ipex_version
class IPEXLayer(CacheLayerMixin):
maybe IPEXCacheLayer for clarity
Done
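Since the diff above imports `is_ipex_version`, the new cache layer is presumably gated on library versions. A minimal sketch of such a guard, assuming optimum-intel's usual `is_ipex_version` / `is_transformers_version` helpers; the concrete version bounds are illustrative assumptions, not the ones this PR enforces:

```python
from optimum.intel.utils.import_utils import is_ipex_version, is_transformers_version

# Illustrative guard: the new cache layer targets the transformers 4.55 cache
# API, and the IPEX bound below is an assumed placeholder, not the exact
# requirement from this PR.
if is_transformers_version("<", "4.55.0"):
    raise ImportError("The IPEX cache layer requires transformers >= 4.55.0.")

if is_ipex_version("<", "2.7.0"):
    raise ImportError("The IPEX cache layer requires a newer intel_extension_for_pytorch release.")
```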
Thanks for the update! I'm not sure I'm understanding correctly, but I heard that IPEX optimizations are now upstreamed into PyTorch; does that mean this integration should be deprecated?
@kaixuanliu can you please fix the INC tests as well? I think they use the IPEX integration and that's why they're failing (they're using the old transformers).
Yes, @IlyasMoutawwakil, we are retiring IPEX into PyTorch step by step. The first step is out-of-the-box libraries like transformers and accelerate, which is done now. The second step is hardware acceleration libraries like optimum-intel; this step depends on the readiness of the kernels libraries on XPU (we plan to switch custom ops from IPEX to kernels), and we are working with Daniel and others to enable the XPU kernels (rmsnorm, flash-attention, etc.). Before those kernels are ready, we don't want to break the existing integration.
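As a rough illustration of that direction (custom ops moving from IPEX to Hub-distributed kernels), here is a hedged sketch of a dispatch helper. The `kernels-community/rmsnorm-xpu` repo id and its `rms_norm` attribute are hypothetical placeholders, and the fallback below is plain PyTorch rather than the IPEX fused op the current integration calls:

```python
import torch


def rms_norm_ref(hidden_states: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Reference RMSNorm in plain PyTorch, used while an XPU kernel is not yet available.
    variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
    hidden_states = hidden_states * torch.rsqrt(variance + eps)
    return weight * hidden_states.to(weight.dtype)


def load_rms_norm():
    # Prefer a kernel fetched from the Hub via the `kernels` library when it is
    # installed and a suitable XPU build exists; otherwise fall back to the
    # reference implementation above.
    try:
        from kernels import get_kernel  # optional dependency

        return get_kernel("kernels-community/rmsnorm-xpu").rms_norm  # hypothetical repo id
    except Exception:
        return rms_norm_ref
```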
What does this PR do?
Fixes # (issue)
Before submitting