
Conversation

kaixuanliu
Contributor

What does this PR do?

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Signed-off-by: Liu, Kaixuan <[email protected]>
@kaixuanliu kaixuanliu changed the title Ipex transformers upgrade to 4.55 IPEX transformers upgrade to 4.55 Oct 11, 2025
@kaixuanliu
Contributor Author

@echarlaix @IlyasMoutawwakil, please help review, thanks!


from optimum.intel.utils.import_utils import is_ipex_version

class IPEXLayer(CacheLayerMixin):
Member


maybe IPEXCacheLayer for clarity

Contributor Author


Done
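
For context, a minimal sketch of how the renamed layer could look, assuming CacheLayerMixin is importable from transformers.cache_utils (as the diff implies) and that its update/get_seq_length hooks follow the usual transformers cache-layer pattern; the attribute names and method bodies below are illustrative, not the actual optimum-intel implementation.

# Illustrative sketch only -- not the actual optimum-intel code.
# Assumes transformers >= 4.55 exposes CacheLayerMixin in transformers.cache_utils.
from typing import Optional, Tuple

import torch
from transformers.cache_utils import CacheLayerMixin


class IPEXCacheLayer(CacheLayerMixin):  # previously named IPEXLayer
    """Per-layer key/value cache holder used by the IPEX modeling path (illustrative)."""

    def __init__(self) -> None:
        super().__init__()
        self.keys: Optional[torch.Tensor] = None
        self.values: Optional[torch.Tensor] = None

    def update(
        self,
        key_states: torch.Tensor,
        value_states: torch.Tensor,
        cache_kwargs: Optional[dict] = None,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        # Append the new key/value states along the sequence dimension (hypothetical layout).
        if self.keys is None:
            self.keys, self.values = key_states, value_states
        else:
            self.keys = torch.cat([self.keys, key_states], dim=-2)
            self.values = torch.cat([self.values, value_states], dim=-2)
        return self.keys, self.values

    def get_seq_length(self) -> int:
        # Number of tokens currently cached in this layer.
        return 0 if self.keys is None else self.keys.shape[-2]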

@IlyasMoutawwakil
Member

Thanks for the update! I'm not sure I'm understanding correctly, but I heard that IPEX optimizations are now upstreamed into PyTorch. Does that mean this integration should be deprecated?

@IlyasMoutawwakil
Member

@kaixuanliu can you please fix the INC tests as well? I think they use the IPEX integration, and that's why they're failing (they use the old transformers version).

Signed-off-by: Liu, Kaixuan <[email protected]>
@yao-matrix

Thanks for the update! I'm not sure I'm understanding correctly, but I heard that IPEX optimizations are now upstreamed into PyTorch. Does that mean this integration should be deprecated?

Yes, @IlyasMoutawwakil, we are retiring IPEX in favor of PyTorch step by step. The first step is out-of-the-box libraries like transformers and accelerate, which is done now. The second step is hardware acceleration libraries like optimum-intel; this step depends on the readiness of the kernels libraries on XPU (we plan to switch the custom ops from IPEX to kernels), and we are working with Daniel and others to enable XPU kernels (rmsnorm, flash-attention, etc.). Until those kernels are ready we don't want to break optimum-intel, so we will keep maintaining the IPEX integration until we fully switch to kernels. Does that make sense to you? Thanks for your continued support.

Signed-off-by: Liu, Kaixuan <[email protected]>