[xpu] [OOB] pip install -e ".[torch]" wrongly installed the CUDA version dependencies #42204

@Stonepia

Description

System Info

When trying to install transformers with pip install -e ".[torch]" on an XPU machine, pip installed the CUDA build of PyTorch and its related dependencies:

Collecting triton==3.5.1 (from torch>=2.2->transformers[torch])

Using cached torch-2.9.1-cp312-cp312-manylinux_2_28_x86_64.whl (899.7 MB)
Using cached nvidia_cublas_cu12-12.8.4.1-py3-none-manylinux_2_27_x86_64.whl (594.3 MB)
Using cached nvidia_cuda_cupti_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (10.2 MB)
Using cached nvidia_cuda_nvrtc_cu12-12.8.93-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl (88.0 MB)
Using cached nvidia_cuda_runtime_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (954 kB)
Using cached nvidia_cudnn_cu12-9.10.2.21-py3-none-manylinux_2_27_x86_64.whl (706.8 MB)

As shown above, the torch and triton packages will override the locally installed XPU packages, causing errors when transformers is installed this way. Installing with python setup.py develop does not have this issue.
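Until this is addressed, a possible workaround (a sketch, assuming the XPU torch/triton wheels are already installed locally; the XPU index URL is taken from the PyTorch install instructions, not from this repo) is to keep pip away from the default torch wheels:

```shell
# Option 1: install transformers itself without resolving dependencies,
# leaving the locally installed XPU torch/triton packages untouched.
pip install -e . --no-deps

# Option 2: let pip resolve "[torch]" against the PyTorch XPU wheel index
# instead of the default PyPI wheels.
pip install -e ".[torch]" --extra-index-url https://download.pytorch.org/whl/xpu
```

Option 1 requires installing the remaining transformers dependencies separately.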

Who can help?

@yao-matrix

Reproduction

cd transformers
pip install -e ".[torch]"

Expected behavior

Maybe we could have a USE_XPU=1 environment flag to control this?
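As a sketch of what such a flag could look like (all names here are hypothetical, not the actual transformers setup.py; the version pin is illustrative), the "[torch]" extra could drop the torch pin when USE_XPU=1 so pip leaves an existing XPU install alone:

```python
import os

def torch_extra():
    # Hypothetical sketch: when USE_XPU=1, omit the torch pin from the
    # "[torch]" extra so pip does not replace an already-installed XPU
    # build of torch/triton with the default (CUDA) wheels from PyPI.
    if os.environ.get("USE_XPU") == "1":
        return []
    # Default behavior: keep the regular pin.
    return ["torch>=2.2"]
```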
