System Info
When trying to install transformers with the following commands:

```
cd transformers
pip install -e ".[torch]"
```

it installed the CUDA build of PyTorch and related dependencies:
```
Collecting triton==3.5.1 (from torch>=2.2->transformers[torch])
Using cached torch-2.9.1-cp312-cp312-manylinux_2_28_x86_64.whl (899.7 MB)
Using cached nvidia_cublas_cu12-12.8.4.1-py3-none-manylinux_2_27_x86_64.whl (594.3 MB)
Using cached nvidia_cuda_cupti_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (10.2 MB)
Using cached nvidia_cuda_nvrtc_cu12-12.8.93-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl (88.0 MB)
Using cached nvidia_cuda_runtime_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (954 kB)
Using cached nvidia_cudnn_cu12-9.10.2.21-py3-none-manylinux_2_27_x86_64.whl (706.8 MB)
```
As shown above, the torch and triton packages override the locally installed XPU builds, which causes errors when transformers is installed this way. Installing with `python setup.py develop` does not have this issue.
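A possible interim workaround (a sketch, not verified on every setup) is to install the XPU build of torch first and then install transformers without letting pip re-resolve its dependencies; the XPU wheel index URL below follows PyTorch's install instructions, and with `--no-deps` the remaining requirements would need to be installed manually:

```
# Install the XPU build of torch first (index URL per PyTorch's install docs)
pip install torch --index-url https://download.pytorch.org/whl/xpu

# Install transformers in editable mode without re-resolving dependencies,
# so pip does not replace the local XPU torch with the default CUDA wheel.
# Remaining requirements then have to be installed by hand.
cd transformers
pip install -e . --no-deps
```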
Who can help?
Reproduction
```
cd transformers
pip install -e ".[torch]"
```

Expected behavior
Maybe we could add a USE_XPU=1 environment variable to control this?
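For illustration, usage of the proposed flag might look like the following; note that USE_XPU is hypothetical and not an existing option in transformers or pip:

```
# Hypothetical: tell the install to keep the locally installed XPU torch
# instead of pulling the default CUDA wheels from PyPI
USE_XPU=1 pip install -e ".[torch]"
```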