A warning message showing that MultiScaleDeformableAttention.so is not found in /root/.cache/torch_extensions if ninja is installed with transformers #35349
System Info
- transformers: 4.47.1
- torch: 2.5.1
- timm: 1.0.12
- ninja: 1.11.1.3
- python: 3.10.14
- pip: 23.0.1
- CUDA runtime (installed via pip for torch): nvidia-cuda-runtime-cu12==12.4.127
- Host OS: Windows (10.0.22631 Build 22631)
- Docker: 27.3.1, build ce12230
- NVIDIA driver: 565.57.02
Who can help?
I am asking for help with DeformableDetrModel.
Vision models: @amyeroberts, @qubvel
Information

Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
/root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/ is empty. The issue happens only when both ninja and transformers are installed. I believe the following PR may be related to this issue: https://app.semanticdiff.com/gh/huggingface/transformers/pull/32834/overview
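A minimal reproduction along these lines (the checkpoint name `SenseTime/deformable-detr` is an assumption; any Deformable DETR checkpoint should trigger the kernel-compilation attempt):

```shell
# Inside a fresh container with no compiler or CUDA toolkit installed:
pip install transformers==4.47.1 torch==2.5.1 timm==1.0.12 ninja==1.11.1.3

# Loading the model attempts to JIT-compile the MultiScaleDeformableAttention
# extension via ninja and emits the warning when compilation fails.
python -c "from transformers import DeformableDetrModel; DeformableDetrModel.from_pretrained('SenseTime/deformable-detr')"
```

With ninja uninstalled, the same script loads the model without the warning, falling back to the pure-PyTorch attention implementation.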
Expected behavior
It seems that ninja causes DeformableDetrModel to emit unexpected error messages (even though the script still works). That may be because I am using a container without any compiler or CUDA toolkit preinstalled (the CUDA runtime is installed by pip). I think there should be a check that automatically turns off the ninja-related functionality, even if ninja is installed by pip, whenever requirements such as a compiler version or a CUDA path are not fulfilled.
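The check being asked for could look roughly like this (a minimal sketch; `can_build_cuda_extension` is a hypothetical helper, not part of the transformers API):

```python
import os
import shutil

def can_build_cuda_extension() -> bool:
    """Sketch: return True only if the prerequisites for JIT-compiling a
    CUDA extension (a C++ compiler and a CUDA toolkit) appear to be present.

    The exact probes are assumptions for illustration; the real logic would
    live next to the ninja/torch.utils.cpp_extension code path.
    """
    # A host C++ compiler must be on PATH (cl.exe on Windows, g++/c++ on Linux).
    has_compiler = any(shutil.which(cc) for cc in ("c++", "g++", "cl"))

    # A CUDA toolkit must be locatable: either nvcc on PATH or a CUDA_HOME /
    # CUDA_PATH directory. A pip-installed nvidia-cuda-runtime-cu12 provides
    # only runtime libraries, so this check would correctly fail in that case.
    cuda_home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
    has_toolkit = shutil.which("nvcc") is not None or (
        cuda_home is not None and os.path.isdir(os.path.join(cuda_home, "bin"))
    )

    return has_compiler and has_toolkit
```

A guard like this, evaluated before attempting the ninja build, would let the model fall back silently to the pure-PyTorch attention path instead of surfacing a compilation warning.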