Diffuser test #2141
Conversation
bitsandbytes==0.44 fails with …

diffusers fails with bitsandbytes==0.45.0. diffusers works with bitsandbytes==0.45.1, but …
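Given those observations, a minimal guard for the test module could look like the sketch below. The guard is an illustrative assumption, not code from this PR; the 0.45.1 floor is taken from the comment above.

```python
import pytest

# Skip the diffusers coverage tests unless a new-enough bitsandbytes is
# installed; 0.45.1 is the first version reported to work above.
bnb = pytest.importorskip("bitsandbytes", minversion="0.45.1")
```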
@kiya00 the error is fixed in the bitsandbytes main branch, but we're waiting for the 0.47 release to bump. (@lianakoleva isolated this in #2238.)
Hi @t-vi, thanks. I think it's fine to wait for 0.47 to merge this PR; I'll ask @IvanYashchuk on Monday.
What's the stack trace for this error? I don't see …
```python
@pytest.mark.parametrize("model_id", hf_diffusers_unet2d_condition_model_ids)
def test_hf_diffusers(model_id):
    from thunder.dynamo import thunderfx
    from diffusers import UNet2DConditionModel
```
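For context, here is a hedged sketch of how the body of `test_hf_diffusers` might continue. The `from_pretrained` call, the `subfolder="unet"` argument, and all tensor shapes are illustrative assumptions, not code from this PR, and it presumes `thunderfx` wraps a module the way `torch.compile` does:

```python
    import torch

    # Hypothetical continuation: load the UNet, compile it with thunderfx,
    # and run one forward pass (shapes are illustrative).
    model = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").eval()
    compiled = thunderfx(model)
    sample = torch.randn(1, model.config.in_channels, 32, 32)
    timestep = torch.tensor([1])
    encoder_hidden_states = torch.randn(1, 77, model.config.cross_attention_dim)
    with torch.no_grad():
        out = compiled(sample, timestep, encoder_hidden_states)
```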
Does the error go away if you patch the following function, `is_triton_available` (from `bitsandbytes.triton.triton_utils`)?
Suggested change:

```python
import bitsandbytes.triton.triton_utils
bitsandbytes.triton.triton_utils.is_triton_available = lambda: False
from diffusers import UNet2DConditionModel
```
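If the patch helps, presumably it is because bitsandbytes consults `is_triton_available()` before importing its Triton-backed kernels, so forcing it to return `False` steers the import away from `triton.ops`, which newer Triton releases no longer provide (see the traceback below).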
I found that in the CI docker image (`pytorchlightning/lightning-thunder:ubuntu24.04-cuda12.6.3-cudnn-fe1.10.0-py3.10-pt_2.7.1-dev`), `import bitsandbytes` fails and `test_quantization` is actually skipped (e.g. https://dev.azure.com/Lightning-AI/lightning/_build/results?buildId=236919&view=logs&j=3f274fac-2e11-54ca-487e-194c91f3ae9f&t=244491d3-5bd5-5b27-6d81-66bb4c7264ae). Is that what we expected? @t-vi, I thought the test case was supposed to run?
```
root@4eb531844b44:/wayan/lightning-thunder# pip list|grep bitsan
bitsandbytes 0.44.1
root@4eb531844b44:/wayan/lightning-thunder# python
Python 3.10.18 (main, Jun 4 2025, 08:56:00) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import bitsandbytes
Could not find the bitsandbytes CUDA binary at PosixPath('/usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda126.so')
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/bitsandbytes/__init__.py", line 15, in <module>
    from .nn import modules
  File "/usr/local/lib/python3.10/dist-packages/bitsandbytes/nn/__init__.py", line 21, in <module>
    from .triton_based_modules import (
  File "/usr/local/lib/python3.10/dist-packages/bitsandbytes/nn/triton_based_modules.py", line 7, in <module>
    from bitsandbytes.triton.int8_matmul_mixed_dequantize import (
  File "/usr/local/lib/python3.10/dist-packages/bitsandbytes/triton/int8_matmul_mixed_dequantize.py", line 12, in <module>
    from triton.ops.matmul_perf_model import early_config_prune, estimate_matmul_time
ModuleNotFoundError: No module named 'triton.ops'
```
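The traceback also explains why the test is skipped rather than run: `import bitsandbytes` itself raises `ModuleNotFoundError` because the installed Triton no longer ships `triton.ops`, so any import-time guard around bitsandbytes will conclude it is unavailable. A minimal sketch of such a guard, assuming (not verified against thunder's source) that the test module protects the import roughly like this:

```python
# Hedged sketch: treat a bitsandbytes that crashes at import time as absent.
try:
    import bitsandbytes  # noqa: F401

    BITSANDBYTES_AVAILABLE = True
except ImportError:
    # ModuleNotFoundError is a subclass of ImportError, so the broken-Triton
    # case from the CI log lands here and the tests get skipped.
    BITSANDBYTES_AVAILABLE = False
```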
Yeah. :( cc: @Borda
This reverts commit 988c911.
What does this PR do?
Fixes #2075.
Adds coverage tests for HF diffusers.
Needs #2122: the bitsandbytes version needs to be updated to support HF diffusers, as was done in that PR.
cc @Borda