Description
I tried to enable group offloading after this line of code, but I still ran into errors. First, each model component warns that it is group offloaded:

```
The module 'CLIPTextModel' is group offloaded and moving it to cuda via `.to()` is not supported.
The module 'T5EncoderModel' is group offloaded and moving it to cuda via `.to()` is not supported.
The module 'FluxTransformer2DModel' is group offloaded and moving it to cuda via `.to()` is not supported.
The module 'AutoencoderKL' is group offloaded and moving it to cuda via `.to()` is not supported.
```
Then inference fails with a device mismatch:

```
In total 128 samples
Evaluating with batch size 2
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/data3/shidi/deepcompressor/deepcompressor/app/diffusion/dataset/collect/calib.py", line 145, in <module>
    collect(ptq_config, dataset=dataset)
  File "/data3/shidi/deepcompressor/deepcompressor/app/diffusion/dataset/collect/calib.py", line 72, in collect
    result_images = pipeline(prompts, generator=generators, **pipeline_kwargs).images
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 920, in __call__
    noise_pred = self.transformer(
                 ^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1857, in _call_impl
    return inner()
           ^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1805, in inner
    result = forward_call(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/diffusers/hooks/hooks.py", line 148, in new_forward
    output = function_reference.forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/diffusers/hooks/hooks.py", line 148, in new_forward
    output = function_reference.forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/diffusers/models/transformers/transformer_flux.py", line 523, in forward
    hidden_states = block(
                    ^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/diffusers/models/transformers/transformer_flux.py", line 98, in forward
    hidden_states = gate * self.proj_out(hidden_states)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data3/shidi/deepcompressor/deepcompressor/nn/patch/linear.py", line 42, in forward
    out_splits = [linear(x_split.contiguous()) for linear, x_split in zip(self.linears, x_splits, strict=True)]
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/shidi/miniconda3/envs/deepcompressor/lib/python3.12/site-packages/torch/nn/modules/linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
```
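For comparison, group offloading in diffusers is meant to be applied per model component, without ever moving the pipeline via `.to("cuda")` (which the warnings above say is unsupported for group-offloaded modules). Below is a minimal sketch of that pattern using `diffusers.hooks.apply_group_offloading`; the checkpoint name and the specific arguments (`offload_type`, `num_blocks_per_group`) are illustrative, not taken from my actual script:

```python
import torch
from diffusers import FluxPipeline
from diffusers.hooks import apply_group_offloading

# Load on CPU; group offloading streams weights to the GPU at forward
# time, so the pipeline must NOT be moved with .to("cuda") afterwards.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Apply group offloading to each model component individually.
for model in (pipe.text_encoder, pipe.text_encoder_2, pipe.transformer, pipe.vae):
    apply_group_offloading(
        model,
        onload_device=torch.device("cuda"),
        offload_device=torch.device("cpu"),
        offload_type="block_level",
        num_blocks_per_group=1,
    )

# Inputs go to the onload device; the hooks move the weights as needed.
image = pipe("a photo of a cat", num_inference_steps=4).images[0]
```

One guess at the failure: deepcompressor patches `nn.Linear` modules (the split-linear forward in `deepcompressor/nn/patch/linear.py` visible in the traceback). If that patching happens after the offloading hooks are installed, the replacement linears may never be registered with any offload group, leaving their weights on `cpu` while the surrounding block runs on `cuda:0`, which would produce exactly this device mismatch. Applying group offloading after the patching step might avoid it, but I have not confirmed this.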