device_map="auto" supported for diffusers pipelines? #11555

Open
johannaSommer opened this issue May 14, 2025 · 0 comments
Labels
bug Something isn't working

Comments

@johannaSommer
Contributor

johannaSommer commented May 14, 2025

Describe the bug

Hey dear diffusers team,

For DiffusionPipeline, as I understand (hopefully correctly) from this part of the documentation, it should be possible to specify device_map="auto" when loading a pipeline with from_pretrained. In practice, however, this raises a NotImplementedError saying that "auto" is not supported.

Meanwhile, the documentation on device placement states that only the "balanced" strategy is currently supported, which matches the error.

Is this possibly similar to #11432, i.e. should device_map="auto" be removed from the docstrings / documentation? Happy to open a PR for this if it turns out to be a mistake in the documentation.

Thanks a lot for your hard work!

Reproduction

from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto")

or

from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto")

Logs

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[12], line 3
      1 from diffusers import StableDiffusionPipeline
----> 3 pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto")

File ~/miniconda3/envs/pruna/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    111 if check_use_auth_token:
    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)

File ~/miniconda3/envs/pruna/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py:745, in DiffusionPipeline.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    742     raise ValueError("`device_map` must be a string.")
    744 if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP:
--> 745     raise NotImplementedError(
    746         f"{device_map} not supported. Supported strategies are: {', '.join(SUPPORTED_DEVICE_MAP)}"
    747     )
    749 if device_map is not None and device_map in SUPPORTED_DEVICE_MAP:
    750     if is_accelerate_version("<", "0.28.0"):

NotImplementedError: auto not supported. Supported strategies are: balanced
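For reference, the check that raises this error (paraphrased from the pipeline_utils.py frames in the traceback above; the exact constant name and value are taken from the traceback and error message, so treat this as a sketch rather than the actual source) looks roughly like:

```python
# Paraphrased sketch of the device_map validation hit in the traceback above.
# SUPPORTED_DEVICE_MAP containing only "balanced" is inferred from the error
# message "Supported strategies are: balanced".
SUPPORTED_DEVICE_MAP = ["balanced"]

def validate_device_map(device_map):
    if device_map is not None and not isinstance(device_map, str):
        raise ValueError("`device_map` must be a string.")
    if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP:
        # This is the branch triggered by device_map="auto"
        raise NotImplementedError(
            f"{device_map} not supported. Supported strategies are: "
            f"{', '.join(SUPPORTED_DEVICE_MAP)}"
        )

validate_device_map("balanced")  # passes silently
```

So any string other than "balanced" (including "auto", which the from_pretrained docstring mentions) ends up in the NotImplementedError branch.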

System Info

  • 🤗 Diffusers version: 0.33.1
  • Platform: Linux-5.15.0-139-generic-x86_64-with-glibc2.35
  • Running on Google Colab?: No
  • Python version: 3.10.16
  • PyTorch version (GPU?): 2.7.0+cu126 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.30.2
  • Transformers version: 4.51.3
  • Accelerate version: 1.6.0
  • PEFT version: 0.15.2
  • Bitsandbytes version: 0.45.5
  • Safetensors version: 0.5.3
  • xFormers version: not installed
  • Accelerator: NVIDIA H100 PCIe, 81559 MiB
    NVIDIA H100 PCIe, 81559 MiB
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: yes

Who can help?

No response
