.from_pretrained torch_dtype="auto" argument not working as expected #11432


Closed
johannaSommer opened this issue Apr 28, 2025 · 3 comments · Fixed by #11513
Labels
bug Something isn't working

Comments

@johannaSommer
Contributor

Describe the bug

Hey dear diffusers team,

thanks a lot for all your hard work!

I would like to use the torch_dtype="auto" keyword argument when loading a model/pipeline, as specified here, but it does not work as expected (see the example below). Can you give me some guidance on how to use it correctly, or let me know whether there is something wrong with the handling of this argument?

Thank you!

Reproduction

from diffusers import StableDiffusionPipeline

model = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype="auto")

Logs

Passed `torch_dtype` torch.float32 is not a `torch.dtype`. Defaulting to `torch.float32`.
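For context, the warning above comes from dtype validation in `from_pretrained`: values that are not `torch.dtype` objects are rejected and the pipeline falls back to `torch.float32`. A minimal sketch approximating that check (this is an illustration, not the actual diffusers code; note that the real log message above curiously reports `torch.float32` rather than the string that was passed):

```python
import torch

def resolve_torch_dtype(torch_dtype):
    # The string "auto" is not a torch.dtype object, so it fails this
    # isinstance check and the loader falls back to torch.float32.
    if not isinstance(torch_dtype, torch.dtype):
        print(
            f"Passed `torch_dtype` {torch_dtype} is not a `torch.dtype`. "
            "Defaulting to `torch.float32`."
        )
        return torch.float32
    return torch_dtype

resolve_torch_dtype("auto")          # triggers the warning, returns torch.float32
resolve_torch_dtype(torch.float16)   # passes through unchanged
```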

System Info

  • 🤗 Diffusers version: 0.33.1
  • Platform: Linux-5.15.0-136-generic-x86_64-with-glibc2.35
  • Running on Google Colab?: No
  • Python version: 3.10.17
  • PyTorch version (GPU?): 2.7.0+cu126 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.30.2
  • Transformers version: 4.51.3
  • Accelerate version: 1.6.0
  • PEFT version: 0.15.2
  • Bitsandbytes version: 0.45.5
  • Safetensors version: 0.5.3
  • xFormers version: not installed
  • Accelerator: NVIDIA H100 PCIe, 81559 MiB
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Who can help?

No response

@johannaSommer johannaSommer added the bug Something isn't working label Apr 28, 2025
@johannaSommer johannaSommer changed the title .from_pretrained() torch_dtype="auto" argument not working as expected .from_pretrained torch_dtype="auto" argument not working as expected Apr 28, 2025
@DN6
Collaborator

DN6 commented Apr 28, 2025

Hi @johannaSommer. Sorry, that looks like a typo in the docs. We currently do not support an "auto" dtype. I'll open a PR to fix it.
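Editor's note: until "auto" is supported, passing an explicit `torch.dtype` object avoids the warning. A small sketch of valid values (the pipeline call itself is shown commented out because it downloads the full model weights):

```python
import torch

# Concrete torch.dtype objects, unlike the string "auto",
# pass the loader's isinstance(torch_dtype, torch.dtype) check:
for dtype in (torch.float32, torch.float16, torch.bfloat16):
    assert isinstance(dtype, torch.dtype)

# Illustrative call from the reproduction above, with an explicit dtype:
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained(
#     "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
# )
```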

@johannaSommer
Contributor Author

johannaSommer commented Apr 28, 2025

@DN6 thanks a lot for the clarification! As a heads-up, when investigating this a little, I also found "auto" listed as a possible dtype in the docstring of ModelMixin.from_pretrained(). If I can be of any help by creating a draft PR, let me know!

@DN6
Collaborator

DN6 commented May 1, 2025

@johannaSommer Yes! If you could open a PR we'd appreciate it 👍🏽
