Stable Diffusion WebUI Forge - Neo breaks when updating torch 2.8.0+cu128 to 2.9.1+cu130 #1513

@kajoken

Description

Package

Stable Diffusion WebUI Forge - Neo

When did the issue occur?

Updating the Package

What GPU / hardware type are you using?

Nvidia 5070 Ti

What happened?

The Forge Neo package in Stability Matrix uses torch 2.8.0+cu128. After updating to 2.9.1+cu130 with --reinstall-torch, Forge Neo breaks. A standalone Forge Neo installation works fine with 2.9.1+cu130. Tested with Python 3.11.13 and 3.12.11, with the same results.
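
The failure can be reproduced outside the WebUI with a short check like the one below (a minimal sketch; it assumes it is run with the package venv's Python, where torch 2.9.1+cu130 and the bundled sageattention wheel are installed). Importing sageattention is what loads the compiled _fused extension that fails in the traceback.

import torch

print("torch:", torch.__version__)            # 2.9.1+cu130 after the update
print("built with CUDA:", torch.version.cuda)

try:
    # importing the package pulls in sageattention.quant, which loads the compiled _fused extension
    import sageattention
    print("sageattention imported OK")
except ImportError as e:
    # reproduces "DLL load failed while importing _fused" from the console output below
    print("sageattention import failed:", e)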

Console output

Python 3.12.11 (main, Jul 23 2025, 00:32:20) [MSC v.1944 64 bit (AMD64)]
Version: neo
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu130
Collecting torch==2.9.1+cu130
  Downloading https://download.pytorch.org/whl/cu130/torch-2.9.1%2Bcu130-cp312-cp312-win_amd64.whl.metadata (29 kB)
Collecting torchvision==0.24.1+cu130
  Downloading https://download.pytorch.org/whl/cu130/torchvision-0.24.1%2Bcu130-cp312-cp312-win_amd64.whl.metadata (6.1 kB)
Requirement already satisfied: filelock in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from torch==2.9.1+cu130) (3.20.0)
Requirement already satisfied: typing-extensions>=4.10.0 in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from torch==2.9.1+cu130) (4.15.0)
Requirement already satisfied: sympy>=1.13.3 in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from torch==2.9.1+cu130) (1.14.0)
Requirement already satisfied: networkx>=2.5.1 in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from torch==2.9.1+cu130) (3.6.1)
Requirement already satisfied: jinja2 in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from torch==2.9.1+cu130) (3.1.6)
Requirement already satisfied: fsspec>=0.8.5 in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from torch==2.9.1+cu130) (2025.12.0)
Requirement already satisfied: setuptools in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from torch==2.9.1+cu130) (69.5.1)
Requirement already satisfied: numpy in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from torchvision==0.24.1+cu130) (1.26.4)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from torchvision==0.24.1+cu130) (12.0.0)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from sympy>=1.13.3->torch==2.9.1+cu130) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in d:\ai\data\packages\stable diffusion webui forge - neo\venv\lib\site-packages (from jinja2->torch==2.9.1+cu130) (2.1.5)
Downloading https://download.pytorch.org/whl/cu130/torch-2.9.1%2Bcu130-cp312-cp312-win_amd64.whl (1862.1 MB)
   ---------------------------------------- 1.9/1.9 GB 26.9 MB/s  0:00:59
Downloading https://download.pytorch.org/whl/cu130/torchvision-0.24.1%2Bcu130-cp312-cp312-win_amd64.whl (8.9 MB)
   ---------------------------------------- 8.9/8.9 MB 32.6 MB/s  0:00:00
Installing collected packages: torch, torchvision
  Attempting uninstall: torch
    Found existing installation: torch 2.8.0+cu128
    Uninstalling torch-2.8.0+cu128:
      Successfully uninstalled torch-2.8.0+cu128
  Attempting uninstall: torchvision
    Found existing installation: torchvision 0.23.0+cu128
    Uninstalling torchvision-0.23.0+cu128:
      Successfully uninstalled torchvision-0.23.0+cu128
   ---------------------------------------- 2/2 [torchvision]
Successfully installed torch-2.9.1+cu130 torchvision-0.24.1+cu130
Launching Web UI with arguments: --sage --cuda-malloc --cuda-stream --skip-python-version-check --fast-fp16 --reinstall-torch --gradio-allowed-path 'D:\AI\Data\Images'
Using cudaMallocAsync backend.
Total VRAM 16303 MB, total RAM 97849 MB
pytorch version: 2.9.1+cu130
allow_fp16_accumulation: True
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 5070 Ti : cudaMallocAsync
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: True
Traceback (most recent call last):
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\utils\import_utils.py", line 1016, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Data\Assets\Python\cpython-3.12.11-windows-x86_64-none\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 999, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\models\autoencoders\__init__.py", line 1, in <module>
    from .autoencoder_asym_kl import AsymmetricAutoencoderKL
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_asym_kl.py", line 23, in <module>
    from .vae import AutoencoderMixin, DecoderOutput, DiagonalGaussianDistribution, Encoder, MaskConditionDecoder
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\models\autoencoders\vae.py", line 25, in <module>
    from ..unets.unet_2d_blocks import (
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\models\unets\__init__.py", line 6, in <module>
    from .unet_2d import UNet2DModel
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\models\unets\unet_2d.py", line 24, in <module>
    from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 36, in <module>
    from ..transformers.dual_transformer_2d import DualTransformer2DModel
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\models\transformers\__init__.py", line 20, in <module>
    from .transformer_bria import BriaTransformer2DModel
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\models\transformers\transformer_bria.py", line 14, in <module>
    from ..attention_dispatch import dispatch_attention_fn
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\models\attention_dispatch.py", line 92, in <module>
    from sageattention import (
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\sageattention\__init__.py", line 1, in <module>
    from .core import sageattn, sageattn_varlen
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\sageattention\core.py", line 47, in <module>
    from .quant import per_block_int8 as per_block_int8_cuda
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\sageattention\quant.py", line 20, in <module>
    from . import _fused
ImportError: DLL load failed while importing _fused: The specified module could not be found.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\utils\import_utils.py", line 1016, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Data\Assets\Python\cpython-3.12.11-windows-x86_64-none\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 999, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 47, in <module>
    from ..models import AutoencoderKL
  File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\utils\import_utils.py", line 1006, in __getattr__
    module = self._get_module(self._class_to_module[name])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\utils\import_utils.py", line 1018, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import diffusers.models.autoencoders.autoencoder_kl because of the following error (look up to see its traceback):
DLL load failed while importing _fused: The specified module could not be found.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\launch.py", line 52, in <module>
    main()
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\launch.py", line 48, in main
    start()
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\modules\launch_utils.py", line 505, in start
    import webui
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\webui.py", line 25, in <module>
    initialize.imports()
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\modules\initialize.py", line 49, in imports
    from modules import gradio_extensions, processing, ui  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\modules\processing.py", line 22, in <module>
    import modules.sd_models as sd_models
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\modules\sd_models.py", line 11, in <module>
    from backend.loader import forge_loader
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\backend\loader.py", line 7, in <module>
    from diffusers import DiffusionPipeline
  File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\utils\import_utils.py", line 1007, in __getattr__
    value = getattr(module, name)
            ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\utils\import_utils.py", line 1006, in __getattr__
    module = self._get_module(self._class_to_module[name])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Data\Packages\Stable Diffusion WebUI Forge - Neo\venv\Lib\site-packages\diffusers\utils\import_utils.py", line 1018, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import diffusers.pipelines.pipeline_utils because of the following error (look up to see its traceback):
Failed to import diffusers.models.autoencoders.autoencoder_kl because of the following error (look up to see its traceback):
DLL load failed while importing _fused: The specified module could not be found.
[W108 10:06:22.000000000 AllocatorConfig.cpp:28] Warning: PYTORCH_CUDA_ALLOC_CONF is deprecated, use PYTORCH_ALLOC_CONF instead (function operator ())

Version

2.15.5

What Operating System are you using?

Windows
