Description
Checklist
- The issue exists after disabling all extensions
- The issue exists on a clean installation of webui
- The issue is caused by an extension, but I believe it is caused by a bug in the webui
- The issue exists in the current version of the webui
- The issue has not been reported before recently
- The issue has been reported before but has not been fixed yet
What happened?
Running stable-diffusion-webui-amdgpu/webui.sh launches the web UI as intended; however, after about 10-15 seconds a segmentation fault occurs, terminating the process and closing the connection.
Steps to reproduce the problem
- Clone the repository as instructed in the original instructions (a hedged sketch of the clone command follows this list).
- Change into the repository directory:
cd stable-diffusion-webui-amdgpu
- Create and activate a Python 3.11 virtual environment:
python3.11 -m venv venv
source venv/bin/activate
- Install the ROCm 6.4 nightly build of torch:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.4
This step is crucial, as the normal steps outlined here do not allow this forked version of Automatic1111 to recognize AMD GPUs. Following this step ensures that --skip-torch-cuda-test is not needed.
- Since this is a WSL instance, extra steps are needed to ensure WSL uses the torch build that was just installed. A reference for this can be found in "Use ROCm on Radeon GPUs", page 36. (Verification sketches follow after this list.)
- Declare a variable holding the torch install location:
location=$(pip show torch | grep Location | awk -F ": " '{print $2}')
- Change to that directory using the variable:
cd ${location}/torch/lib/
- Remove the specified file(s):
rm libhsa-runtime64.so*
- Change directory back to the stable-diffusion-webui-amdgpu directory:
cd /home/username/stable-diffusion-webui-amdgpu/
- Run the web UI:
./webui.sh
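For completeness, the clone step at the top of the list looks roughly like the following; the URL is assumed from the fork name (lshqqytiger's stable-diffusion-webui-amdgpu) and should be taken from the original instructions:
git clone https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu.git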
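Two quick checks, not part of the original instructions, just a sketch assuming the venv is still activated and the location variable from the steps above is still set, to confirm that the ROCm nightly build of torch is the one in use and that the HSA runtime now resolves outside the wheel:
python -c "import torch; print(torch.__version__, torch.version.hip, torch.cuda.is_available())"
ldd ${location}/torch/lib/libtorch_hip.so | grep hsa
On a working install, the first command should report a +rocm6.4 build with a HIP version and True, and the ldd output should point at the ROCm runtime installed in WSL rather than a path inside the venv.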
What should have happened?
Ideally, the segmentation fault should not occur and the program should continue to run as normal. The log shows it detects my AMD GPU and is able to use it; however, whatever is causing the segmentation fault terminates the script and closes the connection. For comparison, I followed the same steps with ComfyUI and got it working successfully, generating a few images with various checkpoints. I would much rather use lshqqytiger's fork of Automatic1111, as that is what I'm used to, and I like the UI and the ease of extension access.
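One way to get more context on where the crash happens, assuming Python's standard faulthandler module (this was not enabled in the run logged below), is to turn it on via the environment before launching, so a Python traceback is dumped when the fatal signal arrives:
PYTHONFAULTHANDLER=1 ./webui.sh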
What browsers do you use to access the UI?
Mozilla Firefox
Sysinfo
Console logs
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on username user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
python venv already activate or run without venv: /home/username/stable-diffusion-webui-amdgpu/venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.39
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Python 3.11.13 (main, Jun 4 2025, 08:57:30) [GCC 13.3.0]
Version: v1.10.1-amd-37-g721f6391
Commit hash: 721f6391993ac63fd246603735e2eb2e719ffac0
ROCm: agents=['gfx1100']
ROCm: version=6.4, using agent gfx1100
/home/username/stable-diffusion-webui-amdgpu/venv/lib/python3.11/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
/home/username/stable-diffusion-webui-amdgpu/venv/lib/python3.11/site-packages/pytorch_lightning/utilities/distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Launching Web UI with arguments:
/home/username/stable-diffusion-webui-amdgpu/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_validation.py:113: UserWarning: WARNING: failed to get cudart_version from onnxruntime build info.
warnings.warn("WARNING: failed to get cudart_version from onnxruntime build info.")
ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute 'ORTPipelinePart'
Loading weights [6ce0161689] from /home/username/stable-diffusion-webui-amdgpu/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: /home/username/stable-diffusion-webui-amdgpu/configs/v1-inference.yaml
Startup time: 72.3s (prepare environment: 102.4s, initialize shared: 1.6s, load scripts: 0.5s, create ui: 0.4s, gradio launch: 3.7s).
gio: http://127.0.0.1:7860/: Operation not supported
Applying attention optimization: Doggettx... done.
Model loaded in 26.4s (load weights from disk: 1.3s, create model: 0.8s, apply weights to model: 23.0s, apply half(): 0.1s, move model to device: 0.5s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.5s).
stable-diffusion-webui-amdgpu/webui.sh: line 304: 446 Segmentation fault (core dumped) "${python_cmd}" -u "${LAUNCH_SCRIPT}" "$@"
Additional information
The environment is WSL2 running Ubuntu 24.04.
The GPU is an AMD Radeon RX 7900 XT on driver version 25.
The base OS is Windows 10 Professional x64 22H2.