[Bug]: TypeError: 'OnnxRawPipeline' object is not callable #598

Open
@Shamshoo

Description

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

venv "F:\auto1111\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1-amd-31-ga31ef086
Commit hash: a31ef08
ROCm: agents=['gfx1101']
ROCm: version=6.1, using agent gfx1101
ZLUDA support: experimental
Using ZLUDA in F:\auto1111\stable-diffusion-webui-directml.zluda
WARNING: you should not skip torch test unless you want CPU to work.
No ROCm runtime is found, using ROCM_HOME='C:\Program Files\AMD\ROCm\6.1'
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\timm\models\layers_init_.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --precision full --no-half --upcast-sampling
ONNX: version=1.21.0 provider=CPUExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Fetching 11 files: 100%|█████████████████████████████████████████████████████████████| 11/11 [00:00<00:00, 2717.80it/s]
Loading pipeline components...: 0%| | 0/6 [00:00<?, ?it/s]F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
Startup time: 56.3s (prepare environment: 71.5s, initialize shared: 5.3s, list SD models: 0.3s, load scripts: 1.4s, initialize extra networks: 0.8s, create ui: 2.4s, gradio launch: 0.9s).
Loading pipeline components...: 33%|█████████████████▎ | 2/6 [00:00<00:00, 16.21it/s]Some weights of the model checkpoint were not used when initializing CLIPTextModel:
['text_model.embeddings.position_ids']
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:00<00:00, 9.90it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
Applying attention optimization: Doggettx... done.
WARNING: ONNX implementation works best with SD.Next. Please consider migrating to SD.Next.
Olive implementation is experimental. It contains potentially an issue and is subject to change at any time.

Processing text_encoder
[2025-04-07 01:58:50,951] [WARNING] [config_utils.py:337:validate_config] Keys {'batch_size', 'dataloader_func'} are not part of LatencyUserConfig. Ignoring them.
Olive: Failed to run olive passes: model='v1-5-pruned-emaonly.safetensors', error=7 validation errors for RunConfig
engine -> output_name
extra fields not permitted (type=value_error.extra)
engine -> pass_flows
extra fields not permitted (type=value_error.extra)
passes -> optimize_CPUExecutionProvider
Invalid engine (type=value_error)
passes -> optimize_DmlExecutionProvider
Invalid engine (type=value_error)
passes -> optimize_CUDAExecutionProvider
Invalid engine (type=value_error)
passes -> optimize_ROCMExecutionProvider
Invalid engine (type=value_error)
passes -> quantization
Invalid engine (type=value_error)
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
*** Error completing request
*** Arguments: ('task(bmpnm4ga1ktrzfm)', <gradio.routes.Request object at 0x00000286A0473C70>, 'grass', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'PNDM', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "F:\auto1111\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
res = process_images_inner(p)
File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 959, in process_images_inner
result = shared.sd_model(**kwargs)
TypeError: 'OnnxRawPipeline' object is not callable


WARNING: ONNX implementation works best with SD.Next. Please consider migrating to SD.Next.
ONNX: Failed to convert model: model='temp', error=[WinError 3] The system cannot find the path specified: 'F:\auto1111\stable-diffusion-webui-directml\models\ONNX\temp'
Fetching 11 files: 100%|███████████████████████████████████████████████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 0%| | 0/6 [00:00<?, ?it/s]Some weights of the model checkpoint were not used when initializing CLIPTextModel:
['text_model.embeddings.position_ids']
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:00<00:00, 26.30it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
*** Error completing request
*** Arguments: ('task(6zmnrq02qxdfhwr)', <gradio.routes.Request object at 0x000002869EBD33A0>, 'grass', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'PNDM', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "F:\auto1111\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
res = process_images_inner(p)
File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 959, in process_images_inner
result = shared.sd_model(**kwargs)
TypeError: 'OnnxRawPipeline' object is not callable


WARNING: ONNX implementation works best with SD.Next. Please consider migrating to SD.Next.
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py:282: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if seq_length > max_position_embedding:
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\modeling_attn_mask_utils.py:88: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1 or self.sliding_window is not None:
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\modeling_attn_mask_utils.py:164: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if past_key_values_length > 0:
ONNX: Successfully exported converted model: submodel=text_encoder
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\unets\unet_2d_condition.py:1110: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if dim % default_overall_up_factor != 0:
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\downsampling.py:136: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert hidden_states.shape[1] == self.channels
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\downsampling.py:145: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert hidden_states.shape[1] == self.channels
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\upsampling.py:146: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert hidden_states.shape[1] == self.channels
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\upsampling.py:162: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if hidden_states.shape[0] >= 64:
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\unets\unet_2d_condition.py:1309: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if not return_dict:
ONNX: Successfully exported converted model: submodel=unet
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py:280: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if not return_dict:
ONNX: Successfully exported converted model: submodel=vae_encoder
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py:323: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if not return_dict:
ONNX: Successfully exported converted model: submodel=vae_decoder
Olive implementation is experimental. It contains potentially an issue and is subject to change at any time.

Processing text_encoder
[2025-04-07 02:09:56,212] [WARNING] [config_utils.py:337:validate_config] Keys {'batch_size', 'dataloader_func'} are not part of LatencyUserConfig. Ignoring them.
Olive: Failed to run olive passes: model='v1-5-pruned-emaonly.safetensors', error=7 validation errors for RunConfig
engine -> output_name
extra fields not permitted (type=value_error.extra)
engine -> pass_flows
extra fields not permitted (type=value_error.extra)
passes -> optimize_CPUExecutionProvider
Invalid engine (type=value_error)
passes -> optimize_DmlExecutionProvider
Invalid engine (type=value_error)
passes -> optimize_CUDAExecutionProvider
Invalid engine (type=value_error)
passes -> optimize_ROCMExecutionProvider
Invalid engine (type=value_error)
passes -> quantization
Invalid engine (type=value_error)
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
*** Error completing request
*** Arguments: ('task(wu7zbnhzwyvwuw0)', <gradio.routes.Request object at 0x000002869EBE6080>, 'grass', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'PNDM', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "F:\auto1111\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
res = process_images_inner(p)
File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 959, in process_images_inner
result = shared.sd_model(**kwargs)
TypeError: 'OnnxRawPipeline' object is not callable
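
The RunConfig failures have the shape of pydantic v1 "forbid extra fields" validation errors (the "(type=value_error.extra)" suffix), i.e. the Olive config shipped with the webui seems to pass keys that the installed Olive version's schema no longer defines. Below is a minimal sketch of that error shape, assuming pydantic v1 is what Olive uses for validation; EngineConfig and its field are hypothetical stand-ins, not Olive's real model.

    # Hypothetical model, only to reproduce the error format seen above.
    from typing import Optional
    from pydantic import BaseModel, ValidationError

    class EngineConfig(BaseModel):
        class Config:
            extra = "forbid"  # unknown keys are rejected outright

        evaluate_input_model: Optional[bool] = None  # hypothetical field

    try:
        # The webui passes 'output_name' and 'pass_flows' under 'engine';
        # a schema that no longer defines them fails exactly like the log:
        EngineConfig(output_name="model", pass_flows=[["convert"]])
    except ValidationError as err:
        print(err)
        # 2 validation errors for EngineConfig
        # output_name
        #   extra fields not permitted (type=value_error.extra)
        # pass_flows
        #   extra fields not permitted (type=value_error.extra)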


Steps to reproduce the problem

Followed the Olive/ONNX setup guide, then tried to generate an image and got this error.

GPU: RX 7800 XT

Only these two execution providers are available (see the check sketched below):
CPUExecutionProvider
AzureExecutionProvider
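
A quick way to confirm that list (this check is not in the original report): ask the installed onnxruntime build which execution providers it exposes. Only CPUExecutionProvider and AzureExecutionProvider showing up usually means the plain CPU onnxruntime wheel is installed rather than a GPU-enabled build such as onnxruntime-directml (which would add DmlExecutionProvider); that reading is an assumption, not something the log states.

    import onnxruntime as ort

    # Matches the startup line "ONNX: version=1.21.0 provider=CPUExecutionProvider, ..."
    print(ort.__version__)                # 1.21.0 on this install
    print(ort.get_available_providers())  # ['AzureExecutionProvider', 'CPUExecutionProvider']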

What should have happened?

An ONNX model should have been created and used to generate images.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2025-04-07-08-18.json

Console logs

venv "F:\auto1111\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1-amd-31-ga31ef086
Commit hash: a31ef08686915e63f56bb0a4543f0a429847aafb
ROCm: agents=['gfx1101']
ROCm: version=6.1, using agent gfx1101
ZLUDA support: experimental
Using ZLUDA in F:\auto1111\stable-diffusion-webui-directml\.zluda
WARNING: you should not skip torch test unless you want CPU to work.
No ROCm runtime is found, using ROCM_HOME='C:\Program Files\AMD\ROCm\6.1'
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --precision full --no-half --upcast-sampling
ONNX: version=1.21.0 provider=CPUExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Fetching 11 files: 100%|█████████████████████████████████████████████████████████████| 11/11 [00:00<00:00, 2717.80it/s]
Loading pipeline components...:   0%|                                                            | 0/6 [00:00<?, ?it/s]F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Startup time: 56.3s (prepare environment: 71.5s, initialize shared: 5.3s, list SD models: 0.3s, load scripts: 1.4s, initialize extra networks: 0.8s, create ui: 2.4s, gradio launch: 0.9s).
Loading pipeline components...:  33%|█████████████████▎                                  | 2/6 [00:00<00:00, 16.21it/s]Some weights of the model checkpoint were not used when initializing CLIPTextModel:
 ['text_model.embeddings.position_ids']
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:00<00:00,  9.90it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Applying attention optimization: Doggettx... done.
WARNING: ONNX implementation works best with SD.Next. Please consider migrating to SD.Next.
Olive implementation is experimental. It contains potentially an issue and is subject to change at any time.

Processing text_encoder
[2025-04-07 01:58:50,951] [WARNING] [config_utils.py:337:validate_config] Keys {'batch_size', 'dataloader_func'} are not part of LatencyUserConfig. Ignoring them.
Olive: Failed to run olive passes: model='v1-5-pruned-emaonly.safetensors', error=7 validation errors for RunConfig
engine -> output_name
  extra fields not permitted (type=value_error.extra)
engine -> pass_flows
  extra fields not permitted (type=value_error.extra)
passes -> optimize_CPUExecutionProvider
  Invalid engine (type=value_error)
passes -> optimize_DmlExecutionProvider
  Invalid engine (type=value_error)
passes -> optimize_CUDAExecutionProvider
  Invalid engine (type=value_error)
passes -> optimize_ROCMExecutionProvider
  Invalid engine (type=value_error)
passes -> quantization
  Invalid engine (type=value_error)
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
*** Error completing request
*** Arguments: ('task(bmpnm4ga1ktrzfm)', <gradio.routes.Request object at 0x00000286A0473C70>, 'grass', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'PNDM', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 959, in process_images_inner
        result = shared.sd_model(**kwargs)
    TypeError: 'OnnxRawPipeline' object is not callable

---
WARNING: ONNX implementation works best with SD.Next. Please consider migrating to SD.Next.
ONNX: Failed to convert model: model='temp', error=[WinError 3] The system cannot find the path specified: 'F:\\auto1111\\stable-diffusion-webui-directml\\models\\ONNX\\temp'
Fetching 11 files: 100%|███████████████████████████████████████████████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...:   0%|                                                            | 0/6 [00:00<?, ?it/s]Some weights of the model checkpoint were not used when initializing CLIPTextModel:
 ['text_model.embeddings.position_ids']
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:00<00:00, 26.30it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
*** Error completing request
*** Arguments: ('task(6zmnrq02qxdfhwr)', <gradio.routes.Request object at 0x000002869EBD33A0>, 'grass', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'PNDM', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 959, in process_images_inner
        result = shared.sd_model(**kwargs)
    TypeError: 'OnnxRawPipeline' object is not callable

---
WARNING: ONNX implementation works best with SD.Next. Please consider migrating to SD.Next.
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py:282: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if seq_length > max_position_embedding:
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\modeling_attn_mask_utils.py:88: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if input_shape[-1] > 1 or self.sliding_window is not None:
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\modeling_attn_mask_utils.py:164: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if past_key_values_length > 0:
ONNX: Successfully exported converted model: submodel=text_encoder
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\unets\unet_2d_condition.py:1110: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if dim % default_overall_up_factor != 0:
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\downsampling.py:136: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert hidden_states.shape[1] == self.channels
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\downsampling.py:145: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert hidden_states.shape[1] == self.channels
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\upsampling.py:146: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert hidden_states.shape[1] == self.channels
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\upsampling.py:162: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if hidden_states.shape[0] >= 64:
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\unets\unet_2d_condition.py:1309: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not return_dict:
ONNX: Successfully exported converted model: submodel=unet
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py:280: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not return_dict:
ONNX: Successfully exported converted model: submodel=vae_encoder
F:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py:323: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not return_dict:
ONNX: Successfully exported converted model: submodel=vae_decoder
Olive implementation is experimental. It contains potentially an issue and is subject to change at any time.

Processing text_encoder
[2025-04-07 02:09:56,212] [WARNING] [config_utils.py:337:validate_config] Keys {'batch_size', 'dataloader_func'} are not part of LatencyUserConfig. Ignoring them.
Olive: Failed to run olive passes: model='v1-5-pruned-emaonly.safetensors', error=7 validation errors for RunConfig
engine -> output_name
  extra fields not permitted (type=value_error.extra)
engine -> pass_flows
  extra fields not permitted (type=value_error.extra)
passes -> optimize_CPUExecutionProvider
  Invalid engine (type=value_error)
passes -> optimize_DmlExecutionProvider
  Invalid engine (type=value_error)
passes -> optimize_CUDAExecutionProvider
  Invalid engine (type=value_error)
passes -> optimize_ROCMExecutionProvider
  Invalid engine (type=value_error)
passes -> quantization
  Invalid engine (type=value_error)
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
*** Error completing request
*** Arguments: ('task(wu7zbnhzwyvwuw0)', <gradio.routes.Request object at 0x000002869EBE6080>, 'grass', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'PNDM', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "F:\auto1111\stable-diffusion-webui-directml\modules\processing.py", line 959, in process_images_inner
        result = shared.sd_model(**kwargs)
    TypeError: 'OnnxRawPipeline' object is not callable

---
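
The [WinError 3] line in the log above suggests the ONNX conversion step expects models\ONNX\temp to already exist. Whether pre-creating that folder avoids the error is an assumption, but a minimal check using the exact path from the log would be:

    from pathlib import Path

    # Path copied verbatim from the [WinError 3] message above.
    onnx_temp = Path(r"F:\auto1111\stable-diffusion-webui-directml\models\ONNX\temp")
    onnx_temp.mkdir(parents=True, exist_ok=True)  # create it (and any missing parents)
    print(onnx_temp.is_dir())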

Additional information

No response

Metadata

Assignees

No one assigned

    Labels

    invalid (This doesn't seem right), onnx (About ONNX)

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
