Description
My predictions used to take 20-40 seconds to complete, but now most of them are freezing and not returning a response, regardless of how simple or complex the workflow is.
Currently I'm manually cancelling predictions that are still running after 2 minutes; otherwise I get charged until Replicate times out at 30 minutes (is there a way to reduce that timeout, by the way?).
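For anyone else doing the same, the cancellation is easy to script. Here's a minimal sketch (illustrative only, not my actual setup) using the `replicate` Python client to cancel a prediction that is still running 2 minutes after you start watching it:

```python
import time
import replicate

TIMEOUT_SECONDS = 120  # give up after 2 minutes instead of waiting for the 30-minute timeout


def cancel_if_stuck(prediction_id: str) -> None:
    """Poll a prediction and cancel it if it is still running TIMEOUT_SECONDS after polling starts."""
    prediction = replicate.predictions.get(prediction_id)
    started = time.monotonic()
    while prediction.status in ("starting", "processing"):
        if time.monotonic() - started > TIMEOUT_SECONDS:
            prediction.cancel()  # stop the run so it isn't billed until Replicate's own timeout
            print(f"Cancelled {prediction_id} after {TIMEOUT_SECONDS}s")
            return
        time.sleep(5)
        prediction.reload()  # refresh the status from the API
    print(f"{prediction_id} finished with status {prediction.status}")
```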
This is happening both via the API and via the web form.
I just tested this via the web form and got the same issue:
Model
The latest version of this model (at the time of writing): https://replicate.com/fofr/any-comfyui-workflow/versions/67ed4ba04ce0842446e16c428b1be131452815d01810861f71d171f63e8ba8f0
Input file:

Workflow
I can confirm this works locally, and sometimes it works with this model too:
{"3":{"inputs":{"seed":917379306164326,"steps":4,"cfg":1,"sampler_name":"dpmpp_sde","scheduler":"karras","denoise":0.4,"model":["4",0],"positive":["6",0],"negative":["13",0],"latent_image":["12",0]},"class_type":"KSampler","_meta":{"title":"KSampler"}},"4":{"inputs":{"ckpt_name":"dreamshaperXL_lightningDPMSDE.safetensors"},"class_type":"CheckpointLoaderSimple","_meta":{"title":"Load Checkpoint"}},"6":{"inputs":{"text":"a person smiling","clip":["4",1]},"class_type":"CLIPTextEncode","_meta":{"title":"CLIP Text Encode (Prompt)"}},"8":{"inputs":{"samples":["3",0],"vae":["4",2]},"class_type":"VAEDecode","_meta":{"title":"VAE Decode"}},"9":{"inputs":{"filename_prefix":"ComfyUI","images":["8",0]},"class_type":"SaveImage","_meta":{"title":"Save Image"}},"10":{"inputs":{"image":"input.jpg"},"class_type":"LoadImage","_meta":{"title":"Load Image"}},"11":{"inputs":{"image":["10",0]},"class_type":"FluxKontextImageScale","_meta":{"title":"FluxKontextImageScale"}},"12":{"inputs":{"pixels":["11",0],"vae":["4",2]},"class_type":"VAEEncode","_meta":{"title":"VAE Encode"}},"13":{"inputs":{"conditioning":["6",0]},"class_type":"ConditioningZeroOut","_meta":{"title":"ConditioningZeroOut"}}}
As you can see, it's a very simple img2img workflow.
Other parameters (an example API call using these is sketched after the list):
- output_format: jpg
- output_quality: 100
- randomise_seeds: true
- force_reset_cache: false
- return_temp_files: false
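For reference, this is roughly what the equivalent API call looks like. It's a sketch rather than my exact code: `workflow_json` and `input_file` are my best guess at this model's input names, and the image URL is a placeholder; the remaining values match the parameters listed above.

```python
import replicate

VERSION = "fofr/any-comfyui-workflow:67ed4ba04ce0842446e16c428b1be131452815d01810861f71d171f63e8ba8f0"

# The workflow JSON shown above, saved to a file
with open("workflow_api.json") as f:
    workflow_json = f.read()

output = replicate.run(
    VERSION,
    input={
        "workflow_json": workflow_json,                  # assumed input name for the workflow
        "input_file": "https://example.com/input.jpg",   # assumed input name; placeholder URL
        "output_format": "jpg",
        "output_quality": 100,
        "randomise_seeds": True,
        "force_reset_cache": False,
        "return_temp_files": False,
    },
)
print(output)
```

Note that `replicate.run` blocks until the prediction finishes, so when a run hangs like this the client just sits there until I cancel it or Replicate's 30-minute limit kicks in.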
Logs
====================================
Inputs uploaded to /tmp/inputs:
input.jpg
====================================
Checking inputs
✅ /tmp/inputs/input.jpg
====================================
Checking weights
✅ dreamshaperXL_lightningDPMSDE.safetensors exists in ComfyUI/models/checkpoints
====================================
Randomising seed to 400432014
Running workflow
[ComfyUI] got prompt
[ComfyUI]
[ComfyUI] 0%| | 0/4 [00:00<?, ?it/s]
[ComfyUI] 25%|██▌ | 1/4 [00:00<00:00, 5.05it/s]
[ComfyUI] 50%|█████ | 2/4 [00:00<00:00, 5.55it/s]
It's been stuck here for over 10 minutes at the time of writing.
I haven't found a way to reproduce it reliably. Sometimes I get several successful predictions in a row; other times 8 out of 10 hang like this. Sometimes it hangs after reaching 100%, right before returning the image.
I've also tried different, more complex workflows and see the same behaviour.
If it helps, here's the log for a successful run using the exact same workflow and parameters:
====================================
Inputs uploaded to /tmp/inputs:
input.jpg
====================================
Checking inputs
✅ /tmp/inputs/input.jpg
====================================
Checking weights
✅ dreamshaperXL_lightningDPMSDE.safetensors exists in ComfyUI/models/checkpoints
====================================
Randomising seed to 1331621550
Running workflow
[ComfyUI] got prompt
Executing node 10, title: Load Image, class type: LoadImage
Executing node 11, title: FluxKontextImageScale, class type: FluxKontextImageScale
Executing node 12, title: VAE Encode, class type: VAEEncode
Executing node 6, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode
Executing node 13, title: ConditioningZeroOut, class type: ConditioningZeroOut
Executing node 3, title: KSampler, class type: KSampler
[ComfyUI]
[ComfyUI] 0%| | 0/4 [00:00<?, ?it/s]
[ComfyUI] 25%|██▌ | 1/4 [00:00<00:00, 6.53it/s]
[ComfyUI] 50%|█████ | 2/4 [00:00<00:00, 6.56it/s]
[ComfyUI] 75%|███████▌ | 3/4 [00:00<00:00, 6.70it/s]
Executing node 8, title: VAE Decode, class type: VAEDecode
Executing node 9, title: Save Image, class type: SaveImage
[ComfyUI] 100%|██████████| 4/4 [00:00<00:00, 7.77it/s]
[ComfyUI] Prompt executed in 1.04 seconds
outputs: {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}
====================================
ComfyUI_00001_.png
I tried a few runs on the previous model version (fofr/any-comfyui-workflow:f552cf6bb263b2c7c547c3c7fb158aa4309794934bedc16c9aa395bee407744d) and haven't come across the same issue yet, so I'm downgrading to that version until this is fixed.
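In case it's useful, pinning that older version via the API only requires swapping in its version hash. A minimal sketch, assuming the `replicate` Python client and the same inputs as above:

```python
import replicate

# Previous version, which hasn't shown the hang for me so far
OLD_VERSION = (
    "fofr/any-comfyui-workflow:"
    "f552cf6bb263b2c7c547c3c7fb158aa4309794934bedc16c9aa395bee407744d"
)


def run_on_old_version(inputs: dict):
    # Identical inputs to the newer version; only the version hash changes
    return replicate.run(OLD_VERSION, input=inputs)
```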