I saw the line pipe.enable_model_cpu_offload() here: https://github.com/instantX-research/InstantIR/blob/main/pipelines/sdxl_instantir.py#L113C13-L113C44 and tried the same approach with the gradio app, but I get the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
What else needs to change in the code to fix this error and make CPU offloading work?
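
For context, here is a minimal sketch of the offload pattern I would expect to work, using the stock SDXL pipeline as a stand-in for InstantIRPipeline (the InstantIR-specific aggregator/LoRA loading is omitted). The usual causes of this error that the comments below point at, such as a leftover pipe.to("cuda") call or custom modules attached after enabling offload, are my assumptions, not a verified fix for this repo:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Stand-in for InstantIRPipeline; InstantIR's extra components (LoRA weights,
# aggregator, adapter) would need to be attached BEFORE enabling offload so
# that accelerate's hooks cover every module.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)

# Do NOT also call pipe.to("cuda"); mixing a manual device move with the
# offload hooks is a common source of
# "Expected all tensors to be on the same device".
pipe.enable_model_cpu_offload()

# Tensors you build yourself should live on the pipeline's execution device,
# which stays cuda:0 even while module weights are parked on the CPU.
device = pipe._execution_device
latents = torch.randn(
    (1, pipe.unet.config.in_channels, 128, 128),
    device=device,
    dtype=torch.float16,
)

image = pipe(
    prompt="a photo of a cat",
    num_inference_steps=20,
    latents=latents,
).images[0]
image.save("out.png")
```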