
[Bug]: Attempt to initialize CUDA on Apple Silicon #251

Open
@astesin

Description


Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Are you using the latest version of the extension?

  • I have the modelscope text2video extension updated to the latest version and I still have the issue.

What happened?

After manually applying a (still uncommitted) fix for #243 and #248, I finally got the video generation initiated on my Mac Studio M2 Ultra, but it failed with the following stack trace:
Traceback (most recent call last):
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/t2v_helpers/render.py", line 30, in run
    vids_pack = process_modelscope(args_dict, args)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/process_modelscope.py", line 66, in process_modelscope
    pipe = setup_pipeline(args.model)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/process_modelscope.py", line 32, in setup_pipeline
    return TextToVideoSynthesis(get_model_location(model_name))
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/t2v_pipeline.py", line 114, in __init__
    self.diffusion = Txt2VideoSampler(self.sd_model, shared.device, betas=betas)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/samplers_common.py", line 102, in __init__
    self.sampler = self.get_sampler(sampler_name, betas=self.betas)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/samplers_common.py", line 152, in get_sampler
    sampler = Sampler.init_sampler(self.sd_model, betas=betas, device=self.device)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/samplers_common.py", line 87, in init_sampler
    return self.Sampler(sd_model, betas=betas, **kwargs)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/uni_pc/sampler.py", line 12, in __init__
    self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod))
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/uni_pc/sampler.py", line 17, in register_buffer
    attr = attr.to(torch.device("cuda"))
  File "~/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Steps to reproduce the problem

  1. Use a Mac Studio or another Apple Silicon SoC computer with macOS Sonoma.
  2. Install Python 3.10.* (latest 3.10) with Miniconda, create an environment, and install the required libraries. Important note: these libraries are compiled and optimized for Apple Silicon; CUDA is obviously absent on the Mac.
  3. Install Stable Diffusion WebUI from the repo, mine is version: [v1.10.1] (AUTOMATIC1111/stable-diffusion-webui@82a973c)  •  python: 3.10.16  •  torch: 2.1.2  •  xformers: N/A  •  gradio: 3.41.2  •  checkpoint: 6ce0161689
  4. Install the latest T2V extension from the repo
  5. Install models (I installed two ZeroScope models and a Videocrafter model).
  6. Apply the fix for [Bug]: When I try to run ModelScope text2video it does nothing after pressing 'Generate'. #243 and Fix for 243 #248; otherwise, the "Generate" button is replaced with two grey buttons, the UI gets stuck, and generation never starts.
  7. Launch WebUI with the correct command: python launch.py --skip-torch-cuda-test
  8. Give it some prompts and press "Generate." You will get the "error" mp4 displayed and a console log, as shown above.

What should have happened?

All other WebUI components and features work well on macOS (text-to-image, etc.), meaning torch is fully operational on Apple's SoC and GPU: it uses the "mps" device instead of "cuda", and the correct launch command is
python launch.py --skip-torch-cuda-test. So I guess the sampler.py script should be made aware of Apple SoC and use the "mps" device, as both the WebUI and torch already do.
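One possible direction for a fix, as a sketch only (the real fix would most likely just reuse the WebUI's shared.device instead of the hard-coded torch.device("cuda") in register_buffer). The selection logic, shown here as a standalone hypothetical helper (pick_device is not part of the extension):

```python
# Hypothetical device-selection helper for sampler.py: prefer CUDA, fall
# back to Apple's MPS backend, and finally to CPU. The boolean flags stand
# in for torch.cuda.is_available() and torch.backends.mps.is_available().
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the torch device name to use for sampler buffers."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# On an Apple Silicon Mac: no CUDA, MPS present -> "mps"
print(pick_device(cuda_available=False, mps_available=True))
```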

WebUI and Deforum extension Commit IDs

webui commit id - 82a973c04367123ae98bd9abdf80d9eda9b910e2
txt2vid commit id - 989f5cf

Torch version

torch 2.1.2 compiled for Apple SoC

What GPU were you using for launching?

Apple native GPU

On which platform are you launching the webui backend with the extension?

Local PC setup (Mac)

Settings

(Settings screenshot attached)

Console logs

% python launch.py --skip-torch-cuda-test 2>&1 | tee stderr.log        
~/miniconda3/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
~/miniconda3/lib/python3.10/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/text2vid.py:48: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row(elem_id='t2v-core').style(equal_height=False, variant='compact'):
Python 3.10.16 (main, Dec 11 2024, 10:22:29) [Clang 14.0.6 ]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --skip-torch-cuda-test
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [6ce0161689] from ~/AI_Text-to-video/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: ~/AI_Text-to-video/stable-diffusion-webui/configs/v1-inference.yaml
Traceback (most recent call last):
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/t2v_helpers/render.py", line 30, in run
    vids_pack = process_modelscope(args_dict, args)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/process_modelscope.py", line 66, in process_modelscope
    pipe = setup_pipeline(args.model)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/process_modelscope.py", line 32, in setup_pipeline
    return TextToVideoSynthesis(get_model_location(model_name))
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/t2v_pipeline.py", line 114, in __init__
    self.diffusion = Txt2VideoSampler(self.sd_model, shared.device, betas=betas)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/samplers_common.py", line 102, in __init__
    self.sampler = self.get_sampler(sampler_name, betas=self.betas)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/samplers_common.py", line 152, in get_sampler
    sampler = Sampler.init_sampler(self.sd_model, betas=betas, device=self.device)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/samplers_common.py", line 87, in init_sampler
    return self.Sampler(sd_model, betas=betas, **kwargs)
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/uni_pc/sampler.py", line 12, in __init__
    self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod))
  File "~/AI_Text-to-video/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/uni_pc/sampler.py", line 17, in register_buffer
    attr = attr.to(torch.device("cuda"))
  File "~/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Additional information

No response

Metadata

Assignees: no one assigned
Labels: bug (Something isn't working)