Can the models required by the WanVideoModelLoaderMultiGPU node (e.g., Wan2.2, LoRA, extra_model, InfiniteTalk) be loaded using DisTorch2?
I want to do this because my GPU architecture (Pascal) is relatively old and doesn't support the optimized attention implementations (e.g., SageAttention, flash_attention).
However, the (UNet/CLIP/VAE)LoaderDisTorch2MultiGPU nodes work perfectly. I really like this project: it makes many of my older, high-memory GPUs useful again instead of sitting idle (like a bloated vase without flowers).
So, is it possible to provide a virtual_vram_gb slider or an Expert Mode allocation option (better suited to setups with multiple older GPUs) for WanVideoModelLoaderMultiGPU, so it can use DisTorch2's distributed loading? A rough sketch of what I mean is below.
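To make the request concrete, here is a minimal, purely hypothetical sketch of what such a node's input schema could look like, written against the standard ComfyUI custom-node conventions. The node name WanVideoModelLoaderDisTorch2MultiGPU, the expert_mode_allocation string format, the return type, and all defaults are my assumptions, not actual project code:

```python
# Hypothetical sketch only -- node name, inputs, and allocation-string
# format are assumptions, not part of ComfyUI-MultiGPU or WanVideoWrapper.

class WanVideoModelLoaderDisTorch2MultiGPU:
    """Imagined DisTorch2 variant of WanVideoModelLoaderMultiGPU."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "model_name": ("STRING", {"default": "wan2.2_i2v_14B.safetensors"}),
                # Device that actually runs inference (the fastest GPU).
                "compute_device": (["cuda:0", "cuda:1", "cuda:2", "cpu"],),
                # Requested slider: GB of model weights to park on donor
                # devices and stream to the compute device on demand.
                "virtual_vram_gb": ("FLOAT",
                                    {"default": 4.0, "min": 0.0,
                                     "max": 64.0, "step": 0.5}),
                # Requested Expert Mode: explicit per-device budget, e.g.
                # "cuda:0,6;cuda:1,10;cpu,24" (GB per device) -- format assumed.
                "expert_mode_allocation": ("STRING", {"default": ""}),
            }
        }

    RETURN_TYPES = ("WANVIDEOMODEL",)  # assumed WanVideoWrapper model type
    FUNCTION = "load"
    CATEGORY = "multigpu/distorch2"

    def load(self, model_name, compute_device,
             virtual_vram_gb, expert_mode_allocation):
        # The actual loading/offload logic would live in DisTorch2;
        # omitted here because this is only an interface sketch.
        raise NotImplementedError("illustrative sketch only")
```

The point is just that the same two controls the existing DisTorch2 loaders already expose (the simple slider for the common case, the allocation string for mixed fleets of older GPUs) would cover the WanVideo loader as well.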