
Support for Non-CUDA GPUs #32

@jmiguelv

Description

BVQA currently hardcodes CUDA references in several places in the codebase, which limits the tool to machines with CUDA devices.

Would it make sense to use PyTorch's device detection to automatically identify the available compute device (CUDA, MPS, or CPU) and send models to it? This would enable multiple compute backends while keeping the existing CUDA support intact.

For example, instead of hardcoded CUDA logic, we could use:

import torch
from transformers import AutoModelForCausalLM

# Resolve the best available accelerator (e.g. "cuda" or "mps"),
# falling back to "cpu". The torch.accelerator API requires a
# recent PyTorch release.
device = (
    torch.accelerator.current_accelerator().type
    if torch.accelerator.is_available()
    else "cpu"
)

model = AutoModelForCausalLM.from_pretrained(...).to(device)

I have tested this approach in the moondream describer.
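
For environments pinned to older PyTorch releases that predate the torch.accelerator API, the same selection could be done with per-backend checks. Here is a minimal sketch (get_device is a hypothetical helper name, not an existing BVQA function):

import torch

def get_device() -> torch.device:
    # Hypothetical helper: prefer CUDA, then Apple's MPS, else CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")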
