
GPU support through Work Queue #37

Open
@RyanGutenkunst

Description


It would be ideal to support GPU acceleration of optimization through Work Queue, but doing so will be challenging; there are multiple obstacles.

We could use t.specify_gpus(1) to mark tasks for GPU execution. But we don't know a priori the right ratio of GPU to CPU tasks for efficient use of all resources. So to be efficient we would need to dynamically create tasks to refill the queue as existing tasks finished, marking each as GPU or CPU as needed. This would require a significant rework of the Work Queue dadi-cli implementation. And it would require specifying the available resources in the Work Queue pool ahead of time, which isn't necessary now and seems contrary to the Work Queue philosophy.
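For concreteness, a dynamic refill loop might look roughly like the sketch below, assuming the standard Work Queue Python bindings (Task, specify_gpus, specify_cores, specify_tag, wait). The optimize_one.py script, the attempt count, and the initial GPU/CPU split are all hypothetical placeholders, not dadi-cli code.

```python
# Rough sketch (not dadi-cli code) of dynamically refilling the queue with
# a mix of GPU and CPU tasks. optimize_one.py is a hypothetical stand-in
# for a single dadi-cli optimization attempt.
import work_queue as wq

q = wq.WorkQueue(port=9123)
N_GPU, N_CPU = 2, 8  # would have to be known ahead of time -- the core difficulty

def submit(kind):
    # Stand-in command; a real version would invoke one dadi-cli optimization.
    t = wq.Task("python optimize_one.py --gpu" if kind == "gpu"
                else "python optimize_one.py")
    if kind == "gpu":
        t.specify_gpus(1)
    else:
        t.specify_cores(1)
    t.specify_tag(kind)  # remember which flavor this task is
    q.submit(t)

# Seed the queue with an initial GPU/CPU split.
for _ in range(N_GPU):
    submit("gpu")
for _ in range(N_CPU):
    submit("cpu")

remaining = 100  # optimization attempts still to launch (illustrative)
while not q.empty():
    t = q.wait(5)
    if t and remaining > 0:
        submit(t.tag)  # replace the finished task with one of the same kind
        remaining -= 1
```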

Note that having each task call dadi.cuda_enabled(True) on its own seems likely to lead to contention for the limited GPUs, potentially slowing overall performance.

Note also that PythonTasks don't preserve state between calls, so we can't simply run dadi.cuda_enabled(True) ahead of time; it would have to be enabled within each task itself.
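A minimal (untested) sketch of that per-task workaround, assuming Work Queue's PythonTask bindings; optimize_one() here is a hypothetical wrapper around a single optimization attempt, and this approach still runs into the GPU-contention issue above:

```python
# Sketch of enabling CUDA inside the task body itself, since each
# PythonTask starts with fresh state on the worker. Untested;
# optimize_one() is a hypothetical wrapper, not dadi-cli code.
import work_queue as wq

def optimize_one(p0, use_gpu):
    import dadi
    if use_gpu:
        dadi.cuda_enabled(True)  # must happen inside the task, not ahead of time
    # ... run one optimization from starting point p0 here ...
    return p0

t = wq.PythonTask(optimize_one, [0.1, 0.2], True)
t.specify_gpus(1)  # only request a GPU from the worker when the task will use one
```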

Metadata

Assignees: No one assigned
Labels: enhancement (New feature or request)
Projects: No projects
Milestone: No milestone
Development: No branches or pull requests