It would be ideal to support GPU acceleration of optimization through Work Queue. But it will be challenging, and there are multiple obstacles.
We could use `t.specify_gpus(1)` to mark tasks for GPU execution, but we don't know a priori the proper split between GPU and CPU tasks for efficient use of all resources. To be efficient, we would need to dynamically create tasks to fill the queue as existing tasks finish, marking each as GPU or CPU as appropriate. This would require a significant rework of the Work Queue dadi-cli implementation. It would also require specifying the available resources in the Work Queue pool ahead of time, which isn't necessary now and seems contrary to the Work Queue philosophy.
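As a rough sketch of what the dynamic fill-in approach might look like, here is a pure-Python simulation of the submit/wait loop. The `Task` and `FakeQueue` classes are stand-ins, not the real `work_queue` binding; in the real API the GPU request would be made with `t.specify_gpus(1)` and results collected with `q.wait()`:

```python
# Sketch of dynamically topping up the queue with GPU or CPU tasks as
# results return. Hypothetical stand-in classes model the Work Queue
# submit/wait cycle; in the real binding these would be
# work_queue.WorkQueue, work_queue.Task, and t.specify_gpus(1).

class Task:
    def __init__(self, fit_id, use_gpu):
        self.fit_id = fit_id
        self.use_gpu = use_gpu  # real API: t.specify_gpus(1)

class FakeQueue:
    """Stand-in for work_queue.WorkQueue: returns tasks in FIFO order."""
    def __init__(self):
        self.pending = []
    def submit(self, task):
        self.pending.append(task)
    def wait(self):
        return self.pending.pop(0) if self.pending else None
    def empty(self):
        return not self.pending

def run_fits(n_fits, gpus_free, cpus_free):
    """Keep the queue topped up, preferring GPU slots while any remain.

    Note this requires knowing the pool's GPU/CPU counts up front,
    which is the drawback discussed above.
    """
    q = FakeQueue()
    submitted = 0
    gpu_slots, cpu_slots = gpus_free, cpus_free

    def top_up():
        nonlocal submitted, gpu_slots, cpu_slots
        while submitted < n_fits and (gpu_slots or cpu_slots):
            use_gpu = gpu_slots > 0
            if use_gpu:
                gpu_slots -= 1
            else:
                cpu_slots -= 1
            q.submit(Task(submitted, use_gpu))
            submitted += 1

    top_up()
    results = []
    while not q.empty():
        t = q.wait()
        # Free the slot the finished task held, then refill the queue.
        if t.use_gpu:
            gpu_slots += 1
        else:
            cpu_slots += 1
        results.append((t.fit_id, t.use_gpu))
        top_up()
    return results
```

With, say, 5 fits on a pool of 1 GPU and 2 CPU slots, `run_fits(5, 1, 2)` keeps the single GPU slot occupied whenever a fit remains, routing the rest to CPUs.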
Note that having each task try `dadi.cuda_enabled(True)` seems likely to lead to competition for the limited GPUs, potentially slowing overall performance.
Note also that PythonTasks don't preserve state, so we can't simply run `dadi.cuda_enabled(True)` ahead of time.