A dask version of the approach outlined in George's presentation is trivial to implement:
e.g. instead of:
```python
results = []
for i in iterable:
    results.append(my_function(i))
```
Something like:
```python
import os

import dask.bag as db

# This is needed if running on SPICE so that we use the number of CPUs
# actually requested from Slurm, rather than every CPU on the node.
if "SLURM_NTASKS" in os.environ:
    WORKERS = int(os.environ["SLURM_NTASKS"])
else:
    WORKERS = os.cpu_count()

iter_bag = db.from_sequence(iterable)
results = iter_bag.map(my_function).compute(num_workers=WORKERS)
```
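A note on the design choice here: dask.bag uses the multiprocessing scheduler by default, so `num_workers` sets the number of worker processes. The `SLURM_NTASKS` check matters on SPICE because `os.cpu_count()` reports all CPUs on the node, not the number allocated to the Slurm job, so without it dask would oversubscribe the allocation.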
This is the approach I've used for some time to run e.g. multiple MOOSE retrievals in parallel, to process multiple files on disk in parallel, and to break up other calculation work (e.g. by year).
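As a concrete illustration of the files-on-disk case, here is a minimal sketch of the same pattern; the glob pattern and `process_file` function are hypothetical placeholders, not from the presentation:

```python
import glob

import dask.bag as db

def process_file(path):
    # Hypothetical per-file work: open the file, derive a result, return it.
    with open(path, "rb") as f:
        return len(f.read())

# WORKERS as determined in the snippet above.
paths = sorted(glob.glob("/path/to/data/*.nc"))
results = db.from_sequence(paths).map(process_file).compute(num_workers=WORKERS)
```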
I have also been investigating dask.array for more sophisticated work that should be more time/memory efficient for some tasks, but that will need to wait until my Antigua work is out of the way, as it's a bit more involved (mainly in setting up the correct dask environment beforehand; the code itself is in some cases even simpler). My approaches using dask.bag are good enough for now.
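For reference, a minimal sketch of the dask.array style (my guess at the shape of it, with made-up sizes): the array is split into chunks and the reduction is evaluated lazily, so only a few chunks need to be in memory at any one time.

```python
import dask.array as da

# Build a chunked 20000 x 20000 array of random values; each 1000 x 1000
# chunk is generated and reduced independently, keeping memory use bounded.
x = da.random.random((20000, 20000), chunks=(1000, 1000))

# Nothing is evaluated until .compute() is called; dask then streams the
# chunks through the column-mean reduction in parallel.
# WORKERS as determined in the dask.bag snippet above.
col_means = x.mean(axis=0).compute(num_workers=WORKERS)
```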
With reference to George Ford's ASKS presentation (attached below), I wonder if there is something that can be incorporated here...
Need to check that his suggested method is still up to date with respect to Dask etc., but I think the principle still holds... could be used in #17
20181107_GFord_parallel_python.pptx