How to scale out to more workers for a fan-out / fan-in function? #2631
SaxonDruce started this conversation in General
Hi,
I have a single orchestration function which fans out to 100 activity functions in parallel, then collates the results.
Each activity does some heavy numeric processing, running for about a minute at 100% CPU on a single thread.
Ideally, all 100 tasks would run in parallel, so the final output of the orchestration could be available in one minute (plus some overhead). I expect it to take a while for the workers to scale up, so it won't be quite as perfectly parallel as 100 tasks in 1 minute. However, I am only seeing about 20 workers allocated for the tasks, so overall it takes about 5 minutes to get through them.
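For context, the fan-out/fan-in shape described above can be sketched locally with Python's `concurrent.futures` (a hypothetical stand-in for the Durable Functions orchestrator and activities, purely to illustrate the pattern, not the actual Azure API):

```python
from concurrent.futures import ThreadPoolExecutor

def heavy_activity(i: int) -> int:
    # Stand-in for the CPU-bound activity function
    # (the real one runs ~1 minute at 100% CPU).
    return i * i

def orchestrate(n_tasks: int) -> int:
    # Fan out: schedule all activities at once.
    # Fan in: wait for every result, then collate.
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        results = list(pool.map(heavy_activity, range(n_tasks)))
    return sum(results)

print(orchestrate(10))
```

In Durable Functions the same shape is expressed by scheduling all activity tasks and awaiting them together (e.g. `Task.WhenAll` in C#); the sketch above only mirrors the control flow.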
Is there something I can do to allow the workers to scale up more quickly to meet the number of tasks?
I am using the Consumption plan, where each worker has a single core, so there isn't any benefit to allocating multiple tasks to a single worker at a time: each task uses 100% CPU, so they would just cause CPU contention with each other. I have therefore set maxConcurrentActivityFunctions and maxConcurrentOrchestratorFunctions to 1, which did seem to help a bit.
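For reference, those two settings live in host.json; a minimal fragment (assuming the Functions v2+ host schema with the Durable Task extension) looks like:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "maxConcurrentActivityFunctions": 1,
      "maxConcurrentOrchestratorFunctions": 1
    }
  }
}
```

Setting both to 1 means each worker pulls only one activity at a time, so throughput then depends entirely on how many workers the scale controller allocates.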
I've run tests with varying numbers of tasks, and always end up with fewer workers than tasks, e.g.:
10 tasks: about 5 workers, about 2-3 minutes total run time
100 tasks: about 20 workers, about 5 minutes total run time
200 tasks: about 30 workers, about 7 minutes total run time
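Those timings are roughly consistent with total time ≈ tasks × 1 minute ÷ workers, plus scale-out overhead; a quick sanity check using the measurements above:

```python
def expected_minutes(tasks: int, workers: int, task_minutes: float = 1.0) -> float:
    # Ideal runtime if tasks are spread evenly across the allocated workers,
    # with one task per single-core worker at a time.
    return tasks * task_minutes / workers

print(expected_minutes(100, 20))  # 5.0 minutes, matching the observed ~5 minutes
print(expected_minutes(200, 30))  # ~6.7 minutes vs. the observed ~7 minutes
```

So the runtime is dominated by the worker count the platform grants, not by per-worker overhead, which is why faster scale-out is the question here.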
The following chart shows the number of workers I saw for 5 runs of the orchestration at each of these task sizes: [chart not included]
Is there anything else I can adjust to improve the throughput of the tasks?
Thanks,
Saxon