Description
I made a workflow with 3 parallel tasks, each of which randomly fails, plus a task that joins all of them.
If all of the parallel tasks succeed, the workflow ends. Otherwise, the workflow loops back to rerun the parallel tasks. The YAML is as follows:
```yaml
version: 1.0

description: Loop parallel workflow.

input:
  - x: {}
  - y: {}
  - z: {}

output:
  - data:
      x: <% ctx().x %>
      y: <% ctx().y %>
      z: <% ctx().z %>

tasks:
  entrypoint:
    next:
      - do: parallel

  parallel:
    action: core.noop
    next:
      - do: random_failure1, random_failure2, random_failure3

  random_failure1:
    action: my_pack.random_failure
    next:
      - publish: x=<% result() %>
        do: count_failure

  random_failure2:
    action: my_pack.random_failure
    next:
      - publish: y=<% result() %>
        do: count_failure

  random_failure3:
    action: my_pack.random_failure
    next:
      - publish: z=<% result() %>
        do: count_failure

  count_failure:
    join: all
    action: my_pack.count_failure
    input:
      x: <% ctx().x %>
      y: <% ctx().y %>
      z: <% ctx().z %>
    next:
      - when: <% result().failure_count > 0 %>
        do: parallel
```
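The custom actions themselves are not shown here; a minimal plain-Python stand-in for them (the function names and return shapes are assumptions inferred from the workflow, not the actual pack code) could be:

```python
import random

def random_failure():
    # Stand-in for my_pack.random_failure: succeeds or fails at random.
    # Returns True on success, False on failure.
    return random.random() < 0.5

def count_failure(x, y, z):
    # Stand-in for my_pack.count_failure: counts failed branch results,
    # matching the failure_count key checked in the workflow's "when" clause.
    return {"failure_count": sum(1 for r in (x, y, z) if not r)}
```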
In the first trial, the count_failure task starts after all 3 parallel tasks have ended, and it gets the context x, y, z updated by the parallel tasks. Let's call these values x1, y1, z1.
However, in the second loop, the count_failure task starts as soon as any one of the 3 parallel tasks ends. Assuming that random_failure1 ends first, count_failure starts immediately after it, with the context being x2 (updated by random_failure1), y1 (not updated), and z1 (not updated). And when random_failure2 ends, another count_failure task starts with input x2, y2, z1.
I am expecting the count_failure task to start only after all 3 parallel tasks have ended in the second loop as well, with input updated to x2, y2, z2.
Is this a bug, or a constraint of graph-based workflows in Orquesta?
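To pin down the expectation: here is a plain-Python sketch of the `join: all` semantics I am expecting on each loop iteration (this illustrates the expected behavior, not Orquesta internals; names are illustrative).

```python
# Expected "join: all" semantics per loop iteration: the join task
# (count_failure) fires exactly once, only after all three inbound branches
# from the *current* iteration have completed. Branch results are
# pre-generated here for determinism; True = branch succeeded.

def join_all_iteration(branch_results):
    # The join waits for every branch of this iteration before running.
    assert len(branch_results) == 3
    return sum(1 for r in branch_results if not r)  # failure_count

def run_workflow(iterations):
    join_fires = 0
    for branch_results in iterations:
        join_fires += 1  # count_failure runs once per iteration
        if join_all_iteration(branch_results) == 0:
            break  # all branches succeeded; the workflow ends
    return join_fires
```

With one failed iteration followed by a clean one, `run_workflow([[True, False, True], [True, True, True]])` fires the join exactly twice, once per loop, never mid-iteration.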