Adding new choice to --on-error #1974
base: main
Conversation
This pull request has been mentioned on Common Workflow Language Discourse. There might be relevant details there: https://cwl.discourse.group/t/how-to-fail-fast-during-parallel-scatter/868/5
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

@@            Coverage Diff            @@
##             main    #1974     +/-  ##
=========================================
- Coverage   83.96%   80.39%    -3.57%
=========================================
  Files          46       46
  Lines        8312     8396      +84
  Branches     1959     1973      +14
=========================================
- Hits         6979     6750     -229
- Misses        854     1080     +226
- Partials      479      566      +87

View full report in Codecov by Sentry.
Thank you, again, @AlexTate for your PR! tests/test_parallel.py::test_on_error_kill is unfortunately failing.
Thank you again for this contribution, @AlexTate! Alas, the test is sometimes hanging in CI: https://github.com/common-workflow-language/cwltool/actions/runs/11496251742/job/31997507449?pr=1974#step:8:1983
Almost there!
Check `make diff_pydocstyle_report` or `tox -e py312-pydocstyle`.
…runtimeContext.on_error = "kill", then the switch is activated. WorkflowKillSwitch is raised so it can be handled at the workflow and executor levels.
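As a rough illustration of the mechanism this commit describes, a sketch follows. The class names other than WorkflowKillSwitch (which the commit message itself names) are hypothetical stand-ins, not cwltool's real implementation:

```python
import threading

class WorkflowKillSwitch(Exception):
    """Raised when a job fails under on_error == "kill" so that both the
    workflow loop and the executor can react (name taken from this PR)."""

class RuntimeContext:
    """Hypothetical, stripped-down stand-in for cwltool's RuntimeContext."""
    def __init__(self, on_error="stop"):
        self.on_error = on_error
        self.kill_switch = threading.Event()

def on_job_failure(ctx):
    """A failing job activates the switch, then raises so that callers
    at the workflow and executor levels can handle it."""
    if ctx.on_error == "kill":
        ctx.kill_switch.set()
        raise WorkflowKillSwitch("job failed; killing remaining jobs")

ctx = RuntimeContext(on_error="kill")
try:
    on_job_failure(ctx)
except WorkflowKillSwitch:
    pass
assert ctx.kill_switch.is_set()
```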
…ch's status in the monitor function. The monitor function, up to this point, has been for gathering memory usage statistics via a timer thread. A second timer thread now monitors the kill switch.
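The second timer thread can be modeled roughly like this (a simplified toy, not cwltool's actual monitor; the memory-statistics thread is omitted and the helper names are illustrative):

```python
import subprocess
import sys
import threading

def monitor(proc, kill_switch, interval=0.1):
    """Poll the kill switch on a timer thread and terminate the
    subprocess as soon as the switch is set."""
    def check():
        if kill_switch.is_set():
            proc.terminate()
        elif proc.poll() is None:
            timer = threading.Timer(interval, check)
            timer.daemon = True
            timer.start()
    check()

kill_switch = threading.Event()
# A stand-in job that would otherwise run for 30 seconds.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
monitor(proc, kill_switch)
kill_switch.set()      # another job fails and flips the switch
proc.wait(timeout=5)   # returns promptly instead of after 30 s
```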
…revent pending tasks from starting by simply draining the queue. This is a very loose policy, but since the kill-switch response is handled at the job level, any tasks that start after the kill switch is activated will take care of themselves and self-terminate.
… an executor. The workflow_eval_lock release had to be moved to the finally block in MultithreadedJobExecutor.run_jobs(). Otherwise, TaskQueue threads running MultithreadedJobExecutor._runner() will never join() because _runner() waits indefinitely for the workflow_eval_lock in its own finally block.
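The deadlock described here is the classic reason for releasing a lock in a finally block; a minimal sketch of the pattern (names borrowed from the commit message, logic heavily simplified):

```python
import threading

workflow_eval_lock = threading.Condition(threading.RLock())

def run_jobs_sketch(fail=False):
    """If the lock were released only on the success path, an exception
    (e.g. the kill switch firing) would leave it held forever, and
    _runner() threads waiting for it in their own finally blocks could
    never join(). Releasing in finally avoids that."""
    workflow_eval_lock.acquire()
    try:
        if fail:
            raise RuntimeError("kill switch activated")
    finally:
        workflow_eval_lock.release()

try:
    run_jobs_sketch(fail=True)
except RuntimeError:
    pass

# The lock is free again, so a worker thread could acquire it.
assert workflow_eval_lock.acquire(blocking=False)
workflow_eval_lock.release()
```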
So that the runtime_context object can still be pickled. Other cleanups
…askQueue. This helps to better synchronize the kill switch event and avoid adding or executing tasks after the switch has been set. This approach is tighter than my previous draft, but a race condition still exists where a task might be started after the kill switch has been set and announced. If this happens, the leaked job's monitor function will kill it, and the subprocess's lifespan will be at most the monitor's timer interval (currently 1 second). When this rare event happens, the console output may be confusing, since it will show a new job starting after the kill switch has been announced.
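A toy sketch of the tightened behavior, using only standard-library pieces (hypothetical helper names; cwltool's real TaskQueue differs):

```python
import queue
import threading

task_queue = queue.Queue()
kill_switch = threading.Event()

def add_task(task):
    """Refuse new work once the kill switch is set (the tightened
    check); returns whether the task was accepted."""
    if kill_switch.is_set():
        return False
    task_queue.put(task)
    return True

def drain():
    """Discard pending tasks so worker threads wind down quickly."""
    while True:
        try:
            task_queue.get_nowait()
        except queue.Empty:
            break

for i in range(3):
    add_task(i)
kill_switch.set()
drain()

assert add_task("late") is False
assert task_queue.qsize() == 0
```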
… when exiting due to kill switch. Those actions have been placed under a `finally` block so that they are executed by both the "switching" job and the "responding" jobs. However, some of these post actions added a lot of redundant, unhelpful terminal output when handling jobs killed due to the kill switch, which obscured the error's cause. Two new process statuses have been added to better handle the event:
- indeterminant: a default value for processStatus.
- killed: the job was killed due to the kill switch being set.
This approach also means that partial outputs aren't collected from jobs that have been killed.
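The flow above could look roughly like this (a simplified model; the real JobBase._execute() does far more, and the "indeterminant" spelling follows the PR):

```python
import threading

def execute_sketch(kill_switch, fail=False):
    """Cleanup runs in `finally` for both the job that sets the kill
    switch and jobs responding to it, but output collection is skipped
    for killed jobs."""
    status = "indeterminant"   # default processStatus (spelling per the PR)
    outputs = []
    try:
        if kill_switch.is_set():
            status = "killed"  # killed due to the kill switch being set
            return status, outputs
        if fail:
            raise RuntimeError("job failed")
        status = "success"
        return status, outputs
    finally:
        if status != "killed":
            outputs.append("collected")  # post actions: collect outputs, log, ...

ks = threading.Event()
assert execute_sketch(ks) == ("success", ["collected"])
ks.set()
assert execute_sketch(ks) == ("killed", [])
```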
1) Once a job has been terminated, all other parallel jobs should also terminate. In this test, the runtime of the workflow indicates whether the kill switch has been handled correctly. If the kill switch is successful then the workflow's runtime should be significantly shorter than sleep_time. 2) Outputs produced by a successful step should still be collected. In this case, the completed step is make_array. To be frank, this test could be simplified by using a ToolTimeLimit requirement rather than process_roulette.cwl
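The test's core timing assertion, reduced to standard-library pieces (a toy model, not the actual test in tests/test_parallel.py):

```python
import subprocess
import sys
import time

SLEEP_TIME = 30  # analogous to the workflow's sleep_time

# Three "scatter jobs" that would each run for SLEEP_TIME seconds.
procs = [
    subprocess.Popen([sys.executable, "-c",
                      f"import time; time.sleep({SLEEP_TIME})"])
    for _ in range(3)
]

start = time.monotonic()
# One job "fails" immediately; the kill switch terminates the rest.
for proc in procs:
    proc.terminate()
for proc in procs:
    proc.wait()
elapsed = time.monotonic() - start

# The workflow's runtime indicates the kill switch worked: it should be
# dramatically shorter than SLEEP_TIME.
assert elapsed < SLEEP_TIME
```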
…to this issue. Other changes were offered by the tool, but they are outside the scope of this issue.
…ve MultithreadedJobExecutor ignore allocated resources when deciding whether to run the next parallel job. The steps in this workflow aren't resource intensive, and delaying their execution on this basis will cause the test to fail.
…constraint. The current ResourceRequirement implementation doesn't allow {coresMin: 0}. However, this can still be achieved with a custom RuntimeContext.select_resources()
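A hypothetical override along these lines (the field names follow CWL's ResourceRequirement minimums; this is a sketch of the idea, not cwltool's implementation):

```python
def select_resources(request, runtime_context=None):
    """Hypothetical stand-in for a custom RuntimeContext.select_resources
    that simply grants the requested minimums, which permits a coresMin
    of 0 even though ResourceRequirement itself disallows it."""
    return {
        "cores": request.get("coresMin", 1),
        "ram": request.get("ramMin", 256),
        "tmpdirSize": request.get("tmpdirMin", 1024),
        "outdirSize": request.get("outdirMin", 1024),
    }

assert select_resources({"coresMin": 0})["cores"] == 0
```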
…matting compliance updates in test_parallel.py
…ble type checking on selectResources() because it's just an implementation detail for the test
…cessful steps when 1) these steps are upstream from a scattered subworkflow, 2) the workflow kill switch is activated by one of the scatter jobs, and 3) on_error==kill
…s() and WorkflowJob.job(). There isn't a need to use getdefault() when querying the value because a default is already set when RuntimeContext is constructed. The checked condition additionally applies to on_error==kill, so the logic can be simplified to on_error!=continue.
…sses. This can be VERY helpful while debugging, particularly when unraveling callback chains.
….receive_output(). Otherwise, MultithreadedJobExecutor.run_jobs() will likely stop iterating over the topmost WorkflowJob.job() before WorkflowJob.do_output_callback() is called to deliver the final workflow outputs.
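The ordering constraint can be modeled with a condition variable (simplified stand-ins for run_jobs() and the output callback, not cwltool's actual code):

```python
import threading

workflow_eval_lock = threading.Condition()
final_output = []

def receive_output(out):
    """Deliver the final workflow outputs and wake run_jobs() under the
    lock (stand-in for WorkflowJob.do_output_callback)."""
    with workflow_eval_lock:
        final_output.append(out)
        workflow_eval_lock.notify_all()

def run_jobs():
    """Keep waiting until the output callback has actually delivered the
    final outputs, rather than stopping as soon as jobs finish."""
    with workflow_eval_lock:
        while not final_output:
            workflow_eval_lock.wait(timeout=5)
    return final_output[0]

threading.Timer(0.1, receive_output, args=({"out": "done"},)).start()
result = run_jobs()
assert result == {"out": "done"}
```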
…ill switch via a ToolTimeLimit requirement. It also uses a much longer timeout which will hopefully be sufficient for the CI server when it is congested.
…Step's repr string, and adding docstrings. Also adding docstring to parallel_steps() because pydocstyle yelled about it. At first it also yelled about object_from_state() and now it doesn't, so... I guess we'll see what the CI run says because I'm not familiar enough with this function to write a docstring for it.
…ent as an argument rather than an entire RuntimeContext, per @mr-c
Summary

This pull request introduces a new choice, kill, for the --on-error parameter.

Motivation

There currently isn't a way to have cwltool immediately stop parallel jobs when one of them fails. One might expect --on-error stop to accomplish this, but its help string is specific and accurate: "do not submit any more steps". Since scatter and subworkflow are treated as single "steps" within the parent workflow, cwltool is not wrong to wait for the rest of a step's parallel jobs to finish under --on-error stop. However, individual scatter jobs sometimes take a long time to complete, so if one of them fails early on, cwltool might wait a great length of time for the other scatter jobs to complete before terminating the workflow. With --on-error kill, all running jobs are quickly notified and self-terminate upon one job's failure.

Demonstration of the Issue

When running the following workflow with cwltool --parallel --on-error stop, the total runtime is ~33 seconds despite one of the scatterstep tasks terminating unexpectedly. Ideally the workflow would terminate immediately; --on-error kill accomplishes that.

Forum Post

https://cwl.discourse.group/t/how-to-fail-fast-during-parallel-scatter/868
Concerns
- workflow_eval_lock.release() had to be moved to the finally block in MultithreadedJobExecutor.run_jobs().
- Are any important steps skipped in JobBase._execute() due to if runtimeContext.kill_switch.is_set(): return? For that matter, shouldn't there be a finally block to contain some of these steps, such as deleting runtime-generated files containing secrets? Update Nov 13: these post-subprocess tasks were moved into a finally block in JobBase._execute() to ensure that they aren't skipped by jobs setting or responding to the kill switch. See abc4c3f.
- The kill switch response in TaskQueue is fairly loose. Since the response is primarily handled at the job level, any tasks that start after the kill switch is activated will take care of themselves and self-terminate. Update Nov 13: TaskQueue response to the kill switch was tightened in b302eca. Still, a race condition exists where a job may be started within a narrow window of time after the kill switch has been set, but if that happens the leaked job will still self-terminate within the monitor function's polling interval (currently 1 second).
to ensure that they aren't skipped by jobs setting or responding to the kill switch. See abc4c3f.The kill switch response in TaskQueue is fairly loose. Since the response is primarily handled at the job level, any tasks that start after the kill switch is activated will take care of themselves and self terminateUpdate Nov 13: TaskQueue response to the kill switch was tightened in b302eca. Still, a race condition exists where a job may be started within a narrow window of time after the kill switch has been set, but if that happens the leaked job will still self terminate within the monitor function's polling interval (currently 1 second).