As @dmgav mentioned yesterday, the unit tests take about 135 minutes to run. They are split into three groups to facilitate running them in parallel, in separate CI jobs of about 45 minutes each.
I think it's worth evaluating this. More tests means more protection against accidental regressions. But it can also mean:
- Changes are resource-intensive because they may involve updating many tests.
- Development in general is resource-intensive because the feedback loop is slow.
The trade-off is context-dependent, but I have read that 10 minutes is a good target. Ophyd and Tiled tests run in 10 minutes. Bluesky (i.e. RunEngine) tests have crept up to 35 minutes; that may also need a closer look.
For a young project, it's especially important to be selective about what is tested and what is left flexible and open to future rethinking. Well-chosen tests can protect against important regressions while keeping the weight (number, runtime, code size) of the tests reasonable. Jim Pivarski made this point in a DOEpy lecture with the analogy of pinning a dead frog for study in a bio lab: you want some ability to move the frog around, while keeping key points fixed. As an example, he mentioned that testing the exact formatting of a string repr, like this:
assert ex.__repr__() == repr, "Error representation is printed incorrectly"
(bluesky-queueserver/bluesky_queueserver/manager/tests/test_comms.py, lines 66 to 67 in 7de5539)
may be too strict and confining.
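A looser alternative is to assert on the error's type and a key substring of its message rather than its exact repr. Here is a minimal sketch of that idea; the exception class and function names are hypothetical, not taken from this code base:

```python
class CommTimeoutError(Exception):
    """Hypothetical error type, standing in for a real one in the code base."""


def process_request():
    # Stand-in for the real code under test.
    raise CommTimeoutError("timeout occurred while waiting for response")


def test_timeout_error_loosely():
    # Assert on the error type and a key substring, not the exact repr,
    # so the formatting stays free to change without breaking the test.
    try:
        process_request()
    except CommTimeoutError as ex:
        assert "timeout" in str(ex)
    else:
        raise AssertionError("expected CommTimeoutError")
```

The same effect can be had more concisely with `pytest.raises(CommTimeoutError, match="timeout")`; either way, the test pins the behavior that matters and leaves the rest of the frog free to move.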
One possible approach, which @dmgav raised in conversation, is simply more parallelism: splitting the tests into more groups could reduce the wall time in CI. But I think the total runtime and weight of the test suite is also of interest.
It may be that significant speed-ups could be achieved by avoiding starting real (i.e. networked) servers and subprocesses, which happens a lot in this test suite.
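As a sketch of that idea: instead of launching a real server in a subprocess, the client code can be handed an in-memory fake. The names below (`fetch_status`, the message shapes) are hypothetical illustrations, not the actual API of this code base:

```python
from unittest import mock


def fetch_status(socket):
    """Hypothetical client code: send a request and parse the reply."""
    socket.send_json({"method": "status"})
    reply = socket.recv_json()
    return reply["manager_state"]


def test_fetch_status_without_a_real_server():
    # Replace the networked socket with a mock: no subprocess,
    # no port binding, no startup/teardown delay.
    fake_socket = mock.Mock()
    fake_socket.recv_json.return_value = {"manager_state": "idle"}

    assert fetch_status(fake_socket) == "idle"
    fake_socket.send_json.assert_called_once_with({"method": "status"})
```

A few end-to-end tests against real servers would still be worth keeping; the point is that most of the suite probably doesn't need to pay that startup cost on every test.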