#266 reminded me of a couple of model-based memoizations that STM could potentially benefit from. [...]

It reminds me of an optimization from the original QuickCheck race-condition paper:

> This is a greedy algorithm: as soon as a postcondition fails, then we can discard all potential sequentializations with the failing command as the next one in the sequence. This happens often enough to make the search reasonably fast in practice. As a further optimization, we memoize the search function on the remaining parallel branches and the test case state. This is useful, for example, when searching for a sequentialization of [A, B] and [C, D], if both [A, C] and [C, A] are possible prefixes, and they lead to the same test state—for then we need only try to sequentialize [B] and [D] once. We memoize the non-interference test in a similar way, and these optimizations give an appreciable, though not dramatic, speed-up in our experiments—of about 20%. With these optimizations, generating and running parallel tests is acceptably fast.

This is for a model-based framework like STM, meaning we have an oracle to decide "lead to the same test state" for us [...]

Originally posted by @jmid in #266 (comment)
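For concreteness, here is a rough OCaml sketch of what such a memoized greedy interleaving search over two parallel branches could look like when the model acts as the oracle. The `SPEC` signature and the `next_state`/`postcond_ok` names are hypothetical stand-ins rather than the actual STM API, and the sketch assumes each command already carries the result observed during the parallel run.

```ocaml
(* Sketch only: a greedy interleaving search over two parallel branches,
   memoized on (model state, remaining branches).  [cmd] is assumed to
   carry the result observed during the parallel run, so [postcond_ok]
   can compare it against the model's prediction. *)
module type SPEC = sig
  type cmd
  type state
  val next_state  : cmd -> state -> state  (* model transition *)
  val postcond_ok : cmd -> state -> bool   (* observed result agrees with model? *)
end

module Make (Spec : SPEC) = struct
  (* Memo table keyed on the model state and the two remaining branches.
     Polymorphic hashing is enough for a sketch, provided [Spec.state]
     is a plain structural value. *)
  let table : (Spec.state * Spec.cmd list * Spec.cmd list, bool) Hashtbl.t =
    Hashtbl.create 64

  (* Does some sequentialization of [br1] and [br2], starting from model
     state [s], satisfy every postcondition? *)
  let rec search s br1 br2 =
    match br1, br2 with
    | [], [] -> true
    | _ ->
      let key = (s, br1, br2) in
      (match Hashtbl.find_opt table key with
       | Some cached -> cached
       | None ->
         (* Greedy pruning: if a branch's head command fails its
            postcondition in state [s], no sequentialization can take it
            as the next command. *)
         let step = function
           | c :: rest when Spec.postcond_ok c s ->
             Some (Spec.next_state c s, rest)
           | _ -> None
         in
         let result =
           (match step br1 with
            | Some (s', rest1) -> search s' rest1 br2
            | None -> false)
           || (match step br2 with
               | Some (s', rest2) -> search s' br1 rest2
               | None -> false)
         in
         Hashtbl.add table key result;
         result)
end
```

Memoizing the non-interference test mentioned in the quote would presumably follow the same pattern, just keyed on the model state and the command(s) under consideration.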