Better automate comparing two Miri revisions with ./miri bench
#3999
Labels
A-dev: Area: working on Miri as a developer
C-enhancement: Category: a PR with an enhancement or an issue tracking an accepted enhancement
Currently, to figure out the impact of a Miri change on our benchmarks, one has to run `./miri bench` twice and then manually compare the results. We should have a way to automate that... but I have no clue how to do that.

hyperfine supports having multiple benchmark commands and will then print comparisons between them, but that doesn't work for us, since we want to gather the two measurements separately and compare after the fact. We probably want `./miri bench` to dump its results into a file, and then have a way to compare two such files. But does hyperfine even have a way to dump the results in machine-readable form? And can we avoid having to implement the comparison code ourselves?
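For the machine-readable part: hyperfine has an `--export-json <file>` flag that writes the full results (mean, stddev, individual run times) to a JSON file, so `./miri bench` could presumably pass that through. Below is a minimal sketch of what the comparison side could look like, assuming hyperfine's current JSON layout (a top-level `results` array whose entries have `command`, `mean`, and `stddev` fields) and assuming `serde`/`serde_json` are available; the file names and the comparison logic are made up purely for illustration, not a concrete proposal:

```rust
// Hypothetical sketch: compare two `hyperfine --export-json` dumps.
// Assumes hyperfine's JSON layout: { "results": [ { "command", "mean", "stddev", ... } ] }.
// Requires the `serde` (with the `derive` feature) and `serde_json` crates.

use std::collections::HashMap;
use std::fs;

use serde::Deserialize;

#[derive(Deserialize)]
struct BenchResult {
    command: String,
    mean: f64,   // mean wall-clock time, in seconds
    stddev: f64, // standard deviation, in seconds
}

#[derive(Deserialize)]
struct BenchFile {
    results: Vec<BenchResult>,
}

/// Load one hyperfine JSON dump and index its results by benchmark command.
fn load(path: &str) -> HashMap<String, BenchResult> {
    let data = fs::read_to_string(path).expect("failed to read benchmark dump");
    let parsed: BenchFile = serde_json::from_str(&data).expect("failed to parse hyperfine JSON");
    parsed
        .results
        .into_iter()
        .map(|r| (r.command.clone(), r))
        .collect()
}

fn main() {
    // Hypothetical file names produced by two separate `./miri bench` runs.
    let old = load("bench-old.json");
    let new = load("bench-new.json");

    for (name, old_res) in &old {
        // Only compare benchmarks that exist in both dumps.
        let Some(new_res) = new.get(name) else { continue };
        let ratio = new_res.mean / old_res.mean;
        println!(
            "{name}: {:.3}s ± {:.3}s -> {:.3}s ± {:.3}s ({:+.1}%)",
            old_res.mean,
            old_res.stddev,
            new_res.mean,
            new_res.stddev,
            (ratio - 1.0) * 100.0
        );
    }
}
```

As for not writing the comparison code ourselves: hyperfine's repository also ships some analysis scripts for its exported results (e.g. statistical comparison of two exported runs), so it may be possible to reuse those instead, though that would need checking.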