Better automate comparing two Miri revisions with ./miri bench #3999

Open
RalfJung opened this issue Oct 28, 2024 · 0 comments
Labels: A-dev (Area: working on Miri as a developer), C-enhancement (Category: a PR with an enhancement or an issue tracking an accepted enhancement)

Currently, to figure out the impact of a Miri change on our benchmarks, one has to run ./miri bench twice and then manually compare the results. We should have a way to automate that... but I have no clue how to do that.

hyperfine supports the case of having multiple benchmark commands and then it will print comparisons between them, but that doesn't work for us since we want to gather the two measurements separately, and then compare after the fact. We probably want ./miri bench to dump the results into a file, and then have a way to compare two such files. But does hyperfine even have a way to dump the results in machine-readable form? And can we avoid having to implement the comparison code ourselves?
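For the machine-readable part: hyperfine does have an `--export-json` flag that writes the measurements (per command: mean, stddev, individual times) to a file, so `./miri bench` could plausibly persist results that way. A minimal sketch of the "compare two such files" step follows; it assumes hyperfine's JSON layout (a top-level `results` array whose entries carry `command` and `mean` fields), and the helper names `load_means` and `compare` are made up for illustration:

```python
import json
import sys

def load_means(path):
    """Parse a hyperfine --export-json file into {command: mean_seconds}.

    Assumes the hyperfine JSON export shape: {"results": [{"command": ...,
    "mean": ..., ...}, ...]}.
    """
    with open(path) as f:
        data = json.load(f)
    return {r["command"]: r["mean"] for r in data["results"]}

def compare(baseline, new):
    """Return one report line per benchmark present in both result sets,
    showing the relative change in mean runtime."""
    lines = []
    for cmd, old_mean in baseline.items():
        if cmd in new:
            change = (new[cmd] - old_mean) / old_mean * 100
            lines.append(
                f"{cmd}: {old_mean:.3f}s -> {new[cmd]:.3f}s ({change:+.1f}%)"
            )
    return lines

if __name__ == "__main__":
    # Usage: compare_bench.py baseline.json new.json
    for line in compare(load_means(sys.argv[1]), load_means(sys.argv[2])):
        print(line)
```

This only compares means; deciding whether a change is statistically significant (given hyperfine's per-run `times` arrays) would still need a proper test, which is the part we would rather not implement ourselves.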

@RalfJung RalfJung added C-enhancement Category: a PR with an enhancement or an issue tracking an accepted enhancement A-dev Area: working on Miri as a developer labels Oct 28, 2024