Refactor, extend benchmarks and incorporate into CI #4882
Conversation
Love this! Most of the code in https://github.com/CliMA/Oceananigans.jl/blob/main/benchmark/src/Benchmarks.jl is there to help keep the benchmark scripts short, so it'll probably still be useful if we plan on having multiple types of benchmarks. Some things to think about:
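To illustrate the point about short scripts, the helper pattern might look roughly like this. This is a self-contained stand-in, not the actual `Benchmarks.jl` API; `run_parameter_sweep` is a hypothetical name:

```julia
# A sketch of the helper pattern (stand-in, not the actual Benchmarks.jl API):
# a shared runner lets each benchmark script reduce to declaring a setup
# function and the parameters to sweep.
using BenchmarkTools

function run_parameter_sweep(setup; Ns)  # hypothetical helper
    suite = BenchmarkGroup()
    for N in Ns
        f = setup(N)
        suite[N] = @benchmark $f() samples=5
    end
    return suite
end

# With the helper, a benchmark "script" stays a few lines long:
suite = run_parameter_sweep(N -> (A = rand(N, N); () -> A * A); Ns = [64, 128])
display(suite)
```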
I am hoping we can have a system that allows us to benchmark all the important situations with just a few scripts, which can be run in CI to ensure they do not go stale. Do you think the existing benchmarks (the ones we care about) can be incorporated into a new framework?
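For the CI piece, even a simple smoke-runner that executes every script in a fresh process would catch staleness. A rough sketch, where the directory layout is an assumption:

```julia
# A rough sketch of a CI smoke test: run every script under benchmark/ in a
# fresh Julia process so that stale API usage fails the build.
benchmark_dir = joinpath(@__DIR__, "..", "benchmark")
for script in filter(endswith(".jl"), readdir(benchmark_dir; join=true))
    @info "Running benchmark script" script
    run(`$(Base.julia_cmd()) --project=$benchmark_dir $script`)
end
```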
We also need to change the …
@giordano how should we add a new "CanonicalSimulations" or whatever package to the repo?
Could be a project under …
Thinking something like...
Only thing is that when you install the package with Pkg and then load it, you'd always refer to it by `CanonicalSimulations`.
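Concretely, developing an in-repo subdirectory package might look like this; the name and path here are taken from this thread and are not settled:

```julia
# A sketch, assuming the package lives at benchmark/CanonicalSimulations
# (name and path are assumptions from this discussion):
using Pkg
Pkg.develop(path="benchmark/CanonicalSimulations")  # track the in-repo directory
using CanonicalSimulations  # loaded by its package name, regardless of where it lives
```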
Right, trying to scope this... I think when used externally, we will mostly use this package to extract the canonical models. I am not sure how much the benchmarking tools specifically will be needed in the package.
This PR refactors the `/benchmark` directory and incorporates the benchmark scripts into CI. It removes `test/benchmark_tests.jl`.

Our ultimate objective is to run a curated set of benchmarks regularly, and to develop a graphic that tracks the evolution of the benchmarks over time, as performance optimization work is incorporated. This will ensure that the benchmark scripts stay up to date with the current API. The benchmark scripts should also be runnable standalone for manual benchmarking. Eventually, we would also like to upload profile artifacts for inspection.
For now, I have moved the existing scripts in `benchmarks` into an "archive" folder. I think we should delete these, since they will be superseded. However, one question is how/whether we should also incorporate the code in `benchmarks/src`. I think this code is useful and nice for graphical display, but I also think there is a benefit to having simple benchmarks that consist of single scripts. @ali-ramadhan perhaps you can weigh in here since you developed that code originally.

cc @simone-silvestri @giordano