Idea: Make Benchmark.net more suitable for single (long run) benchmarks. #2972
Danielku15 started this conversation in Ideas
I have some benchmark/performance tests which are fairly long-running and are more of a wider integration performance test (aka a "profiling" benchmark) than the classical "execute often" benchmark. These benchmarks run fairly long and with a variety of parameters.
I know it might be a niche need, but such tests nevertheless have their place and value. With `MemoryDiagnoser` and `EventPipeProfiler` configured I get meaningful results and reports out of these benchmarks, which I can use to check the general application performance over time. Yes, there might be side effects and the results have to be interpreted with more care, but they can still be used for trends (think nightly test rounds).

Due to the nature of these tests being "expensive", I want to ensure every test case runs exactly once; the sketch below shows the kind of benchmark I mean.
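To make the scenario concrete, here is a minimal sketch of such a "profiling" benchmark. The class, method, and parameter names are invented for illustration; only the diagnoser/profiler attributes are the actual BenchmarkDotNet APIs I use:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Diagnosers;

// Hypothetical long-running integration scenario; names are illustrative.
[MemoryDiagnoser]
[EventPipeProfiler(EventPipeProfile.CpuSampling)]
public class EndToEndImportBenchmark
{
    // Every parameter value multiplies the (already long) total runtime,
    // which is why each case should ideally execute exactly once.
    [Params(10_000, 100_000)]
    public int DocumentCount { get; set; }

    [Benchmark]
    public void ImportDocuments()
    {
        // ... expensive end-to-end work under test ...
    }
}
```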
Benchmark.net provides a great foundation for this, and it allows me to keep these kinds of benchmarks close to the more traditional ones, which are short-running, often-repeated executions that measure the performance of specific logic.
Now to the main problem with the current library behavior: despite my config of `[SimpleJob(RunStrategy.ColdStart, warmupCount: 0, launchCount: 1, iterationCount: 1)]`, every benchmark gets executed 4 (!) times. Unless I'm missing some configuration option I could use, it would be great if Benchmark.net could honor such single-execution setups in some way.
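For reference, this is the setup in context. The attribute is verbatim from my config; the class and method around it are placeholders:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Engines;

// Intended: one launch, no warmup, one iteration, cold start.
// Observed: the workload still executes 4 times.
[SimpleJob(RunStrategy.ColdStart, warmupCount: 0, launchCount: 1, iterationCount: 1)]
public class SingleShotBenchmark
{
    [Benchmark]
    public void ExpensiveScenario()
    {
        // ... long-running workload ...
    }
}
```

(As far as I understand, `RunStrategy.ColdStart` already omits the pilot and warmup stages, so I would expect exactly one invocation per case here.)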
Both new features would obviously require special configuration, as users need to be aware of the impact (e.g. profiler overhead). Still, I think they would be a valuable addition for maintaining general application-performance tests with Benchmark.net.
Not using BDN for such tests means users lose a variety of nice features for repetitive application performance testing (parameterization, reports, profiling, ...), and such tests then need to be maintained "somewhere else", separate from the traditional benchmarks.
I'm aware that such requests have been made in the past, but they were typically closed with "you are doing benchmarking, you should execute multiple times". That's why I tried to give deeper insight into why this feature would make sense.
Let me know what you think.