[chore][.github] Add Go benchmarks workflow #14160
Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

```diff
@@            Coverage Diff             @@
##             main   #14160      +/-   ##
==========================================
- Coverage   92.27%   92.25%   -0.02%
==========================================
  Files         658      658
  Lines       41184    41184
==========================================
- Hits        38001    37993       -8
- Misses       2179     2184       +5
- Partials     1004     1007       +3
```

☔ View full report in Codecov by Sentry.
Hello @mx-psi! Thank you for all the feedback, we'll make the fixes ASAP and discuss them in dedicated issues.

The simplest solution to this would be to use a matrix when running the benchmarks, which would also come with the benefit of parallelizing the runs. There is more info in the documentation on how to do this using the action, and adapting it to your wrapper should be quite quick. If you don't want to use a matrix, the second best solution is to do one single run covering all modules.

We'll keep you updated on the other issues for more scoped discussions!
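For illustration, a minimal sketch of the matrix approach described above. It assumes the `CodSpeedHQ/action` inputs `run` and `token`; the module list, Go version, and `go test` invocation are placeholders rather than this repository's actual layout or the PR's actual workflow file:

```yaml
# Sketch: run each Go module's benchmarks as its own matrix job.
# Module names below are placeholders.
jobs:
  benchmarks:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        module: [".", "pdata", "exporter/exporterhelper"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.23"
      - name: Run benchmarks for ${{ matrix.module }}
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: cd ${{ matrix.module }} && go test -bench=. -run='^$' ./...
```

Each matrix entry becomes its own runner, so the modules are benchmarked in parallel and a single out-of-memory run only affects one job.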
Based on the last comment and on a local run, I think this could work, though it will probably run out of memory. Let's see.
@GuillaumeLagrange Thank you for the comment! Thanks also for the prompt replies on the GitHub issues :) Let me reply to your comments inline.

That's helpful to know, thank you! Based on the documentation I assumed that running it multiple times was the only way of supporting multiple modules, since I (mistakenly) assumed that only a single module per run was supported. This may allow me to simplify things a lot, thanks for the pointer!

I suspect we would hit GitHub Actions limits in terms of the number of workflows that can be run if we go down this route, but it's definitely something we can consider for performance/disk usage, even if we only do it partially.
You could also go for a hybrid solution where you shard the benchmarks into groups and parallelize, to be less extreme in terms of the number of runners! I'll update the CodSpeed docs in the GitHub Actions section to explicitly mention this option.
Hey @mx-psi, we've fixed the issue. We're still in the middle of the review and some last-minute changes, but if you want to try it you can install it from our branch.

I've tested it on our fork and it worked: https://github.com/AvalancheHQ/opentelemetry-collector/actions/runs/19370101140/job/55423469254 (see the results here: https://codspeed.io/AvalancheHQ/opentelemetry-collector/runs/6917526da8f3100a25b43f5e). I reduced the benchmark time from 3s (the default) to 1s.

We'll merge this next week, let us know if you have further questions. Have a nice weekend!
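The exact setting used to lower the benchmark time is not spelled out above. As an assumption only, a standard way to cap per-benchmark time in plain `go test` is the `-benchtime` flag (the 3s default presumably comes from the CodSpeed tooling rather than from Go, whose own default is 1s); the CodSpeed runner may expose its own equivalent setting:

```yaml
# Sketch only: the 1s budget below uses Go's standard -benchtime flag as an
# assumption; the actual mechanism used in the linked run is not stated.
- name: Run benchmarks with a 1s budget per benchmark
  uses: CodSpeedHQ/action@v3
  with:
    token: ${{ secrets.CODSPEED_TOKEN }}
    run: go test -bench=. -run='^$' -benchtime=1s ./...
```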
Thank you! We discussed this among approvers and maintainers and we are excited to try this out. I have subscribed to the releases on codspeed-go and I'll give it a try once it is out :)
Description
Adds a CI job using https://codspeed.io/dashboard to visualize Go benchmarks.
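As a rough illustration only (not the actual workflow file added by this PR), a single-job CodSpeed workflow for Go benchmarks could look like the sketch below; the file name, trigger configuration, Go version, and `go test` invocation are assumptions:

```yaml
# Illustrative .github/workflows/benchmarks.yml; contents are assumptions,
# not this PR's actual file.
name: benchmarks

on:
  push:
    branches: [main]
  pull_request:

permissions:
  contents: read

jobs:
  codspeed:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.23"
      - name: Run Go benchmarks under CodSpeed
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: go test -bench=. -run='^$' ./...
```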
Link to tracking issue
Updates #14111