
Conversation


@mx-psi mx-psi commented Nov 13, 2025

Description

Adds a CodSpeed CI job (https://codspeed.io/dashboard) to visualize Go benchmarks.

Link to tracking issue

Updates #14111


codecov bot commented Nov 13, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 92.25%. Comparing base (fa20a0c) to head (2ba8276).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #14160      +/-   ##
==========================================
- Coverage   92.27%   92.25%   -0.02%     
==========================================
  Files         658      658              
  Lines       41184    41184              
==========================================
- Hits        38001    37993       -8     
- Misses       2179     2184       +5     
- Partials     1004     1007       +3     

☔ View full report in Codecov by Sentry.

@mx-psi mx-psi force-pushed the mx-psi/test-out-codspeed branch from 99e20b1 to 7439589 on November 13, 2025 13:33

codspeed-hq bot commented Nov 13, 2025

CodSpeed Performance Report

Congrats! CodSpeed is installed 🎉

🆕 118 new benchmarks were detected.

You will start to see performance impacts in the reports once the benchmarks are run from your default branch.

Detected benchmarks

(benchmark list collapsed in this view)

ℹ️ Only the first 20 benchmarks are displayed. Go to the app to view all benchmarks.

@mx-psi mx-psi force-pushed the mx-psi/test-out-codspeed branch from c3a189c to cc704a6 on November 14, 2025 12:24
@mx-psi mx-psi force-pushed the mx-psi/test-out-codspeed branch from cc704a6 to b7f73a3 on November 14, 2025 12:31
@codspeed-hq

This comment was marked as outdated.

@mx-psi

This comment was marked as outdated.

@GuillaumeLagrange

Hello @mx-psi!

Thank you for all the feedback; we'll make the fixes ASAP and discuss them in dedicated issues.
One thing I spotted here is that you are calling `codspeed run` multiple times within the same job, which we unfortunately do not support and do not plan to support.

The simplest solution would be to use a matrix when running the benchmarks, which would also come with the benefit of parallelizing the runs. There is more info here on how to do this using the action; adapting it to your wrapper should be quite quick.

If you don't want to use a matrix, the second-best solution is to do a single `codspeed run` invocation with a command that runs all the benchmarks you want to measure.

We'll keep you updated on the other issues for more scoped discussions!
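
For illustration, a minimal sketch of what the matrix shape could look like with the upstream CodSpeedHQ/action; the module shards, action versions, and the CODSPEED_TOKEN secret name are assumptions for this sketch, not taken from this PR:

```yaml
# Hypothetical workflow sketch: each matrix entry gets its own job,
# so every job contains exactly one CodSpeed invocation.
name: benchmark
on: [push]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Assumed shards; the real list would follow the repository's module layout.
        modules: ["./receiver/...", "./exporter/...", "./processor/..."]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          # -run='^$' skips unit tests so only benchmarks execute.
          run: go test -bench=. -run='^$' ${{ matrix.modules }}
```

Each shard then surfaces as its own job, which also makes the hybrid grouping mentioned later in this thread straightforward: the matrix entries simply become coarser.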

Based on the last comment and on a local run, I think this could work, though it will probably run out of memory. Let's see.
@mx-psi mx-psi force-pushed the mx-psi/test-out-codspeed branch from ce4e36b to 2ba8276 on November 14, 2025 15:46

mx-psi commented Nov 14, 2025

@GuillaumeLagrange Thank you for the comment! Thanks also for the prompt replies on the GitHub issues :) Let me reply to your comments inline.

One thing I spotted here is that you are calling `codspeed run` multiple times within the same job, which we unfortunately do not support and do not plan to support.

That's helpful to know, thank you! Based on the documentation I assumed that running it multiple times was the only way to support multiple modules, since I (mistakenly) thought that only `go` invocations were accepted as an argument to `codspeed run`. I am trying out a single `codspeed run` invocation (see 2ba8276).

This may allow me to simplify things a lot, thanks for the pointer!!
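
A minimal sketch of that single-invocation shape, assuming the codspeed CLI is already installed on the runner and accepts the benchmark command as its arguments; the job layout and paths are illustrative, not the contents of 2ba8276:

```yaml
# Hypothetical job sketch: one codspeed run invocation wrapping every benchmark.
name: benchmark
on: [push]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - name: Run all Go benchmarks under one CodSpeed invocation
        # -run='^$' skips unit tests so only benchmarks execute.
        # In a multi-module repository the command would need to cover each module,
        # not just ./... from the repository root.
        run: codspeed run go test -bench=. -run='^$' ./...
```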

The simplest solution would be to use a matrix when running the benchmarks, which would also come with the benefit of parallelizing the runs. There is more info here on how to do this using the action; adapting it to your wrapper should be quite quick.

I suspect we would hit GitHub Actions limits on the number of workflows that can be run if we go down this route, but it's definitely something we can consider for performance/disk usage, even if we only do it partially.

@GuillaumeLagrange

You could also go for a hybrid solution where you shard the benchmarks into groups and parallelize, to be less extreme in terms of the number of runners!

I'll update the CodSpeed docs in the GitHub Actions section to explicitly mention the "one `codspeed run` invocation per job" limitation.

@not-matthias

Hey @mx-psi, we've fixed the issue. We're still in the middle of the review and last-minute changes, but if you want to try it you can install it from this branch: cod-1694-allow-configuring-cleanup-of-temporary-files

I've tested it on our fork and it worked: https://github.com/AvalancheHQ/opentelemetry-collector/actions/runs/19370101140/job/55423469254 (see the results here: https://codspeed.io/AvalancheHQ/opentelemetry-collector/runs/6917526da8f3100a25b43f5e)

I reduced the benchmark time from 3s (the default) to 1s (via the `-benchtime` parameter) to avoid having the CI run for a long time. You can do this as well, at the cost of reducing the number of iterations per benchmark. Alternatively, you can run multiple packages with shards.
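
A hedged illustration of that tweak; `-benchtime` is the standard `go test` flag, while the step name and paths here are assumptions rather than the setup from the linked run:

```yaml
# Hypothetical fragment of the benchmark step with the reduced benchtime.
- name: Run benchmarks with a shorter benchtime
  # 1s per benchmark instead of the 3s default mentioned above.
  run: codspeed run go test -bench=. -run='^$' -benchtime=1s ./...
```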

We'll merge this next week, let us know if you have further questions. Have a nice weekend!


mx-psi commented Nov 18, 2025

Thank you! We discussed this among approvers and maintainers and we are excited to try this out. I have subscribed to the releases on codspeed-go and I'll give it a try once it is out :)

@mx-psi mx-psi marked this pull request as ready for review November 19, 2025 15:44
@mx-psi mx-psi requested review from a team, bogdandrutu, dmathieu and dmitryax as code owners November 19, 2025 15:44
