
Seeking feedback: improved PR benchmark workflow #121

@mweberxyz

After spending time looking at the implementation of the PR benchmark workflow and the in-tree benchmarks in fastify/point-of-view, fastify/fastify, and fastify/workflows, I've managed to pull it all together. Hopefully the result is more usable, has less code duplication, and gives PR reviewers more actionable information.

I feel like things are in a stable place now, so I'm seeking feedback on whether this will be valuable to reviewers, on my approach, and on the changes needed in each of the three repos. Open to any and all feedback before I spend any more time on it.

Demo

fastify

(screenshot of the benchmark results comment on a fastify PR, 2024-02-25)

point-of-view

(screenshot of the benchmark results comment on a point-of-view PR, 2024-02-25)

Required changes

fastify/workflows: mweberxyz@9cee011
fastify/fastify: mweberxyz/fastify@053330e
fastify/point-of-view: mweberxyz/point-of-view@07b17c8

Sample PRs

fastify/fastify: PR from fork

mweberxyz/fastify#3

Merges code from a fork into my fork, to demonstrate that the "base" benchmarks are run against the target of the PR. It also shows warnings in the comment, because the "head" (PR) branch does not run parser.js correctly and all requests return 404s.

fastify/point-of-view: PR from same repo with performance degradation

mweberxyz/point-of-view#5

Merges code from a branch into the default branch of the same fork. It reverts a performance improvement, to demonstrate what it looks like when a PR really tanks performance.

Approach

  • Everything needed to run the benchmarks, parse the results, post the comment, and remove the benchmark label is contained in the re-usable workflow (see the sketches after this list)
    • Re-usable workflows and custom actions each come with their pros and cons. In the end, I decided to keep the entirety of the logic in a re-usable workflow for ease of maintenance, though I admit the JS-in-YAML is a bit unwieldy.
    • Some benefits: we don't need to pass around GITHUB_TOKEN, we avoid a build step, and it fits in better with the rest of the workflows already defined in this repo
  • Each file in the benchmarks-dir input directory is executed (except any file listed in the files-to-ignore input), then autocannon is run 3 times* for 5 seconds* each against each file, taking the maximum mean req/s of the three runs as the result
    • Once all benchmark runs complete, a results table is posted to the PR as a comment
    • Any results that differ between base and head by 5% or more are bolded
  • Autocannon is loaded via npx
  • Autocannon's --on-port is used to spawn the benchmark scripts
    • this removes the need for the logic currently in fastify/point-of-view/benchmark.js
  • Selection of Node versions is moved to the node-versions input
    • the current fastify/workflows workflow uses different Node versions for benchmarks than fastify/fastify currently does
  • Static outputs removed, results moved to GHA artifacts
    • in part because of the previous point, but also to keep a history of these runs over time
  • If any benchmark needs additional autocannon arguments, they can be defined in a comment in the benchmark file itself
    • example: examples/benchmark/parser.js in fastify/fastify
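
To give a feel for the caller side, here is a minimal sketch of how a consuming repo might call the re-usable workflow. Only the benchmarks-dir, files-to-ignore, and node-versions inputs are described above; the workflow file path, trigger, label name, and example values are assumptions:

```yaml
# Hypothetical caller in a consuming repo, e.g. .github/workflows/benchmark-pr.yml
name: Benchmark PR

on:
  pull_request:
    types: [labeled]

jobs:
  benchmark:
    # label name is an assumption; the re-usable workflow removes it when done
    if: github.event.label.name == 'benchmark'
    uses: fastify/workflows/.github/workflows/plugins-benchmark-pr.yml@main
    with:
      benchmarks-dir: benchmarks    # every file in here is benchmarked
      files-to-ignore: utils.js     # skipped by the workflow
      node-versions: '["20"]'       # value format is an assumption
```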
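
And a rough sketch of how a single benchmark file is driven inside the re-usable workflow. The step layout, result file, and extra flags are assumptions; the npx usage, --on-port, the 3 runs of 5 seconds, and taking the maximum mean req/s come from the notes above:

```yaml
# One benchmark file, one Node version (sketch, not the exact workflow code)
- name: Run benchmark (head)
  run: |
    # autocannon spawns the benchmark script itself via --on-port, so no
    # server-management logic is needed in the benchmark file; any extra
    # flags declared in a comment inside the file would be appended here
    for i in 1 2 3; do
      npx autocannon -d 5 -j --on-port / -- node benchmarks/simple.js \
        >> results-head.ndjson
    done
    # the workflow keeps the maximum mean req/s of the three runs
```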

Lessons

  • fix(plugins-benchmark-pr): run comparison benchmark against target #120 still has potentially incorrect logic
    • When commits land on main after the PR is created, github.event.pull_request.base.sha is not updated; that is, the base benchmarks always run against the main commit as of the time the PR was opened.
    • Fixed in the POC by using github.event.pull_request.base.ref instead (see the sketch after this list)
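
A sketch of that fix, assuming the base comparison is prepared with actions/checkout (surrounding job omitted):

```yaml
# Check out the current tip of the PR's target branch rather than the sha
# recorded when the PR was opened
- name: Check out base branch
  uses: actions/checkout@v4
  with:
    # base.sha is frozen at PR-creation time; base.ref tracks the target branch
    ref: ${{ github.event.pull_request.base.ref }}
```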

Future work

  • Test error and failure states more extensively
    • Add input-configurable timeout-minutes to the benchmark steps
    • Correctly handle when a PR adds or removes a benchmark
  • Experiment with self-hosted runners as a strategy to reduce run-to-run variance
  • Make the benchmark run count and benchmark duration input-configurable (see the asterisks in Approach above)
  • Factor out logic used in GHA to be usable by developers locally
    • It would be nice, as a developer, to run npm run benchmark locally and see the same type of output
