
Add automated benchmarks, stress testing, and other analyses #457

Open
@jpsamaroo

Description

Dagger is a complicated set of interacting components and APIs, so it would be very useful to track Dagger's performance, scalability, and latency over time, both to catch unexpected regressions and to make claims about performance and suitability with some confidence.

To that end, I believe it would be valuable, on every merge to master, to:

  • Run the full benchmark suite on various configurations
  • Stress-test under various configurations to find broken or buggy behavior
  • Perform automated profiling to find the current set of performance hotspots
  • Track precompile and loading latency
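To make "catch unexpected regressions" concrete, the comparison step could look something like the sketch below. This is a hypothetical illustration, not an existing Dagger tool: the data format (benchmark name mapped to a median time in seconds) and the `find_regressions` helper are assumptions made for the example.

```python
# Hypothetical sketch: compare two benchmark runs and flag regressions.
# The {name: median_seconds} layout is an assumption made for this
# example; Dagger's actual benchmark output format is not decided here.

def find_regressions(baseline, current, threshold=1.10):
    """Return benchmarks whose current time exceeds the baseline
    by more than `threshold` (1.10 means "more than 10% slower"),
    mapped to their slowdown ratio."""
    regressions = {}
    for name, base_time in baseline.items():
        cur_time = current.get(name)
        if cur_time is not None and cur_time > base_time * threshold:
            regressions[name] = cur_time / base_time
    return regressions

baseline = {"scheduler/submit": 1.00, "dag/build": 0.50}
current = {"scheduler/submit": 1.25, "dag/build": 0.51}
# Flags only "scheduler/submit" (25% slower); "dag/build" is within 10%.
print(find_regressions(baseline, current))
```

A CI job could run such a check against the previous master result and fail (or comment on the commit) when the returned dict is non-empty.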

To make the collected information useful, we should automatically export the associated data to some persistent storage (say, S3) in raw form, together with any generated plots or aggregate metrics. We can use something like https://github.com/SciML/SciMLBenchmarks.jl/blob/84462b8f1e5c974df9f396ca4d9b4900e1108a21/.buildkite/run_benchmark.yml to upload to S3, and then provide a script or code to download and analyze this data.
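Following the SciMLBenchmarks pattern linked above, the upload step in the CI pipeline might look roughly like the fragment below. The bucket name, local paths, and use of the `aws` CLI are placeholders for illustration, not an existing configuration; credentials would need to be provided via the CI environment.

```shell
# Hypothetical CI step, run after the benchmark job on master.
# "dagger-benchmarks" and the local result paths are placeholders.
aws s3 cp results/benchmarks.json \
    "s3://dagger-benchmarks/$BUILDKITE_COMMIT/benchmarks.json"
aws s3 cp plots/ \
    "s3://dagger-benchmarks/$BUILDKITE_COMMIT/plots/" --recursive

# The analysis script would then pull everything back down, e.g.:
aws s3 sync "s3://dagger-benchmarks/" ./all-results
```

Keeping the raw data alongside the plots means later analyses can recompute aggregate metrics without rerunning the benchmarks.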

An extra bonus would be to publish this data to https://daggerjl.ai/ so that we can show off our performance gains over time.
