
# parcels-benchmarks


This repository houses real-world performance benchmarks for Parcels, comparing v4 against v3.

## Development instructions

- install Pixi
- `pixi install`
- `pixi run benchmarks` (not functional yet)

You can run linting with `pixi run lint`.

> [!IMPORTANT]
> The platform on which you run the benchmarks matters: some platforms may not have access to the data the benchmarks require, and performance will vary with the platform's compute and memory resources.

## Running benchmarks

Presently, you can run the benchmarks with `pixi run python ./run_benchmarks`. This main program takes care of downloading all data required by the benchmarks and attempts to capture all relevant system information, including CPU, memory, and disk details.
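
For a sense of what such a capture can involve, here is a minimal Python sketch; `psutil` and the field names are illustrative assumptions, not necessarily what `run_benchmarks` uses internally:

```python
# Illustrative sketch of capturing system information in Python.
# NOTE: psutil and these field names are assumptions for illustration,
# not necessarily what run_benchmarks does internally.
import platform

import psutil


def capture_system_info(data_dir: str = ".") -> dict:
    """Collect CPU, memory, and disk details for a benchmark record."""
    return {
        "platform": platform.platform(),
        "cpu": platform.processor(),
        "cpu_count": psutil.cpu_count(logical=True),
        "memory_total_bytes": psutil.virtual_memory().total,
        # disk usage of the filesystem that hosts the benchmark data
        "disk_total_bytes": psutil.disk_usage(data_dir).total,
    }
```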

> [!NOTE]
> When capturing disk information, we only consider the physical hardware that hosts the benchmark data. The location of the benchmark data can be changed via the `os_cache_parentdir` field defined in `benchmarks.json`.
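
As a rough sketch of how data could be fetched into such a configurable cache directory with Pooch (the `os_cache_parentdir` field comes from `benchmarks.json`; the URL, the `benchmarks` key, and the surrounding logic are assumptions for illustration):

```python
# Illustrative sketch: download benchmark data with Pooch into a
# configurable cache directory. The "benchmarks" key and the URL are
# assumptions; only os_cache_parentdir is taken from the note above.
import json
from pathlib import Path

import pooch

config = json.loads(Path("benchmarks.json").read_text())
cache_dir = Path(config["os_cache_parentdir"]).expanduser() / "parcels-benchmarks"

for bench in config["benchmarks"]:
    pooch.retrieve(
        url=f"https://example.org/data/{bench['file']}",  # placeholder URL
        known_hash=None,  # hashes are currently unused (see below)
        fname=bench["file"],
        path=cache_dir,
        processor=pooch.Unzip(),  # unpack the .zip archive after download
    )
```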

Running the benchmarks appends results to the `benchmark_results.jsonl` file included in this repository.
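
Since the file is JSON Lines, each line is a self-contained JSON record, so results can be inspected with a few lines of Python:

```python
# Minimal sketch for reading the appended results back in. JSON Lines
# stores one JSON object per line, so the file can be parsed line by line.
import json
from pathlib import Path

with Path("benchmark_results.jsonl").open() as fh:
    results = [json.loads(line) for line in fh if line.strip()]

print(f"{len(results)} benchmark runs recorded")
```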

## Contributing benchmark runs

Parcels developers and maintainers currently track benchmark performance in a centralized Google Sheet. Visualizations of the results can be seen in the (public) Parcels Benchmarks Looker Studio report. The aim of this data collection is to give Parcels users a feel for the expected runtimes and memory requirements of the various Parcels benchmarks. Tracking this data over time also helps Parcels developers ensure that performance regressions are not introduced as development progresses.

Developers and maintainers with write access to the centralized Google Sheet can push their benchmark results by running the benchmarks locally with

```
pixi run python ./run_benchmarks --upload
```

## Adding benchmarks

You can add a benchmark by including a Python script in the `benchmarks/` subdirectory. You will also need to add the benchmark's details to `benchmarks.json`. Each benchmark entry has the following items:

```jsonc
{
    "name": "MOi-curvilinear",                          // Name of the benchmark
    "path": "benchmarks/benchmark_moi_curvilinear.py",  // Path, relative to the root of this repository, to the benchmark script
    "file": "Parcels_Benchmarks_MOi_data.zip",          // Path, relative to the data_url, to the .zip file containing the benchmark data
    "known_hash": "f7816d872897c089eeb07a4e32b7fbcc96a0023ef01ac6c3792f88d8d8893885"  // Pooch hash of the zip file (currently unused and not required)
},
```
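
For orientation, the benchmark script referenced by `path` is a plain Python file. A hypothetical skeleton is sketched below; the structure and names here are assumptions, so mirror an existing script in `benchmarks/` for the actual conventions:

```python
# benchmarks/benchmark_example.py -- hypothetical skeleton, not the
# actual interface expected by run_benchmarks. Mirror an existing
# script in benchmarks/ rather than this sketch.
import time


def main() -> None:
    start = time.perf_counter()
    # ... set up a Parcels FieldSet/ParticleSet and execute the run ...
    elapsed = time.perf_counter() - start
    print(f"benchmark completed in {elapsed:.2f} s")


if __name__ == "__main__":
    main()
```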

## Data availability

Data for the benchmarks is hosted at...
