Is your feature request related to a problem? Please describe.
Currently, performance benchmarks are run locally in an IDE or with a script. It would be nice to have a documented way to do this (#763) to make sharing and reproducing these benchmarks easier. This would also help with more advanced goals such as regression testing.
Describe the solution you'd like
airspeed velocity (asv) seems to be the best pick: it is well documented and there are existing examples to draw from. Caching and loading small and large datasets so that the analysis is meaningful is probably the next step. Roughly, this involves:

- choosing a framework
- caching synthetic blobs datasets of various sizes for quick testing
- adding some small, quick tests, such as importing the SpatialData library and reading and writing a dataset (a sketch follows this list)
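For illustration, here is a minimal sketch of what such an asv benchmark module could look like. The asv conventions (`time_`/`timeraw_` prefixes, per-class `setup`/`teardown`) are standard asv, but the spatialdata calls (`datasets.blobs`, `SpatialData.write`, `read_zarr`) are assumptions about the public API and may need adjusting:

```python
# benchmarks/benchmarks.py -- a minimal sketch of an asv benchmark module.
# NOTE: the spatialdata calls below (datasets.blobs, SpatialData.write,
# read_zarr) are assumptions about the public API and may need adjusting.
import tempfile
import uuid
from pathlib import Path


class ImportSuite:
    def timeraw_import_spatialdata(self):
        # asv's ``timeraw_`` benchmarks execute the returned statement in a
        # fresh process, so the import is not already cached.
        return "import spatialdata"


class WriteSuite:
    def setup(self):
        from spatialdata.datasets import blobs  # assumed synthetic demo dataset

        self.sdata = blobs()
        self.tmpdir = tempfile.TemporaryDirectory()

    def teardown(self):
        self.tmpdir.cleanup()

    def time_write(self):
        # Write to a unique path so repeated timing calls do not collide.
        path = Path(self.tmpdir.name) / f"{uuid.uuid4().hex}.zarr"
        self.sdata.write(path)


class ReadSuite:
    def setup(self):
        from spatialdata.datasets import blobs  # assumed synthetic demo dataset

        self.tmpdir = tempfile.TemporaryDirectory()
        self.path = Path(self.tmpdir.name) / "blobs.zarr"
        blobs().write(self.path)

    def teardown(self):
        self.tmpdir.cleanup()

    def time_read(self):
        import spatialdata as sd

        sd.read_zarr(self.path)
```

Once an `asv.conf.json` is in place, this could be run locally with `asv run` and compared across commits with `asv continuous`.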
Describe alternatives you've considered
This discussion has taken place for many other projects. Here are the most popular frameworks:
Not immediately a priority, but it would be nice to know the direction to work toward. Thoughts on asv, @LucaMarconato @Czaki? I tried it out and it seems suitable.
The main problem with benchmarks is reproducibility.
Benchmark output is very sensitive to the machine used to run it.
This is why napari uses asv, and based on that setup I made a PR to napari-spatialdata. asv runs the benchmarks on two commits and compares them, which makes the result independent of differences between machines.
So if benchmarks are tracked with an external service (like pytest-benchmark + codespeed), there may be multiple false positive/negative reports caused by changes to the machine used for the tests.
The obvious workaround is to have a dedicated machine used only for running benchmarks, with automated updates disabled. Each time something on that machine is updated (e.g. the NumPy or Python version), the performance results for the historical measurement points also need to be recomputed.
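For reference, the two-commit comparison described above is asv's `continuous` workflow: both revisions are built and benchmarked back to back on the same machine, so machine drift between runs does not skew the result. The branch names below are placeholders:

```shell
# Benchmark the base branch and the current commit in one run on the same
# machine, then report any regressions between the two.
asv continuous main HEAD
```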