It would be useful to have benchmarks that run with all (or most) solvers, covering different kinds of problems, different domains, and different sizes, to show the performance differences between the solvers in good_lp and how feasible each one is for a given class of problem.
These benchmarks could also double as additional test cases.
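As a rough sketch (not part of good_lp today), such a benchmark could parameterise the problem size and time each enabled solver on the same model. The example below uses Criterion and good_lp's `default_solver`; the benchmark names and sizes are made up, and the exact good_lp calls follow its documented patterns but may need small adjustments:

```rust
// Hypothetical Criterion benchmark sketch: builds an LP whose size is
// parameterised by `n` and times how long the currently enabled
// `default_solver` takes to solve it. The same harness could be repeated
// once per solver feature (coin_cbc, highs, microlp, clarabel, ...).
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
use good_lp::{default_solver, variable, Expression, ProblemVariables, Solution, SolverModel};

/// Build and solve a toy LP with `n` variables:
/// maximise sum(x_i) subject to 0 <= x_i <= i.
fn solve_dense_lp(n: usize) -> f64 {
    let mut vars = ProblemVariables::new();
    let xs: Vec<_> = (0..n)
        .map(|i| vars.add(variable().min(0).max(i as f64)))
        .collect();
    let mut objective = Expression::default();
    for &x in &xs {
        objective += x;
    }
    let solution = vars
        .maximise(objective)
        .using(default_solver) // swap in a specific solver per benchmark variant
        .solve()
        .expect("the toy problem is always feasible");
    // Return the objective value so the work cannot be optimised away.
    xs.iter().map(|&x| solution.value(x)).sum()
}

fn bench_default_solver(c: &mut Criterion) {
    for n in [10usize, 100, 1_000] {
        c.bench_with_input(BenchmarkId::new("dense_lp", n), &n, |b, &n| {
            b.iter(|| solve_dense_lp(n))
        });
    }
}

criterion_group!(benches, bench_default_solver);
criterion_main!(benches);
```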
Activity
lovasoa commented on Dec 2, 2024
That would be great! Do you want to make it?
Specy commented on Dec 2, 2024
I made this issue mostly as a tracking issue for future work on implementing this. I don't currently have time to add it, but a good start would be finding test cases to run, together with their models and expected results.
Maybe we could take some tests from the repositories of the other solvers, a handful from each? Or, if anyone knows where to find a dataset of models to use in benchmarks, that would help too.
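One possible shape for such test cases, sketched below: pair each model with its known optimal objective and check it against every enabled backend. All names here (the test file, the feature flags, the solver entry points) are assumptions based on how good_lp's solver features are exposed, not existing code:

```rust
// Hypothetical solver-agnostic regression test, e.g. in tests/solver_cases.rs
// (file name made up). The model and its hand-checked optimum live in one
// function that is generic over the solver; a feature-gated test calls it
// once per backend.
use good_lp::{constraint, variable, ProblemVariables, Solution, Solver, SolverModel};

/// A tiny LP with a known optimum; for real cases, `expected` would come
/// from the dataset or solver repository the model was taken from.
fn check_small_lp<S: Solver>(solver: S) {
    let mut vars = ProblemVariables::new();
    let a = vars.add(variable().min(0).max(4));
    let b = vars.add(variable().min(0).max(3));
    let solution = vars
        .maximise(3.0 * a + 2.0 * b)
        .using(solver)
        .with(constraint!(a + b <= 5))
        .solve()
        .expect("the test model is feasible");
    let objective = 3.0 * solution.value(a) + 2.0 * solution.value(b);
    let expected = 14.0; // hand-checked optimum: a = 4, b = 1
    assert!((objective - expected).abs() < 1e-6);
}

#[cfg(feature = "highs")]
#[test]
fn small_lp_highs() {
    check_small_lp(good_lp::highs);
}

#[cfg(feature = "clarabel")]
#[test]
fn small_lp_clarabel() {
    check_small_lp(good_lp::clarabel);
}
```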
lovasoa commented on Dec 2, 2024
Maybe use https://sites.google.com/usc.edu/distributional-miplib/home ?
Specy commented on Dec 2, 2024
That is definitely a good place to get some problems.
We also need some continuous-only problems so we can include Clarabel too.
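For reference, a minimal continuous-only model along these lines (a sketch only: it assumes the crate is built with the `clarabel` feature, and the `good_lp::clarabel` entry point is an assumption based on how the other solver features are exposed):

```rust
// Continuous-only LP sketch for the Clarabel backend: no integer or binary
// variable definitions, so the model stays within what a convex solver handles.
use good_lp::{clarabel, constraint, variable, ProblemVariables, Solution, SolverModel};

fn main() {
    let mut vars = ProblemVariables::new();
    let x = vars.add(variable().min(0));
    let y = vars.add(variable().min(0));
    let solution = vars
        .minimise(2.0 * x + 3.0 * y)
        .using(clarabel)
        .with(constraint!(x + y >= 10))
        .solve()
        .expect("feasible continuous LP");
    println!("x = {}, y = {}", solution.value(x), solution.value(y));
}
```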