Description
I'm currently working on a fairly big dataset (1414 buildings), and processing is pretty slow, averaging about 1 minute per building (even with Gurobi).
I've run the program twice: once with Gurobi (which was getting stuck every 80 or so buildings) and once with the SCIP solver, which got stuck (again) on the 400th building:
processing 400/1414 building...
- num added vertical planes: 7
- num initial planes: 14
It kept computing this one building for hours; the processor was still busy (not idle), and I don't think I have any control at this point.
After multiple hours I decided to split the dataset into smaller batches for better control, and I will attempt running it that way. If there are any parameters I could adjust or check, I'd gladly learn about them.
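For reference, this is roughly how I'm preparing the batches. It's a minimal Python sketch under the assumption that there is one input file per building in a flat directory; the paths, extension, and batch size are my own choices, not anything the software prescribes.

```python
# Minimal sketch: split per-building input files into batch directories.
# Assumes one file per building (here *.ply); adjust the glob pattern,
# paths, and BATCH_SIZE to match the actual data layout.
from pathlib import Path
import shutil

SRC = Path("buildings")   # directory with one file per building (assumption)
DST = Path("batches")
BATCH_SIZE = 100

files = sorted(SRC.glob("*.ply"))
for i in range(0, len(files), BATCH_SIZE):
    batch_dir = DST / f"batch_{i // BATCH_SIZE:03d}"
    batch_dir.mkdir(parents=True, exist_ok=True)
    for f in files[i:i + BATCH_SIZE]:
        shutil.copy(f, batch_dir / f.name)
```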
I think any of the following improvements would be useful, especially when working on big datasets:
- A user input to skip the building currently being processed.
- A way to identify (in the GUI) which building the software considers to be, e.g., the 400th, so that such problematic cases can be excluded from the dataset.
- A way for the software to save its own progress/partial results, so you can resume from a certain building (a rough sketch of what I have in mind is below).
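For the last point, this is the kind of save/resume behaviour I imagine. It's purely illustrative Python: `reconstruct_building`, the progress file, and the file layout are hypothetical placeholders, not the software's actual API.

```python
# Rough sketch of per-building checkpointing so an interrupted run can resume.
import json
from pathlib import Path

PROGRESS_FILE = Path("progress.json")

def reconstruct_building(path: Path) -> None:
    """Hypothetical stand-in for the real per-building reconstruction step."""
    raise NotImplementedError

def load_done() -> set:
    # Names of buildings already finished in a previous run.
    return set(json.loads(PROGRESS_FILE.read_text())) if PROGRESS_FILE.exists() else set()

def mark_done(done: set, name: str) -> None:
    done.add(name)
    PROGRESS_FILE.write_text(json.dumps(sorted(done)))

def run(building_files) -> None:
    done = load_done()
    for f in building_files:
        if f.name in done:
            continue                 # skip buildings finished in a previous run
        reconstruct_building(f)      # may take minutes (or hang) per building
        mark_done(done, f.name)      # record progress immediately, so an abort can resume here
```

Even just writing each building's result to disk as soon as it is finished, plus a simple "already done" check on startup, would make long runs much less risky.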