Open
Labels: enhancement (New feature or request)
Description
Currently, there are no tests that guarantee solver convergence. Defining a benchmark suite and running convergence tests on a rolling basis would help ensure that solver behavior remains consistent with expectations.
Ideally, these tests would be driven by a parser file that, given specific inputs (problem, model, solver), automatically generates a Python script to be executed. A test is considered successful if the script runs without errors and the resulting trained model achieves performance above a predefined threshold.
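A minimal sketch of what this could look like, assuming a JSON-style parser file and a hypothetical `run(problem, model, solver)` entry point standing in for the real training/solve call (all names and the config format here are illustrative, not an existing API):

```python
import json
import textwrap

# Hypothetical benchmark entry; the real parser file format is TBD.
CONFIG = json.loads("""
{
    "problem": "rosenbrock",
    "model": "quadratic_surrogate",
    "solver": "gradient_descent",
    "threshold": 0.95
}
""")

SCRIPT_TEMPLATE = textwrap.dedent("""\
    # Auto-generated convergence test for:
    #   problem={problem}, model={model}, solver={solver}
    score = run({problem!r}, {model!r}, {solver!r})
    assert score >= {threshold}, f"score {{score}} below threshold {threshold}"
""")

def generate_script(cfg: dict) -> str:
    """Render one Python test script from a benchmark entry."""
    return SCRIPT_TEMPLATE.format(**cfg)

def run(problem: str, model: str, solver: str) -> float:
    """Stand-in for the real train/solve call; returns a dummy score."""
    return 0.97

script = generate_script(CONFIG)
# "Success" == the generated script executes without raising.
exec(script, {"run": run})
print("convergence test passed")
```

In CI, each parser-file entry would expand to one such script, so a failed assertion (performance below threshold) or a crash surfaces as a failed test for that specific problem/model/solver combination.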