
add qp support for highs #3531


Open
wants to merge 28 commits into main

Conversation

quantresearch1
Contributor

Fixes #3381.

Summary/Motivation:

HiGHS can solve quadratic programming (QP) models, which contain an objective term x.T @ Q @ x where the Hessian matrix Q is positive semi-definite.

Changes proposed in this PR:

  • Allow quadratic terms in the objective by changing quadratic to True in generate_standard_repn
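For context, a minimal sketch of the kind of QP this change is meant to support (the import path follows the module touched in this PR, the class name and result attribute follow the names used later in this thread, and the bounds are chosen for illustration):

```python
import pyomo.environ as pyo
from pyomo.contrib.solver.solvers.highs import Highs  # module path per this PR; adjust for your Pyomo version

# Convex QP: minimize 2*x1**2 + x2**2 - x1*x2 with x1, x2 >= 1
m = pyo.ConcreteModel()
m.x1 = pyo.Var(bounds=(1, None))
m.x2 = pyo.Var(bounds=(1, None))
m.obj = pyo.Objective(expr=2 * m.x1**2 + m.x2**2 - m.x1 * m.x2, sense=pyo.minimize)

opt = Highs()
results = opt.solve(m)
print(results.incumbent_objective)  # expected optimum is 2 at x1 = x2 = 1
```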

Legal Acknowledgement

By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

  1. I agree my contributions are submitted under the BSD license.
  2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

Contributor

@emma58 emma58 left a comment

@quantresearch1, thank you for the contribution! As written, this does not yet add QP support for Highs, though: by changing the quadratic flag to True in generate_standard_repn, you move the nonlinear term in your objective from repn.nonlinear_expr to repn.quadratic_vars and repn.quadratic_coefs (and hence bypass the error in line 622 of appsi/solvers/highs.py). To properly send quadratic expressions to Highs, you will need to parse this part of the StandardRepn object and send the result to Highs.

(Your test appears to get lucky--it is actually sending a constant objective to Highs, but the variables happen to land on the values you are expecting. You can see this by adding an assertion that the results object returned by opt.solve(m) agrees with your expected objective value: self.assertEqual(results.best_feasible_objective, 2).)
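For concreteness, here is a rough sketch (not this PR's code) of how the quadratic part of a StandardRepn could be collected into lower-triangular Hessian entries before handing them to Highs. `var_to_col` is a hypothetical map from variable id to HiGHS column index, and the scaling assumes HiGHS's 0.5 * x' Q x objective convention:

```python
from pyomo.repn.standard_repn import generate_standard_repn


def quadratic_entries(obj_expr, var_to_col):
    """Lower-triangular Hessian entries {(row, col): value} for a QP objective.

    Assumes the 0.5 * x' Q x convention, so a Pyomo term c*x**2 maps to
    Q[i, i] = 2*c and c*x*y maps to a single lower-triangle entry Q[i, j] = c;
    verify against the HiGHS documentation before relying on this.
    """
    repn = generate_standard_repn(obj_expr, quadratic=True)
    entries = {}
    for (v1, v2), coef in zip(repn.quadratic_vars, repn.quadratic_coefs):
        i, j = var_to_col[id(v1)], var_to_col[id(v2)]
        row, col = (i, j) if i >= j else (j, i)  # keep only the lower triangle
        val = 2.0 * coef if row == col else coef
        entries[row, col] = entries.get((row, col), 0.0) + val
    # These entries would then be packed, column by column, into the CSC-style
    # (start, index, value) arrays that a passHessian call expects.
    return entries
```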

@quantresearch1
Contributor Author

@emma58 Thanks for the advice. Looking into the Highs C++ code, I did not find a quadratic equivalent to changeColCost, so I think each coefficient update would have to be a call to passHessian, which is not ideal in a persistent setting (assuming many updates). I will think about this a bit more and follow up if I can come up with a decent solution.

raise unittest.SkipTest


class TestBugs(unittest.TestCase):
Contributor Author

@mrmundt I have copied over the test cases that were passing from the legacy appsi_highs. I am not sure whether you wanted to leave the test cases for a later stage; let me know if I should remove them (I do need the new QP test cases, though).


self._solver_model.changeObjectiveSense(sense)
self._solver_model.changeColsCost(n, indices, costs)
self._mutable_objective = _MutableObjective(
Contributor Author

The idea here is that we collect all the mutable objective terms and update them in one pass through _mutable_objective.update().

self.col_idx = col_idx


class _MutableObjective:
Contributor Author

@emma58 I have tried to emulate the new gurobi_persistent and use a similar objective data structure. However, HiGHS's quadratic objective handling is different enough that I could not make it fit as nicely as I had hoped. To avoid calling passHessian on every update, I first check against the last set of coefficients; let me know if you think there's a better way.
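A simplified sketch of that caching idea (illustrative names only, not the PR's class): the Hessian is only re-sent when a mutable coefficient actually changed value.

```python
from pyomo.environ import value


class _QuadraticCoefCache:
    """Illustrative sketch: skip the passHessian call when nothing changed."""

    def __init__(self, quadratic_coefs):
        # quadratic_coefs: wrappers holding a (possibly mutable) Pyomo
        # expression in .expr, as in the _MutableQuadraticCoefficient objects
        self.quadratic_coefs = quadratic_coefs
        self.last_values = [value(c.expr) for c in quadratic_coefs]

    def update(self, rebuild_and_pass_hessian):
        current = [value(c.expr) for c in self.quadratic_coefs]
        if current != self.last_values:
            rebuild_and_pass_hessian()  # e.g. rebuild Q and call passHessian
            self.last_values = current
```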

Contributor

I can't think of a better way. I think this is okay.

@quantresearch1
Contributor Author

I need to look into the Highs unit test failures; they look genuine.

@blnicho blnicho requested a review from michaelbynum March 25, 2025 19:13
@quantresearch1
Contributor Author

quantresearch1 commented Mar 27, 2025

> I need to look into the Highs unit test failures; they look genuine.

I have had a look at the remaining failures and I think they are unrelated to my PR:

  • osx
    INFO: The following extensions were downloaded:
    [FAIL] gjh
    [ OK ] mcpp
  • linux
    Message: 'Significant delay observed waiting to join reader threads, possible output stream deadlock'
    Arguments: ()
    Significant delay observed waiting to join reader threads, possible output stream deadlock
    ------- generated xml file: /home/runner/work/pyomo/pyomo/TEST-pyomo.xml -------
    =========================== short test summary info ============================
    FAILED pyomo/contrib/solver/tests/solvers/test_solvers.py::TestSolvers::test_domain_0_gurobi - RuntimeError: TeeStream: deadlock observed joining reader threads

@jsiirola
Member

> I need to look into the Highs unit test failures; they look genuine.
>
> I have had a look at the remaining failures and I think they are unrelated to my PR:

I believe that those two test failures will be resolved by #3537.

@quantresearch1 quantresearch1 requested a review from emma58 March 28, 2025 00:04
@quantresearch1
Contributor Author

@jsiirola Thanks, I have synced with main and that has helped with a few failures. I am just left with this one failure for Python 3.13 on Linux. It's not clear to me whether this branch is the culprit, but I will try to install Linux over the weekend and debug. Let me know if you think it's unrelated to the branch.
self = <pyomo.common.tee.capture_output object at 0x7f900cbbd6d0>, et = None
ev = None, tb = None

def __exit__(self, et, ev, tb):
    # Check that we were nested correctly
    FAIL = []
    if self.tee.STDOUT is not sys.stdout:
        FAIL.append('Captured output does not match sys.stdout.')
    if self.tee.STDERR is not sys.stderr:
        FAIL.append('Captured output does not match sys.stderr.')
    # Exit all context managers.  This includes
    #  - Restore any file descriptors we commandeered
    #  - Close / join the TeeStream
    #  - Close any opened files
    FAIL.extend(self._exit_context_stack(et, ev, tb))
    sys.stdout, sys.stderr = self.old
    self.old = None
    self.tee = None
    self.output_stream = None
    if FAIL:
      raise RuntimeError("\n".join(FAIL))

E RuntimeError: TeeStream: deadlock observed joining reader threads

pyomo/common/tee.py:380: RuntimeError
----------------------------- Captured stderr call -----------------------------
Significant delay observed waiting to join reader threads, possible output stream deadlock
TeeStream: deadlock observed joining reader threads
------- generated xml file: /home/runner/work/pyomo/pyomo/TEST-pyomo.xml -------
=========================== short test summary info ============================
FAILED pyomo/contrib/solver/tests/solvers/test_solvers.py::TestSolvers::test_equality_4_highs - RuntimeError: TeeStream: deadlock observed joining reader threads

@mrmundt mrmundt self-requested a review April 8, 2025 18:39

codecov bot commented Apr 15, 2025

Codecov Report

Attention: Patch coverage is 96.47059% with 3 lines in your changes missing coverage. Please review.

Project coverage is 88.73%. Comparing base (0b58294) to head (9234f32).
Report is 103 commits behind head on main.

Files with missing lines Patch % Lines
pyomo/contrib/solver/solvers/highs.py 96.47% 3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3531      +/-   ##
==========================================
- Coverage   88.73%   88.73%   -0.01%     
==========================================
  Files         890      890              
  Lines      102194   102268      +74     
==========================================
+ Hits        90682    90747      +65     
- Misses      11512    11521       +9     
Flag Coverage Δ
builders 26.61% <8.23%> (-0.01%) ⬇️
default 84.85% <96.47%> (?)
expensive 34.00% <8.23%> (?)
linux 86.25% <96.47%> (-2.23%) ⬇️
linux_other 86.25% <96.47%> (+<0.01%) ⬆️
osx 76.20% <96.47%> (+0.01%) ⬆️
win 84.74% <96.47%> (-0.01%) ⬇️
win_other 84.74% <96.47%> (-0.01%) ⬇️

@quantresearch1
Contributor Author

Hello,

I see this was scheduled to be released in April but then removed from the release. Were there any issues or concerns I should look into?

@mrmundt
Contributor

mrmundt commented May 13, 2025

Hi @quantresearch1 - we are waiting for one of our other devs to review this. We've had a startlingly small amount of time lately. Thank you for your patience!

@quantresearch1
Contributor Author

Understood, thanks for letting me know

Contributor

@michaelbynum michaelbynum left a comment

Overall, this looks great. I have one major concern and a couple of minor concerns.


for ndx, coef in enumerate(self.quadratic_coefs):
    current_val = value(coef.expr)
    if current_val != self.last_quadratic_coef_values[ndx]:
Contributor

If the set of quadratic variables changes (e.g., the objective changes from x**2 + y**2 to x**2 + z**2), will ndx be in self.last_quadratic_coef_values?

Contributor

Oh, this should be fine because _set_objective should have been called when the objective changed. Can you just make sure there is a test that validates this? There might already be one. I don't remember.
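A test along these lines might look roughly like the sketch below (solver class and result attribute names follow the ones used elsewhere in this thread; the bounds are chosen for illustration):

```python
def test_quadratic_objective_replaced(self):
    m = pyo.ConcreteModel()
    m.x = pyo.Var(bounds=(1, None))
    m.y = pyo.Var(bounds=(2, None))
    m.z = pyo.Var(bounds=(3, None))

    m.obj = pyo.Objective(expr=m.x**2 + m.y**2)
    opt = Highs()
    results = opt.solve(m)
    self.assertAlmostEqual(results.incumbent_objective, 5, places=4)

    # Swap the set of quadratic variables: y drops out, z comes in.
    m.del_component(m.obj)
    m.obj = pyo.Objective(expr=m.x**2 + m.z**2)
    results = opt.solve(m)
    self.assertAlmostEqual(results.incumbent_objective, 10, places=4)
```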

Contributor Author

I have changed that logic. The test that deletes x1 should cover this now.

)

if status != highspy.HighsStatus.kOk:
    logger.warning(
Contributor

I think this should be an exception. Thoughts?
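For concreteness, the exception-based variant could look roughly like this (RuntimeError is just a placeholder for whatever exception type fits Pyomo's conventions):

```python
def _check_highs_status(status, action):
    # Fail loudly instead of logging a warning when a highspy call reports a
    # problem; `action` is a short description used in the error message.
    if status != highspy.HighsStatus.kOk:
        raise RuntimeError(f"HiGHS returned status {status} while {action}")
```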

Contributor Author

@quantresearch1 quantresearch1 Jun 24, 2025

Thinking a bit more about it, pretty much every function in highspy returns a HighsStatus (https://github.com/ERGO-Code/HiGHS/blob/364c83a51e44ba6c27def9c8fc1a49b1daf5ad5c/highs/Highs.h#L962). However, if I look at the current usage in this file, we don't capture the status when calling changeRowBounds, changeCoeff, changeColCost, etc. Just for the sake of consistency, there's an argument for removing this check.



mutable_quadratic_coefficients.append(
    _MutableQuadraticCoefficient(
        expr=coef, row_idx=v1_ndx, col_idx=v2_ndx
Contributor

I don't think you should store the variable indices here. If a variable gets removed from the model, these indices will become incorrect. Note how the _MutableObjectiveCoefficient stores the variable id and the _pyomo_var_to_solver_var_map. That way, it can get the correct index at the time update is called.
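A sketch of this suggested variant of _MutableQuadraticCoefficient (not the PR's actual implementation): store the variable ids plus the id-to-column map, and resolve the indices only when they are needed.

```python
class _MutableQuadraticCoefficient:
    """Illustrative sketch: resolve solver column indices lazily so the
    entries stay correct if variables are removed from the model."""

    def __init__(self, expr, v1_id, v2_id, var_map):
        self.expr = expr        # mutable Pyomo expression for the coefficient
        self.v1_id = v1_id      # id() of the first variable in the term
        self.v2_id = v2_id      # id() of the second variable in the term
        self.var_map = var_map  # e.g. the solver's _pyomo_var_to_solver_var_map

    def indices(self):
        # Look the indices up at update time rather than storing them.
        return self.var_map[self.v1_id], self.var_map[self.v2_id]
```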

Contributor

I realize this is very convoluted, but I'm not sure how to avoid it.

Contributor

It would be great to add a test for this scenario.

Contributor Author

Ah yes, you're right; let me add one.

Contributor Author

I have changed the code so that the dictionary is keyed on the ids of variable 1 and variable 2; I think that should address your concern. These ids only get turned into row/col indices when we build the Hessian. I am not sold on the coefficient class holding the map itself; it's definitely a model property in my mind.

del m.x1
results = opt.solve(m)
self.assertAlmostEqual(m.x2.value, 1, places=4)
self.assertAlmostEqual(results.incumbent_objective, 1, 4)
Contributor Author

@quantresearch1 quantresearch1 Jun 24, 2025

@michaelbynum following your suggestion, I have added a test that deletes one of the variables. GurobiPersistent is failing this test, as it finds the incumbent objective to be 2. I have not really played around with deleting variables before; is it the same as fixing that variable to 0? Because if I remove x1 from 2*x1^2 + x2^2 - x1*x2, I would expect the objective to be 1, since we have x2 >= 1 as a constraint. Not sure what I am missing.
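For reference, a quick arithmetic check of the two readings of "deleting x1" (assuming the pre-deletion optimum had x1 = 1, which matches the objective value of 2 mentioned above):

```python
x2 = 1.0                            # the constraint x2 >= 1 is active
print(x2**2)                        # x1 terms dropped entirely  -> 1.0
x1 = 1.0
print(2 * x1**2 + x2**2 - x1 * x2)  # x1 effectively kept at 1   -> 2.0
```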

@blnicho blnicho moved this from Todo to Review In Progress in July 2025 Release Jun 27, 2025
@@ -18,7 +18,6 @@
from pyomo.common.log import LoggingIntercept
from pyomo.common.tee import capture_output
from pyomo.contrib.appsi.solvers.highs import Highs
from pyomo.contrib.appsi.base import TerminationCondition
Contributor

This import was necessary - the lack of it is what's causing your test failures currently.

Contributor Author

Thank you, not sure what happened there; let me fix it.

Contributor Author

I am at work right now and unfortunately they have locked down pushes from both personal AND enterprise accounts... I will fix it once I'm home.

Labels: None yet
Projects: Status: Review In Progress
Development: Successfully merging this pull request may close: Cannot solve quadratic problem with Highs solver
6 participants