Starts work on MPAX support #166
base: master
Conversation
Hi @PTNobel, great work on this PR and thank you for taking it on! I've left a few questions, mostly because I'm not very familiar with CvxpyLayer. Please feel free to dismiss any of them if they seem too naive.
if len(problem.parameters()) != len(parameters):
    raise ValueError
P, c, A, data = extract_linops_as_csr(problem, solver, kwargs)
Pjax, cjax, Ajax = BCSR.from_scipy_sparse(P.reduced_mat), BCSR.from_scipy_sparse(c), BCSR.from_scipy_sparse(A.reduced_mat)
How often will the from_scipy_sparse method be called? Does it happen only when defining the layer, or every time during the training process?
Only when defining the layer!
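For illustration only (this is not code from the PR; the class and method names are hypothetical), the pattern is: convert with BCSR.from_scipy_sparse once in the constructor, then reuse the stored JAX array on every forward call, so no conversion cost is paid during training.

```python
import numpy as np
import scipy.sparse as sp
from jax.experimental.sparse import BCSR

class LayerSketch:  # hypothetical stand-in for the layer being built here
    def __init__(self, A_scipy):
        # The scipy -> BCSR conversion runs exactly once, at layer
        # definition; it is never repeated during training.
        self.Ajax = BCSR.from_scipy_sparse(A_scipy)

    def forward(self, x):
        # Every call reuses self.Ajax as-is (densified only for this toy demo).
        return self.Ajax.todense() @ x

layer = LayerSketch(sp.eye(3, format='csr'))
print(layer.forward(np.ones(3)))  # [1. 1. 1.]
```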
vec = jnp.hstack(
    list(itertools.chain(
        (params[i].reshape(-1, order='F') for _, i in param_id_to_orig_order.items()),
        (jnp.ones(1),))))  # TODO(PTNobel): Add batching
Yeah. Batch solving should be the key feature in this PR.
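A rough sketch of what that could look like (an assumption on my part, not code from this PR): wrap the per-instance packing in jax.vmap so each parameter carries a leading batch axis.

```python
import jax
import jax.numpy as jnp

def pack_params(params):
    # Pack one problem instance's parameters column-major, with the
    # trailing constant 1 (mirroring the hunk above).
    return jnp.hstack([p.reshape(-1, order='F') for p in params] + [jnp.ones(1)])

# vmap maps pack_params over a leading batch axis on every parameter.
batched_pack = jax.vmap(pack_params)

batch = (jnp.zeros((5, 3)), jnp.zeros((5, 2, 2)))  # batch of 5 instances
print(batched_pack(batch).shape)  # (5, 8): 3 + 4 + 1 entries per instance
```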
else:
    raise ValueError('Invalid MPAX algorithm')
solver = alg(warm_start=self.warm_start, **options)
self.solver = jax.jit(solver.optimize)
Maybe we don't need to re-jit during the training process, since the size of the problems should be the same?
I don't follow; will JAX automatically reuse the compiled code for problems of the same size?
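For what it's worth, this is general jax.jit behavior rather than anything PR-specific: the first call with a given combination of input shapes and dtypes triggers tracing and XLA compilation, and the resulting executable is cached, so jitting solver.optimize once at construction means every later call with the same problem size reuses the compiled code.

```python
import jax
import jax.numpy as jnp

@jax.jit
def step(A, b):
    # Traced and compiled on the first call for this (shape, dtype)
    # signature; later calls with the same signature hit the cache.
    return jnp.linalg.solve(A, b)

A, b = jnp.eye(3), jnp.ones(3)
step(A, b)        # compiles
step(A, b + 1.0)  # same shapes and dtypes: no recompilation
```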
self.solver = jax.jit(solver.optimize)
self.output_slices = output_slices

def jax_to_data(self, quad_obj_values, lin_obj_values, con_values):  # TODO: Add broadcasting (will need jnp.tile to tile structures)
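On the broadcasting TODO, a minimal illustration of the jnp.tile idea (my reading of the comment, not the PR's actual plan): replicate a per-problem vector across a new leading batch dimension.

```python
import jax.numpy as jnp

c = jnp.arange(3.0)              # one problem instance's vector
c_batched = jnp.tile(c, (4, 1))  # replicated across a batch of 4 -> shape (4, 3)
```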
Which parts of the model can be differentiated in CvxpyLayer? The matrix A, or the vectors b and c?
All of it.
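Concretely, along the lines of the example in the cvxpylayers README (paraphrased from memory, so treat the details as approximate): every declared cp.Parameter is differentiable, e.g. both A and b below.

```python
import cvxpy as cp
import jax
from cvxpylayers.jax import CvxpyLayer

n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

kA, kb = jax.random.split(jax.random.PRNGKey(0))
A_jax = jax.random.normal(kA, (m, n))
b_jax = jax.random.normal(kb, (m,))

# Differentiate a scalar function of the solution w.r.t. both parameters.
loss = lambda A_, b_: layer(A_, b_)[0].sum()
gradA, gradb = jax.grad(loss, argnums=(0, 1))(A_jax, b_jax)
```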
@ZedongPeng this is the initial code for MPAX support. It doesn't work yet; I'm hoping to continue it this week.