**ProxSuite** is a collection of open-source, numerically robust, precise, and efficient numerical solvers (e.g., LPs, QPs) rooted in revisited primal-dual proximal algorithms.

Through **ProxSuite**, we aim to offer the community scalable optimizers that deal with dense, sparse, or matrix-free problems. While the first targeted application is Robotics, **ProxSuite** can be used in other contexts without restriction.

**ProxSuite** is actively developed and supported by the [Willow](https://www.di.ens.fr/willow/) and [Sierra](https://www.di.ens.fr/sierra/) research groups, joint research teams between [Inria](https://www.inria.fr/en), [École Normale Supérieure de Paris](https://www.ens.fr), and the [Centre National de la Recherche Scientifique](https://www.cnrs.fr), located in France.

**ProxSuite** is already integrated into:

- [CVXPY](https://www.cvxpy.org/) modeling language for convex optimization problems,
$$
\begin{align}
\min_{x \in \mathbb{R}^{n}} \quad & \frac{1}{2} x^{T} H(\theta) x + g(\theta)^{T} x \\
\text{s.t.} \quad & A(\theta) x = b \\
& l(\theta) \leq C(\theta) x \leq u(\theta)
\end{align}
$$

where $x \in \mathbb{R}^n$ is the optimization variable. The objective function is defined by a positive semidefinite matrix $H(\theta) \in \mathcal{S}^n_+$ and a vector $g(\theta) \in \mathbb{R}^n$. The linear constraints are defined by the equality-constraint matrix $A(\theta) \in \mathbb{R}^{n_\text{eq} \times n}$, the inequality-constraint matrix $C(\theta) \in \mathbb{R}^{n_\text{in} \times n}$, and the vectors $b \in \mathbb{R}^{n_\text{eq}}$, $l(\theta) \in \mathbb{R}^{n_\text{in}}$, and $u(\theta) \in \mathbb{R}^{n_\text{in}}$, so that $b_i \in \mathbb{R},~ \forall i = 1,...,n_\text{eq}$, and $l_i \in \mathbb{R} \cup \{ -\infty \}$ and $u_i \in \mathbb{R} \cup \{ +\infty \}, ~\forall i = 1,...,n_\text{in}$.
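As a point of reference, this problem data maps onto ProxSuite's dense ProxQP interface in the Python bindings. The sketch below uses made-up placeholder values for $H(\theta)$, $g(\theta)$, $A(\theta)$, $b$, $C(\theta)$, $l(\theta)$, and $u(\theta)$, chosen only for illustration.

```python
import numpy as np
import proxsuite

# Dimensions: n variables, n_eq equality constraints, n_in inequality constraints.
n, n_eq, n_in = 3, 1, 2

# Placeholder data standing in for H(theta), g(theta), A(theta), b, C(theta), l(theta), u(theta).
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])   # positive semidefinite
g = np.array([1.0, -1.0, 0.5])
A = np.array([[1.0, 1.0, 1.0]])   # equality-constraint matrix
b = np.array([1.0])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0]])  # inequality-constraint matrix
l = np.array([-1.0, -np.inf])     # lower bounds may be -inf
u = np.array([1.0, 2.0])          # upper bounds may be +inf

# Allocate a dense QP workspace, initialize it with the data, and solve.
qp = proxsuite.proxqp.dense.QP(n, n_eq, n_in)
qp.init(H, g, A, b, C, l, u)
qp.solve()

print("primal solution:", qp.results.x)
```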
**QPLayer** is able to learn more structured architectures. For example, $\theta$ can consist of only some elements of $A$ while keeping $b$ fixed (see, e.g., the [example](https://github.com/Simple-Robotics/proxsuite/blob/devel/examples/python/qplayer_sudoku.py) showing how to include **QPLayer** in a learning pipeline). **QPLayer** can also differentiate over LPs. It supports parallelized computations over CPUs and is interfaced with **PyTorch**.
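As a rough illustration of the PyTorch interface, the sketch below follows the usage pattern of the sudoku example linked above. The exact `QPFunction` options, argument order, and return values are assumptions here and may differ across versions; please refer to that example for the authoritative usage.

```python
import torch
from proxsuite.torch.qplayer import QPFunction

n, n_eq, n_in = 3, 1, 2

# Toy problem data; requires_grad on A mimics learning entries of the
# equality-constraint matrix while keeping b fixed, as described above.
Q = torch.eye(n)
p = torch.zeros(n)
A = torch.randn(n_eq, n, requires_grad=True)
b = torch.ones(n_eq)
G = torch.randn(n_in, n)
l = -torch.ones(n_in)
u = torch.ones(n_in)

# Assumed usage (mirroring the sudoku example): QPFunction() builds a differentiable
# layer taking (Q, p, A, b, G, l, u) and returning primal/dual solutions,
# with the primal solution first.
qp_layer = QPFunction()
x = qp_layer(Q, p, A, b, G, l, u)[0]

loss = x.pow(2).sum()
loss.backward()   # gradients flow back to the learnable entries of A
print(A.grad)
```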
### Citing **QPLayer**

If you are using **QPLayer** for your work, we encourage you to [cite the related paper](https://inria.hal.science/hal-04133055v2/).
## Installation procedure

Please follow the installation procedure [here](https://github.com/Simple-Robotics/proxsuite/blob/devel/doc/5-installation.md).