
Clarification: DiffMPC Does Not Require Large KKT Inversion — Runtime Comparison in PDP May Be Misleading #5

Open
@BinghengNUS


Hi Dr. Jin,

Thanks for your impactful work on Pontryagin Differentiable Programming (PDP). I would like to offer a quick technical clarification: the PDP paper and the accompanying PhD thesis state that DiffMPC (Amos et al., NeurIPS 2018) requires inverting a large KKT matrix and therefore has O(T^2) complexity. In fact, DiffMPC computes gradients by differentiating through the fixed point of the solver using a Riccati recursion, which runs in time linear in the horizon T when the gradient of a scalar loss is needed; no full KKT matrix is ever formed or inverted. This point is further illustrated in a recent paper (Differentiable Robust MPC, RSS 2024).
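To make the complexity point concrete, here is a minimal NumPy sketch of the backward Riccati recursion that underlies both the forward LQR solve and (in the same structural form) DiffMPC's backward pass. The dynamics and cost matrices below are illustrative values I made up for a double integrator, not code from either repository. Each of the T steps costs O(n^3) in the state dimension n, so the whole recursion is linear in the horizon; no (Tn x Tn) KKT system is ever assembled.

```python
import numpy as np

def riccati_lqr(A, B, Q, R, T):
    """Backward Riccati recursion for a finite-horizon, time-invariant LQR.

    Total cost is O(T * n^3): one small solve per time step, linear in
    the horizon T. No stacked (T*n x T*n) KKT matrix is built or inverted.
    """
    P = Q.copy()          # terminal value-function matrix
    gains = []
    for _ in range(T):
        # feedback gain K_t from a small (m x m) solve
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # value-function update
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]     # gains ordered t = 0, ..., T-1

# Illustrative double-integrator example (hypothetical values).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

Ks = riccati_lqr(A, B, Q, R, T=50)   # 50 steps, each O(n^3)
```

In DiffMPC, the backward pass for a scalar loss solves another LQR problem with this same recursion, with the incoming loss gradient playing the role of the cost residuals, which is why the scalar-gradient cost stays linear in T.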

This distinction matters because PDP is often cited as the faster method. While PDP offers important advantages when full Jacobians of the trajectory with respect to the parameters are needed, DiffMPC is actually more efficient in the typical setting of backpropagating a scalar loss. I hope this helps clarify the comparison for future readers and practitioners.

Please feel free to correct me if I’ve misunderstood any part of the comparison.
Best,
Dr. Bingheng Wang
