We present a differentiable predictive control (DPC) methodology for learning dynamics and constrained control policies for unknown nonlinear systems. DPC offers a data-driven approximate solution to the multiparametric programming problems that arise in explicit nonlinear model predictive control (MPC). In contrast to approximate MPC, DPC does not require supervision by an expert controller. Instead, a dynamics model is learned from system measurements, and the control law is then optimized offline via constrained deep learning. The DPC method consists of two sequential steps: i) system identification using a constrained neural state-space model, and ii) optimization of the closed-loop dynamics with an explicit neural control law. The proposed method allows us to optimize the control law directly by backpropagating the gradients of the MPC loss function and constraints through the differentiable closed-loop system dynamics model. We show that DPC can learn stabilizing constrained neural control laws for linear systems with unstable dynamics. Moreover, we give sufficient conditions for asymptotic stability of the closed-loop dynamics under neural feedback laws. We assess the performance of the proposed DPC method in simulation case studies and demonstrate that DPC scales linearly with problem size, in contrast to the exponential scaling of explicit MPC based on classical multiparametric programming solvers. Toward the end of the talk, we discuss limitations of the approach and potential research directions.
Joint work with Aaron Tuor, Elliot Skomski, Soumya Vasisht and Draguna Vrabie.
Join at imt.lu/seminar