Encounters between machine learning and Linear Parameter-Varying systems

18 June 2014
San Francesco - Cappella Guinigi
The theory of linear parameter-varying (LPV) systems has become an attractive framework for modelling the nonlinear, time-varying and position-dependent dynamics commonly exhibited by real systems. A general hallmark of LPV models is that the signal relations are linear, as in the LTI case, but the model parameters are functions of a measurable time-varying signal called the scheduling variable. By treating the scheduling variable as a changing operating condition or an endogenous/free signal of the plant, nonlinear and time-varying phenomena can be embedded in a linear representation. However, despite the advances of LPV control, identification of LPV systems is still not well developed, as numerous fundamental difficulties are present in this model class. A number of central problems revolve around the priors imposed on the assumed model class. For example, accurate parametric identification of LPV systems requires an optimal prior selection of a set of functional dependencies for the parameterization of the model coefficients. An inaccurate selection leads to structural bias, while over-parameterization results in an increased variance of the estimates. This corresponds to the classical bias-variance trade-off, but with a significantly larger degree of freedom and sensitivity in the LPV case. Hence, it is attractive to estimate the underlying model structure of LPV systems from measured data, i.e., to learn the underlying dependencies of the model coefficients together with the model order. This talk gives a brief overview of how encounters of the LPV framework with machine learning tools (e.g., Least-Squares Support Vector Machines and Gaussian Processes) have enabled efficient solutions to these central problems.
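
To make the setting concrete, the following is a minimal sketch (not taken from the talk) of an LPV-ARX input-output model, in which the coefficient functions of the scheduling variable are the dependencies to be learned:

y(k) = \sum_{i=1}^{n_a} a_i\bigl(p(k)\bigr)\, y(k-i) + \sum_{j=0}^{n_b} b_j\bigl(p(k)\bigr)\, u(k-j) + e(k)

Here u, y and p denote the input, output and scheduling variable, and e is a noise term; the orders n_a, n_b and the noise structure shown are illustrative assumptions. In the LS-SVM or Gaussian-process setting referred to above, each coefficient function a_i(\cdot), b_j(\cdot) is represented nonparametrically through a kernel, so its shape is inferred from data rather than fixed a priori.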
Units: DYSCO