Andreas Themelis


I am a Ph.D. student at IMT Lucca (Italy), enrolled in the Computer, Decision and Systems Science (CDSS) track, XXIX cycle, within the Dynamical Systems, Control, and Optimization (DYSCO) research group under Prof. Alberto Bemporad, and at KU Leuven (Belgium), in the Department of Electrical Engineering (ESAT), within the Center for Dynamical Systems, Signal Processing and Data Analytics (STADIUS) research group under Prof. Panagiotis Patrinos.

 

Education

I received my Bachelor's degree in Mathematics in April 2010 with the thesis "Representations of the General Linear Group and Young tableaux", supervised by Prof. Giorgio Ottaviani, and my Master's degree in Mathematics in April 2013 with the thesis "Development of the theory of probability “foldings” for the study of the convergence of stochastic processes", supervised by Prof. Alberto Gandolfi, both from the University of Florence (Italy).

 

Overview & research interests

Today's world is experiencing an ever-increasing demand for efficiency and speed in solving problems. The notion of “problem” here is as broad as it sounds, covering fields ranging from engineering (control, embedded MPC, the automotive industry, signal processing, image analysis, …) to data-driven sciences (machine learning, data mining, statistics, …) and finance (investment management, market forecasting, …), to name a few. All these fields are themselves broad, as they underpin countless real-world applications: cars, computers, hospital equipment, banking systems and so on.
What unifies all these “problem” instances is their mathematical formulation, namely the minimization of a “cost” function, and Optimization is the science that addresses such problems.
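Schematically, and in the composite form recurring throughout the papers listed below, such problems can be written as

```latex
\min_{x \in \mathbb{R}^n} \; f(x) + g(x),
```

where f is a smooth cost and g a possibly nonsmooth term encoding, for instance, constraints or sparsity-inducing penalties.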

The challenge of my research is to derive optimization methods that are suitable for most applications without trading off performance. Owing to my background in pure mathematics, I work mainly on the theoretical aspects, which involve advanced mathematics and are fundamental for deriving convergence guarantees and best performance under the least conservative assumptions, with the aim of providing competitive optimization schemes applicable to the widest possible range of real-world problems.

 

Publications

Below is a list of papers that are published, accepted for publication, or currently under review. A more complete list, including talks and other works, can be found on my Google Scholar page.

Journal papers

  1. A. Themelis, L. Stella and P. Patrinos. Douglas-Rachford splitting and ADMM for nonconvex optimization: new convergence results and accelerated versions. arXiv: https://arxiv.org/abs/1709.05747
    (accepted for publication in IEEE Transactions on Automatic Control in July 2017)
  2. L. Stella, A. Themelis and P. Patrinos. Newton-type alternating minimization algorithm for convex optimization (2017).
    (accepted for publication in IEEE Transactions on Automatic Control in July 2017)
  3. A. Themelis and P. Patrinos. SuperMann: a superlinearly convergent algorithm for finding fixed points of nonexpansive operators, arXiv e-prints (2016). arXiv: https://arxiv.org/abs/1609.06955
    (submitted to IEEE Transactions on Automatic Control in January 2017)
  4. A. Themelis, L. Stella and P. Patrinos. Forward-backward envelope for the sum of two nonconvex functions: further properties and nonmonotone line-search algorithms, arXiv e-prints (2016). arXiv: http://arxiv.org/abs/1606.06256
    (submitted to the SIAM Journal on Optimization in June 2016)
  5. L. Stella, A. Themelis and P. Patrinos. Forward-backward quasi-Newton methods for nonsmooth optimization problems, Computational Optimization and Applications (2017). http://link.springer.com/article/10.1007/s10589-017-9912-y

Conference proceedings

  1. L. Stella, A. Themelis, P. Sopasakis and P. Patrinos. A simple and efficient algorithm for nonlinear model predictive control. To appear in the Proceedings of the 56th IEEE Conference on Decision and Control (CDC), Melbourne, Australia, 2017.
  2. P. Sopasakis, A. Themelis, J. Suykens and P. Patrinos. A primal-dual line search method and applications in image processing. To appear in the Proceedings of the 25th European Signal Processing Conference (EUSIPCO), pp. 1100-1104, Kos, Greece, 2017.
  3. A. Themelis, S. Villa, P. Patrinos and A. Bemporad. Stochastic gradient methods for stochastic model predictive control, Proceedings of the 2016 European Control Conference (ECC), pp. 154-159, Aalborg, Denmark, 2016. doi: 10.1109/ECC.2016.7810279. URL: http://ieeexplore.ieee.org/document/7810279/

 

Related software

Below is a list of software packages developed by others based on (some of) the publications above. More information can be found at the KUL-ForBES GitHub repository and on Lorenzo Stella's page.
 

ForBES (Forward-Backward Envelope Solver)

by Lorenzo Stella and Panos Patrinos

Based on
        Newton-type alternating minimization algorithm for convex optimization
        Forward-backward envelope for the sum of two nonconvex functions: further properties and nonmonotone line-search algorithms

Description
A MATLAB solver for nonsmooth optimization, with a library of mathematical functions to formulate problems arising in control, machine learning, image and signal processing.
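
The papers behind ForBES revolve around forward-backward splitting, i.e., the proximal gradient iteration x⁺ = prox_γg(x − γ∇f(x)) for minimizing f + g, with f smooth and g admitting a cheap proximal map. As background, here is a minimal, self-contained Python sketch of that base iteration on a toy lasso instance; the problem data and names are made up for illustration, and this is not ForBES's MATLAB interface.

```python
import numpy as np

# Plain forward-backward (proximal gradient) iteration for
#   minimize f(x) + g(x),   f smooth, g nonsmooth with cheap prox.
# Toy instance (lasso): f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
# Illustration only: not ForBES's API; all data below are made up.

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1

def grad_f(x):
    # Gradient of the smooth least-squares term.
    return A.T @ (A @ x - b)

def prox_g(x, gamma):
    # Soft-thresholding: proximal map of gamma*lam*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step size 1/L, L = Lipschitz modulus of grad_f
x = np.zeros(100)
for _ in range(500):
    x = prox_g(x - gamma * grad_f(x), gamma)  # forward (gradient) + backward (prox) step
```

Roughly speaking, the forward-backward envelope papers listed above replace these plain steps with Newton-type updates on an equivalent smooth reformulation, keeping the same gradient-plus-prox oracle per iteration while gaining fast local convergence.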
 

SuperSCS (Superlinear Splitting Conic Solver)

by Pantelis Sopasakis and Panos Patrinos

Based on
        SuperMann: a superlinearly convergent algorithm for finding fixed points of nonexpansive operators

Description
SuperSCS is being developed on top of SCS, a splitting conic solver for convex optimization problems described in:
B. O'Donoghue, E. Chu, N. Parikh and S. Boyd. Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding, JOTA, vol. 169, pp. 1042-1068 (2016). https://link.springer.com/article/10.1007/s10957-016-0892-3

SuperSCS implements the SuperMann scheme on the fixed-point iterations of SCS. Thanks to (limited-memory) quasi-Newton directions computed online, it reaches high-accuracy solutions with superlinear convergence rates.
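
As background for the SuperMann paper above: SuperMann builds on the classical Krasnosel'skii-Mann iteration x⁺ = x + λ(Tx − x) for a nonexpansive operator T, blending these relaxed steps with safeguarded quasi-Newton directions on the fixed-point residual R(x) = x − Tx. Below is a minimal Python sketch of the plain Krasnosel'skii-Mann scheme on a toy firmly nonexpansive operator; the operator and constants are made up for illustration and are unrelated to SCS's internals.

```python
import numpy as np

# Plain Krasnosel'skii-Mann (KM) iteration x+ = x + lam*(T(x) - x) for a
# nonexpansive operator T. SuperMann accelerates exactly this kind of scheme
# by steering it with quasi-Newton directions on the residual R(x) = x - T(x).

c = np.array([1.0, -2.0, 3.0])  # made-up data; the unique fixed point of T below

def T(x):
    # Proximal map of f(z) = 0.5*||z - c||^2 (with unit step size):
    # a firmly nonexpansive operator, prox_f(x) = (x + c)/2.
    return 0.5 * (x + c)

lam = 0.8      # relaxation parameter; (0, 2) is admissible for firmly nonexpansive T
x = np.zeros(3)
for _ in range(100):
    r = x - T(x)                    # fixed-point residual R(x)
    if np.linalg.norm(r) <= 1e-10:  # stop once x is (numerically) a fixed point
        break
    x = x - lam * r                 # KM step: x+ = x + lam*(T(x) - x)

print(x)  # ~ [1, -2, 3], the fixed point of T
```

In SuperSCS the role of T is played by the fixed-point operator of SCS, and the plain relaxed steps above are combined with limited-memory quasi-Newton directions under suitable safeguards, which is what yields the fast local rate.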