
The power of memory: the decimation scheme for symmetric matrix factorization

17 April 2024
11:00 am
San Francesco Complex - Sagrestia

Factorizing a noisy matrix with a rank comparable to its dimension into two matrices is a notoriously hard and unsolved problem in high-dimensional inference. Matrix factorization arises in many applications, such as sparse coding, recommendation systems, and image and video denoising and restoration.
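To fix ideas, a standard setup in this literature (an illustrative sketch, not necessarily the exact model analysed in the talk) is the observation of a noisy symmetric product:

$$Y = \sqrt{\frac{\lambda}{N}}\, X X^{\intercal} + Z,$$

where $X$ is an unknown $N \times P$ matrix with extensive rank $P = O(N)$, $Z$ is a symmetric Gaussian noise matrix, and $\lambda$ tunes the signal-to-noise ratio. The inference task is to recover $X$ (up to symmetries) from the single observation $Y$.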

Given how much we rely on artificial intelligence algorithms, in which matrix factorization plays a crucial role, it is of paramount importance to have a predictive theory for its accuracy. In the last decade this problem has eluded every attempt to compute its fundamental limits, i.e. the best factorization accuracies achievable according to information theory.

In our work we propose an alternative procedure, which we call “decimation”, that maps matrix factorization onto a sequence of neural network models for associative memory. A celebrated example is the Hopfield model, where “memory patterns”, typically spin configurations, are stored by making them ground states of a suitable energy function. The retrieval performance of each network in the sequence depends on how accurately the preceding ones recalled their patterns. Although sub-optimal in general, the decimation scheme has the advantage of fully analysable performance. Finally, I shall exhibit an “oracle” algorithm based on simulated annealing on the neural network energies, which achieves performance matching our theoretical predictions and outperforms the algorithms that were state-of-the-art prior to our work.



Francesco Camilli, ICTP Trieste