Limitations of capabilities of shallow networks

14 October 2015
San Francesco - Via della Quarquonia 1 (Classroom 1)
The limitations of shallow (one-hidden-layer) feedforward networks in efficiently representing finite mappings will be discussed. It will be shown that almost any uniformly randomly chosen mapping on a sufficiently large finite domain cannot be tractably represented by a shallow network with perceptron or kernel units. This existential probabilistic result will be illustrated by a concrete example of a class of such functions constructed using quasi-random sequences. Analogies with the central paradox of coding theory and the "no-free-lunch theorem" will be discussed.
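To make the objects in the abstract concrete, the sketch below (not taken from the talk) sets up a uniformly random mapping on the finite domain {-1, 1}^d and evaluates a one-hidden-layer network with signum perceptron units of the form N(x) = sum_i w_i sign(v_i · x + b_i). All names and sizes (d, n_units, the random weights) are illustrative assumptions, chosen only to show the setup, not the construction or the proof discussed in the seminar.

```python
# Minimal sketch: a random finite mapping on {-1, 1}^d and a shallow
# (one-hidden-layer) signum perceptron network evaluated on the same domain.
# Parameters are arbitrary; this is an illustration of the setting only.
import itertools
import numpy as np

rng = np.random.default_rng(0)

d = 8                                    # input dimension; the domain {-1, 1}^d has 2**d points
domain = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))

# A uniformly randomly chosen mapping on the finite domain.
target = rng.choice([-1.0, 1.0], size=len(domain))

def shallow_signum_net(x, V, b, w):
    """One-hidden-layer network: sum_i w_i * sign(v_i . x + b_i)."""
    return np.sign(x @ V.T + b) @ w

# A shallow network with a modest number of hidden units.
n_units = 16
V = rng.standard_normal((n_units, d))    # input weights of the hidden units
b = rng.standard_normal(n_units)         # biases of the hidden units
w = rng.standard_normal(n_units)         # output weights

outputs = np.sign(shallow_signum_net(domain, V, b, w))
agreement = np.mean(outputs == target)
print(f"agreement with the random target on all {len(domain)} points: {agreement:.2f}")
```

The result described in the abstract concerns how large such a network (in number of units or size of weights) must be to reproduce the target exactly; the sketch only shows what "a shallow network with perceptron units representing a finite mapping" means.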
Speaker:
Kurkova, Vera
Units: 
DYSCO