An alternative approach to solving problems by multilayer neural networks

10 July 2014
San Francesco - Via della Quarquonia 1 (Classroom 1)
In joint work with Kurkova and Vogt, we showed that n-unit perceptron networks with the Heaviside activation function, restricted to the d-dimensional cube, have the "approximative compactness" property in Lp: every sequence of network functions whose distances to a target function tend to the infimum has a subsequence converging to a perceptron network function that attains the minimum distance to the target. This result was applied, in work with Kurkova and Sanguineti, to the optimization of functionals. The theory extends to more general normed linear spaces, and I further showed that the singleton target can be replaced by a compact subset (of those functions which are sufficiently near the target and have sufficient smoothness).

Here we consider the possibility of introducing additional layers to the network, so that the finitely-valued function produced by the Heaviside network can be modified into a better fit to the target set.
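To fix notation, a minimal LaTeX sketch of the two notions involved follows; the symbols (the class H_n, the activation theta, the exponent p) are illustrative choices, not taken from the abstract itself.

% Class of n-unit Heaviside perceptron networks on the cube [0,1]^d,
% where \theta is the Heaviside function: \theta(t) = 1 for t >= 0, else 0.
\[
  \mathcal{H}_n = \left\{ \sum_{i=1}^{n} w_i\, \theta(v_i \cdot x + b_i)
  \;:\; w_i, b_i \in \mathbb{R},\ v_i \in \mathbb{R}^d \right\}
  \subset L^p([0,1]^d).
\]
% Approximative compactness of H_n in L^p: every minimizing sequence (f_k)
% in H_n has a subsequence converging to a best approximation in H_n,
\[
  \| f_k - g \|_p \longrightarrow \operatorname{dist}_p(g, \mathcal{H}_n)
  \;\Longrightarrow\;
  \exists\, f_{k_j} \to f^{*} \in \mathcal{H}_n
  \ \text{with}\ \| f^{*} - g \|_p = \operatorname{dist}_p(g, \mathcal{H}_n).
\]
% In particular, the distance from g to H_n is always attained.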
Units: DYSCO