24 March 2014
Ex Boccherini - Piazza S. Ponziano 6 (Conference Room)
In supervised detection problems, when the dimensionality of the training set or the number of available features increases, disregarding the correlation among features and the structure of the data leads to a lengthy training phase and possibly poor detection performance. An efficient and optimal way of choosing and combining the most discriminative features from the available pool has not yet been devised, and the sub-optimal methods proposed so far are computationally cumbersome and become practically intractable when thousands of features must be managed. Unlike common approaches, which first require the extraction of a number of features and then the training of a classifier, we show that it is possible to automatically and incrementally design optimal (weak) classifiers within a boosting framework. The advantage of the proposed method is that it fully exploits the local correlation and structure in the training data. Moreover, when testing speed is an issue, the learnt weak classifiers can be restricted to be discriminative convolutional kernels. After presenting the approach, we show the performance of this general-purpose method for the enhancement and detection of vessels on a dataset of retinal images.
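To make the idea concrete, below is a minimal sketch of boosting with convolutional kernels as weak classifiers, in the spirit of the abstract. It is not the speakers' method: here each weak learner is a unit-norm kernel whose response on an image patch is thresholded, selected by a simple random search (an assumption standing in for the incremental optimal design described in the talk), and combined with discrete AdaBoost.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_response(patches, kernel):
    # Inner product of each patch with the kernel, i.e. the
    # convolution response at the patch centre.
    return patches.reshape(len(patches), -1) @ kernel.ravel()

def fit_weak_kernel(patches, labels, weights, n_candidates=50, k=5):
    """Select the (kernel, threshold, polarity) with lowest weighted error.
    Random unit-norm kernels are an illustrative stand-in for the
    optimally designed kernels of the talk (assumption)."""
    best = None
    for _ in range(n_candidates):
        kernel = rng.standard_normal((k, k))
        kernel /= np.linalg.norm(kernel)
        scores = conv_response(patches, kernel)
        thr = np.median(scores)
        for polarity in (1, -1):
            pred = np.where(polarity * (scores - thr) > 0, 1, -1)
            err = weights[pred != labels].sum()
            if best is None or err < best[0]:
                best = (err, kernel, thr, polarity)
    return best

def adaboost_kernels(patches, labels, rounds=10):
    """Discrete AdaBoost whose weak learners are convolutional kernels."""
    n = len(labels)
    weights = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, kernel, thr, polarity = fit_weak_kernel(patches, labels, weights)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # weak-learner weight
        pred = np.where(
            polarity * (conv_response(patches, kernel) - thr) > 0, 1, -1)
        # Re-weight training samples: misclassified patches gain weight.
        weights *= np.exp(-alpha * labels * pred)
        weights /= weights.sum()
        ensemble.append((alpha, kernel, thr, polarity))
    return ensemble

def predict(ensemble, patches):
    # At test time the ensemble is just a small bank of convolutions,
    # which is why restricting weak learners to kernels speeds up testing.
    score = np.zeros(len(patches))
    for alpha, kernel, thr, polarity in ensemble:
        score += alpha * np.where(
            polarity * (conv_response(patches, kernel) - thr) > 0, 1, -1)
    return np.sign(score)
```

At test time the learnt ensemble reduces to a handful of convolutions followed by thresholds, which illustrates why constraining weak classifiers to convolutional kernels is attractive when detection speed matters.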