
Confidence intervals for machine learning methods

29 April 2024
9:00 am
San Francesco Complex - classroom 1

Recently, a lot of attention has been devoted to ensuring that predictions from powerful machine learning methods come with an indication of their accuracy. When we view, for example, a neural network as a model, it is also important to determine how accurately we can estimate this model (leading to confidence intervals rather than prediction intervals). Most methods in the current literature try to ascertain how the predictor or estimator is distributed, in order to give information about its accuracy. We propose a new approach based on hypothesis testing, using the classical duality between confidence intervals and hypothesis testing. This approach is better able to detect where, for example, the neural network is unsure about its estimates. Our approach is as yet computationally very expensive, so we hope that other researchers will pick up on our method and make it more efficient. It is already competitive in settings where accuracy is critical, for example when determining whether a particular scan indicates a high probability of a certain disease: in such cases one can apply our method to assess how certain the model's outcome is. This is joint work with Laurens Sluijterman and Tom Heskes.
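
The duality mentioned in the abstract can be illustrated with a toy example. The sketch below is not the speaker's method (which targets neural network estimates); it only shows the test-inversion idea on the mean of a Gaussian sample: the (1 - alpha) confidence set is the set of parameter values theta0 for which H0: theta = theta0 is not rejected at level alpha.

import numpy as np
from scipy import stats

# Toy data; this example is purely illustrative, not the proposed method.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=2.0, size=50)
alpha = 0.05
n = data.size
mean, se = data.mean(), data.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

# Invert the two-sided t-test over a grid of candidate means:
# keep every theta0 that the test fails to reject at level alpha.
grid = np.linspace(mean - 5 * se, mean + 5 * se, 2001)
t_stats = (mean - grid) / se
accepted = grid[np.abs(t_stats) <= t_crit]

print("CI by test inversion:", accepted.min(), accepted.max())
# Agrees with the closed-form interval mean +/- t_crit * se:
print("Closed form:         ", mean - t_crit * se, mean + t_crit * se)

For a simple parametric model the inversion collapses to a closed form, but the same principle applies when no closed form exists, at the cost of running one test per candidate parameter value, which hints at why such an approach can be computationally expensive.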


Join at: imt.lu/aula1

Speaker: 
Eric Cator, Radboud University
Units: 
NETWORKS