Cross-validation and bootstrapping are unreliable in small sample classification

A. Isaksson, M. Wallman, H. Göransson, and M.G. Gustafsson. Pattern Recognition Letters, 2008, 29(14), 1960-1965.


Interest in statistical classification for critical applications, such as diagnoses of patient samples based on supervised learning, is growing rapidly. To gain acceptance in applications where the subsequent decisions have serious consequences, e.g. choice of cancer therapy, any such decision support system must come with a reliable performance estimate. Tailored for small sample problems, cross-validation (CV) and bootstrapping (BTS) have been the most commonly used methods to determine such estimates in virtually all branches of science for the last 20 years. Here, we address the often overlooked fact that the uncertainty in a point estimate obtained with CV and BTS is unknown and quite large for the small sample classification problems encountered in biomedical applications and elsewhere. To avoid this fundamental problem of employing CV and BTS, until improved alternatives have been established, we suggest that the final classification performance should always be reported in the form of a Bayesian confidence interval obtained from a simple holdout test, or using some other method that yields conservative measures of the uncertainty.
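The recommended Bayesian confidence interval for holdout accuracy can be sketched with a standard conjugate Beta-Binomial argument: with a Beta(a, b) prior on the true accuracy p and c correct predictions out of n holdout samples, the posterior is Beta(c + a, n - c + b), and a credible interval is read off from its quantiles. The sketch below is illustrative (the function name, the uniform Beta(1, 1) prior, and the Monte Carlo quantile estimation are this sketch's assumptions, not details taken from the paper); it uses only the Python standard library, sampling the posterior with `random.betavariate` where `scipy.stats.beta` would give exact quantiles.

```python
import random

def bayesian_accuracy_interval(correct, n, level=0.95,
                               prior=(1.0, 1.0), draws=100_000, seed=0):
    """Bayesian credible interval for classification accuracy
    estimated from a holdout test.

    With a Beta(a, b) prior on the accuracy p, observing `correct`
    successes in `n` holdout samples gives the posterior
    Beta(correct + a, n - correct + b).  Quantiles are estimated
    here by Monte Carlo sampling (stdlib only).
    """
    a, b = prior
    post_a, post_b = correct + a, n - correct + b
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(post_a, post_b) for _ in range(draws))
    lo_idx = int((1 - level) / 2 * draws)          # e.g. 2.5% quantile
    hi_idx = int((1 + level) / 2 * draws) - 1      # e.g. 97.5% quantile
    return samples[lo_idx], samples[hi_idx]

# A holdout test with 18 of 20 samples classified correctly: the point
# estimate is 0.90, but the interval shows how uncertain that figure is.
lo, hi = bayesian_accuracy_interval(correct=18, n=20)
```

The width of the resulting interval for a typical small holdout set (here, n = 20) illustrates the paper's central point: a single point estimate of accuracy conceals substantial uncertainty.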

Keywords: Supervised classification; Performance estimation; Confidence interval


  • A. Isaksson, Department of Medical Sciences, Uppsala University, Academic Hospital
  • H. Göransson, Department of Medical Sciences, Uppsala University, Academic Hospital
  • M. Wallman, Fraunhofer-Chalmers Centre
  • M.G. Gustafsson, Department of Engineering Sciences, Uppsala University