Leave One Out Cross Validation

Cross validation is one of many methods for estimating out-of-sample error for predictive models. There are many flavors of cross validation: hold-out, k-fold, leave one out (LOOCV), etc. I whipped up a neat little visualization script in R to help understand LOOCV. Figure 1 shows how LOOCV works: the model is trained on the green data points and tested on the red data points, and each row in the figure is a fold. LOOCV is exactly what it sounds like. One of the n observations is left out and used for testing, while the remaining n-1 observations are used to train the model. The average error is then calculated across the n models. LOOCV reduces model bias, but it can be computationally taxing for a large number of observations or for complex models, since n separate models must be trained.
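To make the procedure concrete, here is a minimal sketch of the LOOCV loop itself (in Python rather than R, and not part of the original script). The model here is a hypothetical stand-in, a predictor that simply outputs the mean of its training observations, but the fold structure is the same for any fit/predict pair: n folds, each training on n-1 points and testing on the one left out.

```python
# LOOCV sketch: the "model" below (a training-set mean) is a
# hypothetical stand-in for any fit/predict pair.

def loocv_mse(y):
    """Average squared error across n leave-one-out folds."""
    n = len(y)
    errors = []
    for i in range(n):
        train = y[:i] + y[i + 1:]             # the n-1 green (training) points
        prediction = sum(train) / len(train)  # "fit" the model: training mean
        errors.append((y[i] - prediction) ** 2)  # test on the one red point
    return sum(errors) / n                    # average error over n models

print(loocv_mse([1.0, 2.0, 3.0, 4.0]))
```

Note that the loop body runs n times, which is why LOOCV gets expensive when fitting the model is itself costly.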

Figure 1: LOOCV Visualization

Below is the code used to create Figure 1.