Cross Validation

Another common model comparison technique is cross-validation. Like information criteria, cross-validation can be used to compare nonnested models and penalizes overfitting. The difference is that cross-validation evaluates a model on its out-of-sample forecast performance, rather than its in-sample fit.

In standard cross-validation, the data are split at random into a training set and a test set. Model coefficients are estimated with the training set, then used to predict response values in the test set. The training and test sets are reshuffled at random, and the process is carried out repeatedly. Small prediction errors, on average, across all of the test sets indicate good forecast performance for the model predictors. There is no need to adjust for the number of coefficients, as with information criteria, since different data are used for fitting and evaluation. Overfitting becomes apparent in the forecast performance.
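The following is a minimal sketch of this procedure for an ordinary least squares regression, using only NumPy. The data, the coefficient values, and the choice of 100 splits with a 25% test fraction are illustrative assumptions, not from the source text.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n observations, k predictors plus an intercept.
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 0.5, -0.2, 0.0]) + rng.normal(scale=0.5, size=n)

n_splits, test_frac = 100, 0.25
n_test = int(n * test_frac)
errors = []

for _ in range(n_splits):
    # Shuffle the observation indices and split into test and training sets.
    idx = rng.permutation(n)
    test, train = idx[:n_test], idx[n_test:]

    # Estimate coefficients on the training set (OLS via least squares).
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

    # Predict response values in the test set and record the squared errors.
    resid = y[test] - X[test] @ beta
    errors.append(np.mean(resid**2))

# The average out-of-sample MSE summarizes forecast performance.
print(f"Mean out-of-sample MSE over {n_splits} splits: {np.mean(errors):.4f}")

To compare candidate models, the same average out-of-sample error is computed for each, and the model with the smaller average error is preferred; a model that overfits the training data will show inflated errors on the test sets.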