Suppose we are trying to select among several different models for a learning problem. For instance, we might be using a polynomial regression model, and wish to decide whether the degree k should be 0, 1, ..., or 10. How can we automatically select a model that represents a good tradeoff between the twin evils of bias and variance? (Given that we said in the previous set of notes that bias and variance are two very different beasts, some readers may be wondering if we should be calling them "twin" evils here. Perhaps it'd be better to think of them as non-identical twins. The phrase "the fraternal twin evils of bias and variance" doesn't have the same ring to it, though.) Alternatively, suppose we want to automatically choose the bandwidth parameter τ for locally weighted regression, or the parameter C for our ℓ1-regularized SVM. How can we do that?
For the sake of concreteness, in these notes we assume we have some finite set of models M = {M_1, ..., M_d} that we're trying to select among. For instance, in our first example above, the model M_i would be an i-th order polynomial regression model. (The generalization to infinite M is not hard. If we are trying to choose from an infinite set of models, say corresponding to the possible values of the bandwidth τ, we may discretize τ and consider only a finite number of possible values for it. More generally, most of the algorithms described here can be viewed as performing optimization search in the space of models, and we can perform this search over infinite model classes as well.) Alternatively, if we are trying to decide between using an SVM, a neural network, or logistic regression, then M may contain these models.
Let's suppose we are, as usual, given a training set S. Given what we know about empirical risk minimization, here's what might initially seem like an algorithm, resulting from using empirical risk minimization for model selection:

1. Train each model M_i on S, to get some hypothesis h_i.
2. Pick the hypothesis with the smallest training error.
This algorithm does not work. Consider choosing the order of a polynomial. The higher the order of the polynomial, the better it will fit the training set S, and thus the lower the training error. Hence, this method will always select a high-variance, high-degree polynomial model, which we saw previously is often a poor choice.
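The failure mode above is easy to reproduce numerically. The sketch below (the data, degree range, and noise level are illustrative assumptions, not from the notes) fits polynomials of increasing degree to noisy data generated from a degree-2 ground truth, and shows that the training error only ever decreases with degree, so selecting by training error alone favors the most flexible model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: a noisy quadratic, so the "right" degree is 2.
x = rng.uniform(-1, 1, 30)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.3, size=30)

def train_error(degree):
    # Fit a degree-k polynomial by least squares and return the mean
    # squared error on the SAME data it was trained on.
    coeffs = np.polyfit(x, y, degree)
    preds = np.polyval(coeffs, x)
    return float(np.mean((preds - y) ** 2))

errors = [train_error(k) for k in range(11)]
# Because a degree-(k+1) fit can always reproduce the best degree-k fit,
# training error is non-increasing in k, so argmin picks a high degree.
best_k = int(np.argmin(errors))
```

Here `best_k` lands at a high degree rather than the true degree 2, which is exactly the overfitting behavior the text warns about.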
Here's an algorithm that works better. In hold-out cross validation (also called simple cross validation), we do the following:

1. Randomly split S into S_train (say, 70% of the data) and S_cv (the remaining 30%). Here, S_cv is called the hold-out cross validation set.
2. Train each model M_i on S_train only, to get some hypothesis h_i.
3. Select and output the hypothesis h_i that had the smallest error on the hold-out cross validation set.
By testing on a set of examples that the models were not trained on, we obtain a better estimate of each hypothesis h_i's true generalization error, and can then pick the one with the smallest estimated generalization error. Usually, somewhere between 1/4 and 1/3 of the data is used in the hold-out cross validation set, and 30% is a typical choice.
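The procedure above can be sketched in a few lines. This is a minimal illustration of hold-out cross validation for selecting a polynomial degree, assuming hypothetical noisy quadratic data and a 70/30 split (the dataset, seed, and variable names are mine, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: noisy samples from a quadratic.
x = rng.uniform(-1, 1, 100)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.3, size=100)

# Step 1: randomly split into ~70% training and ~30% hold-out sets.
perm = rng.permutation(100)
train_idx, cv_idx = perm[:70], perm[70:]
x_tr, y_tr = x[train_idx], y[train_idx]
x_cv, y_cv = x[cv_idx], y[cv_idx]

def cv_error(degree):
    # Step 2: train this candidate model on the training portion only,
    # then measure its error on the held-out examples.
    coeffs = np.polyfit(x_tr, y_tr, degree)
    preds = np.polyval(coeffs, x_cv)
    return float(np.mean((preds - y_cv) ** 2))

# Step 3: pick the model with the smallest hold-out error.
cv_errors = [cv_error(k) for k in range(11)]
best_degree = int(np.argmin(cv_errors))
```

Unlike selection by training error, the hold-out error does not automatically favor the highest degree, since an overfit high-degree polynomial generalizes poorly to the examples it never saw.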