
Competing goals: the bias-variance tradeoff

We ended the previous lecture with a brief discussion of overfitting. Recall that, given a set of $n$ data points $D_n$ and a space of functions (or models) $\mathcal{F}$, our goal in solving the learning from data problem is to choose a function $\hat{f}_n \in \mathcal{F}$ which minimizes the expected risk $\mathbb{E}[R(\hat{f}_n)]$, where the expectation is taken over the distribution $P_{XY}$ generating the data points $D_n$. One approach to avoiding overfitting is to restrict $\mathcal{F}$ to some subset of all measurable functions. To gauge the performance of a given $f$ in this case, we examine the difference between the expected risk of $\hat{f}_n$ and the Bayes risk $R^*$, called the excess risk:

$$\mathbb{E}[R(\hat{f}_n)] - R^* = \underbrace{\mathbb{E}[R(\hat{f}_n)] - \inf_{f \in \mathcal{F}} R(f)}_{\text{estimation error}} + \underbrace{\inf_{f \in \mathcal{F}} R(f) - R^*}_{\text{approximation error}}$$

The approximation error term quantifies the performance hit incurred by imposing restrictions on $\mathcal{F}$. The estimation error term is due to the randomness of the training data, and it expresses how well the chosen function $\hat{f}_n$ performs relative to the best possible $f$ in the class $\mathcal{F}$. This decomposition into stochastic and approximation errors is similar to the bias-variance tradeoff which arises in classical estimation theory: the approximation error plays the role of a squared bias term, and the estimation error plays the role of a variance term. By allowing the space $\mathcal{F}$ to be large (when we say $\mathcal{F}$ is large, we mean that $|\mathcal{F}|$, the number of elements in $\mathcal{F}$, is large), we can make the approximation error as small as we want, at the cost of incurring a large estimation error. On the other hand, if $\mathcal{F}$ is very small then the approximation error will be large, but the estimation error may be very small. This tradeoff is illustrated in [link].

Figure: Illustration of the tradeoff between estimation and approximation errors as a function of the size (complexity) of $\mathcal{F}$.
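To make the decomposition concrete, the following Python sketch (not part of the original notes; the synthetic distribution $P_{XY}$, the nested polynomial classes, and all names are assumptions made here for illustration) approximates the two error terms by Monte Carlo: as the class grows, the best-in-class risk shrinks while the gap between the trained predictor and the best-in-class predictor grows.

```python
# A minimal Monte Carlo sketch of the estimation/approximation tradeoff using
# nested classes F_d of polynomials of degree <= d. The distribution P_XY below
# (Y = sin(2*pi*X) + noise) and all names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Draw n points (X, Y) from the assumed distribution P_XY."""
    X = rng.uniform(0.0, 1.0, n)
    Y = np.sin(2 * np.pi * X) + 0.3 * rng.normal(size=n)
    return X, Y

def risk(coef, n_test=100_000):
    """Approximate the true risk R(f) of a polynomial with a large fresh sample."""
    X, Y = sample(n_test)
    return np.mean((np.polyval(coef, X) - Y) ** 2)

n = 30
X, Y = sample(n)                      # the training set D_n
Xb, Yb = sample(200_000)              # large sample used to approximate inf_{f in F_d} R(f)

for d in (1, 3, 9, 15):
    f_hat = np.polyfit(X, Y, d)       # empirical risk minimizer over F_d
    f_star = np.polyfit(Xb, Yb, d)    # proxy for the best predictor in F_d
    approx = risk(f_star)             # best-in-class risk = approximation error + Bayes risk (~0.09 here)
    est = risk(f_hat) - risk(f_star)  # one-sample estimate of the estimation error
    print(f"degree {d:2d}: best-in-class risk ~ {approx:.3f}, estimation error ~ {est:.3f}")
```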

Why is this the case? We do not know the true distribution $P_{XY}$ on the data, so instead of minimizing the expected risk directly, we design a predictor by minimizing the empirical risk:

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f), \qquad \hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \ell(f(X_i), Y_i).$$

If $\mathcal{F}$ is very large then $\hat{R}_n(f)$ can be made arbitrarily small, and the resulting $\hat{f}_n$ can "overfit" to the data, since $\hat{R}_n(\hat{f}_n)$ is not a good estimator of the true risk $R(\hat{f}_n)$.

Figure: Illustration of empirical risk and the problem of overfitting to the data.

The behavior of the true and empirical risks, as a function of the size (or complexity) of the space $\mathcal{F}$, is illustrated in [link]. Unfortunately, we can't easily determine whether we are over- or underfitting just by looking at the empirical risk.
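The sketch below (again an illustration built on assumptions, not part of the original notes) makes this picture concrete: on a synthetic regression problem, the empirical risk of the fitted polynomial keeps decreasing as the degree grows, while a held-out estimate of the true risk eventually turns back up, and nothing in the empirical risk alone signals where overfitting begins.

```python
# A minimal sketch (illustrative assumptions throughout): empirical vs. held-out risk
# for empirical risk minimization over polynomial classes of increasing degree.
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    """Assumed P_XY: Y = sin(2*pi*X) + Gaussian noise."""
    X = rng.uniform(0.0, 1.0, n)
    Y = np.sin(2 * np.pi * X) + 0.3 * rng.normal(size=n)
    return X, Y

def empirical_risk(coef, X, Y):
    """R_hat_n(f) with squared loss: (1/n) * sum_i (f(X_i) - Y_i)^2."""
    return np.mean((np.polyval(coef, X) - Y) ** 2)

X_train, Y_train = sample(30)      # the training set D_n
X_test, Y_test = sample(50_000)    # held-out proxy for the true risk R(f)

for d in range(1, 16):
    coef = np.polyfit(X_train, Y_train, d)          # ERM within degree-d polynomials
    train = empirical_risk(coef, X_train, Y_train)  # keeps shrinking as d grows
    test = empirical_risk(coef, X_test, Y_test)     # eventually increases: overfitting
    print(f"degree {d:2d}: empirical risk {train:.4f}, held-out risk {test:.4f}")
```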

Strategies to avoid overfitting

Picking

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f)$$

is problematic if $\mathcal{F}$ is large. We will examine two general approaches to dealing with this problem:

  1. Restrict the size or dimension of $\mathcal{F}$ (e.g., restrict $\mathcal{F}$ to the set of all lines, or polynomials with maximum degree $d$). This effectively places an upper bound on the estimation error, but in general it also places a lower bound on the approximation error.
  2. Modify the empirical risk criterion to include an extra cost associated with each model (e.g., a higher cost for more complex models):
    $$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \left\{ \hat{R}_n(f) + C(f) \right\}.$$
    The cost is designed to mimic the behavior of the estimation error, so that the model selection procedure avoids models with a large estimation error. Roughly speaking, this can be interpreted as trying to balance the tradeoff illustrated in [link]. Procedures of this type are often called complexity penalization methods; a sketch of one such penalized selection rule appears below.
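As a concrete (and purely illustrative) instance of the second strategy, the sketch below selects a polynomial degree by minimizing $\hat{R}_n(f) + C(f)$. The penalty $C(f) = 2\sigma^2(d+1)/n$ is one common AIC/$C_p$-style choice assumed here for the sake of the example; the notes themselves do not prescribe a particular penalty.

```python
# A minimal sketch of complexity-penalized model selection (assumptions: synthetic data,
# squared loss, and a C_p/AIC-style penalty C(f) = 2*sigma^2*(d+1)/n with sigma^2 known;
# none of these specific choices come from the original notes).
import numpy as np

rng = np.random.default_rng(2)
n = 40
X = rng.uniform(0.0, 1.0, n)
Y = np.sin(2 * np.pi * X) + 0.3 * rng.normal(size=n)
sigma2 = 0.3 ** 2                                  # assumed known noise variance

def penalized_risk(d):
    """Empirical risk of the degree-d ERM plus the complexity penalty C(f)."""
    coef = np.polyfit(X, Y, d)                     # ERM within the degree-d class
    emp = np.mean((np.polyval(coef, X) - Y) ** 2)  # R_hat_n(f_hat)
    return emp + 2 * sigma2 * (d + 1) / n          # add the cost C(f)

best_d = min(range(1, 16), key=penalized_risk)
print("selected degree:", best_d)
```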

Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3