
Preliminaries

In this set of notes, we begin our foray into learning theory. Apart from being interesting and enlightening in its own right, this discussion will also help us hone our intuitions and derive rules of thumb about how best to apply learning algorithms in different settings. We will also seek to answer a few questions: First, can we make formal the bias/variance tradeoff that was just discussed? This will also eventually lead us to talk about model selection methods, which can, for instance, automatically decide what order polynomial to fit to a training set. Second, in machine learning it is really generalization error that we care about, but most learning algorithms fit their models to the training set. Why should doing well on the training set tell us anything about generalization error? Specifically, can we relate error on the training set to generalization error? Third and finally, are there conditions under which we can actually prove that learning algorithms will work well?

We start with two simple but very useful lemmas.

Lemma. (The union bound). Let $A_1, A_2, \ldots, A_k$ be $k$ different events (that may not be independent). Then

$$P(A_1 \cup \cdots \cup A_k) \le P(A_1) + \cdots + P(A_k).$$

In probability theory, the union bound is usually stated as an axiom (and thus we won't try to prove it), but it also makes intuitive sense: the probability of any one of $k$ events happening is at most the sum of the probabilities of the $k$ different events.
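To see the bound in action, here is a small Monte Carlo sketch in Python (the three events and their probabilities are made up for illustration; note that they overlap heavily, so they are far from independent):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Three overlapping (hence dependent) events defined on a shared draw
# u ~ Uniform(0, 1): A1 = {u < 0.3}, A2 = {u < 0.2}, A3 = {0.25 < u < 0.4}.
u = rng.uniform(size=n_trials)
A1 = u < 0.3
A2 = u < 0.2
A3 = (u > 0.25) & (u < 0.4)

p_union = np.mean(A1 | A2 | A3)                # P(A1 u A2 u A3), about 0.40
sum_of_ps = A1.mean() + A2.mean() + A3.mean()  # P(A1)+P(A2)+P(A3), about 0.65

print(f"P(union)  = {p_union:.3f}")
print(f"sum of Ps = {sum_of_ps:.3f}")  # an upper bound, as the lemma promises
assert p_union <= sum_of_ps + 1e-9
```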

Lemma. (Hoeffding inequality) Let $Z_1, \ldots, Z_m$ be $m$ independent and identically distributed (iid) random variables drawn from a Bernoulli($\phi$) distribution, i.e., $P(Z_i = 1) = \phi$ and $P(Z_i = 0) = 1 - \phi$. Let $\hat{\phi} = (1/m) \sum_{i=1}^{m} Z_i$ be the mean of these random variables, and let any $\gamma > 0$ be fixed. Then

$$P(|\phi - \hat{\phi}| > \gamma) \le 2 \exp(-2\gamma^2 m).$$

This lemma (which in learning theory is also called the Chernoff bound) says that if we take $\hat{\phi}$ (the average of $m$ Bernoulli($\phi$) random variables) to be our estimate of $\phi$, then the probability of our being far from the true value is small, so long as $m$ is large. Another way of saying this is that if you have a biased coin whose chance of landing on heads is $\phi$, then if you toss it $m$ times and calculate the fraction of times that it came up heads, that will be a good estimate of $\phi$ with high probability (if $m$ is large).
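As a quick sanity check on the inequality, the following sketch simulates many repetitions of the coin-tossing experiment and compares the empirical tail probability with the $2\exp(-2\gamma^2 m)$ bound (the values of $\phi$, $\gamma$, and $m$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, gamma, m, n_repeats = 0.3, 0.05, 500, 20_000

# Each row is one experiment: m iid Bernoulli(phi) draws.
# phi_hat is the sample mean of each experiment.
Z = rng.random((n_repeats, m)) < phi
phi_hat = Z.mean(axis=1)

# Fraction of experiments where the estimate misses phi by more than gamma.
empirical = np.mean(np.abs(phi_hat - phi) > gamma)
bound = 2 * np.exp(-2 * gamma**2 * m)  # Hoeffding bound, about 0.164 here

print(f"empirical tail probability = {empirical:.4f}")  # roughly 0.015
print(f"Hoeffding bound            = {bound:.4f}")      # comfortably larger
```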

Using just these two lemmas, we will be able to prove some of the deepest and most important results in learning theory.

To simplify our exposition, let's restrict our attention to binary classification, in which the labels are $y \in \{0, 1\}$. Everything we say here generalizes to other problems, including regression and multi-class classification.

We assume we are given a training set $S = \{(x^{(i)}, y^{(i)}); i = 1, \ldots, m\}$ of size $m$, where the training examples $(x^{(i)}, y^{(i)})$ are drawn iid from some probability distribution $\mathcal{D}$. For a hypothesis $h$, we define the training error (also called the empirical risk or empirical error in learning theory) to be

$$\hat{\varepsilon}(h) = \frac{1}{m} \sum_{i=1}^{m} 1\{h(x^{(i)}) \neq y^{(i)}\}.$$

This is just the fraction of training examples that $h$ misclassifies. When we want to make explicit the dependence of $\hat{\varepsilon}(h)$ on the training set $S$, we may also write this as $\hat{\varepsilon}_S(h)$. We also define the generalization error to be

$$\varepsilon(h) = P_{(x,y) \sim \mathcal{D}}(h(x) \neq y).$$

I.e., this is the probability that, if we draw a new example $(x, y)$ from the distribution $\mathcal{D}$, $h$ will misclassify it.
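In code, the training error defined above is just the average of the 0/1 losses over the training set. A minimal sketch, where the hypothesis $h$ (a simple threshold classifier) and the five training examples are hypothetical:

```python
import numpy as np

def empirical_risk(h, X, y):
    """Training error: the fraction of examples (x, y) with h(x) != y."""
    predictions = np.array([h(x) for x in X])
    return np.mean(predictions != y)

# Hypothetical data and hypothesis: 1-d inputs, threshold classifier at 0.
X = np.array([-2.0, -0.5, 0.3, 1.2, 2.5])
y = np.array([0, 0, 1, 0, 1])
h = lambda x: 1 if x > 0 else 0

print(empirical_risk(h, X, y))  # 0.2: one of the five examples is misclassified
```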

Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4