
Recap: classifier design

Given a set of training data $\{X_i, Y_i\}_{i=1}^n$ and a finite collection of candidate functions $\mathcal{F}$, select $\hat{f}_n \in \mathcal{F}$ that (hopefully) is a good predictor for future cases. That is,

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f)$$

where $\hat{R}_n(f)$ is the empirical risk. For any particular $f \in \mathcal{F}$, the corresponding empirical risk is defined as

$$\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{f(X_i) \neq Y_i\}}.$$
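To make the recap concrete, here is a minimal Python sketch of empirical risk minimization over a small finite class. The threshold classifiers and the noisy toy data below are invented purely for illustration and are not part of the original notes.

```python
import numpy as np

# Toy training data (hypothetical): 1-D features in [0, 1] and binary labels in {0, 1}.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=50)
Y = ((X > 0.6) ^ (rng.uniform(size=50) < 0.1)).astype(int)  # boundary at 0.6, 10% label noise

# A small finite class F: threshold classifiers f_t(x) = 1{x > t}.
thresholds = np.linspace(0.0, 1.0, 21)
candidates = [lambda x, t=t: (x > t).astype(int) for t in thresholds]

def empirical_risk(f, X, Y):
    # Fraction of training points the classifier gets wrong: (1/n) * sum of 1{f(X_i) != Y_i}.
    return np.mean(f(X) != Y)

# Empirical risk minimization: pick the candidate with the smallest training error.
risks = [empirical_risk(f, X, Y) for f in candidates]
best = int(np.argmin(risks))
print(f"selected threshold t = {thresholds[best]:.2f}, empirical risk = {risks[best]:.3f}")
```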

Hoeffding's inequality

Hoeffding's inequality (Chernoff's bound in this case) allows us to gauge how close $\hat{R}_n(f)$ is to the true risk $R(f)$ of $f$, in probability:

$$P\left(|\hat{R}_n(f) - R(f)| \geq \epsilon\right) \leq 2 e^{-2n\epsilon^2}.$$
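The exponential decay is easy to appreciate numerically. A quick sketch, where the sample sizes and tolerance are arbitrary illustrative values:

```python
import math

def hoeffding_bound(n, eps):
    # Two-sided bound on P(|R_hat_n(f) - R(f)| >= eps) for a single fixed classifier f.
    return 2 * math.exp(-2 * n * eps ** 2)

for n in (100, 1000, 10000):
    print(n, hoeffding_bound(n, eps=0.05))
# For eps = 0.05 the bound is vacuous (> 1) at n = 100 but decays exponentially as n grows.
```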

Since our selection process involves deciding among all $f \in \mathcal{F}$, we would like to gauge how close the empirical risks are to their expected values. We can do this by studying the probability that one or more of the empirical risks deviates significantly from its expected value. This is captured by the probability

$$P\left(\max_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)| \geq \epsilon\right).$$

Note that the event

$$\left\{\max_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)| \geq \epsilon\right\}$$

is equivalent to the union of the events

$$\bigcup_{f \in \mathcal{F}} \left\{|\hat{R}_n(f) - R(f)| \geq \epsilon\right\}.$$

Therefore, we can use Bonferroni's bound (also known as the “union of events” or “union” bound) to obtain

$$P\left(\max_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)| \geq \epsilon\right) = P\left(\bigcup_{f \in \mathcal{F}} \left\{|\hat{R}_n(f) - R(f)| \geq \epsilon\right\}\right) \leq \sum_{f \in \mathcal{F}} P\left(|\hat{R}_n(f) - R(f)| \geq \epsilon\right) \leq \sum_{f \in \mathcal{F}} 2 e^{-2n\epsilon^2} = 2 |\mathcal{F}| e^{-2n\epsilon^2}$$
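To see the effect of the union bound numerically, here is a short sketch comparing the single-classifier bound with the uniform bound over the whole class; the values of $n$, $\epsilon$, and $|\mathcal{F}|$ are arbitrary illustrative choices.

```python
import math

n, eps = 2000, 0.05
single = 2 * math.exp(-2 * n * eps ** 2)  # Hoeffding bound for one fixed f
for F_size in (10, 1000, 100000):
    uniform = 2 * F_size * math.exp(-2 * n * eps ** 2)  # union bound over all f in F
    print(F_size, single, min(uniform, 1.0))
# The uniform guarantee weakens by the factor |F|, which is why log|F| appears in the bounds below.
```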

where $|\mathcal{F}|$ is the number of classifiers in $\mathcal{F}$. In the proof of Hoeffding's inequality we also obtained a one-sided inequality that implied

$$P\left(R(f) - \hat{R}_n(f) \geq \epsilon\right) \leq e^{-2n\epsilon^2}$$

and hence

$$P\left(\max_{f \in \mathcal{F}} \left(R(f) - \hat{R}_n(f)\right) \geq \epsilon\right) \leq |\mathcal{F}| e^{-2n\epsilon^2}.$$

We can restate the inequality above as follows. For all $f \in \mathcal{F}$ and for all $\delta > 0$, with probability at least $1 - \delta$,

$$R(f) \leq \hat{R}_n(f) + \sqrt{\frac{\log |\mathcal{F}| + \log(1/\delta)}{2n}}.$$

This follows by setting $\delta = |\mathcal{F}| e^{-2n\epsilon^2}$ and solving for $\epsilon$. Thus, with high probability ($1 - \delta$), the true risk of every $f \in \mathcal{F}$ is bounded by the empirical risk of $f$ plus a constant that depends on $\delta > 0$, the number of training samples $n$, and the size of $\mathcal{F}$. Most importantly, the bound does not depend on the unknown distribution $P_{XY}$. Therefore, we can call this a distribution-free bound.
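As a sanity check on this algebra, the following sketch computes the penalty term and verifies that plugging $\epsilon = C(\mathcal{F}, n, \delta)$ back into the union bound recovers $\delta$; the values of $|\mathcal{F}|$, $n$, and $\delta$ are arbitrary illustrative choices.

```python
import math

def penalty(F_size, n, delta):
    # C(F, n, delta) = sqrt((log|F| + log(1/delta)) / (2n))
    return math.sqrt((math.log(F_size) + math.log(1.0 / delta)) / (2 * n))

# With eps = C(F, n, delta), the union bound |F| * exp(-2 n eps^2) collapses back to delta.
F_size, n, delta = 1024, 5000, 0.05  # arbitrary illustrative values
eps = penalty(F_size, n, delta)
print("C(F, n, delta) =", eps)
print("|F| * exp(-2 n eps^2) =", F_size * math.exp(-2 * n * eps ** 2))  # equals delta up to rounding
```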

Error bounds

We can use the distribution-free bound above to obtain a bound on the expected performance of the minimum empirical risk classifier

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f).$$

We are interested in bounding

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f),$$

the expected risk of $\hat{f}_n$ minus the minimum risk over all $f \in \mathcal{F}$. Note that this difference is always non-negative, since $\hat{f}_n$ is at best as good as

$$f^* = \arg\min_{f \in \mathcal{F}} R(f).$$

Recall that for all $f \in \mathcal{F}$ and all $\delta > 0$, with probability at least $1 - \delta$,

$$R(f) \leq \hat{R}_n(f) + C(\mathcal{F}, n, \delta)$$

where

$$C(\mathcal{F}, n, \delta) = \sqrt{\frac{\log |\mathcal{F}| + \log(1/\delta)}{2n}}.$$

In particular, since this holds for all $f \in \mathcal{F}$, including $\hat{f}_n$,

$$R(\hat{f}_n) \leq \hat{R}_n(\hat{f}_n) + C(\mathcal{F}, n, \delta)$$

and for any other $f \in \mathcal{F}$

$$R(\hat{f}_n) \leq \hat{R}_n(f) + C(\mathcal{F}, n, \delta)$$

since $\hat{R}_n(\hat{f}_n) \leq \hat{R}_n(f)$ for all $f \in \mathcal{F}$. In particular,

$$R(\hat{f}_n) \leq \hat{R}_n(f^*) + C(\mathcal{F}, n, \delta)$$

where $f^* = \arg\min_{f \in \mathcal{F}} R(f)$.

Let $\Omega$ denote the event (set of outcomes) on which the above inequality holds. Then, by definition,

$$P(\Omega) \geq 1 - \delta.$$

We can now bound $E[R(\hat{f}_n)] - R(f^*)$ as follows:

$$E[R(\hat{f}_n)] - R(f^*) = E[R(\hat{f}_n) - \hat{R}_n(f^*) + \hat{R}_n(f^*) - R(f^*)] = E[R(\hat{f}_n) - \hat{R}_n(f^*)]$$

since $E[\hat{R}_n(f^*)] = R(f^*)$. The quantity above is bounded as follows:

$$E[R(\hat{f}_n) - \hat{R}_n(f^*)] = E[R(\hat{f}_n) - \hat{R}_n(f^*) \mid \Omega]\, P(\Omega) + E[R(\hat{f}_n) - \hat{R}_n(f^*) \mid \bar{\Omega}]\, P(\bar{\Omega}) \leq E[R(\hat{f}_n) - \hat{R}_n(f^*) \mid \Omega] + \delta$$

since $P(\Omega) \leq 1$, $1 - P(\Omega) = P(\bar{\Omega}) \leq \delta$, and $R(\hat{f}_n) - \hat{R}_n(f^*) \leq 1$. Moreover,

$$E[R(\hat{f}_n) - \hat{R}_n(f^*) \mid \Omega] \leq E[R(\hat{f}_n) - \hat{R}_n(\hat{f}_n) \mid \Omega] \leq C(\mathcal{F}, n, \delta),$$

where the first inequality uses $\hat{R}_n(\hat{f}_n) \leq \hat{R}_n(f^*)$ and the second holds because, on $\Omega$, $R(\hat{f}_n) \leq \hat{R}_n(\hat{f}_n) + C(\mathcal{F}, n, \delta)$.

Thus

$$E[R(\hat{f}_n) - \hat{R}_n(f^*)] \leq C(\mathcal{F}, n, \delta) + \delta.$$

So we have

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f) \leq \sqrt{\frac{\log |\mathcal{F}| + \log(1/\delta)}{2n}} + \delta, \quad \forall \delta > 0.$$

In particular, for $\delta = 1/n$, we have

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f) \leq \sqrt{\frac{\log |\mathcal{F}| + \log n}{2n}} + \frac{1}{n} \leq \sqrt{\frac{\log |\mathcal{F}| + \log n + 2}{n}},$$

since $1/n \leq \sqrt{1/n}$ and $\sqrt{x} + \sqrt{y} \leq \sqrt{2}\,\sqrt{x + y}$ for all $x, y > 0$.
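A rough numeric illustration of how this bound scales; the class size and sample sizes below are arbitrary choices for illustration only.

```python
import math

def excess_risk_bound(F_size, n):
    # Bound on E[R(f_hat_n)] - min_f R(f) with the choice delta = 1/n.
    return math.sqrt((math.log(F_size) + math.log(n) + 2) / n)

for n in (100, 1000, 10000, 100000):
    print(n, round(excess_risk_bound(F_size=2 ** 20, n=n), 4))
# Once n dominates log|F|, the bound shrinks roughly like sqrt(log(n) / n).
```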

Application: histogram classifier

Let $\mathcal{F}$ be the collection of all classifiers with $M$ equal-volume cells. Then $|\mathcal{F}| = 2^M$, and the histogram classification rule

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{f(X_i) \neq Y_i\}}$$

satisfies

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f) \leq \sqrt{\frac{M \log 2 + 2 + \log n}{n}}$$

which suggests the choice $M = \log_2 n$ (balancing $M \log 2$ with $\log n$), resulting in

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f) = O\left(\sqrt{\frac{\log n}{n}}\right).$$
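The histogram rule is straightforward to implement. Below is a minimal sketch for one-dimensional features on $[0, 1]$ with $M \approx \log_2 n$ cells, labelling each cell by majority vote (which minimizes the empirical risk over classifiers that are constant on the cells); the data-generating process is invented for illustration.

```python
import numpy as np

def histogram_classifier(X, Y, M):
    # Partition [0, 1] into M equal-width cells and label each cell by majority vote.
    # Majority vote minimizes the training error within each cell, so the resulting rule
    # is the empirical risk minimizer over classifiers that are constant on the cells.
    cells = np.clip((X * M).astype(int), 0, M - 1)
    labels = np.zeros(M, dtype=int)
    for m in range(M):
        in_cell = cells == m
        if in_cell.any():
            labels[m] = int(Y[in_cell].mean() >= 0.5)

    def predict(x):
        c = np.clip((np.asarray(x) * M).astype(int), 0, M - 1)
        return labels[c]

    return predict

# Toy example (hypothetical data): a single decision boundary with 15% label noise.
rng = np.random.default_rng(1)
n = 2000
X = rng.uniform(0, 1, size=n)
Y = ((X > 0.35) ^ (rng.uniform(size=n) < 0.15)).astype(int)
M = max(1, int(np.log2(n)))  # M = log2(n), balancing M log 2 against log n
f_hat = histogram_classifier(X, Y, M)
print("M =", M, "training error =", np.mean(f_hat(X) != Y))
```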

Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3