
Instructor (Andrew Ng): Oh, okay. Let me just – well, let me write that down on this board. So actually – actually let me think – [inaudible] fit this in here. So epsilon hat of H is the training error of the hypothesis H. In other words, given the hypothesis – a hypothesis is just a function, right, mapping from the inputs X to the labels Y – epsilon hat of H is, given the hypothesis H, what’s the fraction of training examples it misclassifies? And the generalization error of H is, given the hypothesis H, if I sample another example from my distribution script D, what’s the probability that H will misclassify that example? Does that make sense?
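In symbols, the two quantities just defined can be written as follows; this is the standard notation for these definitions, with m training examples drawn from the distribution script D:

```latex
% Training error: the fraction of the m training examples that h misclassifies
\hat{\epsilon}(h) = \frac{1}{m} \sum_{i=1}^{m} 1\{\, h(x^{(i)}) \neq y^{(i)} \,\}

% Generalization error: the probability that h misclassifies a fresh example
% drawn from the distribution \mathcal{D}
\epsilon(h) = P_{(x,y) \sim \mathcal{D}}\big( h(x) \neq y \big)
```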

Student: [Inaudible]?

Instructor (Andrew Ng): Oh, okay. And H hat is the hypothesis that’s chosen by empirical risk minimization. So empirical risk minimization is the algorithm that minimizes training error, and since epsilon hat of H is the training error of H, H hat is defined as the hypothesis that, out of all hypotheses in my class script H, minimizes the training error epsilon hat of H. Okay?
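As a minimal sketch of that definition (the threshold hypotheses and toy data here are hypothetical, chosen just for illustration, and are not from the lecture), empirical risk minimization over a finite class is simply a search for the hypothesis with the lowest training error:

```python
import random

def training_error(h, examples):
    # epsilon-hat of h: fraction of training examples (x, y) that h misclassifies
    return sum(1 for x, y in examples if h(x) != y) / len(examples)

def erm(hypothesis_class, examples):
    # Empirical risk minimization: return the h in the finite class
    # with the smallest training error (this is h-hat)
    return min(hypothesis_class, key=lambda h: training_error(h, examples))

# Hypothetical finite class: four threshold classifiers on 1-D inputs
hypothesis_class = [lambda x, t=t: int(x > t) for t in (0.2, 0.4, 0.6, 0.8)]

# Toy training set whose true decision boundary is at 0.5
random.seed(0)
train = [(x, int(x > 0.5)) for x in (random.random() for _ in range(100))]

h_hat = erm(hypothesis_class, train)
print("training error of h-hat:", training_error(h_hat, train))
```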

All right. Yeah?

Student: [Inaudible] H is [inaudible] a member of script H, [inaudible] family, right?

Instructor (Andrew Ng): Yes it is.

Student: So what happens with the generalization error [inaudible]?

Instructor (Andrew Ng): I’ll talk about that later. So let me tie all these things together into a theorem. Let there be a hypothesis class script H given by a finite set of K hypotheses, and let any M and delta be fixed. Then – so I’ve fixed M and delta, so this will be the error bound form of the theorem, right? Then, with probability at least one minus delta, we have that the generalization error of H hat is less than or equal to the minimum over all hypotheses H in script H of epsilon of H, plus two times that square root term. Okay?
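Written out, the bound on the board is the following (this is the standard form of this result, with K hypotheses, M training examples, and confidence parameter delta):

```latex
% With probability at least 1 - \delta over the draw of the m training examples:
\epsilon(\hat{h}) \;\le\; \Big( \min_{h \in \mathcal{H}} \epsilon(h) \Big)
    \;+\; 2 \sqrt{ \frac{1}{2m} \log \frac{2k}{\delta} }
```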

So to prove this, well, this first term of course is just epsilon of H star. And so to prove this we set gamma equal to that – this is two times the square root term. To prove this theorem we set gamma equal to that square root term. Say that again?

Student: [Inaudible].

Instructor (Andrew Ng): Wait. Say that again?

Student: [Inaudible].

Instructor (Andrew Ng): Oh, yes. Thank you. That didn’t make sense at all. Thanks. Great. So set gamma to that square root term, and so we know that equation one, right, from the previous board holds with probability one minus delta. Equation one was the uniform convergence result, right – i.e., this is equation one from the previous board. So with gamma set equal to this, we know that with probability one minus delta this uniform convergence holds – if we call this equation “star”, I guess. And whenever uniform convergence holds, we showed again, on the previous boards, that this result holds: that the generalization error of H hat is less than or equal to the generalization error of H star plus two times gamma. Okay? And so that proves this theorem.
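The chain of inequalities being referenced from the previous boards is the standard one; filling it in for completeness:

```latex
% Equation "star" (uniform convergence): with probability at least 1 - \delta,
%     |\epsilon(h) - \hat{\epsilon}(h)| \le \gamma \quad \text{for all } h \in \mathcal{H}.
% Whenever it holds:
\epsilon(\hat{h})
  \;\le\; \hat{\epsilon}(\hat{h}) + \gamma      % uniform convergence applied to \hat{h}
  \;\le\; \hat{\epsilon}(h^{\ast}) + \gamma     % \hat{h} minimizes training error
  \;\le\; \epsilon(h^{\ast}) + 2\gamma          % uniform convergence applied to h^{\ast}
```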

So this result sort of helps us quantify a little bit the bias-variance tradeoff that I talked about near the very start of this lecture. In particular, let’s say I have some hypothesis class script H that I’m using – maybe the class of all linear functions, as in linear regression, or logistic regression with just the linear features. And let’s say I’m considering switching to some new class H prime by adding more features. So let’s say this is linear and this is quadratic: H is the class of all linear functions, and it’s a subset of H prime, the class of all quadratic functions. So instead of using my linear hypothesis class, let’s say I’m considering switching to a quadratic hypothesis class, or switching to a larger hypothesis class. Then what are the tradeoffs involved? Well, I proved this only for finite hypothesis classes, but we’ll see that something very similar holds for infinite hypothesis classes too. The tradeoff is: what if I switch from H to H prime, or I switch from linear to quadratic functions? Then epsilon of H star will become better, because the best hypothesis in my hypothesis class will become better.
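To see the tradeoff numerically, here is a small hypothetical sketch (the class sizes, M, and delta below are made-up values, not from the lecture) of how the square root term in the theorem grows as the hypothesis class gets larger:

```python
import math

def uniform_convergence_term(k, m, delta):
    # The 2 * sqrt((1 / (2m)) * log(2k / delta)) term from the theorem:
    # the price paid for searching over k hypotheses with m examples
    return 2.0 * math.sqrt(math.log(2.0 * k / delta) / (2.0 * m))

# Hypothetical sizes: a smaller ("linear") class vs. a larger ("quadratic")
# class, with m = 1000 training examples and delta = 0.05
m, delta = 1000, 0.05
for k in (100, 10000):
    print(k, round(uniform_convergence_term(k, m, delta), 4))
```

Because the term grows only logarithmically in K, enlarging the class increases this second term slowly, while the first term, the best-in-class error epsilon of H star, can only decrease.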

Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4