This set of notes presents the Support Vector Machine (SVM) learning algorithm. SVMs are among the best (and many believe are indeed the best) “off-the-shelf” supervised learning algorithms. To tell the SVM story, we'll need to first talk about margins and the idea of separating data with a large “gap.” Next, we'll talk about the optimal margin classifier, which will lead us into a digression on Lagrange duality. We'll also see kernels, which give a way to apply SVMs efficiently in very high-dimensional (such as infinite-dimensional) feature spaces, and finally, we'll close off the story with the SMO algorithm, which gives an efficient implementation of SVMs.
We'll start our story on SVMs by talking about margins. This section will give the intuitions about margins and about the “confidence” of our predictions; these ideas will be made formal in "Functional and geometric margins".
Consider logistic regression, where the probability $p(y=1|x;\theta )$ is modeled by ${h}_{\theta}\left(x\right)=g\left({\theta}^{T}x\right)$ . We would then predict “1” on an input $x$ if and only if ${h}_{\theta}\left(x\right)\ge 0.5$ , or equivalently, if and only if ${\theta}^{T}x\ge 0$ . Consider a positive training example ( $y=1$ ). The larger ${\theta}^{T}x$ is, the larger also is ${h}_{\theta}\left(x\right)=p(y=1|x;\theta )$ , and thus also the higher our degree of “confidence” that the label is 1. Thus, informally we can think of our prediction as being a very confident one that $y=1$ if ${\theta}^{T}x\gg 0$ . Similarly, we think of logistic regression as making a very confident prediction of $y=0$ if ${\theta}^{T}x\ll 0$ . Given a training set, again informally it seems that we'd have found a good fit to the training data if we can find $\theta $ so that ${\theta}^{T}{x}^{\left(i\right)}\gg 0$ whenever ${y}^{\left(i\right)}=1$ , and ${\theta}^{T}{x}^{\left(i\right)}\ll 0$ whenever ${y}^{\left(i\right)}=0$ , since this would reflect a very confident (and correct) set of classifications for all the training examples. This seems to be a nice goal to aim for, and we'll soon formalize this idea using the notion of functional margins.
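To make this intuition concrete, here is a minimal sketch in plain Python of the logistic regression hypothesis ${h}_{\theta}(x) = g({\theta}^{T}x)$, evaluated on illustrative inputs (the parameter vector `theta` and the input points are made up for this example, not taken from any dataset): when ${\theta}^{T}x \gg 0$ the output is close to 1, and when ${\theta}^{T}x \ll 0$ it is close to 0.

```python
import math

def sigmoid(z):
    # Logistic function g(z) = 1 / (1 + e^{-z}).
    return 1.0 / (1.0 + math.exp(-z))

def h(theta, x):
    # Hypothesis h_theta(x) = g(theta^T x).
    z = sum(t_j * x_j for t_j, x_j in zip(theta, x))
    return sigmoid(z)

# Hypothetical parameters, for illustration only.
theta = [2.0, -1.0]

# theta^T x = 9  >> 0: a very confident prediction that y = 1.
print(h(theta, [5.0, 1.0]))   # close to 1

# theta^T x = -10 << 0: a very confident prediction that y = 0.
print(h(theta, [-4.0, 2.0]))  # close to 0
```

The magnitude of ${\theta}^{T}x$, not just its sign, is what carries the "confidence": an input with ${\theta}^{T}x$ barely above 0 produces an output barely above 0.5.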
For a different type of intuition, consider the following figure, in which x's represent positive training examples, o's denote negative training examples, a decision boundary (this is the line given by the equation ${\theta}^{T}x=0$ , and is also called the separating hyperplane ) is also shown, and three points have also been labeled A, B and C.
Notice that the point A is very far from the decision boundary. If we are asked to make a prediction for the value of $y$ at A, it seems we should be quite confident that $y=1$ there. Conversely, the point C is very close to the decision boundary, and while it's on the side of the decision boundary on which we would predict $y=1$ , it seems likely that just a small change to the decision boundary could easily have caused our prediction to be $y=0$ . Hence, we're much more confident about our prediction at A than at C. The point B lies in-between these two cases, and more broadly, we see that if a point is far from the separating hyperplane, then we may be significantly more confident in our predictions. Again, informally we think it'd be nice if, given a training set, we manage to find a decision boundary that allows us to make all correct and confident (meaning far from the decision boundary) predictions on the training examples. We'll formalize this later using the notion of geometric margins.
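The "distance from the separating hyperplane" intuition can be sketched numerically. Below, the boundary ${\theta}^{T}x=0$ and the three points A, B, C are all hypothetical stand-ins (the figure's actual coordinates are not given in the text); the Euclidean distance of a point $x$ from the hyperplane is $|{\theta}^{T}x| / \|\theta\|$, which is the quantity the geometric margin will formalize.

```python
import math

def score(theta, x):
    # Signed quantity theta^T x; its sign gives the predicted class.
    return sum(t_j * x_j for t_j, x_j in zip(theta, x))

def distance_to_boundary(theta, x):
    # Euclidean distance from x to the hyperplane theta^T x = 0:
    # |theta^T x| / ||theta||.
    norm = math.sqrt(sum(t_j * t_j for t_j in theta))
    return abs(score(theta, x)) / norm

# Hypothetical boundary x1 + x2 = 0 and made-up points A, B, C,
# chosen so that A is far from the boundary and C is close to it.
theta = [1.0, 1.0]
points = {"A": [3.0, 4.0], "B": [1.0, 1.0], "C": [0.1, 0.2]}

for name, x in points.items():
    print(name, distance_to_boundary(theta, x))
```

With these made-up coordinates, A has the largest distance and C the smallest, matching the picture: predictions far from the boundary are the ones a small perturbation of the boundary cannot flip.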