
Also, the KKT dual-complementarity conditions (which will be useful in the next section for testing convergence of the SMO algorithm) are:

$$\begin{aligned}
\alpha_i = 0 \;\;\; &\Rightarrow \;\;\; y^{(i)}(w^T x^{(i)} + b) \ge 1 \\
\alpha_i = C \;\;\; &\Rightarrow \;\;\; y^{(i)}(w^T x^{(i)} + b) \le 1 \\
0 < \alpha_i < C \;\;\; &\Rightarrow \;\;\; y^{(i)}(w^T x^{(i)} + b) = 1.
\end{aligned}$$
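In code, a convergence test based on these conditions might look like the following Python sketch. The function name `check_kkt`, its signature, and the tolerance `tol` are assumptions for illustration, not part of the notes; real SMO implementations vary in how they measure violations.

```python
import numpy as np

def check_kkt(alpha, y, f, C, tol=1e-3):
    """Test the KKT dual-complementarity conditions at each training point.

    alpha : (m,) dual variables
    y     : (m,) labels in {-1, +1}
    f     : (m,) values of w^T x^(i) + b at each training point
    C     : regularization parameter
    Returns True if every point satisfies its condition to within tol.
    """
    margins = y * f                    # y^(i) (w^T x^(i) + b)
    ok = np.ones(len(alpha), dtype=bool)
    # alpha_i = 0        =>  margin >= 1
    ok &= ~((alpha < tol) & (margins < 1 - tol))
    # alpha_i = C        =>  margin <= 1
    ok &= ~((alpha > C - tol) & (margins > 1 + tol))
    # 0 < alpha_i < C    =>  margin = 1
    inside = (alpha > tol) & (alpha < C - tol)
    ok &= ~(inside & (np.abs(margins - 1) > tol))
    return bool(ok.all())
```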

Now, all that remains is to give an algorithm for actually solving the dual problem, which we will do in the next section.

The SMO algorithm

The SMO (sequential minimal optimization) algorithm, due to John Platt, gives an efficient way of solving the dual problem arising from the derivation of the SVM. Partly to motivate the SMO algorithm, and partly because it's interesting in its own right, let's first take another digression to talk about the coordinate ascent algorithm.

Coordinate ascent

Consider trying to solve the unconstrained optimization problem

$$\max_\alpha W(\alpha_1, \alpha_2, \ldots, \alpha_m).$$

Here, we think of $W$ as just some function of the parameters $\alpha_i$'s, and for now ignore any relationship between this problem and SVMs. We've already seen two optimization algorithms, gradient ascent and Newton's method. The new algorithm we're going to consider here is called coordinate ascent:

Loop until convergence: {
    For $i = 1, \ldots, m$, {
        $\alpha_i := \arg\max_{\hat{\alpha}_i} W(\alpha_1, \ldots, \alpha_{i-1}, \hat{\alpha}_i, \alpha_{i+1}, \ldots, \alpha_m)$.
    }
}

Thus, in the innermost loop of this algorithm, we will hold all the variables except for some $\alpha_i$ fixed, and reoptimize $W$ with respect to just the parameter $\alpha_i$. In the version of this method presented here, the inner loop reoptimizes the variables in order $\alpha_1, \alpha_2, \ldots, \alpha_m, \alpha_1, \alpha_2, \ldots$. (A more sophisticated version might choose other orderings; for instance, we may choose the next variable to update according to which one we expect to allow us to make the largest increase in $W(\alpha)$.)
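To make this loop concrete, here is a minimal Python sketch of coordinate ascent; the function `coordinate_ascent` and the quadratic test problem are illustrative assumptions, not part of the original notes. It maximizes $W(\alpha) = -\frac{1}{2}\alpha^T A \alpha + b^T \alpha$ for a symmetric positive definite $A$, a case where each inner-loop $\arg\max$ has a closed form:

```python
import numpy as np

def coordinate_ascent(A, b, num_iters=100):
    """Maximize W(alpha) = -0.5 * alpha^T A alpha + b^T alpha by coordinate ascent.

    A must be symmetric positive definite, so each one-dimensional
    arg max over alpha_i (others held fixed) has the closed form below.
    """
    m = len(b)
    alpha = np.zeros(m)
    for _ in range(num_iters):      # "loop until convergence" (fixed count here)
        for i in range(m):          # reoptimize one coordinate at a time
            # Setting dW/dalpha_i = 0 with the other coordinates held fixed:
            # alpha_i = (b_i - sum_{j != i} A_ij alpha_j) / A_ii
            residual = b[i] - A[i] @ alpha + A[i, i] * alpha[i]
            alpha[i] = residual / A[i, i]
    return alpha

# Example: the exact maximizer is alpha* = A^{-1} b.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
print(coordinate_ascent(A, b))      # approaches np.linalg.solve(A, b)
```

Because each update solves its one-dimensional subproblem exactly, $W$ never decreases from one inner-loop step to the next.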

When the function $W$ happens to be of such a form that the “$\arg\max$” in the inner loop can be performed efficiently, then coordinate ascent can be a fairly efficient algorithm. Here's a picture of coordinate ascent in action:

[Figure: contours of a quadratic function, with the zig-zag path taken by coordinate ascent from its initialization to the global maximum.]

The ellipses in the figure are the contours of a quadratic function that we want to optimize. Coordinate ascent was initialized at $(2, -2)$, and also plotted in the figure is the path that it took on its way to the global maximum. Notice that on each step, coordinate ascent takes a step that's parallel to one of the axes, since only one variable is being optimized at a time.

SMO

We close off the discussion of SVMs by sketching the derivation of the SMO algorithm. Some details will be left to the homework, and for others you may refer to the paper excerpt handed out in class.

Here's the (dual) optimization problem that we want to solve:

$$\begin{aligned}
\max_\alpha \;\; & W(\alpha) = \sum_{i=1}^m \alpha_i - \frac{1}{2} \sum_{i,j=1}^m y^{(i)} y^{(j)} \alpha_i \alpha_j \langle x^{(i)}, x^{(j)} \rangle \\
\text{s.t.} \;\; & 0 \le \alpha_i \le C, \quad i = 1, \ldots, m \\
& \sum_{i=1}^m \alpha_i y^{(i)} = 0.
\end{aligned}$$
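As a sanity check on the notation, here is a direct (and deliberately naive) Python transcription of this objective and its constraints. The names `dual_objective` and `feasible` are hypothetical; a real SMO implementation would update the objective incrementally rather than recompute it from scratch.

```python
import numpy as np

def dual_objective(alpha, y, X):
    """W(alpha) = sum_i alpha_i - 1/2 sum_{i,j} y_i y_j alpha_i alpha_j <x_i, x_j>."""
    K = X @ X.T                       # Gram matrix of inner products <x^(i), x^(j)>
    v = y * alpha                     # elementwise y^(i) alpha_i
    return alpha.sum() - 0.5 * v @ K @ v

def feasible(alpha, y, C, tol=1e-8):
    """Check 0 <= alpha_i <= C for all i, and sum_i alpha_i y^(i) = 0."""
    return (alpha >= -tol).all() and (alpha <= C + tol).all() \
        and abs(alpha @ y) < tol
```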

Let's say we have a set of $\alpha_i$'s that satisfy the constraints (the last two lines of the problem above). Now, suppose we want to hold $\alpha_2, \ldots, \alpha_m$ fixed, and take a coordinate ascent step and reoptimize the objective with respect to $\alpha_1$. Can we make any progress? The answer is no, because the equality constraint $\sum_{i=1}^m \alpha_i y^{(i)} = 0$ ensures that

$$\alpha_1 y^{(1)} = -\sum_{i=2}^m \alpha_i y^{(i)}.$$

Or, by multiplying both sides by $y^{(1)}$, we equivalently have

$$\alpha_1 = -y^{(1)} \sum_{i=2}^m \alpha_i y^{(i)}.$$

(This step used the fact that $y^{(1)} \in \{-1, 1\}$, and hence $(y^{(1)})^2 = 1$.) Hence, $\alpha_1$ is exactly determined by the other $\alpha_i$'s, and if we were to hold $\alpha_2, \ldots, \alpha_m$ fixed, then we can't make any change to $\alpha_1$ without violating the equality constraint in the optimization problem.
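A quick numerical illustration of this fact, with made-up values chosen purely for the example:

```python
import numpy as np

y = np.array([1, -1, 1, -1])              # labels y^(i) in {-1, +1}
alpha = np.array([0.0, 0.3, 0.5, 0.4])    # alpha_1 (index 0) to be determined

# alpha_1 = -y^(1) * sum_{i=2}^m alpha_i y^(i)
alpha[0] = -y[0] * (alpha[1:] @ y[1:])
print(alpha[0])        # 0.2 -- pinned by the other alpha_i's
print(alpha @ y)       # 0.0 -- the equality constraint now holds
```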

Source: OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013. Download for free at http://cnx.org/content/col11500/1.4