
Digression: the perceptron learning algorithm

We now digress to talk briefly about an algorithm that's of some historical interest, and that we will also return to later when we talk about learning theory. Consider modifying the logistic regression method to “force” it to output values that are either 0 or 1 exactly. To do so, it seems natural to change the definition of $g$ to be the threshold function:

$$g(z) = \begin{cases} 1 & \text{if } z \ge 0 \\ 0 & \text{if } z < 0 \end{cases}$$

If we then let $h_\theta(x) = g(\theta^T x)$ as before but using this modified definition of $g$, and if we use the update rule

$$\theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)},$$

then we have the perceptron learning algorithm.
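To make the rule concrete, here is a minimal numpy sketch of one pass of the perceptron update. The helper names (`g`, `perceptron_step`), the learning rate $\alpha = 0.1$, and the tiny OR dataset are illustrative choices, not part of the notes.

```python
import numpy as np

# Threshold function g(z): 1 if z >= 0, else 0.
def g(z):
    return 1.0 if z >= 0 else 0.0

# One perceptron update on a single training example (x_i, y_i), with y_i in {0, 1}.
def perceptron_step(theta, x_i, y_i, alpha=0.1):
    h = g(theta @ x_i)                        # h_theta(x) = g(theta^T x)
    return theta + alpha * (y_i - h) * x_i    # theta_j := theta_j + alpha (y - h) x_j

# Illustrative usage: learn the OR function; x_0 = 1 is the intercept term.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)
theta = np.zeros(3)
for _ in range(10):                           # a few sweeps over the training set
    for x_i, y_i in zip(X, y):
        theta = perceptron_step(theta, x_i, y_i)
print([g(theta @ x_i) for x_i in X])          # matches y: [0.0, 1.0, 1.0, 1.0]
```

Since OR is linearly separable, a few sweeps suffice for the updates to stop changing $\theta$.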

In the 1960s, this “perceptron” was argued to be a rough model for how individual neurons in the brain work. Given how simple the algorithm is, it will also provide a starting point for our analysis when we talk about learning theory later in this class. Note however that even though the perceptron may be cosmetically similar to the other algorithms we talked about, it is actually a very different type of algorithm than logistic regression and least squares linear regression; in particular, it is difficult to endow the perceptron's predictions with meaningful probabilistic interpretations, or to derive the perceptron as a maximum likelihood estimation algorithm.

Another algorithm for maximizing $\ell(\theta)$

Returning to logistic regression with $g(z)$ being the sigmoid function, let's now talk about a different algorithm for maximizing $\ell(\theta)$.

To get us started, let's consider Newton's method for finding a zero of a function. Specifically, suppose we have some function $f : \mathbb{R} \to \mathbb{R}$, and we wish to find a value of $\theta$ so that $f(\theta) = 0$. Here, $\theta \in \mathbb{R}$ is a real number. Newton's method performs the following update:

$$\theta := \theta - \frac{f(\theta)}{f'(\theta)}.$$

This method has a natural interpretation in which we can think of it as approximating the function $f$ via a linear function that is tangent to $f$ at the current guess $\theta$, solving for where that linear function equals zero, and letting that point be the next guess for $\theta$.
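As a quick sketch of the scalar update $\theta := \theta - f(\theta)/f'(\theta)$, the following assumes we are handed $f$ and its derivative; the example function $f(\theta) = \theta^2 - 2$ is an arbitrary illustration, and the starting point $\theta = 4.5$ echoes the initialization used in the figure below.

```python
# Find a zero of f by repeatedly jumping to the zero of the tangent line at theta.
def newton_root(f, f_prime, theta, iters=10):
    for _ in range(iters):
        theta = theta - f(theta) / f_prime(theta)   # theta := theta - f(theta) / f'(theta)
    return theta

# Illustrative usage: f(theta) = theta^2 - 2 has its positive zero at sqrt(2).
print(newton_root(lambda t: t**2 - 2, lambda t: 2 * t, theta=4.5))  # ~1.4142135
```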

Here's a picture of Newton's method in action:

[Figure: three panels illustrating successive iterations of Newton's method, each fitting a line tangent to f at the current guess θ and jumping to that line's zero.]

In the leftmost figure, we see the function $f$ plotted along with the line $y = 0$. We're trying to find $\theta$ so that $f(\theta) = 0$; the value of $\theta$ that achieves this is about 1.3. Suppose we initialized the algorithm with $\theta = 4.5$. Newton's method then fits a straight line tangent to $f$ at $\theta = 4.5$, and solves for where that line evaluates to 0 (middle figure). This gives us the next guess for $\theta$, which is about 2.8. The rightmost figure shows the result of running one more iteration, which updates $\theta$ to about 1.8. After a few more iterations, we rapidly approach $\theta = 1.3$.

Newton's method gives a way of getting to $f(\theta) = 0$. What if we want to use it to maximize some function $\ell$? The maxima of $\ell$ correspond to points where its first derivative $\ell'(\theta)$ is zero. So, by letting $f(\theta) = \ell'(\theta)$, we can use the same algorithm to maximize $\ell$, and we obtain the update rule:

$$\theta := \theta - \frac{\ell'(\theta)}{\ell''(\theta)}.$$

(Something to think about: How would this change if we wanted to use Newton's method to minimize rather than maximize a function?)
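In code, this is the same loop as before applied to $\ell'$; a minimal sketch, assuming we can evaluate $\ell'$ and $\ell''$ directly, with the concave example $\ell(\theta) = -(\theta - 3)^2$ chosen purely for illustration:

```python
# Maximize a one-variable function via theta := theta - l'(theta) / l''(theta).
def newton_maximize(l_prime, l_double_prime, theta, iters=10):
    for _ in range(iters):
        theta = theta - l_prime(theta) / l_double_prime(theta)
    return theta

# Illustrative usage: l(theta) = -(theta - 3)^2 peaks at theta = 3;
# l'(theta) = -2 (theta - 3) and l''(theta) = -2.
print(newton_maximize(lambda t: -2 * (t - 3), lambda t: -2.0, theta=0.0))  # 3.0
```

Because the iteration only solves $\ell'(\theta) = 0$, it finds stationary points; whether those are maxima or minima depends on the shape of $\ell$, which is worth keeping in mind for the question above.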

Lastly, in our logistic regression setting, $\theta$ is vector-valued, so we need to generalize Newton's method to this setting. The generalization of Newton's method to this multidimensional setting (also called the Newton-Raphson method) is given by

$$\theta := \theta - H^{-1} \nabla_\theta \ell(\theta).$$

Here, $\nabla_\theta \ell(\theta)$ is, as usual, the vector of partial derivatives of $\ell(\theta)$ with respect to the $\theta_j$'s, and $H$ is the Hessian matrix, whose entries are given by

$$H_{ij} = \frac{\partial^2 \ell(\theta)}{\partial \theta_i \, \partial \theta_j}.$$
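To connect this back to logistic regression: differentiating the log-likelihood with $h = g(X\theta)$ gives the standard closed forms $\nabla_\theta \ell = X^T(y - h)$ and $H = -X^T S X$ with $S = \mathrm{diag}(h(1 - h))$. The sketch below applies the Newton-Raphson update using those forms; the function name `newton_logistic` and the tiny dataset are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression by Newton-Raphson: theta := theta - H^{-1} grad.
def newton_logistic(X, y, iters=10):
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        h = sigmoid(X @ theta)
        grad = X.T @ (y - h)                      # gradient of the log-likelihood
        H = -X.T @ (X * (h * (1 - h))[:, None])   # Hessian: -X^T S X, S = diag(h(1-h))
        theta = theta - np.linalg.solve(H, grad)  # Newton-Raphson step
    return theta

# Illustrative usage on a tiny made-up dataset (first column is the intercept).
X = np.array([[1, 0.5], [1, 1.5], [1, 2.0], [1, 3.0], [1, 4.0], [1, 5.0]])
y = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])
print(newton_logistic(X, y))   # converges in a handful of iterations
```

Note that because the log-likelihood is concave, $H$ is negative definite and each step moves uphill; this quadratic convergence near the optimum is what makes Newton's method attractive here despite the cost of solving a linear system per iteration.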
