$$h(x) = \sum_{i=0}^{n} \theta_i x_i = \theta^T x,$$

where on the right-hand side above we are viewing θ and x both as vectors, and here n is the number of input variables (not counting x 0 ).

Now, given a training set, how do we pick, or learn, the parameters $\theta$? One reasonable method seems to be to make $h(x)$ close to $y$, at least for the training examples we have. To formalize this, we will define a function that measures, for each value of the $\theta$'s, how close the $h(x^{(i)})$'s are to the corresponding $y^{(i)}$'s. We define the cost function:

$$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2.$$

If you've seen linear regression before, you may recognize this as the familiar least-squares cost function that gives rise to the ordinary least squares regression model. Whether or not you have seen it previously, let's keep going, and we'll eventually show this to be a special case of a much broader family of algorithms.
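The hypothesis and cost function above can be computed directly. Here is a minimal sketch in Python; the tiny dataset is made up purely for illustration (with $x_0 = 1$ as the intercept term):

```python
# Hypothesis and least-squares cost for linear regression.
# The dataset (X, y) below is invented for illustration only.

def h(theta, x):
    """Hypothesis h_theta(x) = theta^T x (x includes x_0 = 1)."""
    return sum(t * xi for t, xi in zip(theta, x))

def J(theta, X, y):
    """Cost J(theta) = (1/2) * sum_i (h_theta(x_i) - y_i)^2."""
    return 0.5 * sum((h(theta, x) - yi) ** 2 for x, yi in zip(X, y))

# Tiny training set: x_0 = 1 (intercept term), one input feature.
X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [2.0, 3.0, 4.0]

# theta = [1, 1] fits y = 1 + x exactly, so the cost is 0.
print(J([1.0, 1.0], X, y))  # 0.0
```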

LMS algorithm

We want to choose θ so as to minimize J ( θ ) . To do so, let's use a search algorithm that starts with some “initial guess” for θ , and that repeatedly changes θ to make J ( θ ) smaller, until hopefully we converge to a value of θ that minimizes J ( θ ) . Specifically, let's consider the gradient descent algorithm, which starts with some initial θ , and repeatedly performs the update:

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta).$$

(This update is simultaneously performed for all values of $j = 0, \ldots, n$.) Here, $\alpha$ is called the learning rate. This is a very natural algorithm that repeatedly takes a step in the direction of steepest decrease of $J$.
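To see the update rule in action before deriving the derivative for the least-squares cost, here is a sketch on a one-dimensional example, $J(\theta) = (\theta - 3)^2$, whose derivative is known in closed form. The function and learning rate are chosen arbitrarily for illustration:

```python
# Gradient descent on a 1-D illustrative example: J(theta) = (theta - 3)^2,
# whose derivative dJ/dtheta = 2*(theta - 3) is known in closed form.

alpha = 0.1   # learning rate
theta = 0.0   # "initial guess"

for _ in range(100):
    grad = 2.0 * (theta - 3.0)    # dJ/dtheta at the current theta
    theta = theta - alpha * grad  # the update theta := theta - alpha * dJ/dtheta

print(round(theta, 4))  # 3.0 -- converges to the minimizer theta = 3
```

Each step moves $\theta$ a distance proportional to the slope, so the steps shrink as the minimum is approached.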

In order to implement this algorithm, we have to work out what the partial derivative term on the right-hand side is. Let's first work it out for the case where we have only one training example $(x, y)$, so that we can neglect the sum in the definition of $J$. We have:

$$\begin{aligned}
\frac{\partial}{\partial \theta_j} J(\theta) &= \frac{\partial}{\partial \theta_j} \frac{1}{2} \left( h_\theta(x) - y \right)^2 \\
&= 2 \cdot \frac{1}{2} \left( h_\theta(x) - y \right) \cdot \frac{\partial}{\partial \theta_j} \left( h_\theta(x) - y \right) \\
&= \left( h_\theta(x) - y \right) \cdot \frac{\partial}{\partial \theta_j} \left( \sum_{i=0}^{n} \theta_i x_i - y \right) \\
&= \left( h_\theta(x) - y \right) x_j
\end{aligned}$$
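The derivative formula can be sanity-checked numerically against a finite-difference approximation of $J$. The particular values of $\theta$, $x$, and $y$ below are arbitrary, chosen only for illustration:

```python
# Finite-difference check of the single-example derivative formula
#   d/d theta_j J(theta) = (h_theta(x) - y) * x_j.
# The numbers below are arbitrary, for illustration only.

def h(theta, x):
    return sum(t * xi for t, xi in zip(theta, x))

def J(theta, x, y):
    return 0.5 * (h(theta, x) - y) ** 2

theta = [0.5, -1.0, 2.0]
x = [1.0, 3.0, -2.0]  # x_0 = 1 is the intercept term
y = 1.5

j = 1  # which component of theta to check
analytic = (h(theta, x) - y) * x[j]

# Central finite difference: (J(theta_j + eps) - J(theta_j - eps)) / (2*eps)
eps = 1e-6
theta_plus = theta.copy();  theta_plus[j] += eps
theta_minus = theta.copy(); theta_minus[j] -= eps
numeric = (J(theta_plus, x, y) - J(theta_minus, x, y)) / (2 * eps)

print(abs(analytic - numeric) < 1e-6)  # True -- the two agree
```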

For a single training example, this gives the update rule:

(We use the notation "$a := b$" to denote an operation (in a computer program) in which we set the value of a variable $a$ to be equal to the value of $b$. In other words, this operation overwrites $a$ with the value of $b$. In contrast, we will write "$a = b$" when we are asserting a statement of fact, that the value of $a$ is equal to the value of $b$.)

$$\theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}.$$

This rule is called the LMS update rule (LMS stands for "least mean squares"), and is also known as the Widrow-Hoff learning rule. It has several properties that seem natural and intuitive. For instance, the magnitude of the update is proportional to the error term $(y^{(i)} - h_\theta(x^{(i)}))$; thus, if we encounter a training example on which our prediction nearly matches the actual value of $y^{(i)}$, then there is little need to change the parameters; in contrast, a larger change to the parameters will be made if our prediction $h_\theta(x^{(i)})$ has a large error (i.e., if it is very far from $y^{(i)}$).
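The error-proportional behavior of the LMS update can be seen directly in code. Here is a minimal sketch for a single training example, with made-up numbers; repeated applications drive the prediction toward the target:

```python
# The LMS update for a single training example (x, y):
#   theta_j := theta_j + alpha * (y - h_theta(x)) * x_j
# The example values are invented for illustration only.

def h(theta, x):
    return sum(t * xi for t, xi in zip(theta, x))

def lms_step(theta, x, y, alpha):
    err = y - h(theta, x)  # error term y - h_theta(x); scales the whole update
    return [t + alpha * err * xj for t, xj in zip(theta, x)]

theta = [0.0, 0.0]
x, y = [1.0, 2.0], 5.0  # x_0 = 1 (intercept), one feature

for _ in range(50):
    theta = lms_step(theta, x, y, alpha=0.1)

print(round(h(theta, x), 4))  # 5.0 -- the prediction approaches y
```

Early steps make large parameter changes (the error is large); as the prediction nears $y$, the updates shrink, exactly as the text describes.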

We derived the LMS rule for the case of a single training example. There are two ways to modify this method for a training set of more than one example. The first is to replace it with the following algorithm:

Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4