
Least squares revisited

Armed with the tools of matrix derivatives, let us now proceed to find in closed form the value of θ that minimizes J(θ). We begin by rewriting J in matrix-vectorial notation.

Given a training set, define the design matrix X to be the m-by-n matrix (actually m-by-(n+1), if we include the intercept term) that contains the training examples' input values in its rows:

$$X = \begin{bmatrix} (x^{(1)})^T \\ (x^{(2)})^T \\ \vdots \\ (x^{(m)})^T \end{bmatrix}.$$

Also, let y be the m-dimensional vector containing all the target values from the training set:

$$y = \begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)} \end{bmatrix}.$$
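
As a quick illustration (not part of the original notes), the following NumPy sketch builds a design matrix X with an intercept column and the target vector y from a tiny, made-up training set; the living-area and price numbers are invented purely for the example.

```python
import numpy as np

# Hypothetical training set (values made up for illustration):
# living areas in square feet and house prices in $1000s.
living_area = np.array([2104.0, 1600.0, 2400.0, 1416.0, 3000.0])
prices      = np.array([400.0, 330.0, 369.0, 232.0, 540.0])

m = living_area.shape[0]

# Design matrix X: one training example per row, with a leading column of
# ones for the intercept term, so X is m-by-(n+1).
X = np.column_stack([np.ones(m), living_area])

# y: the m-dimensional vector of all target values.
y = prices
```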

Now, since $h_\theta(x^{(i)}) = (x^{(i)})^T \theta$, we can easily verify that

$$X\theta - y = \begin{bmatrix} (x^{(1)})^T\theta \\ \vdots \\ (x^{(m)})^T\theta \end{bmatrix} - \begin{bmatrix} y^{(1)} \\ \vdots \\ y^{(m)} \end{bmatrix} = \begin{bmatrix} h_\theta(x^{(1)}) - y^{(1)} \\ \vdots \\ h_\theta(x^{(m)}) - y^{(m)} \end{bmatrix}.$$

Thus, using the fact that for a vector z, we have that $z^T z = \sum_i z_i^2$:

$$\frac{1}{2}(X\theta - y)^T(X\theta - y) = \frac{1}{2}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 = J(\theta)$$
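
Continuing the sketch above (X and y as built earlier, with an arbitrary θ chosen just for the check), one can verify numerically that this vectorized expression agrees with the sum-of-squared-errors form of J(θ):

```python
theta = np.array([0.0, 0.15])   # arbitrary parameter vector, just for the check

# Vectorized cost 1/2 (X theta - y)^T (X theta - y) ...
J_matrix = 0.5 * (X @ theta - y) @ (X @ theta - y)
# ... versus the explicit sum over training examples.
J_sum = 0.5 * np.sum((X @ theta - y) ** 2)

assert np.isclose(J_matrix, J_sum)
```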

Finally, to minimize J, let's find its derivatives with respect to θ. Combining the second and third equations in [link], we find that

$$\nabla_{A^T}\,\mathrm{tr}\,ABA^TC = B^TA^TC^T + BA^TC.$$

Hence,

$$\begin{aligned}
\nabla_\theta J(\theta) &= \nabla_\theta\,\frac{1}{2}(X\theta - y)^T(X\theta - y) \\
&= \frac{1}{2}\,\nabla_\theta\left(\theta^T X^T X\theta - \theta^T X^T y - y^T X\theta + y^T y\right) \\
&= \frac{1}{2}\,\nabla_\theta\,\mathrm{tr}\left(\theta^T X^T X\theta - \theta^T X^T y - y^T X\theta + y^T y\right) \\
&= \frac{1}{2}\,\nabla_\theta\left(\mathrm{tr}\,\theta^T X^T X\theta - 2\,\mathrm{tr}\,y^T X\theta\right) \\
&= \frac{1}{2}\left(X^T X\theta + X^T X\theta - 2X^T y\right) \\
&= X^T X\theta - X^T y
\end{aligned}$$

In the third step, we used the fact that the trace of a real number is just the real number; the fourth step used the fact that $\mathrm{tr}\,A = \mathrm{tr}\,A^T$; and the fifth step used Equation [link] with $A^T = \theta$, $B = B^T = X^T X$, and $C = I$, and Equation [link]. To minimize J, we set its derivatives to zero, and obtain the normal equations:

$$X^T X \theta = X^T y$$

Thus, the value of θ that minimizes J(θ) is given in closed form by the equation

$$\theta = (X^T X)^{-1} X^T y.$$
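
In code, a minimal sketch of this closed-form solution (reusing X and y from the earlier sketch) solves the normal equations directly; np.linalg.solve avoids forming the explicit inverse of XᵀX, and np.linalg.lstsq is a more robust alternative when XᵀX is ill-conditioned:

```python
# Solve X^T X theta = X^T y for theta.
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Sanity check: at the minimizer, the gradient X^T X theta - X^T y vanishes,
# i.e. X^T X theta_hat should equal X^T y (up to floating-point error).
assert np.allclose(X.T @ X @ theta_hat, X.T @ y)
```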

Probabilistic interpretation

When faced with a regression problem, why might linear regression, and specifically why might the least-squares cost function J, be a reasonable choice? In this section, we will give a set of probabilistic assumptions under which least-squares regression is derived as a very natural algorithm.

Let us assume that the target variables and the inputs are related via the equation

$$y^{(i)} = \theta^T x^{(i)} + \epsilon^{(i)},$$

where $\epsilon^{(i)}$ is an error term that captures either unmodeled effects (such as if there are some features very pertinent to predicting housing price, but that we'd left out of the regression), or random noise. Let us further assume that the $\epsilon^{(i)}$ are distributed IID (independently and identically distributed) according to a Gaussian distribution (also called a Normal distribution) with mean zero and some variance σ². We can write this assumption as “$\epsilon^{(i)} \sim \mathcal{N}(0, \sigma^2)$.” I.e., the density of $\epsilon^{(i)}$ is given by

$$p(\epsilon^{(i)}) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(\epsilon^{(i)})^2}{2\sigma^2}\right).$$

This implies that

$$p(y^{(i)} \mid x^{(i)}; \theta) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right).$$

The notation “$p(y^{(i)} \mid x^{(i)}; \theta)$” indicates that this is the distribution of $y^{(i)}$ given $x^{(i)}$ and parameterized by θ. Note that we should not condition on θ (“$p(y^{(i)} \mid x^{(i)}, \theta)$”), since θ is not a random variable. We can also write the distribution of $y^{(i)}$ as $y^{(i)} \mid x^{(i)}; \theta \sim \mathcal{N}(\theta^T x^{(i)}, \sigma^2)$.
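
To make this generative assumption concrete, here is a small sketch (with made-up values for θ and σ, and reusing the design matrix X from the earlier sketch) that simulates targets from $y^{(i)} = \theta^T x^{(i)} + \epsilon^{(i)}$ with i.i.d. Gaussian noise:

```python
rng = np.random.default_rng(0)

theta_true = np.array([10.0, 0.17])   # hypothetical "true" parameters
sigma = 25.0                          # hypothetical noise standard deviation

eps = rng.normal(loc=0.0, scale=sigma, size=m)   # eps(i) ~ N(0, sigma^2), i.i.d.
y_sim = X @ theta_true + eps                     # y(i) = theta^T x(i) + eps(i)
```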

Given X (the design matrix, which contains all the $x^{(i)}$'s) and θ, what is the distribution of the $y^{(i)}$'s? The probability of the data is given by $p(y \mid X; \theta)$. This quantity is typically viewed as a function of y (and perhaps X), for a fixed value of θ. When we wish to explicitly view this as a function of θ, we will instead call it the likelihood function:

$$L(\theta) = L(\theta; X, y) = p(y \mid X; \theta).$$
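
As an illustrative sketch (not from the original text), the log of this likelihood under the Gaussian noise model is just the sum over examples of $\log p(y^{(i)} \mid x^{(i)}; \theta)$; the function below evaluates it for the data and the θ estimates from the earlier sketches, with σ treated as known:

```python
def log_likelihood(theta, X, y, sigma):
    # log L(theta) = sum_i log p(y(i) | x(i); theta) under the Gaussian model.
    residuals = y - X @ theta
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                  - residuals**2 / (2.0 * sigma**2))

# The least-squares estimate maximizes this quantity over theta (for any fixed
# sigma), so it scores at least as high as an arbitrary alternative:
print(log_likelihood(theta_hat, X, y, sigma=25.0))
print(log_likelihood(np.array([0.0, 0.10]), X, y, sigma=25.0))
```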

Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4