$$
\begin{aligned}
\phi &= \frac{1}{m} \sum_{i=1}^{m} 1\{y^{(i)} = 1\} \\
\mu_0 &= \frac{\sum_{i=1}^{m} 1\{y^{(i)} = 0\}\, x^{(i)}}{\sum_{i=1}^{m} 1\{y^{(i)} = 0\}} \\
\mu_1 &= \frac{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}\, x^{(i)}}{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}} \\
\Sigma &= \frac{1}{m} \sum_{i=1}^{m} \left(x^{(i)} - \mu_{y^{(i)}}\right)\left(x^{(i)} - \mu_{y^{(i)}}\right)^T.
\end{aligned}
$$
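As a concrete illustration, here is a minimal NumPy sketch of these maximum-likelihood estimates (the function name and interface are ours, not from the notes; `X` is an $m \times n$ matrix of inputs and `y` a vector of 0/1 labels):

```python
import numpy as np

def fit_gda(X, y):
    """Maximum-likelihood estimates for GDA with a shared covariance matrix."""
    m = X.shape[0]
    phi = np.mean(y == 1)             # fraction of examples with y = 1
    mu0 = X[y == 0].mean(axis=0)      # mean of the negative class
    mu1 = X[y == 1].mean(axis=0)      # mean of the positive class
    # Center each example by its own class mean, then average the outer products.
    centered = X - np.where((y == 1)[:, None], mu1, mu0)
    Sigma = centered.T @ centered / m
    return phi, mu0, mu1, Sigma
```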

Pictorially, what the algorithm is doing can be seen as follows:

[Figure: two classes of training data, shown as circles in different regions of the plane, with the contours of the two fitted Gaussians and a straight-line decision boundary (roughly along y = −x) separating them.]

Shown in the figure are the training set, as well as the contours of the two Gaussian distributions that have been fit to the data in each of the two classes. Note that the two Gaussians have contours that are the same shape and orientation, since they share a covariance matrix $\Sigma$, but they have different means $\mu_0$ and $\mu_1$. Also shown in the figure is the straight line giving the decision boundary at which $p(y = 1 \mid x) = 0.5$. On one side of the boundary, we'll predict $y = 1$ to be the most likely outcome, and on the other side, we'll predict $y = 0$.
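In code, this prediction rule amounts to comparing the two class posteriors; below is a minimal sketch (the function name is ours, assuming parameters fitted as above). By Bayes' rule, $p(y = 1 \mid x) > 0.5$ exactly when $p(x \mid y = 1)\,p(y = 1) > p(x \mid y = 0)\,p(y = 0)$, so we can compare the joint densities directly.

```python
from scipy.stats import multivariate_normal

def predict_gda(x, phi, mu0, mu1, Sigma):
    """Predict the label for a single input x under a fitted GDA model.

    Compares the joint densities p(x|y=k) p(y=k); the larger one
    determines which side of the p(y=1|x) = 0.5 boundary x falls on.
    """
    p1 = multivariate_normal.pdf(x, mean=mu1, cov=Sigma) * phi
    p0 = multivariate_normal.pdf(x, mean=mu0, cov=Sigma) * (1 - phi)
    return 1 if p1 > p0 else 0
```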

Discussion: GDA and logistic regression

The GDA model has an interesting relationship to logistic regression. If we view the quantity $p(y = 1 \mid x; \phi, \mu_0, \mu_1, \Sigma)$ as a function of $x$, we'll find that it can be expressed in the form

$$
p(y = 1 \mid x; \phi, \Sigma, \mu_0, \mu_1) = \frac{1}{1 + \exp(-\theta^T x)},
$$

where $\theta$ is some appropriate function of $\phi, \Sigma, \mu_0, \mu_1$. (This uses the convention of redefining the $x^{(i)}$'s on the right-hand side to be $(n+1)$-dimensional vectors by adding the extra coordinate $x_0^{(i)} = 1$; see problem set 1.) This is exactly the form that logistic regression, a discriminative algorithm, used to model $p(y = 1 \mid x)$.
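The notes leave $\theta$ implicit, but expanding the two Gaussian densities in Bayes' rule and simplifying gives its explicit form (a derivation sketch: the quadratic terms $x^T \Sigma^{-1} x$ cancel because both classes share $\Sigma$, which is exactly what makes the log-odds linear in $x$):

$$
\theta_{1:n} = \Sigma^{-1}(\mu_1 - \mu_0), \qquad
\theta_0 = \frac{1}{2}\left(\mu_0^T \Sigma^{-1} \mu_0 - \mu_1^T \Sigma^{-1} \mu_1\right) + \log\frac{\phi}{1 - \phi},
$$

where $\theta_0$ is the coefficient on the intercept coordinate $x_0 = 1$.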

When would we prefer one model over another? GDA and logistic regression will, in general, give different decision boundaries when trained on the same dataset. Which is better?

We just argued that if $p(x \mid y)$ is multivariate Gaussian (with shared $\Sigma$), then $p(y \mid x)$ necessarily follows a logistic function. The converse, however, is not true; i.e., $p(y \mid x)$ being a logistic function does not imply $p(x \mid y)$ is multivariate Gaussian. This shows that GDA makes stronger modeling assumptions about the data than does logistic regression. It turns out that when these modeling assumptions are correct, then GDA will find better fits to the data, and is a better model. Specifically, when $p(x \mid y)$ is indeed Gaussian (with shared $\Sigma$), then GDA is asymptotically efficient. Informally, this means that in the limit of very large training sets (large $m$), there is no algorithm that is strictly better than GDA (in terms of, say, how accurately they estimate $p(y \mid x)$). In particular, it can be shown that in this setting, GDA will be a better algorithm than logistic regression; and more generally, even for small training set sizes, we would generally expect GDA to do better.

In contrast, by making significantly weaker assumptions, logistic regression is also more robust and less sensitive to incorrect modeling assumptions. There are many different sets of assumptions that would lead to $p(y \mid x)$ taking the form of a logistic function. For example, if $x \mid y = 0 \sim \mathrm{Poisson}(\lambda_0)$ and $x \mid y = 1 \sim \mathrm{Poisson}(\lambda_1)$, then $p(y \mid x)$ will be logistic. Logistic regression will also work well on Poisson data like this. But if we were to use GDA on such data, and fit Gaussian distributions to such non-Gaussian data, then the results will be less predictable, and GDA may (or may not) do well.
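To verify the Poisson claim (a short check, not in the original text), write out the log-odds using the Poisson pmf $p(x; \lambda) = \lambda^x e^{-\lambda}/x!$:

$$
\log \frac{p(y = 1 \mid x)}{p(y = 0 \mid x)}
= \log \frac{p(x \mid y = 1)\,\phi}{p(x \mid y = 0)\,(1 - \phi)}
= x \log\frac{\lambda_1}{\lambda_0} + (\lambda_0 - \lambda_1) + \log\frac{\phi}{1 - \phi},
$$

which is linear in $x$, so $p(y = 1 \mid x)$ again has the logistic form $1/(1 + \exp(-\theta^T x))$.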

To summarize: GDA makes stronger modeling assumptions, and is more data efficient (i.e., requires less training data to learn "well") when the modeling assumptions are correct or at least approximately correct. Logistic regression makes weaker assumptions, and is significantly more robust to deviations from modeling assumptions. Specifically, when the data is indeed non-Gaussian, then in the limit of large datasets, logistic regression will almost always do better than GDA. For this reason, in practice logistic regression is used more often than GDA. (Some related considerations about discriminative vs. generative models also apply for the Naive Bayes algorithm that we discuss next, but the Naive Bayes algorithm is still considered a very good, and certainly also a very popular, classification algorithm.)





Source: OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013. Download for free at http://cnx.org/content/col11500/1.4
