$$\phi = \frac{1}{m}\sum_{i=1}^{m} 1\{y^{(i)} = 1\}$$
$$\mu_0 = \frac{\sum_{i=1}^{m} 1\{y^{(i)} = 0\}\, x^{(i)}}{\sum_{i=1}^{m} 1\{y^{(i)} = 0\}}$$
$$\mu_1 = \frac{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}\, x^{(i)}}{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}}$$
$$\Sigma = \frac{1}{m}\sum_{i=1}^{m} \big(x^{(i)} - \mu_{y^{(i)}}\big)\big(x^{(i)} - \mu_{y^{(i)}}\big)^T.$$
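As an illustration (not part of the original notes), these closed-form estimates translate directly into a few lines of NumPy; the function name fit_gda and the arrays X and y are hypothetical.

import numpy as np

def fit_gda(X, y):
    # X: (m, n) array of inputs; y: (m,) array of 0/1 labels.
    m = X.shape[0]
    phi = np.mean(y == 1)                          # fraction of positive examples
    mu0 = X[y == 0].mean(axis=0)                   # mean of the class-0 inputs
    mu1 = X[y == 1].mean(axis=0)                   # mean of the class-1 inputs
    mu_y = np.where((y == 1)[:, None], mu1, mu0)   # each example's own class mean
    diff = X - mu_y
    Sigma = diff.T @ diff / m                      # shared covariance estimate
    return phi, mu0, mu1, Sigma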

Pictorially, what the algorithm is doing can be seen as follows:

[Figure: the training examples from the two classes cluster in different quadrants of the plane, with a straight line (roughly y = −x) separating them.]

Shown in the figure are the training set, as well as the contours of the two Gaussian distributions that have been fit to the data in each of the two classes. Note that the two Gaussians have contours that are the same shape and orientation, since they share a covariance matrix Σ, but they have different means μ_0 and μ_1. Also shown in the figure is the straight line giving the decision boundary at which p(y = 1|x) = 0.5. On one side of the boundary, we'll predict y = 1 to be the most likely outcome, and on the other side, we'll predict y = 0.

Discussion: GDA and logistic regression

The GDA model has an interesting relationship to logistic regression. If we view the quantity p(y = 1|x; φ, μ_0, μ_1, Σ) as a function of x, we'll find that it can be expressed in the form

$$p(y = 1 \mid x; \phi, \Sigma, \mu_0, \mu_1) = \frac{1}{1 + \exp(-\theta^T x)},$$

where θ is some appropriate function of φ, Σ, μ_0, μ_1. (This uses the convention of redefining the x^{(i)}'s on the right-hand side to be (n+1)-dimensional vectors by adding the extra coordinate x_0^{(i)} = 1; see problem set 1.) This is exactly the form that logistic regression, a discriminative algorithm, used to model p(y = 1|x).
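As a sketch (not in the original text, where the derivation is left to the problem set), the mapping from (φ, Σ, μ_0, μ_1) to θ can be written out explicitly for the shared-Σ case and checked numerically against Bayes' rule; the parameter values below are arbitrary, chosen only for the check.

import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary GDA parameters and query points, used only for the numerical check
phi = 0.3
mu0 = np.array([-1.0, 0.0])
mu1 = np.array([1.0, 1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
X = np.random.default_rng(0).normal(size=(1000, 2))

# theta implied by the GDA parameters; theta0 is the intercept paired with x_0 = 1
Sinv = np.linalg.inv(Sigma)
theta = Sinv @ (mu1 - mu0)
theta0 = -0.5 * mu1 @ Sinv @ mu1 + 0.5 * mu0 @ Sinv @ mu0 + np.log(phi / (1 - phi))

# Posterior from Bayes' rule vs. the logistic function of a linear score in x
p1 = phi * multivariate_normal.pdf(X, mean=mu1, cov=Sigma)
p0 = (1 - phi) * multivariate_normal.pdf(X, mean=mu0, cov=Sigma)
bayes_posterior = p1 / (p1 + p0)
logistic_posterior = 1.0 / (1.0 + np.exp(-(X @ theta + theta0)))
print(np.max(np.abs(bayes_posterior - logistic_posterior)))   # agrees to machine precision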

When would we prefer one model over another? GDA and logistic regression will, in general, give different decision boundaries when trained on the same dataset. Which is better?

We just argued that if p(x|y) is multivariate Gaussian (with shared Σ), then p(y|x) necessarily follows a logistic function. The converse, however, is not true; i.e., p(y|x) being a logistic function does not imply p(x|y) is multivariate Gaussian. This shows that GDA makes stronger modeling assumptions about the data than does logistic regression. It turns out that when these modeling assumptions are correct, then GDA will find better fits to the data, and is a better model. Specifically, when p(x|y) is indeed Gaussian (with shared Σ), then GDA is asymptotically efficient. Informally, this means that in the limit of very large training sets (large m), there is no algorithm that is strictly better than GDA (in terms of, say, how accurately they estimate p(y|x)). In particular, it can be shown that in this setting GDA will be a better algorithm than logistic regression; and more generally, even for small training set sizes, we would generally expect GDA to do better.

In contrast, by making significantly weaker assumptions, logistic regression is also more robust and less sensitive to incorrect modeling assumptions. There are many different sets of assumptions that would lead to p(y|x) taking the form of a logistic function. For example, if x|y = 0 ∼ Poisson(λ_0) and x|y = 1 ∼ Poisson(λ_1), then p(y|x) will be logistic. Logistic regression will also work well on Poisson data like this. But if we were to use GDA on such data (fitting Gaussian distributions to such non-Gaussian data), then the results will be less predictable, and GDA may (or may not) do well.
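To spell out the Poisson claim (a short derivation added here, not in the original), for a scalar count x with x|y ∼ Poisson(λ_y) and p(y = 1) = φ, Bayes' rule gives

$$p(y = 1 \mid x) = \frac{\phi\, e^{-\lambda_1}\lambda_1^{x}/x!}{\phi\, e^{-\lambda_1}\lambda_1^{x}/x! + (1-\phi)\, e^{-\lambda_0}\lambda_0^{x}/x!} = \frac{1}{1 + \exp\!\big(-\big(x\log(\lambda_1/\lambda_0) + (\lambda_0 - \lambda_1) + \log\tfrac{\phi}{1-\phi}\big)\big)},$$

which is a logistic function of x, with the constant term playing the role of the intercept θ_0.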

To summarize: GDA makes stronger modeling assumptions, and is more data efficient (i.e., requires less training data to learn “well”) when the modeling assumptions are correct or at least approximately correct. Logistic regression makes weaker assumptions, and is significantly more robust to deviations from modeling assumptions. Specifically, when the data is indeed non-Gaussian, then in the limit of large datasets, logistic regression will almost always do better than GDA. For this reason, in practice logistic regression is used more often than GDA. (Some related considerations about discriminative vs. generative models also apply for the Naive Bayes algorithm that we discuss next, but the Naive Bayes algorithm is still considered a very good, and is certainly also a very popular, classification algorithm.)
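As an optional experiment (not in the original notes), the data-efficiency claim can be probed on synthetic data that satisfies the GDA assumptions; scikit-learn's LinearDiscriminantAnalysis fits the same shared-covariance Gaussian model and so stands in for GDA here, and the means, covariance, and sample sizes below are arbitrary choices.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
mu0, mu1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])

def sample(m):
    # Draw m examples from the shared-covariance Gaussian model with phi = 0.5
    y = rng.integers(0, 2, m)
    X = np.where(y[:, None] == 1,
                 rng.multivariate_normal(mu1, Sigma, m),
                 rng.multivariate_normal(mu0, Sigma, m))
    return X, y

X_test, y_test = sample(20000)
for m in (20, 100, 1000):
    gda_acc, lr_acc = [], []
    for _ in range(50):                      # average over 50 random training sets
        X_train, y_train = sample(m)
        gda_acc.append(LinearDiscriminantAnalysis().fit(X_train, y_train).score(X_test, y_test))
        lr_acc.append(LogisticRegression().fit(X_train, y_train).score(X_test, y_test))
    print(m, round(float(np.mean(gda_acc)), 4), round(float(np.mean(lr_acc)), 4))

Consistent with the discussion above, one would expect the GDA-style fit to hold a slight edge at the smallest training sizes and the gap to shrink as m grows.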
