
Don’t worry too much about this formula for the density. You rarely end up needing to use it, but the two key quantities are: this vector mu, which is the mean of the Gaussian, and this matrix Sigma, which is the covariance matrix. So Sigma, right, the definition of the covariance of a vector-valued random variable, is Sigma = E[(X - mu)(X - mu)^T], okay?
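
(For reference: the density formula being pointed to on the slide is not reproduced in the transcript, but it is the standard multivariate Gaussian density, which has the usual form:)

```latex
p(x; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}}
  \exp\!\left( -\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu) \right),
\qquad
\Sigma = \mathrm{E}\!\left[ (X-\mu)(X-\mu)^{T} \right]
```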

And, actually, if this doesn’t look familiar to you, you might re-watch the discussion section that the TAs held last Friday or the one that they’ll be holding later this week on, sort of, a recap of probability, okay?

So the multivariate Gaussian is parameterized by a mean and a covariance matrix, and let me just – can I have the laptop displayed, please? I’ll just go ahead and actually show you, you know, graphically, the effects of varying the parameters of a Gaussian. So what I have up here is the density of a zero-mean Gaussian with covariance matrix equal to the identity. The covariance matrix is shown in the upper right-hand corner of the slide, and there’s the familiar bell-shaped curve in two dimensions.

And so if I shrink the covariance matrix, instead of covariance equal to the identity, then the Gaussian becomes more peaked, and if I widen the covariance, say Sigma = 2I, then the distribution – well, the density becomes more spread out, okay?
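
(A minimal sketch of the plots being shown, assuming NumPy, SciPy, and Matplotlib; the grid range and the three scale factors are illustrative choices, not the slide’s actual numbers:)

```python
# Hypothetical sketch (not the lecture's actual code): how scaling the
# covariance matrix changes how peaked or spread out a 2-D Gaussian is.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

# Grid of (z1, z2) points at which to evaluate each density.
z1, z2 = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
grid = np.dstack((z1, z2))

# Zero mean throughout; only the scale of the covariance changes.
for scale, title in [(0.5, "Sigma = 0.5*I (more peaked)"),
                     (1.0, "Sigma = I (standard normal)"),
                     (2.0, "Sigma = 2*I (more spread out)")]:
    density = multivariate_normal(mean=[0, 0], cov=scale * np.eye(2)).pdf(grid)
    plt.figure()
    plt.title(title)
    plt.contourf(z1, z2, density)
plt.show()
```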

This is back to the standard normal, with identity covariance. If I increase the off-diagonals of the covariance matrix, right, if I make the variables correlated, then the Gaussian becomes flattened out in this X = Y direction, and if I increase it even further, then my variables, X and Y, right – excuse me, Z1 and Z2 are my two variables on the horizontal axes – become even more correlated.

I’ll just show the same thing in contours. The standard normal distribution has contours that are actually circles. Because of the aspect ratio, these look like ellipses, but these should actually be circles. And if you increase the off-diagonals of the Gaussian’s covariance matrix, then the contours become ellipses aligned along the, sort of, 45-degree angle in this example.

This is the same thing. Here’s an example of a Gaussian density with negative covariances. So now the correlation goes the other way, so that even strong [inaudible] of covariance, and the same thing in contours: this is a Gaussian with negative entries on the off-diagonals, and then even larger negative entries on the off-diagonals, okay?
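
(Similarly, a hedged sketch of the contour plots just described; the specific off-diagonal values 0.5, 0.8, and -0.8 are illustrative guesses, not the slide’s actual numbers:)

```python
# Hypothetical sketch: contour plots of zero-mean Gaussians as the
# off-diagonal entries of the covariance matrix vary. Positive values tilt
# the ellipses toward the 45-degree (z1 = z2) direction; negative values
# tilt them the other way.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

z1, z2 = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
grid = np.dstack((z1, z2))

covariances = [
    np.array([[1.0, 0.0], [0.0, 1.0]]),    # identity: circular contours
    np.array([[1.0, 0.5], [0.5, 1.0]]),    # positive correlation
    np.array([[1.0, 0.8], [0.8, 1.0]]),    # stronger positive correlation
    np.array([[1.0, -0.8], [-0.8, 1.0]]),  # negative correlation
]

for cov in covariances:
    plt.figure()
    plt.contour(z1, z2, multivariate_normal(mean=[0, 0], cov=cov).pdf(grid))
    plt.gca().set_aspect("equal")  # so the circles actually look circular
plt.show()
```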

And the other parameter for the Gaussian is the mean parameter. So if this is with mu = 0, and as you change the mean parameter, this is mu = 0.15, the location of the Gaussian just moves around, okay?
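
(And a quick illustrative sketch of the mean shift; the shift values here are arbitrary, not the slide’s:)

```python
# Hypothetical sketch: shifting the mean just translates the density; its
# shape (the covariance) is unchanged. The shift [1.5, 0.15] is arbitrary.
from scipy.stats import multivariate_normal

centered = multivariate_normal(mean=[0.0, 0.0], cov=[[1, 0], [0, 1]])
shifted = multivariate_normal(mean=[1.5, 0.15], cov=[[1, 0], [0, 1]])

# The peak value is identical; only its location moves.
print(centered.pdf([0.0, 0.0]))   # ~0.1592, i.e. 1/(2*pi)
print(shifted.pdf([1.5, 0.15]))   # same value, at the new mean
```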

All right. So that was a quick primer on what Gaussians look like, and here’s a roadmap, or a picture to keep in mind, for when we describe the Gaussian discriminant analysis algorithm. This is what we’re going to do: here’s the training set, and in the Gaussian discriminant analysis algorithm, what I’m going to do is look at the positive examples, say the crosses, and, looking at only the positive examples, fit a Gaussian distribution to them. So maybe I end up with a Gaussian distribution like that, okay? So that’s p(x | y = 1).
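
(Here is a minimal sketch of that step: fitting a maximum-likelihood Gaussian to the positive examples only, to get p(x | y = 1). The toy data and the helper name fit_class_conditional are hypothetical; full GDA also fits a Gaussian to the negative examples and a Bernoulli prior on y, which comes next in the lecture:)

```python
# Hypothetical sketch of the step just described: fit a Gaussian to the
# positive examples only, giving p(x | y = 1).
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_conditional(X, y, label=1):
    """Maximum-likelihood Gaussian fit to the examples of one class."""
    Xc = X[y == label]                 # keep only that class's examples
    mu = Xc.mean(axis=0)               # sample mean
    diffs = Xc - mu
    Sigma = diffs.T @ diffs / len(Xc)  # MLE covariance (divide by m)
    return multivariate_normal(mean=mu, cov=Sigma)

# Toy data: crosses (y = 1) clustered around (2, 2), circles around (-1, -1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 0.5, (50, 2)), rng.normal(-1, 0.5, (50, 2))])
y = np.array([1] * 50 + [0] * 50)

p_x_given_y1 = fit_class_conditional(X, y, label=1)
print(p_x_given_y1.pdf([2.0, 2.0]))    # high density near the positive cluster
```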

