
Generative learning algorithms

So far, we've mainly been talking about learning algorithms that model p(y|x; θ), the conditional distribution of y given x. For instance, logistic regression modeled p(y|x; θ) as h_θ(x) = g(θ^T x), where g is the sigmoid function. In these notes, we'll talk about a different type of learning algorithm.
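For concreteness, here is a minimal sketch of that hypothesis in Python/NumPy (the function names are just for illustration and are not part of the notes):

```python
import numpy as np

def sigmoid(z):
    # The logistic (sigmoid) function g(z) = 1 / (1 + e^{-z}).
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    # Logistic regression hypothesis h_theta(x) = g(theta^T x),
    # read as an estimate of p(y = 1 | x; theta).
    return sigmoid(theta @ x)
```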

Consider a classification problem in which we want to learn to distinguish between elephants (y = 1) and dogs (y = 0), based on some features of an animal. Given a training set, an algorithm like logistic regression or the perceptron algorithm (basically) tries to find a straight line—that is, a decision boundary—that separates the elephants and dogs. Then, to classify a new animal as either an elephant or a dog, it checks on which side of the decision boundary it falls, and makes its prediction accordingly.

Here's a different approach. First, looking at elephants, we can build a model of what elephants look like. Then, looking at dogs, we can build a separate model of what dogs look like. Finally, to classify a new animal, we can match the new animal against the elephant model, and match it against the dog model, to see whether the new animal looks more like the elephants or more like the dogs we had seen in the training set.

Algorithms that try to learn p(y|x) directly (such as logistic regression), or algorithms that try to learn mappings directly from the space of inputs X to the labels {0, 1} (such as the perceptron algorithm), are called discriminative learning algorithms. Here, we'll talk about algorithms that instead try to model p(x|y) (and p(y)). These algorithms are called generative learning algorithms. For instance, if y indicates whether an example is a dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs' features, and p(x|y = 1) models the distribution of elephants' features.

After modeling p(y) (called the class priors) and p(x|y), our algorithm can then use Bayes rule to derive the posterior distribution on y given x:

p(y|x) = \frac{p(x|y)\, p(y)}{p(x)}.

Here, the denominator is given by p(x) = p(x|y = 1) p(y = 1) + p(x|y = 0) p(y = 0) (you should be able to verify that this is true from the standard properties of probabilities), and thus can also be expressed in terms of the quantities p(x|y) and p(y) that we've learned. Actually, if we're calculating p(y|x) in order to make a prediction, then we don't actually need to calculate the denominator, since

\arg\max_y p(y|x) = \arg\max_y \frac{p(x|y)\, p(y)}{p(x)} = \arg\max_y p(x|y)\, p(y).
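As a rough sketch of how a generative classifier uses this, suppose we have already estimated the class priors p(y) and the class-conditional densities p(x|y) from the training set. The following Python fragment (with illustrative names, not taken from the notes) predicts by maximizing p(x|y) p(y); the shared denominator p(x) is simply dropped:

```python
def predict(x, class_priors, class_densities):
    """Return the label y maximizing p(x|y) * p(y).

    class_priors    -- dict mapping each label y to its prior p(y)
    class_densities -- dict mapping each label y to a callable x -> p(x|y)
    Both are assumed to have been estimated from the training set.
    """
    scores = {y: class_densities[y](x) * class_priors[y] for y in class_priors}
    # p(x) is the same for every y, so it does not affect the arg max.
    return max(scores, key=scores.get)
```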

Gaussian discriminant analysis

The first generative learning algorithm that we'll look at is Gaussian discriminant analysis (GDA). In this model, we'll assume that p(x|y) is distributed according to a multivariate normal distribution. Let's talk briefly about the properties of multivariate normal distributions before moving on to the GDA model itself.

The multivariate normal distribution

The multivariate normal distribution in n dimensions, also called the multivariate Gaussian distribution, is parameterized by a mean vector μ ∈ R^n and a covariance matrix Σ ∈ R^{n×n}, where Σ ≥ 0 is symmetric and positive semi-definite. Also written “N(μ, Σ)”, its density is given by:

p(x; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right).
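This density is straightforward to evaluate numerically. The following NumPy sketch (not part of the notes, and assuming Σ is strictly positive definite so that its determinant and inverse exist) transcribes the formula directly:

```python
import numpy as np

def gaussian_density(x, mu, Sigma):
    # Density of N(mu, Sigma) at x, for a positive-definite covariance Sigma.
    n = mu.shape[0]
    diff = x - mu
    norm_const = 1.0 / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma)))
    return norm_const * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))
```

For example, gaussian_density(np.zeros(2), np.zeros(2), np.eye(2)) evaluates to 1/(2π) ≈ 0.159, the height of the standard two-dimensional Gaussian at its mean.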

