
Mixtures of Gaussians and the EM algorithm

In this set of notes, we discuss the EM (Expectation-Maximization) algorithm for density estimation.

Suppose that we are given a training set $\{x^{(1)}, \ldots, x^{(m)}\}$ as usual. Since we are in the unsupervised learning setting, these points do not come with any labels.

We wish to model the data by specifying a joint distribution $p(x^{(i)}, z^{(i)}) = p(x^{(i)} \mid z^{(i)})\, p(z^{(i)})$. Here, $z^{(i)} \sim \text{Multinomial}(\phi)$ (where $\phi_j \ge 0$, $\sum_{j=1}^{k} \phi_j = 1$, and the parameter $\phi_j$ gives $p(z^{(i)} = j)$), and $x^{(i)} \mid z^{(i)} = j \sim \mathcal{N}(\mu_j, \Sigma_j)$. We let $k$ denote the number of values that the $z^{(i)}$'s can take on. Thus, our model posits that each $x^{(i)}$ was generated by randomly choosing $z^{(i)}$ from $\{1, \ldots, k\}$, and then $x^{(i)}$ was drawn from one of $k$ Gaussians depending on $z^{(i)}$. This is called the mixture of Gaussians model. Also, note that the $z^{(i)}$'s are latent random variables, meaning that they're hidden/unobserved. This is what will make our estimation problem difficult.
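To make the generative story concrete, here is a minimal NumPy sketch of sampling from a mixture of Gaussians; the particular values of $\phi$, $\mu_j$, and $\Sigma_j$ are invented for illustration and are not from these notes.

```python
# Sampling sketch for the mixture-of-Gaussians generative process:
# draw z^(i) ~ Multinomial(phi), then x^(i) ~ N(mu_{z^(i)}, Sigma_{z^(i)}).
import numpy as np

rng = np.random.default_rng(0)

phi = np.array([0.5, 0.3, 0.2])            # mixing proportions (sum to 1)
mus = np.array([[0.0, 0.0],
                [4.0, 4.0],
                [-4.0, 3.0]])              # one mean per Gaussian (k=3, d=2)
sigmas = np.array([np.eye(2),
                   0.5 * np.eye(2),
                   np.diag([2.0, 0.5])])   # one covariance per Gaussian

m = 500                                    # number of training examples
z = rng.choice(len(phi), size=m, p=phi)    # latent component labels
x = np.stack([rng.multivariate_normal(mus[j], sigmas[j]) for j in z])
# In the density-estimation problem we observe only x; z stays hidden.
```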

The parameters of our model are thus $\phi$, $\mu$ and $\Sigma$. To estimate them, we can write down the likelihood of our data:

$$\ell(\phi, \mu, \Sigma) = \sum_{i=1}^{m} \log p(x^{(i)}; \phi, \mu, \Sigma) = \sum_{i=1}^{m} \log \sum_{z^{(i)}=1}^{k} p(x^{(i)} \mid z^{(i)}; \mu, \Sigma)\, p(z^{(i)}; \phi).$$

However, if we set to zero the derivatives of this formula with respect to the parameters and try to solve, we'll find that it is not possible to find the maximum likelihood estimates of the parameters in closed form. (Try this yourself at home.)
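The difficulty is visible in the formula: the log acts on a sum over $z^{(i)}$, so the parameters do not decouple. A minimal sketch of evaluating this log-likelihood numerically, assuming `x`, `phi`, `mus`, and `sigmas` shaped as in the sampling sketch above:

```python
# Numerically evaluate l(phi, mu, Sigma) for a Gaussian mixture.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def log_likelihood(x, phi, mus, sigmas):
    # log_pxz[i, j] = log p(x^(i) | z^(i)=j; mu, Sigma) + log p(z^(i)=j; phi)
    log_pxz = np.stack([
        multivariate_normal.logpdf(x, mean=mus[j], cov=sigmas[j]) + np.log(phi[j])
        for j in range(len(phi))
    ], axis=1)
    # l = sum_i log sum_j p(x^(i), z^(i)=j); logsumexp keeps this stable.
    return logsumexp(log_pxz, axis=1).sum()
```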

The random variables $z^{(i)}$ indicate which of the $k$ Gaussians each $x^{(i)}$ came from. Note that if we knew what the $z^{(i)}$'s were, the maximum likelihood problem would have been easy. Specifically, we could then write down the likelihood as

$$\ell(\phi, \mu, \Sigma) = \sum_{i=1}^{m} \log p(x^{(i)} \mid z^{(i)}; \mu, \Sigma) + \log p(z^{(i)}; \phi).$$

Maximizing this with respect to $\phi$, $\mu$ and $\Sigma$ gives the parameters:

$$\phi_j = \frac{1}{m} \sum_{i=1}^{m} 1\{z^{(i)} = j\}, \qquad \mu_j = \frac{\sum_{i=1}^{m} 1\{z^{(i)} = j\}\, x^{(i)}}{\sum_{i=1}^{m} 1\{z^{(i)} = j\}}, \qquad \Sigma_j = \frac{\sum_{i=1}^{m} 1\{z^{(i)} = j\}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^{m} 1\{z^{(i)} = j\}}.$$

Indeed, we see that if the $z^{(i)}$'s were known, then maximum likelihood estimation becomes nearly identical to what we had when estimating the parameters of the Gaussian discriminant analysis model, except that here the $z^{(i)}$'s play the role of the class labels. There are other minor differences in the formulas here from what we'd obtained in PS1 with Gaussian discriminant analysis, first because we've generalized the $z^{(i)}$'s to be multinomial rather than Bernoulli, and second because here we are using a different $\Sigma_j$ for each Gaussian.
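For concreteness, here is a short sketch of these closed-form estimates when the labels are observed; it assumes `x` is an $(m, d)$ data array, `z` an $(m,)$ array of labels in $\{0, \ldots, k-1\}$, and that every component has at least one point assigned to it.

```python
# Closed-form MLE for (phi, mu, Sigma) given observed labels z:
# the estimates reduce to per-component counts, means, and covariances.
import numpy as np

def mle_given_labels(x, z, k):
    phi = np.array([(z == j).mean() for j in range(k)])
    mus = np.array([x[z == j].mean(axis=0) for j in range(k)])
    sigmas = []
    for j in range(k):
        diff = x[z == j] - mus[j]                  # (n_j, d) residuals
        sigmas.append(diff.T @ diff / len(diff))   # MLE covariance (divides by n_j)
    return phi, mus, np.array(sigmas)
```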

However, in our density estimation problem, the $z^{(i)}$'s are not known. What can we do?

The EM algorithm is an iterative algorithm that has two main steps. Applied to our problem, in the E-step, it tries to “guess” the values of the $z^{(i)}$'s. In the M-step, it updates the parameters of our model based on our guesses. Since in the M-step we are pretending that the guesses in the first part were correct, the maximization becomes easy. Here's the algorithm:

  • Repeat until convergence: {
    • (E-step) For each $i, j$, set
      $$w_j^{(i)} := p(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma)$$
    • (M-step) Update the parameters:
      $$\phi_j := \frac{1}{m} \sum_{i=1}^{m} w_j^{(i)}, \qquad \mu_j := \frac{\sum_{i=1}^{m} w_j^{(i)}\, x^{(i)}}{\sum_{i=1}^{m} w_j^{(i)}}, \qquad \Sigma_j := \frac{\sum_{i=1}^{m} w_j^{(i)}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^{m} w_j^{(i)}}$$
  • }

In the E-step, we calculate the posterior probability of the $z^{(i)}$'s, given the $x^{(i)}$'s and using the current setting of our parameters. I.e., using Bayes rule, we obtain:

$$p(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma) = \frac{p(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma)\, p(z^{(i)} = j; \phi)}{\sum_{l=1}^{k} p(x^{(i)} \mid z^{(i)} = l; \mu, \Sigma)\, p(z^{(i)} = l; \phi)}$$

Here, $p(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma)$ is given by evaluating the density of a Gaussian with mean $\mu_j$ and covariance $\Sigma_j$ at $x^{(i)}$; $p(z^{(i)} = j; \phi)$ is given by $\phi_j$, and so on. The values $w_j^{(i)}$ calculated in the E-step represent our “soft” guesses for the values of $z^{(i)}$. (The term “soft” refers to our guesses being probabilities, taking values in $[0, 1]$; in contrast, a “hard” guess is one that represents a single best guess, such as taking values in $\{0, 1\}$ or $\{1, \ldots, k\}$.)
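A sketch of the E-step as code follows. Working in log space is an implementation choice, not something the notes prescribe; it avoids numerical underflow when a point is far from every component.

```python
# E-step: apply Bayes rule with the current parameters to get the
# posterior responsibilities w[i, j] = p(z^(i)=j | x^(i); phi, mu, Sigma).
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def e_step(x, phi, mus, sigmas):
    # log_num[i, j] = log [ p(x^(i) | z^(i)=j; mu, Sigma) p(z^(i)=j; phi) ]
    log_num = np.stack([
        multivariate_normal.logpdf(x, mean=mus[j], cov=sigmas[j]) + np.log(phi[j])
        for j in range(len(phi))
    ], axis=1)
    # Normalizing over j implements the denominator of Bayes rule.
    w = np.exp(log_num - logsumexp(log_num, axis=1, keepdims=True))
    return w   # shape (m, k); each row sums to 1
```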

Also, you should contrast the updates in the M-step with the formulas we had when the $z^{(i)}$'s were known exactly. They are identical, except that instead of the indicator functions “$1\{z^{(i)} = j\}$” indicating from which Gaussian each datapoint had come, we now instead have the $w_j^{(i)}$'s.
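The corresponding M-step sketch makes that substitution explicit: the soft weights $w_j^{(i)}$ from the E-step simply replace the indicators in the known-$z^{(i)}$ formulas.

```python
# M-step: re-estimate (phi, mu, Sigma) using the soft weights w
# in place of the indicators 1{z^(i)=j}.
import numpy as np

def m_step(x, w):
    m, k = w.shape
    nj = w.sum(axis=0)                       # effective count per component
    phi = nj / m
    mus = (w.T @ x) / nj[:, None]            # weighted means, one row per component
    sigmas = []
    for j in range(k):
        diff = x - mus[j]
        sigmas.append((w[:, j, None] * diff).T @ diff / nj[j])  # weighted covariance
    return phi, mus, np.array(sigmas)
```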

The EM algorithm is also reminiscent of the K-means clustering algorithm, except that instead of the “hard” cluster assignments $c^{(i)}$, we instead have the “soft” assignments $w_j^{(i)}$. Similar to K-means, it is also susceptible to local optima, so reinitializing at several different initial parameters may be a good idea.
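Putting the pieces together, a minimal EM loop with random restarts might look as follows. The initialization scheme here (uniform mixing weights, random data points as means, shared data covariance) is one common choice rather than anything prescribed by these notes, and `e_step`, `m_step`, and `log_likelihood` refer to the sketches above.

```python
# Full EM loop with random restarts to guard against local optima.
import numpy as np

def fit_gmm(x, k, n_restarts=5, max_iters=200, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    best = (-np.inf, None)
    for _ in range(n_restarts):
        # Naive initialization: uniform weights, random data points as
        # means, shared empirical covariance for every component.
        phi = np.full(k, 1.0 / k)
        mus = x[rng.choice(len(x), size=k, replace=False)]
        sigmas = np.array([np.cov(x, rowvar=False)] * k)
        ll_old = -np.inf
        for _ in range(max_iters):
            w = e_step(x, phi, mus, sigmas)          # E-step: soft guesses
            phi, mus, sigmas = m_step(x, w)          # M-step: re-estimate
            ll = log_likelihood(x, phi, mus, sigmas)
            if ll - ll_old < tol:                    # converged (EM never decreases ll)
                break
            ll_old = ll
        if ll > best[0]:                             # keep the best restart
            best = (ll, (phi, mus, sigmas))
    return best[1]
```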

It's clear that the EM algorithm has a very natural interpretation of repeatedly trying to guess the unknown $z^{(i)}$'s; but how did it come about, and can we make any guarantees about it, such as regarding its convergence? In the next set of notes, we will describe a more general view of EM, one that will allow us to easily apply it to other estimation problems in which there are also latent variables, and which will allow us to give a convergence guarantee.

Source: OpenStax, Machine learning. OpenStax CNX, Oct 14, 2013. Download for free at http://cnx.org/content/col11500/1.4