1996
DOI: 10.1162/neco.1996.8.1.129

On Convergence Properties of the EM Algorithm for Gaussian Mixtures

Abstract: We build up the mathematical connection between the “Expectation-Maximization” (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mat…
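For orientation, the central relation the abstract describes can be written out explicitly. The following is a reader's sketch in standard GMM notation (h_i^m denotes the posterior responsibility of component m for sample x_i); the block forms of P shown for the mixing proportions and the means are reconstructed from memory of the paper, not quoted from it:

```latex
% EM as a projected-gradient step (sketch; notation assumed, not quoted):
\[
  \Theta^{(t+1)} \;=\; \Theta^{(t)} \;+\; P\!\left(\Theta^{(t)}\right)\,
  \frac{\partial \ell}{\partial \Theta}\bigg|_{\Theta^{(t)}},
\]
% with P block-diagonal; e.g. for the mixing proportions \pi and the mean \mu_m:
\[
  P_{\pi} \;=\; \frac{1}{N}\left(\operatorname{diag}(\pi) - \pi\pi^{\top}\right),
  \qquad
  P_{\mu_m} \;=\; \Big(\sum_{i=1}^{N} h_i^{m}\Big)^{-1} \Sigma_m,
\]
% where h_i^m is the posterior probability of component m given sample x_i.
```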

Cited by 627 publications (365 citation statements) · References 11 publications
“…This is similar to a result from Xu and Jordan (1996) for the EM algorithm when used to estimate the parameters π_m, μ_m and Σ_m from a data sample.…”
Section: Speed of Convergence (supporting)
confidence: 75%
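As a hedged aside on what the "similar result" concerns: near a local maximum, EM behaves as a linear iteration whose rate is governed by the projection matrix P. The statement below is a standard formulation for orientation, not a quotation from either paper:

```latex
% Local linear convergence of EM (standard statement; sketch):
% near a local maximum \Theta^* with log-likelihood Hessian H(\Theta^*),
\[
  \Theta^{(t+1)} - \Theta^{*} \;\approx\;
  \left(I + P(\Theta^{*})\,H(\Theta^{*})\right)\left(\Theta^{(t)} - \Theta^{*}\right),
\]
% so EM converges linearly with rate given by the spectral radius
% \rho\!\left(I + P(\Theta^{*})\,H(\Theta^{*})\right) < 1.
```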
“…MEM is based on the common method of likelihood maximization via Expectation-Maximization, i.e. the EM algorithm [17,21,25]; see a description of the algorithm in Appendix D. The input to the EM algorithm includes the number of distributions k, and it returns a maximum-likelihood estimator of the parameters of the k distributions that best explain S. To find the optimal k we use the rule given by [22] to estimate the number of generating distributions, i.e. the k that maximizes max log like(k) + k log(N), where N is the number of samples and max log like(k) is the maximal value of the likelihood function for k distributions.…”
Section: A Heuristic for CIM in the Euclidean Space (mentioning)
confidence: 99%
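A minimal sketch of the model-selection loop this excerpt describes, assuming scikit-learn's GaussianMixture as the EM implementation. Note one assumption: the excerpt's rule as quoted adds the penalty term, which would always favor larger k; the sketch below uses the subtractive (BIC-style) sign, which is a guess at the intent of [22], not a quotation of it:

```python
# Pick the number of mixture components k by penalized maximum likelihood.
# Uses scikit-learn's EM (GaussianMixture); penalty sign is an assumption.
import numpy as np
from sklearn.mixture import GaussianMixture

def choose_k(X, k_max=10):
    """Fit EM for k = 1..k_max; return the k maximizing the penalized likelihood."""
    N = X.shape[0]
    best_k, best_score = None, -np.inf
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
        max_log_like = gm.score(X) * N        # total log-likelihood at the EM optimum
        score = max_log_like - k * np.log(N)  # penalized criterion (sign assumed)
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# Example: two well-separated clusters should yield k = 2.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (200, 2))])
print(choose_k(X))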
“…The EM algorithm shows robust and repeatable performance in the segmentation of heart, brain and abdominal images. The EM algorithm is only locally convergent [6,14,15], so we have introduced an automatic seeding method that uses local maxima in the intensity histogram. The results are compared against a manual initialization, achieved by first manually selecting a region and then measuring the mean intensity value and variance in that region.…”
Section: Results (mentioning)
confidence: 99%
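One plausible reading of the histogram-based automatic seeding this excerpt describes: seed the EM component means at local maxima of the intensity histogram rather than from manually selected regions. The use of find_peaks, the smoothing kernel, and the prominence threshold are assumptions of this sketch, not details from the cited paper:

```python
# Seed GMM/EM means from local maxima of an intensity histogram (sketch).
import numpy as np
from scipy.signal import find_peaks

def seed_means_from_histogram(image, n_bins=256, prominence=0.01):
    """Return candidate component means: intensities at local histogram maxima."""
    counts, edges = np.histogram(image.ravel(), bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Smooth lightly so noise does not create spurious peaks (kernel width assumed).
    smoothed = np.convolve(counts, np.ones(5) / 5.0, mode="same")
    peaks, _ = find_peaks(smoothed, prominence=prominence)
    return centers[peaks]

# Example: a synthetic "image" with two tissue-like intensity modes.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 5, 5000), rng.normal(140, 8, 5000)])
print(seed_means_from_histogram(img))  # expect peaks near 60 and 140
```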
“…It is important to note that local convergence of the EM algorithm is assured [6,14,15]. The updates for the GMM parameters are the mixture values a_m and the parameters of the Gaussian distributions θ_m = {μ_m, σ_m}.…”
Section: Q(θ, θ̂(t)) = E[log P(x, Y | θ) | x, θ̂(t)] (3) (mentioning)
confidence: 99%
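For completeness, here is the closed-form M-step that maximizes the Q function named in the section header above, written as a sketch in standard GMM notation (h_i^m is the E-step responsibility; these are the textbook updates, not quotations from the cited paper):

```latex
% E-step responsibilities and closed-form M-step for a GMM (standard; sketch):
\[
  h_i^{m} \;=\; \frac{a_m \,\mathcal{N}(x_i \mid \mu_m, \sigma_m^2)}
                     {\sum_{l} a_l \,\mathcal{N}(x_i \mid \mu_l, \sigma_l^2)},
\]
\[
  a_m = \frac{1}{N}\sum_{i=1}^{N} h_i^{m},
  \qquad
  \mu_m = \frac{\sum_i h_i^{m}\, x_i}{\sum_i h_i^{m}},
  \qquad
  \sigma_m^{2} = \frac{\sum_i h_i^{m}\,(x_i-\mu_m)^2}{\sum_i h_i^{m}}.
\]
```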