1993
DOI: 10.1093/biomet/80.2.267
Maximum likelihood estimation via the ECM algorithm: A general framework

Abstract: Two major reasons for the popularity of the EM algorithm are that its maximization step involves only complete-data maximum likelihood estimation, which is often computationally simple, and that its convergence is stable, with each iteration increasing the likelihood. When the associated complete-data maximum likelihood estimation itself is complicated, EM is less attractive because the M-step is computationally unattractive. In many cases, however, complete-data maximum likelihood estimation is relatively simple w…
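
As a rough, hedged illustration of the iteration structure the abstract describes, the Python sketch below alternates a single E-step with a sequence of conditional maximization (CM) steps, each updating one parameter block while the others are held fixed. It is not code from the paper; the function names, the dict of scalar parameter blocks, and the convergence rule are assumptions chosen to keep the example minimal.

# Hypothetical sketch of a generic ECM loop (not from the paper): one E-step
# per iteration, followed by CM-steps that each maximize the expected
# complete-data log-likelihood over a single parameter block, holding the
# other blocks fixed. Parameter blocks are scalars here only for brevity.
def ecm(theta, e_step, cm_steps, max_iter=200, tol=1e-8):
    # theta    : dict mapping block name -> current scalar value
    # e_step   : callable(theta) -> expected complete-data statistics
    # cm_steps : list of (block name, callable(theta, stats) -> new value)
    for _ in range(max_iter):
        old = dict(theta)
        stats = e_step(theta)                  # E-step
        for name, update in cm_steps:          # sequence of CM-steps
            theta[name] = update(theta, stats)
        if max(abs(theta[k] - old[k]) for k in theta) < tol:
            break                              # each full sweep cannot decrease the likelihood
    return theta

Because each CM-step only has to maximize over its own block with the remaining blocks fixed, the conditional updates can stay in closed form even when a joint M-step would not.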

Cited by 1,513 publications (839 citation statements). References 18 publications.
“…If the EM algorithm is used to produce the estimates, then the SEM algorithm can be used for variance estimation. For some contingency table models, the ECM algorithm may be used for estimation (Meng and Rubin, 1993) and variance calculation (Van Dyk et al, 1995). A score test for independence in two-way incomplete contingency tables was proposed by Lipsitz and Fitzmaurice (1996).…”
Section: Literature Review and Existing Methods
confidence: 99%
“…To perform this maximization, we first use the EM algorithm and then describe an extension based on the ECM algorithm (Meng and Rubin, 1993), as in Liu and Rubin (1995) for a single t distribution, and as in McLachlan and Peel (1998) and Peel and McLachlan (2000) for mixtures of t-distributions.…”
Section: Maximum Likelihood Estimation of the TMoE Model
confidence: 99%
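
To make the EM-then-ECM recipe mentioned in the excerpt concrete, here is a minimal, self-contained sketch for a single univariate t distribution, written as a scale mixture of normals in the spirit of the Liu and Rubin (1995) scheme cited above. It is an illustrative, assumption-laden sketch rather than code from the citing paper: the starting values, iteration count, and the bounded search for the degrees of freedom are arbitrary choices.

# Hypothetical ECM-style fit of a univariate t distribution t_nu(mu, sigma^2),
# using the scale-mixture representation x_i | u_i ~ N(mu, sigma^2 / u_i) with
# latent weights u_i ~ Gamma(nu/2, nu/2). Illustrative sketch only.
import numpy as np
from scipy.special import digamma, gammaln
from scipy.optimize import minimize_scalar

def fit_t_ecm(x, n_iter=200):
    x = np.asarray(x, dtype=float)
    n = x.size
    mu, sigma2, nu = x.mean(), x.var(), 5.0               # crude starting values
    for _ in range(n_iter):
        # E-step: conditional expectations of u_i and log u_i
        delta = (x - mu) ** 2 / sigma2
        w = (nu + 1.0) / (nu + delta)                      # E[u_i | x_i]
        log_w = digamma((nu + 1.0) / 2) - np.log((nu + delta) / 2)  # E[log u_i | x_i]
        # CM-step 1: update (mu, sigma2) with nu held fixed (closed form)
        mu = np.sum(w * x) / np.sum(w)
        sigma2 = np.sum(w * (x - mu) ** 2) / n
        # CM-step 2: update nu with (mu, sigma2) held fixed, by maximizing the
        # gamma part of the expected complete-data log-likelihood over nu
        def neg_q(nu_):
            return -np.sum((nu_ / 2) * np.log(nu_ / 2) - gammaln(nu_ / 2)
                           + (nu_ / 2 - 1) * log_w - (nu_ / 2) * w)
        nu = minimize_scalar(neg_q, bounds=(0.1, 200.0), method="bounded").x
    return mu, sigma2, nu

# Example usage (simulated data): fit_t_ecm(np.random.default_rng(0).standard_t(df=4, size=1000))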
“…For the general multivariate case using t mixtures, one can refer, for example, to the two key papers McLachlan and Peel (1998) and Peel and McLachlan (2000). The inference in the previously described approaches is performed by maximum likelihood estimation via expectation-maximization (EM) or extensions (Dempster et al., 1977; McLachlan and Krishnan, 2008), in particular the expectation conditional maximization (ECM) algorithm (Meng and Rubin, 1993). For the Bayesian framework, Frühwirth-Schnatter and Pyne (2010) have considered Bayesian inference for both the univariate and the multivariate skew-normal and skew-t mixtures.…”
Section: Introduction
confidence: 99%
“…One solution to this problem could be expectation-conditional maximization (ECM) (Meng and Rubin, 1993; Meng, 1994), which replaces the M-step by a series of computationally simplified conditional maximization (CM) steps. ECM is a subclass of the generalized EM (GEM) algorithms, in which the Q-function is increased rather than maximized (Fessler and Hero, 1994).…”
Section: Estimation Algorithm
confidence: 99%
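
The GEM point in this excerpt, that each step only has to increase the Q-function rather than maximize it for the observed-data likelihood to be non-decreasing, can be checked numerically on a toy problem. The sketch below is a hypothetical illustration (the censored-exponential model, the damping factor, and the sample size are all arbitrary assumptions): it takes deliberately partial M-steps and verifies that the observed-data log-likelihood still never decreases.

# Hypothetical GEM illustration: right-censored exponential data, where each
# iteration takes only a partial step toward the M-step maximizer. Because the
# Q-function is still increased, the observed-data log-likelihood never drops.
import numpy as np

rng = np.random.default_rng(0)
c = 1.0                                     # fixed censoring time
x = rng.exponential(scale=2.0, size=500)    # true rate = 0.5
obs = np.minimum(x, c)                      # recorded values
cens = x > c                                # censoring indicators

def obs_loglik(lam):
    # exact observations contribute the log density, censored ones the log survival
    return np.sum(np.where(cens, -lam * c, np.log(lam) - lam * obs))

lam = 5.0                                   # deliberately poor starting value
for _ in range(50):
    before = obs_loglik(lam)
    ex = np.where(cens, c + 1.0 / lam, obs)     # E-step: E[x_i | x_i > c] = c + 1/lam
    lam_full = len(x) / ex.sum()                # maximizer of Q (the full M-step)
    lam = 0.5 * lam + 0.5 * lam_full            # GEM step: Q increased, not maximized
    assert obs_loglik(lam) >= before - 1e-8     # monotone likelihood ascent still holds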