2000
DOI: 10.1002/0471721182
Finite Mixture Models

Cited by 7,326 publications (6,464 citation statements)
References: 0 publications
“…In the one-component case there is no entropy by definition, and therefore the ICL coincides with the BIC. While there is no theoretical justification for this approach, simulations show superior performance compared to other heuristic criteria, such as the NEC (Biernacki, Celeux, and Govaert, 2000), as well as compared to the AIC and BIC (McLachlan and Peel, 2000).…”
Section: Descriptive Statistics (citation type: mentioning)
confidence: 97%
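The claim in this excerpt can be made explicit. In one common convention (with BIC written in its to-be-minimized form; some authors omit the factor of 2), the ICL is the BIC plus an entropy penalty on the estimated posterior class memberships:

```latex
\mathrm{ICL}(K) \;=\; \mathrm{BIC}(K) \;+\; 2\,\mathrm{ENT}(K),
\qquad
\mathrm{ENT}(K) \;=\; -\sum_{i=1}^{n}\sum_{k=1}^{K} \hat{\tau}_{ik}\,\log\hat{\tau}_{ik} \;\ge\; 0 .
```

When $K = 1$, every posterior probability is $\hat{\tau}_{i1} = 1$, so $\mathrm{ENT}(1) = 0$ and the two criteria coincide, exactly as the excerpt states.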
“…may encounter several problems, even if it is in principle feasible (for a general treatise see for example McLachlan and Peel (2000)). First, the highly nonlinear form of the log likelihood causes the optimization algorithm to be rather slow or even incapable of finding the maximum.…”
Section: A. Estimation of the Finite Mixture Regression Model (citation type: mentioning)
confidence: 99%
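The "highly nonlinear form of the log likelihood" the excerpt refers to is the log of a sum over components, which has no closed-form maximizer. A minimal numpy sketch (illustrative two-component univariate mixture; all parameter values are hypothetical) makes the structure visible:

```python
import numpy as np

# Hypothetical two-component Gaussian mixture (illustrative values only)
pi = np.array([0.4, 0.6])      # mixing proportions
mu = np.array([-1.0, 2.0])     # component means
sigma = np.array([0.8, 1.5])   # component standard deviations

def normal_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_loglik(x, pi, mu, sigma):
    """sum_i log( sum_k pi_k * N(x_i; mu_k, sigma_k) ).

    The log of a sum is what makes direct gradient-based maximization
    slow or unstable; EM sidesteps it via component responsibilities.
    """
    dens = pi * normal_pdf(x[:, None], mu, sigma)  # shape (n, K)
    return float(np.log(dens.sum(axis=1)).sum())

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1.0, 0.8, 40), rng.normal(2.0, 1.5, 60)])
print(mixture_loglik(x, pi, mu, sigma))
```

This only evaluates the objective; it is a sketch of why the surface is awkward to optimize directly, not an estimator.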
“…The Gaussian mixture model has been extensively studied in the last decades and used in many situations (see [3] and [31] for a review). Therefore, if the Gaussian model is chosen, f k (x; θ k ) will denote the density of a multivariate Gaussian density parametrized by θ k = {µ k , Σ k } where µ k and Σ k are respectively the mean and covariance matrix of kth component of the mixture.…”
Section: Generative Supervised Classification (citation type: mentioning)
confidence: 99%
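The parametrization the excerpt describes, $f_k(x;\theta_k)$ with $\theta_k = \{\mu_k, \Sigma_k\}$, can be sketched directly in numpy. All parameter values below are hypothetical illustrations, not from any cited paper:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Density of a multivariate Gaussian N(mu, Sigma) at point x."""
    d = mu.shape[0]
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)  # Mahalanobis quadratic form
    return np.exp(-0.5 * quad) / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(Sigma))

def mixture_density(x, pis, mus, Sigmas):
    """f(x) = sum_k pi_k * N(x; mu_k, Sigma_k)."""
    return sum(p * mvn_pdf(x, m, S) for p, m, S in zip(pis, mus, Sigmas))

# Hypothetical two-component bivariate mixture (illustrative parameters)
pis = [0.5, 0.5]
mus = [np.zeros(2), np.array([3.0, 3.0])]
Sigmas = [np.eye(2), np.array([[1.0, 0.3], [0.3, 1.0]])]
print(mixture_density(np.zeros(2), pis, mus, Sigmas))
```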
“…In the case of the discovery phase of the inductive approach, the maximization of Q(X * ; Θ) according to the parameters µ k and Σ k can be done classically except that parameters µ k and Σ k have only to be estimated for k = C + 1, ..., K. We therefore refer to [31] for ML inference for µ k and Σ k in finite mixture models. The estimation of the mixture proportions π k can unfortunately not be done classically and must be done sequentially.…”
Section: A2 Inductive Approach (citation type: mentioning)
confidence: 99%
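For context, the "classical" ML inference for $\mu_k$ and $\Sigma_k$ that the excerpt defers to the book consists of the standard EM updates for a Gaussian mixture (the excerpt's own sequential scheme for the proportions is specific to its inductive setting and is not reproduced here):

```latex
\hat{\tau}_{ik} = \frac{\pi_k\,\phi(x_i;\mu_k,\Sigma_k)}{\sum_{j=1}^{K}\pi_j\,\phi(x_j \mid \text{--}) \big|_{j}} ,
\qquad
\mu_k = \frac{\sum_{i}\hat{\tau}_{ik}\,x_i}{\sum_{i}\hat{\tau}_{ik}},
\qquad
\Sigma_k = \frac{\sum_{i}\hat{\tau}_{ik}\,(x_i-\mu_k)(x_i-\mu_k)^{\top}}{\sum_{i}\hat{\tau}_{ik}},
\qquad
\pi_k = \frac{1}{n}\sum_{i=1}^{n}\hat{\tau}_{ik},
```

where the E-step responsibility is $\hat{\tau}_{ik} = \pi_k\,\phi(x_i;\mu_k,\Sigma_k) \big/ \sum_{j=1}^{K}\pi_j\,\phi(x_i;\mu_j,\Sigma_j)$, and $\phi$ denotes the Gaussian density. The last update for $\pi_k$ is the classical one that, per the excerpt, must be replaced by a sequential estimate in the inductive approach.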
“…Identifiability for mixture distributions is defined slightly differently, in that distinct values of the parameters determine distinct members of the mixture family, allowing permutations of the component labels, i.e. the indicator variables; see, for example, McLachlan and Peel [15].…”
Section: General Formulae for Obtaining the MLEs (citation type: mentioning)
confidence: 99%
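The "identifiability up to label permutation" point in this excerpt is easy to check numerically: swapping the component labels of a mixture leaves its density unchanged, so identifiability can only hold modulo such permutations. A small numpy sketch with hypothetical parameters:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_pdf(x, pi, mu, sigma):
    """Two-component mixture density sum_k pi_k * N(x; mu_k, sigma_k)."""
    return sum(p * normal_pdf(x, m, s) for p, m, s in zip(pi, mu, sigma))

# Two labelings of the SAME mixture: components listed in swapped order
theta = ([0.3, 0.7], [-1.0, 2.0], [0.5, 1.2])
swapped = ([0.7, 0.3], [2.0, -1.0], [1.2, 0.5])

xs = np.linspace(-4.0, 6.0, 11)
same = bool(np.allclose([mixture_pdf(x, *theta) for x in xs],
                        [mixture_pdf(x, *swapped) for x in xs]))
print(same)  # the two labelings define the same density
```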