2008
DOI: 10.1109/tpami.2007.70775
TRUST-TECH-Based Expectation Maximization for Learning Finite Mixture Models

Abstract: The Expectation Maximization (EM) algorithm is widely used for learning finite mixture models despite its greedy nature. Most popular model-based clustering techniques may yield poor clusters if the parameters are not initialized properly. To reduce this sensitivity to the initial points, a novel algorithm for learning mixture models from multivariate data is introduced in this paper. The proposed algorithm takes advantage of TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) …

Cited by 42 publications (15 citation statements). References 30 publications.
“…It is well-known that care is required when initializing any EM algorithm. If the initialization is not carefully performed, then the EM algorithm may lead to unsatisfactory results (see, for example, Biernacki, Celeux, & Govaert, 2003; Reddy, Chiang, & Rajaratnam, 2008; Yang, Lai, & Lin, 2012 for discussions). Thus, fitting regression mixture models with the standard EM algorithm may yield poor estimations if the model parameters are not initialized properly.…”
Section: Regularized Regression Mixtures For Functional Data
confidence: 99%
“…Several approaches have been proposed in the literature in order to overcome the initialization problem, and to make the EM algorithm for Gaussian mixture models robust to initialization (e.g., Biernacki et al., 2003; Reddy et al., 2008; Yang et al., 2012). Further details about choosing starting values for the EM algorithm for Gaussian mixtures can be found in Biernacki et al. (2003).…”
Section: Regularized Regression Mixtures For Functional Data
confidence: 99%
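The initialization sensitivity these citations describe is easy to demonstrate: running EM from several random starting points and keeping the highest-likelihood solution is the simplest of the remedies mentioned above (distinct from the TRUST-TECH approach of the cited paper, which instead searches adjacent local maxima systematically). The sketch below is illustrative only, assuming a minimal hand-rolled EM for a two-component 1-D Gaussian mixture; the data, function names, and restart count are not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data from two well-separated Gaussians.
data = np.concatenate([rng.normal(-3.0, 1.0, 200),
                       rng.normal(3.0, 1.0, 200)])

def em_gmm(x, mu_init, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture, started at mu_init."""
    mu = np.array(mu_init, dtype=float)
    sigma = np.array([1.0, 1.0])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        dens = (pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2.0 * np.pi)))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    loglik = np.log(dens.sum(axis=1)).sum()
    return mu, loglik

# Multiple random restarts; keep the run with the highest log-likelihood.
best_mu, best_ll = max((em_gmm(data, rng.uniform(-5, 5, 2)) for _ in range(10)),
                       key=lambda result: result[1])
print(np.sort(best_mu))  # component means close to -3 and 3
```

A single poorly chosen start (e.g. both means initialized at the same point) converges to a degenerate one-cluster solution with a visibly lower log-likelihood, which is exactly the failure mode the quoted passages warn about.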
“…A large number of local models are first identified in [18], and it selects the most separated ones. In [19], the TRUST-TECH technique is used to estimate adjacent local maximum values of the log-likelihood. Then, Nasios and Bors [20] adopted a hierarchical maximum initialization to initialize the hyperparameters.…”
Section: Introduction
confidence: 99%
“…Applying traditional clustering techniques, such as k-means and hierarchical clustering, will not capture such co-clusters [10], [16], [22], [4], [32]. However, co-clustering (or biclustering) 1 has been proposed to simultaneously cluster both dimensions of a data matrix by utilizing the relationship between the two entities [10], [16], [26], [34], [27].…”
Section: Introduction
confidence: 99%