2021
DOI: 10.1214/21-aos2072
Estimating the number of components in finite mixture models via the Group-Sort-Fuse procedure

Cited by 12 publications (16 citation statements); references 52 publications.
“…Theorem 2(ii) also follows by the same proof technique as Theorem 7.4 of van de Geer (2000), with modifications to account for the presence of the penalty in the definition of G n . An analogue of this result was previously proven by Manole and Khalili (2021), though with different conditions on the tuning parameter ξ n . For completeness, we provide a self-contained proof of Theorem 2(ii), under the conditions on ξ n required for our development.…”
Section: A Proofs (supporting, confidence: 70%)
“…Its application to finite mixture models has previously been discussed by Ho and Nguyen (2016b), who also argue that condition B(k) is satisfied by a broad collection of parametric families F, including the multivariate location-scale Gaussian and Student-t families. A version of Theorem 2(ii) is implicit in the work of Manole and Khalili (2021), though with a stronger condition on the tuning parameter ξ n . We provide a self-contained proof of this result in Appendix A for completeness.…”
Section: Convergence Rates for Maximum Likelihood Density Estimators (mentioning, confidence: 99%)
“…There are two popular directions for handling the unknown number of components. The first consists of approaches that penalize the sample log-likelihood function of the Gaussian mixture model via the number of parameters [25,24] or via the separation of the parameters [22]. While these methods can guarantee consistent estimation of the true number of components when the sample size is sufficiently large, they tend to be computationally expensive, especially when the true number of components is large.…”
Section: Introduction (mentioning, confidence: 99%)
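The parameter-counting penalties mentioned in the quote (e.g. BIC-style criteria) can be sketched as follows. This is a minimal illustration, not the paper's method; the function names and the candidate log-likelihood values are made up for the example, and in practice the log-likelihoods would come from fitting each candidate mixture by EM.

```python
import math

def gaussian_mixture_num_params(k: int, d: int) -> int:
    """Free parameters in a k-component, d-dimensional Gaussian mixture
    with full covariances: (k - 1) mixing weights, k*d mean entries,
    and k*d*(d+1)/2 covariance entries."""
    return (k - 1) + k * d + k * d * (d + 1) // 2

def bic(log_lik: float, k: int, d: int, n: int) -> float:
    """BIC = -2 * loglik + (#params) * log(n); smaller is better."""
    return -2.0 * log_lik + gaussian_mixture_num_params(k, d) * math.log(n)

def select_num_components(log_liks: dict, d: int, n: int) -> int:
    """Pick the candidate k minimizing BIC. `log_liks` maps each
    candidate k to its maximized sample log-likelihood."""
    return min(log_liks, key=lambda k: bic(log_liks[k], k, d, n))

# Illustrative (made-up) maximized log-likelihoods for k = 1, 2, 3
# fitted to n = 100 one-dimensional observations:
log_liks = {1: -250.0, 2: -200.0, 3: -198.0}
print(select_num_components(log_liks, d=1, n=100))  # → 2
```

Note that this requires fitting a separate mixture for every candidate k, which is the computational cost the quoted passage refers to when the true number of components may be large.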
“…In the statistical literature, the clustering problem is commonly formulated as fitting a mixture model [13,18,24]. That is, suppose the observed random vector X is p-dimensional and is absolutely continuous with respect to the Lebesgue measure ζ p on R p ; we assume that f 0 (•), the probability density function (pdf) of X, has the form…”
Section: Introduction (mentioning, confidence: 99%)
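The display the quote leads into is truncated in this excerpt. For context, the generic finite mixture density being referenced takes the standard form (stated here as the textbook definition, not the paper's verbatim equation):

```latex
f_0(x) = \sum_{k=1}^{K_0} \pi_k \, f(x; \theta_k),
\qquad \pi_k \ge 0, \quad \sum_{k=1}^{K_0} \pi_k = 1,
```

where $f(\cdot\,; \theta)$ ranges over a parametric family $\mathcal{F}$ and $K_0$ is the unknown true number of components that procedures such as Group-Sort-Fuse aim to estimate.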