2016
DOI: 10.1016/j.csda.2016.03.002

Maximum likelihood estimation of the mixture of log-concave densities

Abstract: Finite mixture models are useful tools and can be estimated via the EM algorithm. A main drawback is the strong parametric assumption about the component densities. In this paper, a much more flexible mixture model is considered, which assumes each component density to be log-concave. Under fairly general conditions, the log-concave maximum likelihood estimator (LCMLE) exists and is consistent. Numerical examples are also provided to demonstrate that the LCMLE improves the clustering results compared with the…
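To make the structure described in the abstract concrete, the sketch below shows the EM skeleton for a two-component mixture. This is an illustration added here, not the authors' implementation: the E-step (responsibilities) is the standard one, while the M-step refits each component with a weighted Gaussian as a stand-in for the paper's weighted log-concave MLE, which requires a dedicated solver. Names such as em_mixture, resp, and the initialization choices are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import norm

def em_mixture(x, n_iter=50):
    """Two-component mixture fit by EM (illustrative sketch only)."""
    pi = np.array([0.5, 0.5])                      # mixing proportions
    mu = np.array([np.percentile(x, 25.0), np.percentile(x, 75.0)])
    sd = np.array([x.std(), x.std()])

    for _ in range(n_iter):
        # E-step: responsibility of component k for each observation
        dens = np.vstack([norm.pdf(x, mu[k], sd[k]) for k in range(2)])  # shape (2, n)
        weighted = pi[:, None] * dens
        resp = weighted / weighted.sum(axis=0, keepdims=True)

        # M-step: update mixing weights, then refit each component density
        # from the responsibility-weighted sample.  In the LCMLE this weighted
        # refit would be a log-concave MLE; a weighted Gaussian fit is used
        # here only as a simple placeholder.
        pi = resp.mean(axis=1)
        for k in range(2):
            w = resp[k] / resp[k].sum()
            mu[k] = np.sum(w * x)
            sd[k] = np.sqrt(np.sum(w * (x - mu[k]) ** 2))
    return pi, mu, sd, resp

# Toy usage with two well-separated groups.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
print(em_mixture(x)[:3])
```

Replacing the Gaussian refit with a weighted log-concave density estimate is exactly the point at which the LCMLE departs from a parametric EM.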

Cited by 13 publications (10 citation statements); references 34 publications.
“…By extending the idea of Hu et al (), Hu et al () proposed a robust EM‐type algorithm for mixture regression models by assuming the component error densities are log‐concave. A density g(x) is log-concave if its log-density, φ(x) = log g(x), is concave.…”
Section: Robust Mixture Regression Methods
Mentioning; confidence: 99%
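As a concrete instance of the definition quoted above (a worked example added here, not part of the citing paper's text), the normal density is log-concave because its log-density is a concave quadratic:

```latex
g(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
\;\Longrightarrow\;
\phi(x) = \log g(x) = -\frac{(x-\mu)^2}{2\sigma^2} - \log\!\left(\sigma\sqrt{2\pi}\right),
\qquad \phi''(x) = -\frac{1}{\sigma^2} < 0.
```

The Laplace and logistic densities are log-concave as well, whereas heavy-tailed families such as Student's t and the Cauchy are not; this is what makes the log-concave class a broad yet still unimodal family of component densities.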
“…This work largely inspired further work on nonparametric mixture models from the kernel density estimation viewpoint. But nonparametric maximum likelihood estimation is also possible if one assumes log-concavity of the component densities [18].…”
Section: Four Kinds of Mixture Models, 2.1 A Review of Paradigms for M…
Mentioning; confidence: 99%
“…Proof of Theorem 1. In this proof, the symbol p̂_n stands for the solution of the optimization problem (17)-(18), that is, without the positivity constraint, and the symbol p̂_n^+ stands for the solution of the optimization problem (14)-(15), that is, with the positivity constraint. In view of Lemma 1, it is sufficient to show that…”
Section: A.2 Proof of Theorem
Mentioning; confidence: 99%
“…In the multivariate case, the most popular extensions have been to inflate the tails by using multivariate t‐kernels (McLachlan and Peel, 1998; Lee and McLachlan, 2016) and/or to generalize the Gaussian distribution to allow skewness (Azzalini and Dalla Valle, 1996; Arellano‐Valle and Azzalini, 2009). Motivated by the problem of more robust clustering, methods are also available for non‐parametrically estimating the kernels subject to unimodality (Rodríguez and Walker, 2014) and log‐concavity (Hu et al., 2016) restrictions. Also, to improve robustness of clustering, one can use a mixture of mixtures model employing multiple kernels having similar location parameters to characterize the data within a cluster (Malsiner‐Walli et al., 2017).…”
Section: Introduction
Mentioning; confidence: 99%