2005
DOI: 10.1214/009053605000000200
Variable selection using MM algorithms

Abstract: Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function…
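For orientation, the MM (minorize-maximize) principle behind these algorithms can be stated in two lines; the notation below is ours, not quoted from the paper. A surrogate Q that touches the penalized log-likelihood \ell_P from below at the current iterate guarantees monotone ascent:

% Standard MM ascent argument (notation ours):
% Q minorizes \ell_P at \theta^{(n)}:
Q(\theta \mid \theta^{(n)}) \le \ell_P(\theta) \ \text{for all } \theta,
\qquad Q(\theta^{(n)} \mid \theta^{(n)}) = \ell_P(\theta^{(n)}),
% so maximizing the surrogate can never decrease the objective:
\theta^{(n+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(n)})
\ \Longrightarrow\
\ell_P(\theta^{(n+1)}) \ \ge\ Q(\theta^{(n+1)} \mid \theta^{(n)}) \ \ge\ Q(\theta^{(n)} \mid \theta^{(n)}) \ =\ \ell_P(\theta^{(n)}).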

Cited by 368 publications (323 citation statements) · References 24 publications
“…This last condition raises a potential problem: when the elements of ω(n) approach zero (to be expected when a sparse solution emerges), the surrogate S(ω | ω(n)) is no longer defined. The authors of [16] have shown that the addition of a small positive quantity to the denominator of the diagonal elements of B(ω(n)) retains their maximum likelihood interpretation, and that this quantity can be allowed to decay to zero in the limit so that the original problem is solved. They also present a method for the informed selection of its value.…”
Section: Algorithm Development via the Majorize-Minimize Principle
confidence: 99%
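The ε-perturbation described in this passage lends itself to a compact implementation. Below is a minimal sketch, not the authors' code: it applies the perturbed local-quadratic MM update to penalized least squares with the SCAD penalty, the setting the next excerpt mentions. The function names (scad_deriv, mm_scad_ls), the objective scaling, and the defaults (a = 3.7, eps = 1e-6) are our assumptions.

import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative p'_lam(t) of the SCAD penalty for t >= 0 (Fan and Li, 2001)."""
    t = np.abs(t)
    return lam * ((t <= lam)
                  + np.maximum(a * lam - t, 0.0) / ((a - 1.0) * lam) * (t > lam))

def mm_scad_ls(X, y, lam, eps=1e-6, n_iter=200):
    """MM sketch for: minimize 0.5*||y - X b||^2 + sum_j p_lam(|b_j|).

    Each iteration solves a ridge-like system whose diagonal weights
    w_j = p'_lam(|b_j|) / (|b_j| + eps) come from the quadratic surrogate;
    eps is the small positive quantity from the quoted passage, keeping
    the surrogate defined as coefficients shrink toward zero.
    """
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    b = np.linalg.solve(XtX + lam * np.eye(p), Xty)  # ridge warm start
    for _ in range(n_iter):
        w = scad_deriv(b, lam) / (np.abs(b) + eps)
        b = np.linalg.solve(XtX + np.diag(w), Xty)
    return b

Each pass through the loop minimizes the perturbed quadratic majorizer exactly, so the (perturbed) objective is monotonically non-increasing, which is the descent counterpart of the MM ascent property stated above.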
“…In a departure, we apply a majorize-minimize technique to overcome this technical problem, leading to a very simple iterative algorithm that converges to the (penalized) least-squares solution. In [16] a general majorize-minimize framework is presented for variable selection via penalized maximum likelihood, but only a small least-squares problem in conjunction with the SCAD (smoothly clipped absolute deviation) penalty is examined there.…”
Section: Introduction
confidence: 99%
“…A major insight is in the work of Efron et al. (2004), in which it is shown that a modification of their fast Least Angle Regression (LAR) gives the complete path of the Lasso problem with varying penalty parameter. On the other hand, Hunter and Li (2005) proposed to use minorization-maximization (MM) algorithms for optimization involving nonconcave penalties and justified their convergence. Whether the latter algorithms will be computationally effective when there are many local minima remains to be seen.…”
Section: Computational Issues
confidence: 99%
“…Given an initial value β^(1) that is close to the ALASSO minimizer, if β_j^(1) is very close to zero, then set β̂_j = 0. Otherwise, following the LQA algorithm proposed in Fan and Li [24] and recently studied by Hunter and Li [26], we can approximate the penalty function locally by a quadratic function…”
Section: Standard Errors of ALASSO Estimates
confidence: 99%
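For reference, the local quadratic approximation (LQA) invoked here is, in the standard Fan and Li form (β_j^(0) denotes the current iterate; notation ours, not quoted from the citing paper):

% LQA: quadratic matching the penalty at the current iterate \beta_j^{(0)}
p_{\lambda}(|\beta_j|) \approx p_{\lambda}(|\beta_j^{(0)}|)
+ \tfrac{1}{2}\,\frac{p'_{\lambda}(|\beta_j^{(0)}|)}{|\beta_j^{(0)}|}\,
\left(\beta_j^{2} - (\beta_j^{(0)})^{2}\right),
\qquad \beta_j \approx \beta_j^{(0)}.

Hunter and Li's contribution, as quoted in the first excerpt above, is to perturb the denominator to |β_j^{(0)}| + ε so the approximation remains defined as coefficients reach zero.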