2009
DOI: 10.1214/07-aos582
Lasso-type recovery of sparse representations for high-dimensional data

Abstract: The Lasso is an attractive technique for regularization and variable selection for high-dimensional data, where the number of predictor variables $p_n$ is potentially much larger than the number of samples $n$. However, it was recently discovered that the sparsity pattern of the Lasso estimator can only be asymptotically identical to the true sparsity pattern if the design matrix satisfies the so-called irrepresentable condition. The latter condition can easily be violated in the presence of highly correlated …
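
As a concrete illustration of the abstract's setting (a minimal sketch, not code from the paper), the snippet below simulates a $p \gg n$ design, computes the empirical irrepresentable statistic $\|\hat{C}_{S^c S}\hat{C}_{SS}^{-1}\,\mathrm{sign}(\beta_S)\|_\infty$, and fits the Lasso with scikit-learn to inspect the recovered sparsity pattern. The dimensions, signal strength, and penalty level `alpha=0.1` are illustrative choices, not values from the paper.

```python
# Minimal sketch (not from the paper): the irrepresentable condition
# and the Lasso sparsity pattern on simulated high-dimensional data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 500, 5                # samples, predictors, sparsity

X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0                       # true support = first s coordinates
y = X @ beta + 0.5 * rng.standard_normal(n)

# Empirical irrepresentable statistic:
# || C_{S^c S} C_{SS}^{-1} sign(beta_S) ||_inf, with C = X'X / n
C = X.T @ X / n
S, Sc = np.arange(s), np.arange(s, p)
irr = np.max(np.abs(C[np.ix_(Sc, S)]
                    @ np.linalg.solve(C[np.ix_(S, S)], np.sign(beta[S]))))
print(f"irrepresentable statistic: {irr:.3f} (exact recovery plausible if < 1)")

fit = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(fit.coef_)  # compare against {0, ..., s-1}
print("estimated support:", support)
```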

Cited by 654 publications (608 citation statements). References 38 publications.
“…[2,6,10,13,20,19] demonstrated the fundamental result that $\ell_1$-penalized least squares estimators achieve the rate $\sqrt{s/n}\sqrt{\log p}$, which is very close to the oracle rate $\sqrt{s/n}$ achievable when the true model is known. [17] demonstrated a similar fundamental result on the excess forecasting error loss under both quadratic and non-quadratic loss functions.…”
Section: Introduction

“…Several papers have begun to investigate estimation of HDSMs, primarily focusing on penalized mean regression, with the $\ell_1$-norm acting as a penalty function [2,6,10,13,17,20,19]. [2,6,10,13,20,19] demonstrated the fundamental result that $\ell_1$-penalized least squares estimators achieve the rate $\sqrt{s/n}\sqrt{\log p}$, which is very close to the oracle rate $\sqrt{s/n}$ achievable when the true model is known.…”
Section: Introduction
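
The rate quoted in these excerpts can be checked numerically. The sketch below (an illustration under an assumed Gaussian design, not code from any of the cited papers) fits the Lasso at the standard theory-driven penalty level $\lambda \asymp \sigma\sqrt{\log p / n}$ and compares the $\ell_2$ estimation error against $\sqrt{s \log p / n}$ as $n$ grows.

```python
# Minimal sketch (illustrative only): the l2 error of the Lasso tracks
# sqrt(s * log(p) / n) as n grows, for a fixed sparse signal.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
p, s, sigma = 200, 5, 1.0
beta = np.zeros(p)
beta[:s] = 1.0                       # fixed sparse signal

for n in (100, 400, 1600):
    X = rng.standard_normal((n, p))
    y = X @ beta + sigma * rng.standard_normal(n)
    lam = sigma * np.sqrt(2 * np.log(p) / n)   # theory-driven penalty level
    err = np.linalg.norm(Lasso(alpha=lam).fit(X, y).coef_ - beta)
    rate = np.sqrt(s * np.log(p) / n)
    print(f"n={n:5d}  l2 error = {err:.3f}   sqrt(s log p / n) = {rate:.3f}")
```
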
“…A recent stream of works on the asymptotic consistency of these procedures can be found in Meinshausen & Yu (2007), Candes & Tao (2007), Banerjee et al. (2007), Yuan & Lin (2007) or Rothman, Bickel, Levina & Zhu (2007), among others.…”
Section: Introduction