2006
DOI: 10.21236/ada472998

Lasso-type recovery of sparse representations for high-dimensional data

Abstract: The Lasso (Tibshirani, 1996) is an attractive technique for regularization and variable selection for high-dimensional data, where the number of predictor variables p is potentially much larger than the number of samples n. However, it was recently discovered (Zhao and Yu, 2006; Zou, 2005; Meinshausen and Bühlmann, 2006) that the sparsity pattern of the Lasso estimator can only be asymptotically identical to the true sparsity pattern if the design matrix satisfies the so-called irrepresentable condition. The lat…
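The setting the abstract describes (p much larger than n, with a sparse true coefficient vector) can be illustrated with a small sketch. This is an assumption-laden toy example using scikit-learn's `Lasso`, not code from the paper: the design, sparsity level, noise scale, and regularization strength `alpha` are all arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 200            # far more predictors than samples (p >> n)
beta = np.zeros(p)
beta[:5] = 2.0            # true sparsity pattern: only the first 5 coefficients are nonzero

X = rng.standard_normal((n, p))          # i.i.d. Gaussian design
y = X @ beta + 0.1 * rng.standard_normal(n)

# The Lasso's L1 penalty drives most estimated coefficients exactly to zero,
# producing an estimated sparsity pattern that can be compared to the true one.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(selected)
```

With a benign i.i.d. Gaussian design like this one, the estimated support typically contains the five true variables, possibly with a few spurious extras; whether the pattern matches exactly is precisely what the irrepresentable condition governs.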

Cited by 151 publications (259 citation statements)
References 26 publications
“…In fact, similar results have been proved with respect to the least squares approaches (Meinshausen and Yu 2009; Raskutti et al 2012). Thus, it is sufficient to conduct our analysis over the restricted subset F_S.…”
Section: Assumption A1 (supporting)
confidence: 66%
“…Another interesting finding is that, overall, the lasso performs poorly in selecting the nonzero components (variable selection). This is consistent with recent results (Meinshausen and Yu (2009)) which show that variable-selection consistency of the lasso requires the irrepresentable condition, which is actually a very strong condition that often does not hold in practice. For instance, the irrepresentable condition fails for all the design matrices of this simulation study, except for the design matrix behind Figure 2.…”
Section: Given a Working Solution ({σ (supporting)
confidence: 92%
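The irrepresentable condition this citing paper refers to is checkable numerically: it requires ‖X_Sc^T X_S (X_S^T X_S)^(-1) sign(β_S)‖_∞ to be at most 1, where S is the true support. A minimal NumPy sketch, assuming a given design matrix and sign pattern (the function name and the toy design below are illustrative, not from the paper):

```python
import numpy as np

def irrepresentable_value(X, support, sign_beta):
    """Compute max_j |x_j^T X_S (X_S^T X_S)^{-1} sign(beta_S)| over j outside
    the support S; a value below 1 means the irrepresentable condition holds
    for this support and sign pattern."""
    S = np.asarray(support)
    Sc = np.setdiff1d(np.arange(X.shape[1]), S)
    XS, XSc = X[:, S], X[:, Sc]
    # Solve (X_S^T X_S) w = sign(beta_S), then take the sup-norm of X_Sc^T X_S w.
    w = np.linalg.solve(XS.T @ XS, sign_beta)
    return np.max(np.abs(XSc.T @ XS @ w))

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))       # toy i.i.d. Gaussian design
val = irrepresentable_value(X, support=[0, 1, 2], sign_beta=np.ones(3))
print(val)
```

For a well-conditioned design with many more samples than support variables, this value is typically well below 1; strongly correlated columns between S and its complement push it toward or past 1, which is the failure mode described in the quote above.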
“…In particular we wish to understand the type of design matrix X for which these results continue to hold. This SBL theory and its comparison with the recently developed lasso theory (see for instance Meinshausen and Yu (2009); Bickel et al. (2009)) could potentially give new insight into high-dimensional regression analysis. The generalized singular value decomposition (see e.g.…”
Section: Results (mentioning)
confidence: 97%
“…A weakness of the lasso method is that it is not consistent for edge selection: a high number of extra weak edges are included (Meinshausen and Yu, 2006; Schäfer and Strimmer, 2005). We do not regard this as a serious problem here.…”
Section: Discussion (mentioning)
confidence: 99%