2011
DOI: 10.1214/11-aos896
Oracle inequalities and optimal inference under group sparsity

Abstract: We consider the problem of estimating a sparse linear regression vector β* under a Gaussian noise model, for the purpose of both prediction and model selection. We assume that prior knowledge is available on the sparsity pattern, namely the set of variables is partitioned into prescribed groups, only a few of which are relevant in the estimation process. This group sparsity assumption naturally suggests the Group Lasso method as a means to estimate β*. We establish oracle inequalities for the prediction …
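To make the estimator described in the abstract concrete, the following is a minimal sketch (not the authors' code) of Group Lasso estimation via proximal gradient descent with block soft-thresholding. The function name group_lasso_prox_grad, the per-group weights sqrt(|g|), the step-size choice, and the toy data are all assumptions made for this illustration.

import numpy as np

def group_lasso_prox_grad(X, y, groups, lam, n_iter=500):
    # Minimal sketch of the Group Lasso estimator:
    #   minimize (1/(2n)) * ||y - X @ beta||^2 + lam * sum_g sqrt(|g|) * ||beta_g||_2
    # `groups` is a list of index arrays forming the prescribed partition of the variables.
    n, d = X.shape
    beta = np.zeros(d)
    step = n / (np.linalg.norm(X, 2) ** 2)        # 1 / Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n           # gradient of the squared-error loss
        z = beta - step * grad                    # gradient step
        for g in groups:                          # prox step: block soft-thresholding per group
            thresh = step * lam * np.sqrt(len(g))
            norm_g = np.linalg.norm(z[g])
            z[g] = 0.0 if norm_g <= thresh else (1.0 - thresh / norm_g) * z[g]
        beta = z
    return beta

# Toy usage: 10 prescribed groups of 5 variables, only 2 groups relevant.
rng = np.random.default_rng(0)
groups = [np.arange(5 * j, 5 * (j + 1)) for j in range(10)]
beta_star = np.zeros(50)
beta_star[groups[0]] = 1.0
beta_star[groups[3]] = -0.5
X = rng.standard_normal((100, 50))
y = X @ beta_star + 0.1 * rng.standard_normal(100)
beta_hat = group_lasso_prox_grad(X, y, groups, lam=0.05)

Groups whose block soft-threshold falls below the penalty are set exactly to zero, which is what makes the method select whole groups rather than individual variables.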

Cited by 298 publications (400 citation statements)
References 43 publications
“…Our techniques build on prior studies, in particular Bickel, Ritov, and Tsybakov (2009), Lounici, Pontil, van de Geer, and Tsybakov (2011), Obozinski, Wainwright, and Jordan (2011), Belloni and Chernozhukov (2011), Belloni, Chen, Chernozhukov, and Hansen (2012), Belloni and Chernozhukov (2013), and Belloni, Chernozhukov, Fernandez-Val, and Hansen (2014).…”
Section: Introduction (mentioning)
confidence: 99%
“…In these cases, the selection of groups of variables is of interest, rather than of individual variables. In order to address this type of problem, Yuan and Lin (2006) developed the group lasso method, and a number of authors have subsequently extended it and studied its theoretical properties (Bach 2008; Huang and Zhang 2010; Wei and Huang 2010; Lounici et al. 2011; Sharma et al. 2013; Simon et al. 2013). Given the merits of the regularized methods just described, regularized methods for binary response variables have also been developed.…”
Section: Introduction (mentioning)
confidence: 99%
“…Then we build on existing results in multi-task regression [1, 20] to improve the performance w.r.t. single-task learning.…”
(mentioning)
confidence: 99%
“…Since from Lemma 1, s^k_t ≤ |J_t| = s_t for any iter… In order to exploit the similarity across tasks stated in Asm. 4, we resort to the Group LASSO (GL) algorithm [12, 20], which defines a joint optimization problem over all the tasks. GL is based on the observation that, given the weight matrix W ∈ R^(d×T), the norm ‖W‖_{2,1} measures the level of shared sparsity across tasks.…”
(mentioning)
confidence: 99%
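To make the ‖W‖_{2,1} observation in the last excerpt concrete, here is a small sketch (my illustration, not code from the cited work) of the mixed (2,1) norm and a joint multi-task objective penalized by it. The names l21_norm and multitask_gl_objective, and the per-task designs Xs/ys, are hypothetical.

import numpy as np

def l21_norm(W):
    # Mixed (2,1) norm: sum over rows (features) of the l2 norm taken across tasks.
    # It is small only when entire rows of W are (near) zero, i.e. the same
    # features are switched off for every task -- shared sparsity.
    return np.linalg.norm(W, axis=1).sum()

def multitask_gl_objective(Xs, ys, W, lam):
    # Hypothetical joint objective over T tasks with weight matrix W of shape (d, T):
    # average squared error per task plus the group penalty lam * ||W||_{2,1}.
    loss = sum(0.5 * np.mean((y - X @ W[:, t]) ** 2)
               for t, (X, y) in enumerate(zip(Xs, ys)))
    return loss + lam * l21_norm(W)

Penalizing ‖W‖_{2,1} couples the tasks: a feature is either kept, with possibly different weights per task, or discarded jointly for all tasks, which is exactly the shared-sparsity structure the excerpt describes.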