2014
DOI: 10.1111/rssb.12094
Group Bound: Confidence Intervals for Groups of Variables in Sparse High Dimensional Regression Without Assumptions on the Design

Abstract: It is in general challenging to provide confidence intervals for individual variables in high-dimensional regression without making strict or unverifiable assumptions on the design matrix. We show here that a "group-bound" confidence interval can be derived without making any assumptions on the design matrix. The lower bound for the regression coefficient of individual variables can be derived via linear programming. The idea also generalises naturally to groups of variables, where we can derive a one-sided con…
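The abstract's linear-programming idea can be illustrated with a hedged sketch: lower-bound the ℓ1 norm of a group of coefficients over all coefficient vectors whose residual correlations stay below a threshold λ. The feasible set, the choice of λ, and the function name below are illustrative assumptions, not the paper's exact group-bound construction:

```python
import numpy as np
from scipy.optimize import linprog

def l1_group_lower_bound(X, y, group, lam):
    """Schematic LP lower bound on sum_{j in group} |beta_j| over the
    feasible set { beta : ||X^T (y - X beta)||_inf <= lam }.
    Illustration of the linear-programming idea only, not the paper's
    exact group-bound procedure."""
    n, p = X.shape
    M = X.T @ X
    c = X.T @ y
    # Split beta = u - v with u, v >= 0; decision vector z = [u, v].
    obj = np.zeros(2 * p)
    obj[list(group)] = 1.0                 # u_j for j in the group
    obj[[p + j for j in group]] = 1.0      # v_j for j in the group
    # |c - M(u - v)| <= lam, written as two sets of linear inequalities.
    A_ub = np.block([[M, -M], [-M, M]])
    b_ub = np.concatenate([c + lam, lam - c])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * p), method="highs")
    return res.fun if res.success else None

# Toy example: strong signal on variable 0, so its coefficient
# cannot be zero without violating the residual constraint.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(50)
bound = l1_group_lower_bound(X, y, group=[0], lam=10.0)
```

Minimizing the group's ℓ1 norm over the feasible set yields a valid one-sided bound: any coefficient vector consistent with the data (in the assumed sense) must have at least this much group signal.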

Cited by 43 publications (52 citation statements). References 35 publications.
“…More recently, huge strides have been made in quantifying uncertainty about parameter estimates. For the important special case of the high dimensional linear model, frequentist p-values for individual parameters or groups of parameters can now be obtained through an array of techniques (Wasserman and Roeder; Meinshausen et al.; Bühlmann; Zhang and Zhang; Lockhart et al.; van de Geer et al.; Javanmard and Montanari; Meinshausen; Ning and Liu; Voorman et al.; Zhou); see Dezeure et al. for an overview of some of these methods.…”
Section: Introduction (mentioning)
confidence: 99%
“…Hence, Meinshausen and Bühlmann advised choosing α between 0.2 and 0.8, and showed that this range works well in practice. [46] Moreover, this method is consistent under the sparse Riesz condition [47] (Table 2), a weaker condition than the one that ensures lasso consistency. The stability property is also satisfied, even in high dimensions.…”
Section: The Randomized Lasso (Stability Selection) (mentioning)
confidence: 99%
“…, p}, where α is the weakness. Then the randomized lasso estimator β̂(RandLasso) is defined by [46]…”
Section: The Randomized Lasso (Stability Selection) (mentioning)
confidence: 99%
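The randomized lasso with stability selection quoted above can be sketched as follows. This is a hedged illustration, not the cited papers' exact procedure: the penalty level, the number of subsamples, and the choice of weights in {α, 1} are my assumptions. Drawing a random per-variable weight and rescaling the corresponding column is equivalent to reweighting the ℓ1 penalty:

```python
import numpy as np
from sklearn.linear_model import Lasso

def randomized_lasso_selection(X, y, alpha_weak=0.5, lam=0.1,
                               n_subsamples=50, seed=0):
    """Stability selection with a randomized lasso (sketch).
    Each run: subsample n/2 rows, draw per-variable weights in
    {alpha_weak, 1}, rescale columns (equivalent to reweighting the
    l1 penalty), fit the lasso, and record which coefficients are
    nonzero. Returns per-variable selection frequencies."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        w = rng.choice([alpha_weak, 1.0], size=p)  # random penalty weights
        Xw = X[idx] * w                            # column-rescaling trick
        model = Lasso(alpha=lam, max_iter=5000).fit(Xw, y[idx])
        counts += model.coef_ != 0
    return counts / n_subsamples

# Toy example: y depends only on the first two of ten variables,
# so those two should be selected in (almost) every subsample.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(200)
freq = randomized_lasso_selection(X, y)
```

Variables whose selection frequency exceeds a chosen threshold (e.g. 0.6) are kept; the quoted range of α between 0.2 and 0.8 controls how aggressively the random reweighting perturbs the penalty.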
“…For general models and high-dimensional settings, sample splitting procedures (Wasserman and Roeder, 2009; Meinshausen et al., 2009) and stability selection (Meinshausen and Bühlmann, 2010; Shah and Samworth, 2013) provide some statistical error control and significance. For the case of a linear model with homoscedastic and Gaussian errors, more recent and powerful techniques have been proposed (Bühlmann, 2013; Zhang and Zhang, 2014; van de Geer et al., 2014; Javanmard and Montanari, 2014; Meinshausen, 2015; Foygel Barber and Candès, 2015) and some of these extend to generalized linear models. For a recent overview, see also Dezeure et al. (2015).…”
Section: Introduction (mentioning)
confidence: 99%