2014
DOI: 10.1214/14-aos1221

On asymptotically optimal confidence regions and tests for high-dimensional models

Abstract (excerpt): … convergence to the limit is not uniform. Furthermore, bootstrap and even subsampling techniques are plagued by the non-continuity of limiting distributions. Nevertheless, in the low-dimensional setting, a modified bootstrap scheme has been proposed; [13] and [14] have recently proposed a residual-based bootstrap scheme. They provide consistency guarantees for the high-dimensional setting; we consider this method in an empirical analysis in Section 4. Some approaches for quantifying uncertainty include the following. …
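The residual-based bootstrap mentioned above can be sketched in a few lines. This is a minimal illustration only, using an ordinary least-squares fit in a low-dimensional design for clarity; the cited papers apply the same resampling idea to lasso-type estimators in the high-dimensional setting, with additional care. The function names are hypothetical.

```python
import numpy as np

def residual_bootstrap(X, y, fit, B=200, rng=None):
    """Residual-based bootstrap: fit once, resample the centred residuals,
    regenerate responses from the fitted model, and refit B times."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    b_hat = fit(X, y)
    resid = y - X @ b_hat
    resid = resid - resid.mean()          # centre the residuals before resampling
    draws = []
    for _ in range(B):
        y_star = X @ b_hat + rng.choice(resid, size=n, replace=True)
        draws.append(fit(X, y_star))
    return b_hat, np.asarray(draws)       # draws: (B, p) bootstrap estimates

# Toy example with an OLS fit (illustration only; not the paper's estimator).
rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[0] = 2.0
y = X @ beta + rng.standard_normal(n)
ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
b_hat, draws = residual_bootstrap(X, y, ols, B=200, rng=rng)
# Percentile intervals come from the empirical quantiles of each column of `draws`.
```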

Cited by 922 publications (1,254 citation statements) · References 56 publications
“…In a penalized regression framework, several inference methods for a low-dimensional sub-vector of a high-dimensional regression coefficient vector have been developed (Van de Geer et al., 2014; Zhang and Zhang, 2014; Voorman et al., 2014), which, however, differ from the goal of testing a high-dimensional parameter here and thus will not be discussed further.…”
Section: Introduction
confidence: 99%
“…Several recent papers study the problem of constructing confidence regions after model selection allowing p ≫ n. In the case of linear mean regression, [6] proposed a double-selection inference in a parametric setting with homoscedastic Gaussian errors, [10] studies the double-selection procedure in a non-parametric setting with heteroscedastic errors, and [39] and [35] proposed estimators based on a “1-step” correction of ℓ1-penalized estimators in parametric models. Going beyond mean models, [35] also provides high-level conditions for the one-step estimator applied to smooth generalized linear problems, [7] analyzes confidence regions for a parametric homoscedastic LAD regression under primitive conditions based on the instrumental LAD regression, and [9] provides two post-selection procedures to build confidence regions for logistic regression. None of the aforementioned papers deal with the problem of the present paper.…”
confidence: 99%
“…Such an approach has been used in the construction of confidence intervals for high-dimensional linear regression in the recent literature. See, for example, [14, 41, 48, 4]. We follow the same principle to de-bias the estimator β(λ) given in Algorithm 1.…”
Section: Statistical Inference
confidence: 99%
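The de-biasing (“one-step correction”) principle referred to in the last statement can be sketched as follows. The correction adds Θ Xᵀ(y − Xβ̂)/n to the initial penalized estimator β̂, where Θ approximates the inverse of the Gram matrix X ᵀX/n. This sketch assumes p < n so the exact Gram inverse is available; in the p ≫ n regime the cited papers construct Θ column-by-column via a nodewise lasso. The lasso solver and function names here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    b = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return b

def debiased_lasso(X, y, lam, Theta):
    """One-step de-biased estimator: b_hat + Theta X^T (y - X b_hat) / n."""
    n = X.shape[0]
    b = lasso_ista(X, y, lam)
    return b + Theta @ X.T @ (y - X @ b) / n

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[0] = 2.0
y = X @ beta + rng.standard_normal(n)
Theta = np.linalg.inv(X.T @ X / n)   # exact inverse only because p < n here
b_db = debiased_lasso(X, y, lam=0.1, Theta=Theta)
```

With the exact Gram inverse, the correction removes the lasso shrinkage entirely and the de-biased estimator coincides with OLS; the interest of the method lies in high dimensions, where Θ is only an approximate inverse and each coordinate of the corrected estimator is asymptotically Gaussian, enabling confidence intervals.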