Learning Theory
DOI: 10.1007/978-3-540-72927-3_11
Resampling-Based Confidence Regions and Multiple Tests for a Correlated Random Vector

Abstract: We derive non-asymptotic confidence regions for the mean of a random vector whose coordinates have an unknown dependence structure. The random vector is supposed to be either Gaussian or to have a symmetric bounded distribution, and we observe n i.i.d. copies of it. The confidence regions are built using a data-dependent threshold based on a weighted bootstrap procedure. We consider two approaches, the first based on a concentration approach and the second on a direct bootstrapped quantile approach. The first on…
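As a rough illustration of the kind of procedure the abstract describes (not the authors' exact construction), a data-dependent threshold can be obtained from a symmetrized (Rademacher) weighted bootstrap. The function name, the choice of the sup-norm, and all parameters below are assumptions made for this sketch:

```python
import numpy as np

def bootstrap_sup_threshold(X, alpha=0.05, B=1000, rng=None):
    """Weighted-bootstrap threshold for a sup-norm confidence region.

    X : (n, K) array, n i.i.d. copies of a K-dimensional vector whose
        coordinates may be arbitrarily correlated.
    Returns t such that {mu : max_k |Xbar_k - mu_k| <= t} is taken as a
    level-(1 - alpha) confidence region under the bootstrap heuristic.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, _ = X.shape
    Xbar = X.mean(axis=0)
    centered = X - Xbar
    stats = np.empty(B)
    for b in range(B):
        # Rademacher (sign) weights: a symmetric weighted bootstrap
        eps = rng.choice([-1.0, 1.0], size=n)
        # sup-norm deviation of the sign-weighted empirical mean
        stats[b] = np.max(np.abs((eps[:, None] * centered).mean(axis=0)))
    return np.quantile(stats, 1.0 - alpha)
```

Because the weights multiply the centered observations directly, the resampled statistic inherits the unknown correlation between coordinates, so no covariance matrix needs to be estimated explicitly.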

Cited by 2 publications (5 citation statements)
References 10 publications
“…Blanchard and Fleuret present an extension of the Occam's razor principle for generalization error analysis in classification, and apply it to derive p-value adjustment procedures for controlling FDR [5]. Arlot et al. develop concentration inequalities that apply to multiple testing with correlated observations [2]. None of these works consider FDR/FNDR as performance criteria for classification.…”
Section: Related Concepts
confidence: 99%
“…We use two different ingredients to compute ρ(m). The first one is a resampling estimator of ‖s_m − ŝ_m‖², where s_m denotes the projection of s onto S_m. It is naturally derived from Efron's heuristic (see Efron [10]), in the same way as Arlot, Blanchard & Roquain [2].…”
Section: Introduction
confidence: 99%
“…The first one is a resampling estimator of ‖s_m − ŝ_m‖², where s_m denotes the projection of s onto S_m. It is naturally derived from Efron's heuristic (see Efron [10]), in the same way as Arlot, Blanchard & Roquain [2]. This allows us in particular to keep the whole sample to build ŝ_m.…”
Section: Introduction
confidence: 99%
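Efron's heuristic, as invoked in these snippets, replaces the unobservable fluctuation ŝ − s by the resampled fluctuation ŝ_W − ŝ. A minimal sketch for the simplest case, where ŝ is the sample mean and the weights are Efron's multinomial bootstrap weights; the function name and setup are illustrative assumptions, not the cited papers' exact estimator:

```python
import numpy as np

def resampling_error_estimate(Y, B=2000, rng=None):
    """Efron-heuristic estimate of E||shat - s||^2 for shat = sample mean.

    Y : (n, K) data array. The unknown error (shat - s) is mimicked by
    the resampled error (shat_W - shat), averaged over B draws of
    multinomial (Efron) bootstrap weights.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = Y.shape[0]
    shat = Y.mean(axis=0)
    total = 0.0
    for _ in range(B):
        # Efron weights: multinomial counts rescaled to sum to 1
        w = rng.multinomial(n, np.full(n, 1.0 / n)) / n
        shat_w = (w[:, None] * Y).sum(axis=0)
        total += np.sum((shat_w - shat) ** 2)
    return total / B
```

Note that the whole sample is used both for ŝ and for its resampled copies, which mirrors the snippet's remark that resampling lets one keep all observations to build ŝ_m.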