2009
DOI: 10.1214/09-ejs363

The false discovery rate for statistical pattern recognition

Abstract: The false discovery rate (FDR) and false nondiscovery rate (FNDR) have received considerable attention in the literature on multiple testing. These performance measures are also appropriate for classification, and in this work we develop generalization error analyses for FDR and FNDR when learning a classifier from labeled training data. Unlike more conventional classification performance measures, the empirical FDR and FNDR are not binomial random variables but rather a ratio of binomials, which introduces challenges …
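
The abstract's point that the empirical FDR and FNDR are ratios of counts, rather than simple averages, is easy to make concrete. The following minimal sketch (not taken from the paper; the function name and the convention of returning 0 for an empty denominator are illustrative assumptions) computes both quantities from binary labels and predictions, with a predicted class 1 playing the role of a "discovery".

```python
import numpy as np

def empirical_fdr_fndr(y_true, y_pred):
    """Empirical FDR and FNDR for binary labels in {0, 1}.

    Treating a predicted 1 as a "discovery": FDR is the fraction of
    discoveries whose true label is 0, and FNDR is the fraction of
    nondiscoveries whose true label is 1.  Both denominators are random,
    so each quantity is a ratio of binomial counts rather than a
    binomial proportion.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    discoveries = (y_pred == 1)
    nondiscoveries = ~discoveries

    false_disc = np.sum(discoveries & (y_true == 0))
    false_nondisc = np.sum(nondiscoveries & (y_true == 1))

    # Convention (an assumption here): the rate is 0 when the
    # corresponding denominator is empty.
    fdr = false_disc / discoveries.sum() if discoveries.any() else 0.0
    fndr = false_nondisc / nondiscoveries.sum() if nondiscoveries.any() else 0.0
    return float(fdr), float(fndr)

# Example:
#   empirical_fdr_fndr([0, 0, 1, 1, 1, 0], [0, 1, 1, 1, 0, 1])  ->  (0.5, 0.5)
```

Because the denominators (the numbers of discoveries and nondiscoveries) are themselves random, standard binomial concentration arguments do not apply directly, which is the difficulty the paper's generalization error analysis addresses.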

Cited by 7 publications (4 citation statements); references 28 publications.
“…That said, several works have studied FDR control in prediction problems, especially in binary classification. Among them, [19] connects classification to multiple testing, showing that controlling type-I error (FDR) at certain levels by thresholding an oracle classifier asymptotically achieves the optimal (Bayes) classification risk; [50] provides high-probability bounds for estimating the FDR achieved by classification rules, rather than adaptively controlling it at a specific level.…”
Section: Related Work (mentioning)
confidence: 99%
“…Some analytical results on FDR-controlled classification can be found in the works by Scott et al. (2009) and Genovese and Wasserman (2004). In the present work, we focus on practical considerations, namely, the implementability of Algorithm 3.1 below and alternative computational approaches based on direct estimation of density ratios.…”
Section: Bayesian Classification Rules (mentioning)
confidence: 99%
“…The principle of controlling a specific error rate related to classical statistical confidence criteria has also been considered in, e.g., Scott and Nowak (2005) and Scott, Bellala, Willett et al. (2009) in the context of binary classification; see also Tong, Feng, and Zhao (2016) for a more recent survey. In that setting, an asymmetry is introduced between the classes: the goal is to maximize the correct classification rate in class 1 subject to a fixed control, at a prescribed level, of the classification error in class 0, either in absolute value (Neyman-Pearson classification) or in the sense of FDR.…”
Section: Introduction (mentioning)
confidence: 99%
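
The Neyman-Pearson side of this formulation (fix the class-0 error at a prescribed level, then do as well as possible on class 1) can be illustrated with a simple thresholding sketch. This is an empirical-quantile rule under assumed conventions, with hypothetical names, not the finite-sample constructions analyzed in the cited works; the FDR variant constrains the error rate among declared positives rather than the class-0 error.

```python
import numpy as np

def class0_error_threshold(scores_class0, alpha):
    """Pick a score threshold so that the empirical class-0 error
    (fraction of class-0 scores strictly above the threshold) is <= alpha.

    Illustrative empirical-quantile rule on a class-0 holdout sample.
    """
    s = np.sort(np.asarray(scores_class0, dtype=float))
    n = len(s)
    # Index of the smallest order statistic that leaves at most
    # floor(alpha * n) class-0 scores strictly above it.
    k = int(np.ceil((1.0 - alpha) * n)) - 1
    k = min(max(k, 0), n - 1)
    return s[k]

# Usage sketch with a hypothetical real-valued scoring function f:
#   thr = class0_error_threshold(f(X_class0_holdout), alpha=0.05)
#   y_hat = (f(X_new) > thr).astype(int)   # declare class 1 only above thr
```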