2021
DOI: 10.48550/arxiv.2110.01052
Preprint

Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control

Abstract: We introduce Learn then Test, a framework for calibrating machine learning models so that their predictions satisfy explicit, finite-sample statistical guarantees regardless of the underlying model and (unknown) data-generating distribution. The framework addresses, among other examples, false discovery rate control in multi-label classification, intersection-over-union control in instance segmentation, and the simultaneous control of the type-1 error of outlier detection and confidence set coverage in classification.
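To make the calibration idea in the abstract concrete, the following is a minimal, illustrative sketch under simplifying assumptions: losses bounded in [0, 1], a finite grid of candidate thresholds, a Hoeffding p-value, and a Bonferroni correction. The function names, the choice of bound, and the multiple-testing correction are illustrative, not the paper's exact algorithm; the framework allows any valid p-values combined with any family-wise-error-rate-controlling procedure.

# A minimal sketch of the Learn then Test calibration idea (assumptions:
# bounded loss in [0, 1], user-chosen risk level alpha and error level delta,
# and a finite grid of candidate thresholds; names are illustrative).
import numpy as np


def hoeffding_pvalue(mean_loss, n, alpha):
    """Valid p-value for H0: E[loss] > alpha, via Hoeffding's inequality."""
    return np.exp(-2.0 * n * max(alpha - mean_loss, 0.0) ** 2)


def learn_then_test(losses_by_lambda, lambdas, alpha, delta):
    """Return the candidate thresholds certified to control risk at level alpha.

    losses_by_lambda: array of shape (n_calibration, n_lambdas), losses in [0, 1].
    Uses a Bonferroni correction over the grid, one of the simplest valid choices.
    """
    n, m = losses_by_lambda.shape
    valid = []
    for j, lam in enumerate(lambdas):
        p = hoeffding_pvalue(losses_by_lambda[:, j].mean(), n, alpha)
        if p <= delta / m:  # Bonferroni-adjusted rejection of "risk too high"
            valid.append(lam)
    return valid

Tighter p-values or sequential testing procedures reject more thresholds while preserving the same finite-sample guarantee.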

Cited by 11 publications (15 citation statements). References 68 publications.
“…For the proof of this fact, along with a discussion of the tighter confidence bounds used in the experiments, see previous work [4,13]. This calibration procedure is easy to implement in code, and we summarize it in Algorithm 2.…”
Section: Algorithm 2 Pseudocode For Computing λ
confidence: 88%
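The calibration step this quote refers to can be sketched as follows, under simplifying assumptions: a [0, 1]-bounded loss that is monotone in λ, a plain Hoeffding upper confidence bound (the cited works discuss tighter bounds), and a fixed-sequence scan over a grid ordered from most to least conservative. The names and scan order are illustrative.

# A hedged sketch of one common lambda-hat calibration variant: scan a monotone
# grid of thresholds and keep the last lambda whose upper confidence bound on
# the risk stays below alpha. Hoeffding is used here only for simplicity.
import numpy as np


def hoeffding_ucb(mean_loss, n, delta):
    """Upper (1 - delta) confidence bound on the mean of a [0, 1]-bounded loss."""
    return mean_loss + np.sqrt(np.log(1.0 / delta) / (2.0 * n))


def calibrate_lambda(risk_fn, lambdas, alpha, delta, n):
    """Fixed-sequence scan; lambdas ordered from most to least conservative.

    risk_fn(lam) returns the empirical risk on n calibration points.
    Stops at the first lambda whose UCB exceeds alpha and returns the previous one.
    """
    lam_hat = None
    for lam in lambdas:
        if hoeffding_ucb(risk_fn(lam), n, delta) <= alpha:
            lam_hat = lam  # still certified; try a less conservative choice
        else:
            break  # fixed-sequence testing stops at the first failure
    return lam_hat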
“…Of particular interest to us is the method of conformalized quantile regression (CQR) [6]. We build directly on CQR by replacing the conformal subroutine with the fixed-sequence testing procedure from [4,13]. Other works have applied distribution-free uncertainty quantification to biological and medical computer vision tasks [76][77][78][79][80][81].…”
Section: Related Work
confidence: 99%
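A rough sketch of the construction this quote describes, with assumed names rather than the cited paper's API: CQR-style intervals built from learned lower and upper quantile predictions, widened by a scalar λ that would then be calibrated for miscoverage by a fixed-sequence scan like the one sketched earlier, instead of the usual conformal quantile step.

# Illustrative CQR-style intervals [q_lo(x) - lam, q_hi(x) + lam] and the
# per-example miscoverage loss that a fixed-sequence scan would control.
import numpy as np


def cqr_interval(q_lo, q_hi, lam):
    """Prediction interval for one input, expanded by the calibrated lambda."""
    return q_lo - lam, q_hi + lam


def miscoverage_loss(y, q_lo, q_hi, lam):
    """Per-example 0/1 loss: 1 if y falls outside the lambda-expanded interval."""
    return ((y < q_lo - lam) | (y > q_hi + lam)).astype(float)


# Example: empirical miscoverage on a small calibration set for one candidate
# lambda; feeding this into the earlier scan yields the calibrated expansion.
y_cal = np.array([1.2, 0.7, 2.3])
lo_cal = np.array([0.9, 0.5, 1.8])
hi_cal = np.array([1.5, 1.1, 2.1])
print(miscoverage_loss(y_cal, lo_cal, hi_cal, lam=0.1).mean())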
“…Note on Terminology. The term calibration carries different meanings across fields such as measurement technology, engineering, economics, and statistics; see, e.g., Franklin (1999); Dawkins et al (2001); Kodovskỳ and Fridrich (2009); Osborne (1991); Vovk et al (2020); Angelopoulos et al (2021). In those settings it generally means adjusting a measurement to agree with a desired standard, within a specified accuracy.…”
Section: Related Work
confidence: 99%
“…Our work also relies on the nested set outlook on conformal prediction [13]. We also build directly on existing work involving distribution-free risk-controlling prediction sets and Learn then Test [8,3]. The LAC baseline is taken from [26], and the ordinal CDF baseline is similar to the softmax method in [5], which is in turn motivated by [15,24].…”
Section: Conformal Prediction and Distribution-free Uncertainty Quantification
confidence: 99%