2020
DOI: 10.48550/arxiv.2009.14193
Preprint

Uncertainty Sets for Image Classifiers using Conformal Prediction

Abstract: Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network's probability estimates, but they do not have formal guarantees. We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%. […]
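
The abstract describes wrapping an arbitrary classifier so that it emits a set of labels rather than a single prediction. As a rough illustration only, the sketch below shows plain split conformal prediction with a thresholded-softmax score; it is not the paper's exact (adaptive, regularized) procedure, and the array names (probs_cal, labels_cal, probs_test) are placeholders rather than anything from the paper.

import numpy as np

def conformal_predictive_sets(probs_cal, labels_cal, probs_test, alpha=0.1):
    # probs_*: softmax outputs of any pretrained classifier (rows sum to 1);
    # labels_cal: integer true labels for the held-out calibration set.
    n = len(labels_cal)
    # Nonconformity score on calibration data: 1 - probability of the true class.
    cal_scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    # Finite-sample-corrected empirical quantile of the calibration scores
    # (needs n large enough that the level stays <= 1, roughly n >= (1 - alpha) / alpha).
    level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(cal_scores, level, method="higher")
    # Predictive set for each test point: every label whose score clears the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in probs_test]

Under exchangeability of calibration and test data, each returned set contains the true label with probability at least 1 - alpha over the draw of the calibration set.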

Cited by 23 publications (37 citation statements). References 13 publications.

“…Essentially all conformalized confidence sets offer the coverage guarantee (1), so it is of interest to improve various aspects of the mappings C_n. For example, works focus on improving the precision of these methods and optimizing average confidence set size [31,42,22,39,2,3], or on bridging the gap with other forms of coverage, like classwise [42] or conditional [40,3,8,41] coverage.…”
Section: Related Work
Mentioning confidence: 99%
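
For reference, the guarantee labeled (1) in this excerpt is presumably the standard marginal coverage guarantee of conformal prediction: for a confidence-set mapping C_n built from n exchangeable calibration points and a user-chosen error level \alpha,

\mathbb{P}\bigl( Y_{\mathrm{test}} \in C_n(X_{\mathrm{test}}) \bigr) \;\ge\; 1 - \alpha .

With \alpha = 0.1 this is the 90% coverage level mentioned in the abstract.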
“…Uncertainty quantification for classification. For classification problems, two main types of uncertainty quantification methods have been considered: outputting discrete prediction sets with guarantees of covering the true (discrete) label [70,71,38,7,12,18,17], or calibrating the predicted probabilities [51,72,73,37,26]. The connection between prediction sets and calibration was discussed in [27].…”
Section: Related Work
Mentioning confidence: 99%
“…Large κ; extension to over-parametrized learning. While Theorem 1 requires a small κ = d/n, the approximation formula (7) suggests that the over-coverage should get more severe as κ (the measure of over-parametrization in this problem) gets larger. We confirm this trend experimentally in our simulations in Section 5.1.…”
Section: Quantile Regression Exhibits Under-coverage
Mentioning confidence: 99%
“…Experimental setup. We split the dataset (5,528 […]) [54] to train a random forest f : X × Y → R with 100 trees on the training set, where f(x, y) ∈ R is the probability assigned to label y ∈ Y, and use f in conjunction with the program shown in Figure 9. This program includes two thresholds c_low and c_high, and only assigns a low dose to a patient with covariates x if f(x) ≥ 1 − c_low, and similarly for a high dose, i.e., it only assigns the riskier outcomes when f is sufficiently confident in its prediction.…”
Section: Case Study 2: Precision Medicine
Mentioning confidence: 99%
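
The excerpt above describes a confidence-gated decision rule. The sketch below is only a rough illustration of that rule, not the cited paper's program (its Figure 9); the label encoding, the fallback action, and all names are assumptions.

import numpy as np

LOW, MEDIUM, HIGH = 0, 1, 2  # assumed label encoding, not stated in the excerpt

def assign_dose(probs, c_low, c_high):
    # probs: predicted class probabilities for one patient, e.g. from a random forest.
    if probs[LOW] >= 1.0 - c_low:
        return LOW     # riskier action, taken only when the model is confident
    if probs[HIGH] >= 1.0 - c_high:
        return HIGH    # riskier action, taken only when the model is confident
    return MEDIUM      # assumed default when neither threshold is met

# Example: with c_low = c_high = 0.2, a class needs probability >= 0.8 to be chosen.
print(assign_dose(np.array([0.85, 0.10, 0.05]), c_low=0.2, c_high=0.2))  # -> 0 (LOW)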
“…Conformal prediction. There has been work on conformal prediction [6,58,64,68], including applications of these ideas to deep learning [5,37,51,52], which aims to use statistical techniques to provide guarantees on the predictions of machine learning models. In particular, these methods provide confidence sets of outputs that contain the true label with high probability.…”
Section: Related Work
Mentioning confidence: 99%