2022
DOI: 10.48550/arxiv.2205.05878
Preprint

Training Uncertainty-Aware Classifiers with Conformalized Deep Learning

Abstract: Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty and estimate reliable probabilities. In particular, they tend to be overconfident. We address this problem by developing a novel training algorithm that can lead to more dependable uncertainty estimates, without sacrificing predictive power. The idea is to mitigate overconfidence by minimizing a loss function, inspired by advances in conformal inference…
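The abstract only sketches the idea of a conformal-inference-inspired training loss, so the snippet below is a minimal, hypothetical illustration rather than the authors' exact objective: it combines standard cross-entropy on a training batch with a penalty that pushes nonconformity scores on a holdout batch toward the uniform distribution a well-calibrated conformal score would follow. The function names, the choice of score, and the weight `lam` are all assumptions made for the example.

```python
# Hypothetical sketch (not the paper's exact loss): cross-entropy plus a penalty
# encouraging holdout nonconformity scores to look Uniform(0, 1), i.e. calibrated.
import torch
import torch.nn.functional as F

def uniform_matching_penalty(scores: torch.Tensor) -> torch.Tensor:
    """Differentiable distance between the empirical distribution of `scores`
    (assumed to lie in [0, 1]) and the Uniform(0, 1) distribution."""
    n = scores.shape[0]
    sorted_scores, _ = torch.sort(scores)
    # The i-th order statistic of n uniform draws has expectation i / (n + 1).
    targets = torch.arange(1, n + 1, device=scores.device, dtype=scores.dtype) / (n + 1)
    return torch.mean((sorted_scores - targets) ** 2)

def conformal_aware_loss(logits_train, y_train, logits_holdout, y_holdout, lam=0.1):
    """Cross-entropy on a training batch plus a conformal-inspired penalty on a
    holdout batch; `lam` trades predictive accuracy against score calibration."""
    ce = F.cross_entropy(logits_train, y_train)
    probs = F.softmax(logits_holdout, dim=1)
    # A simple nonconformity score: one minus the softmax probability of the true class.
    scores = 1.0 - probs.gather(1, y_holdout.unsqueeze(1)).squeeze(1)
    return ce + lam * uniform_matching_penalty(scores)
```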

Cited by 1 publication (2 citation statements)
References 45 publications

“…Through experimental results, including for modulation classification [11], [12], meta-XB was shown to outperform both conventional conformal prediction-based solutions and meta-learning conformal prediction schemes. Future work may integrate meta-learning with CP-aware training criteria [41], [42], or with stochastic set predictors.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
“…CP-aware loss. [41] and [42] proposed CP-aware loss functions to enhance the efficiency or per-input validity (3) of VB-CP. The drawback of these solutions is that they require a large amount of data samples, i.e., Nτ ≫ 1, unlike the meta-learning methods studied here.…”
Section: Per-Task Validity of Meta-XB (citation type: mentioning; confidence: 99%)
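The second citation statement refers to validation-based conformal prediction (VB-CP), i.e. split conformal prediction calibrated on a held-out set. As a point of reference, here is a minimal sketch of split conformal prediction for a classifier, using one minus the softmax probability of a label as the nonconformity score; the function names and score choice are illustrative and not tied to the specific constructions in [41] or [42].

```python
# Minimal split (validation-based) conformal prediction sketch for classification.
import numpy as np

def calibrate_threshold(cal_probs: np.ndarray, cal_labels: np.ndarray, alpha: float) -> float:
    """Conformal quantile of nonconformity scores on the calibration set."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]  # 1 - p(true label)
    k = int(np.ceil((n + 1) * (1 - alpha)))             # finite-sample correction
    return np.sort(scores)[min(k, n) - 1]

def prediction_set(test_probs: np.ndarray, threshold: float) -> list:
    """All labels whose nonconformity score does not exceed the threshold."""
    return [y for y, p in enumerate(test_probs) if 1.0 - p <= threshold]

# Usage: with a model returning softmax probabilities,
#   tau = calibrate_threshold(probs_cal, labels_cal, alpha=0.1)
#   sets = [prediction_set(p, tau) for p in probs_test]
# the sets cover the true label with probability at least 1 - alpha on average,
# which is the marginal validity guarantee the cited discussion contrasts with
# per-input validity.
```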