2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00999
Post-hoc Uncertainty Calibration for Domain Drift Scenarios

Cited by 34 publications (42 citation statements)
References 4 publications
“…Calibration Performance under Dataset Drift: Tomani et al. [52] show that DNNs are over-confident and highly uncalibrated under dataset/domain shift. Our experiments show that a model trained with MDCA fares well in terms of calibration performance even under non-semantic/natural domain shift.…”
Section: Results
Mentioning confidence: 99%
“…Following [31], we also divide the training set of the photo domain into a 9:1 train/val split. Rotated MNIST Dataset: This dataset is also used for domain-shift experiments. Inspired by [52], we create 5 different test sets, namely {M_15, M_30, M_45, M_60, M_75}. Domain drift is introduced in each M_x by rotating the images in the MNIST test set by x degrees counter-clockwise.…”
Section: Experiments
Mentioning confidence: 99%
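The rotated test-set construction quoted above can be sketched in a few lines. This is an illustrative assumption, not code from the cited paper; the function name `make_rotated_test_sets` and the dictionary layout are hypothetical:

```python
import numpy as np
from scipy.ndimage import rotate

def make_rotated_test_sets(images, angles=(15, 30, 45, 60, 75)):
    """Build one drifted test set per rotation angle.

    `images` is an array of 2-D grayscale images (e.g. MNIST test digits);
    each set M_x contains every image rotated x degrees counter-clockwise,
    keeping the original image shape (reshape=False).
    """
    return {
        f"M_{a}": np.stack([rotate(img, angle=a, reshape=False) for img in images])
        for a in angles
    }
```

Evaluating calibration on each `M_x` separately then shows how miscalibration grows with the magnitude of the drift.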
“…A straightforward yet efficient strategy to mitigate mis-calibrated predictions is to include a post-processing step, which transforms the probability predictions of a deep network [5,8,30,34]. Among these methods, temperature scaling [8], a variant of Platt scaling [28], employs a single scalar parameter over all the pre-softmax activations, which results in softened class predictions.…”
Section: Related Work
Mentioning confidence: 99%
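The single-scalar mechanism described above can be sketched as follows. The grid-search routine `fit_temperature` is a simplified assumption for illustration; the standard formulation fits T by minimizing negative log-likelihood on a held-out validation set, typically with gradient descent:

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax over pre-softmax activations, divided by temperature T.

    T > 1 softens the distribution; T = 1 recovers the uncalibrated model.
    """
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the scalar T that minimizes negative log-likelihood on held-out data."""
    nlls = []
    for T in grid:
        p = softmax(logits, T)
        nlls.append(-np.log(p[np.arange(len(labels)), labels] + 1e-12).mean())
    return grid[int(np.argmin(nlls))]
```

Because T rescales all logits uniformly, the arg-max class (and hence accuracy) is unchanged; only the confidence of the predictions is adjusted.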
“…Despite its good performance on in-domain samples, [25] demonstrated that temperature scaling does not work well under data distributional shift. [30] mitigated this limitation by transforming the validation set before performing the post-hoc calibration step. In [20], a ranking model was introduced to improve post-processing model calibration, whereas [5] used a simple regression model to predict the temperature parameter during the inference phase.…”
Section: Related Work
Mentioning confidence: 99%