2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00018
The Devil is in the Margin: Margin-based Label Smoothing for Network Calibration

Cited by 29 publications (17 citation statements)
References 11 publications
“…Note that the reasoning behind working directly on the logit space is two-fold. First, observations in [11] suggest that directly imposing the constraints on the logits results in better performance than on the softmax predictions. And second, by imposing a bounded constraint on the logit values, their magnitudes are further decreased, which has a favorable effect on model calibration [17].…”
Section: Proposed Constrained Calibration Approach
confidence: 99%
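To make the second point concrete, a bounded constraint on logit values can be rendered as a hinge penalty on logit magnitudes added to the task loss. The PyTorch sketch below is illustrative only; the bound `bound` and weight `lam` are assumed hyperparameter names, not the citing paper's exact formulation:

```python
import torch.nn.functional as F

def bounded_logit_loss(logits, targets, bound=10.0, lam=0.1):
    """Cross-entropy plus a hinge penalty that discourages logit
    magnitudes above `bound`, shrinking the large logits that drive
    overconfident softmax predictions."""
    ce = F.cross_entropy(logits, targets)
    # Penalize only the portion of each |logit| that exceeds the bound.
    penalty = F.relu(logits.abs() - bound).mean()
    return ce + lam * penalty
```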
“…And second, by imposing a bounded constraint on the logit values, their magnitudes are further decreased, which has a favorable effect on model calibration [17]. We stress that, although both [11] and our method enforce constraints on the predicted logits, [11] is fundamentally different. In particular, [11] imposes an inequality constraint on the logit distances so that it encourages uniform-like distributions up to a given margin, disregarding the importance of each class in a given patch.…”
Section: Proposed Constrained Calibration Approach
confidence: 99%
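The inequality constraint on logit distances described in this excerpt can be sketched as a hinge penalty that is zero while every logit lies within a margin of the per-sample maximum, which is what encourages uniform-like logits up to the margin. A minimal PyTorch sketch under that reading; `margin` and `lam` are assumed hyperparameter names, and the published method may differ in details:

```python
import torch.nn.functional as F

def margin_based_loss(logits, targets, margin=10.0, lam=0.1):
    """Cross-entropy plus a hinge penalty on the distance between the
    per-sample maximum logit and every other logit, active only when
    that distance exceeds `margin`."""
    ce = F.cross_entropy(logits, targets)
    # Non-negative distance of each logit to the per-sample maximum, shape (B, C).
    dist = logits.max(dim=1, keepdim=True).values - logits
    # Only distances beyond the margin contribute (inequality constraint).
    penalty = F.relu(dist - margin).sum(dim=1).mean()
    return ce + lam * penalty
```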
“…Recently, many approaches have been developed to alleviate the overconfidence problem by calibrating the confidence, i.e., matching the accuracy and confidence scores so that they reflect the predictive uncertainty [46]. Specifically, one category of approaches [22,43,50,51,57,65,70,72,74,79] aims to learn well-calibrated models during training. For instance, mixup [65], label smoothing [51] and focal loss [50] have been demonstrated to be effective for confidence calibration.…”
Section: Introduction
confidence: 99%
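Of the training-time methods named in this excerpt, label smoothing is the simplest to illustrate: one-hot targets are mixed with the uniform distribution before computing cross-entropy. A minimal sketch, with the smoothing factor `eps` as an assumed hyperparameter:

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, targets, eps=0.1):
    """Cross-entropy against smoothed targets:
    (1 - eps) * one_hot + eps / K over the K classes."""
    num_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    with torch.no_grad():
        # Uniform component eps / K everywhere, plus (1 - eps) on the true class.
        smooth = torch.full_like(log_probs, eps / num_classes)
        smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eps + eps / num_classes)
    return -(smooth * log_probs).sum(dim=1).mean()
```

Recent PyTorch versions expose the same objective directly as `F.cross_entropy(logits, targets, label_smoothing=eps)`.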