2021
DOI: 10.48550/arxiv.2107.02170
Preprint

On Model Calibration for Long-Tailed Object Detection and Instance Segmentation

Abstract: Vanilla models for object detection and instance segmentation suffer from a heavy bias toward detecting frequent objects in the long-tailed setting. Existing methods address this issue mostly during training, e.g., by re-sampling or re-weighting. In this paper, we investigate a largely overlooked approach: post-processing calibration of confidence scores. We propose NORCAL, Normalized Calibration for long-tailed object detection and instance segmentation, a simple and straightforward recipe that reweighs the p…
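The abstract only sketches the recipe, so here is a minimal illustration of frequency-based post-hoc score calibration in Python. The function name, the per-class count input, and the exponent `gamma` are assumptions for illustration, not the authors' reference implementation; consult the paper for NORCAL's exact treatment (e.g., of the background class).

```python
import numpy as np

def norcal_scores(scores, class_counts, gamma=1.0):
    """Post-hoc calibration sketch: down-weight each foreground class
    score by a power of its training frequency, then re-normalize."""
    scores = np.asarray(scores, dtype=np.float64)
    counts = np.asarray(class_counts, dtype=np.float64)
    calibrated = scores / np.power(counts, gamma)  # frequent classes suppressed
    return calibrated / calibrated.sum()           # normalize to sum to 1

# Example: a frequent class (10,000 training instances) vs. a rare one (50).
raw = [0.7, 0.3]
counts = [10_000, 50]
print(norcal_scores(raw, counts, gamma=0.5))  # the rare class now dominates
```

The key property this sketch shows is that calibration happens entirely after training: no re-sampling or loss re-weighting is needed, only a rescaling of the predicted scores.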

Cited by 1 publication (2 citation statements)
References 68 publications (203 reference statements)

Citation statements:
“…Similarly, ACSL [38] only penalizes negative classes above a threshold. Separating the categories into several small groups [21,42] and simple calibration [26,47] also help. [28,36] modify the original softmax function by embedding the distribution prior, with good results.…”
Section: Related Work
confidence: 99%
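For context on the "distribution prior embedded in softmax" family that this statement cites, the following is a minimal sketch of a logit-adjusted softmax, under the assumption that the prior is the empirical class frequency; the names and the temperature `tau` are illustrative, not taken from [28] or [36].

```python
import numpy as np

def logit_adjusted_softmax(logits, class_priors, tau=1.0):
    """Softmax with a class-prior offset: subtracting tau * log(prior)
    from each logit up-weights rare classes at inference time."""
    adjusted = np.asarray(logits) - tau * np.log(np.asarray(class_priors))
    adjusted -= adjusted.max()        # subtract max for numerical stability
    exp = np.exp(adjusted)
    return exp / exp.sum()

# Example: equal logits, but the second class is 99x rarer.
print(logit_adjusted_softmax([2.0, 2.0], [0.99, 0.01]))  # rare class wins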
“…Comparison with NORCAL [26], which is also complementary to other methods. NORCAL is grid-searched, and results are reported with the optimal hyper-parameters for each method.…”
confidence: 99%
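Since this statement notes that NORCAL's numbers come from grid-searched hyper-parameters, a sketch of that protocol might look like the following; `evaluate_map` is a hypothetical stand-in for a validation-mAP run, and the grid values are assumptions.

```python
def grid_search_gamma(evaluate_map, grid=(0.25, 0.5, 1.0, 1.5, 2.0)):
    """Try each calibration exponent and keep the best-scoring one."""
    scores = {g: evaluate_map(g) for g in grid}  # one eval per grid point
    best = max(scores, key=scores.get)
    return best, scores[best]

# Usage: best_gamma, best_map = grid_search_gamma(my_validation_run)
```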