2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00609
Anchor Loss: Modulating Loss Scale Based on Prediction Difficulty

Abstract: We propose a novel loss function that dynamically rescales the cross entropy based on prediction difficulty regarding a sample. Deep neural network architectures in image classification tasks struggle to disambiguate visually similar objects. Likewise, in human pose estimation symmetric body parts often confuse the network with assigning indiscriminative scores to them. This is due to the output prediction, in which only the highest confidence label is selected without taking into consideration a measure of un…

Cited by 49 publications (22 citation statements) | References 36 publications
“…For instance, [18] scaled the loss by inverse class frequency. An alternative strategy down-weighs the loss of well-classified examples, preventing easy negatives from dominating the loss [27], or dynamically rescales the cross-entropy loss based on the difficulty of classifying a sample [34]. [6] proposed to encourage larger margins for rare classes.…”
Section: Long-tail Recognition (mentioning confidence: 99%)
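The down-weighting strategy mentioned above (focal loss, [27]) multiplies the cross-entropy term by a factor that shrinks as the true-class probability grows. A minimal sketch of that modulation, using only the standard library:

```python
import math

def focal_loss(p_true, gamma=2.0):
    """Focal-style modulation of cross-entropy.

    The (1 - p)**gamma factor shrinks the loss for well-classified
    examples (p_true close to 1), so easy negatives stop dominating
    the total loss.  gamma=0 recovers plain cross-entropy.
    """
    p_true = min(max(p_true, 1e-7), 1.0 - 1e-7)
    return -((1.0 - p_true) ** gamma) * math.log(p_true)

# An easy example (p=0.9) contributes far less than a hard one (p=0.1).
easy = focal_loss(0.9)
hard = focal_loss(0.1)
```

With gamma set to 0 the modulating factor is 1 and the function reduces to ordinary cross-entropy, which makes the down-weighting effect easy to verify.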
“…A similar problem arises in object detection, where the model must detect scarce objects among countless easily classified background examples. This problem is typically addressed by adjusting the loss scale based on example difficulty and avoiding large gradient updates on trivial predictions [14,15].…”
Section: Overlap Ratio Modulation (mentioning confidence: 99%)
“…This is because the background occupies most of the image in an object-detection problem, and it is necessary to eliminate the label imbalance to identify a specific object in the minority class. Thus far, studies have segmented medical images using loss functions based on the Dice coefficient [43,44], and robust losses for imbalanced data, such as focal loss [18], dice loss [19], and anchor loss [20], have been proposed. We applied BERT, adopting focal loss and dice loss as the loss functions, to the text segmentation of novels into paragraphs, and demonstrated the effectiveness of the approach [45][46][47].…”
Section: Imbalanced Classification (mentioning confidence: 99%)
“…Motivated by focal loss, anchor loss (AL) [20] is a loss function that dynamically scales the cross-entropy loss on the basis of the difficulty of predicting the sample. Similar to focal loss, AL was proposed for use in object-detection tasks, where the imbalance between the number of background pixels and target-object pixels is a bottleneck.…”
Section: Anchor Loss (mentioning confidence: 99%)
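The statement above describes anchor loss as scaling cross-entropy by prediction difficulty. A hypothetical sketch of that idea follows: it is a paraphrase, not the paper's exact formulation, and the choice of the true-class probability as the "anchor" against which background scores are compared is an illustrative assumption.

```python
import math

def anchor_loss(probs, target, gamma=0.5):
    """Hypothetical anchor-style modulation (paraphrasing the idea in [20]).

    Plain cross-entropy is applied to the target class; each background
    term is rescaled by (1 + q_k - q_star)**gamma, where q_star is the
    true-class probability used as the anchor.  Background classes that
    outscore the true class (q_k > q_star) are up-weighted.
    """
    q_star = min(max(probs[target], 1e-7), 1.0 - 1e-7)
    loss = -math.log(q_star)  # standard cross-entropy on the true class
    for k, q in enumerate(probs):
        if k == target:
            continue
        q = min(max(q, 1e-7), 1.0 - 1e-7)
        # modulator >= 1 when q > q_star (a confusing class), < 1 otherwise
        loss += -((1.0 + q - q_star) ** gamma) * math.log(1.0 - q)
    return loss
```

A confused prediction (a background class outscoring the target) incurs a larger penalty than a confident correct one, which is the behavior the citing paper attributes to AL.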