Asymmetric Loss For Multi-Label Classification
Preprint, 2020
DOI: 10.48550/arxiv.2009.14119

Abstract: Pictures of everyday life are inherently multi-label in nature, so multi-label classification is commonly used to analyze their content. In typical multi-label datasets, each picture contains only a few positive labels and many negative ones. This positive-negative imbalance can cause gradients from positive labels to be under-emphasized during training, leading to poor accuracy. In this paper, we introduce a novel asymmetric loss ("ASL") that operates differently on positive and negative samples. The loss…
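The truncated abstract only states that ASL treats positive and negative samples differently. Below is a minimal PyTorch sketch of one way such an asymmetric loss can be realized; the focusing exponents gamma_pos/gamma_neg and the probability margin clip are illustrative assumptions for this sketch, not values confirmed by the text shown here.

```python
import torch


def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, clip=0.05, eps=1e-8):
    """Asymmetric loss sketch for a {0, 1} multi-label target matrix (assumed hyperparameters)."""
    p = torch.sigmoid(logits)

    # Positive part: focal-style down-weighting of easy positives.
    loss_pos = targets * ((1 - p) ** gamma_pos) * torch.log(p.clamp(min=eps))

    # Negative part: shift probabilities so very easy negatives contribute
    # nothing, then apply a stronger focusing exponent to the remainder.
    p_neg = (p - clip).clamp(min=0)
    loss_neg = (1 - targets) * (p_neg ** gamma_neg) * torch.log((1 - p_neg).clamp(min=eps))

    return -(loss_pos + loss_neg).mean()


# Usage sketch: a batch of 4 images, 80 candidate labels, few positives per image.
logits = torch.randn(4, 80)
targets = (torch.rand(4, 80) < 0.05).float()
print(asymmetric_loss(logits, targets).item())
```

The key asymmetry in this sketch is that negatives are both clipped (so confidently rejected labels stop contributing) and focused more aggressively than positives, which keeps the many easy negatives from drowning out the few positive terms.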

Cited by 23 publications (56 citation statements). References 23 publications (57 reference statements).
“…The reason behind this phenomenon lies in the distinctive features of multi-label learning: 1) the domination of negatives over positives in multiple binary classifiers: in the multi-label setting, an image typically contains a few positives but many more negatives, which results in a serious positive-negative imbalance [26]; 2) missing labels exacerbate the positive-negative imbalance and plague the learning of recognizing positives.…”
Section: B. An Observation in MLML (mentioning)
confidence: 99%
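The quoted statement attributes the problem to negatives dominating positives across the per-class binary classifiers. A small illustration (not from the cited paper) of that domination: with only a few positives among many candidate labels, the summed binary cross-entropy is driven almost entirely by the negative terms.

```python
import torch
import torch.nn.functional as F

num_classes = 80                      # illustrative label-space size
targets = torch.zeros(1, num_classes)
targets[0, :3] = 1.0                  # a typical image: only a few positives

logits = torch.zeros(1, num_classes)  # an untrained, uninformative classifier
per_label = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")

pos_share = per_label[targets == 1].sum() / per_label.sum()
print(f"positives contribute only {pos_share.item():.1%} of the total loss")  # ~3.8%
```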
“…VOC [31] contains 5,717 training images with 20 classes, and an additional 5,823 images are used for testing. Following [26], we collect NUS-WIDE [32], which consists of 81 classes, with 119,103 training images and 50,720 test images. Because many download URLs of OpenImages [33] have expired, we were able… We construct the training sets with missing labels by randomly dropping positive labels for each training image at different ratios.…”
Section: A. Experimental Settings, 1) Datasets (mentioning)
confidence: 99%
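The quoted setup corrupts a fully labeled training set by randomly dropping positive labels per image. A minimal sketch of how such a missing-label training set could be built follows; storing labels as a {0, 1} matrix, treating dropped positives as unobserved zeros, and the drop ratio shown are illustrative assumptions rather than details from the cited work.

```python
import numpy as np


def drop_positive_labels(labels, drop_ratio, seed=0):
    """Randomly turn a fraction of each image's positive labels into 0 (missing)."""
    rng = np.random.default_rng(seed)
    corrupted = labels.copy()
    for i in range(labels.shape[0]):
        pos = np.flatnonzero(labels[i] == 1)
        n_drop = int(round(drop_ratio * len(pos)))
        if n_drop > 0:
            corrupted[i, rng.choice(pos, size=n_drop, replace=False)] = 0
    return corrupted


# Usage sketch: 1,000 images, 20 classes (a VOC-sized label space), drop 40% of positives.
full = (np.random.rand(1000, 20) < 0.1).astype(np.int64)
partial = drop_positive_labels(full, drop_ratio=0.4)
print(full.sum(), "->", partial.sum(), "positive labels remaining")
```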