2021
DOI: 10.48550/arxiv.2112.07368
Preprint
Simple and Robust Loss Design for Multi-Label Learning with Missing Labels

Abstract: Multi-label learning in the presence of missing labels (MLML) is a challenging problem. Existing methods mainly focus on the design of network structures or training schemes, which increase the complexity of implementation. This work seeks to fulfill the potential of loss function in MLML without increasing the procedure and complexity. Toward this end, we propose two simple yet effective methods via robust loss design based on an observation that a model can identify missing labels during training with a high…

Cited by 4 publications (13 citation statements)
References 23 publications (38 reference statements)
“…We use the codebase shared by [9] to report the performance of ROLE with our setup. Comparison with [59] is in appendix B as it uses a different data split, which also includes partial labeling experiments where 40% or 75% of the positives are labeled instead of only a single positive.…”
Section: Results and Comparison
confidence: 99%
“…The output of one serves as ground-truth for the other, with the intuition that both are more likely to converge to the same solution. Other approaches reweight samples based on their loss values [59,42]. Large Loss Matters [25] marks elements with large loss values as mislabeled and ignores or reweights those.…”
Section: Related Work
confidence: 99%
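The large-loss rejection idea quoted above can be sketched in a few lines. This is a minimal plain-Python illustration, not the implementation from Large Loss Matters; the function name and the threshold value are assumptions:

```python
import math

def bce(p, y):
    # Per-element binary cross-entropy; p is a predicted probability, y in {0, 1}.
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def rejected_loss(probs, labels, threshold=2.0):
    """Mean BCE over elements whose per-element loss stays below `threshold`.

    Elements with large loss (e.g. a confident positive prediction labeled
    negative) are treated as likely missing labels and ignored.
    """
    kept = [bce(p, y) for p, y in zip(probs, labels) if bce(p, y) < threshold]
    return sum(kept) / max(len(kept), 1)
```

For example, with `probs = [0.9, 0.95, 0.1]` and `labels = [1, 0, 0]`, the middle element (a confident prediction contradicting its negative label) has loss `-log(0.05) ≈ 3.0` and is rejected, so only the two consistent elements contribute.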
“…Besides, label correlation modeling [8,9,55,57] and the utilization of region features [15,36,50] are proved to be effective for multi-label classification. In light of the challenge of annotating all groundtruth labels for an image, multi-label learning in the presence of missing labels (MLML) has also attracted much research attention [11,20,56,61].…”
Section: Related Work
confidence: 99%
“…Ignoring the unannotated classes in the loss function can alleviate this issue [12], but this is inapplicable when the annotations only contain positives [8]. Asymmetric loss design can help handle missing labels beyond the BCE loss [54].…”
Section: Related Work
confidence: 99%
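The two ideas in the statement above — dropping unannotated classes from the loss, and treating negatives asymmetrically — can be combined in one loss sketch. This is an illustrative plain-Python version under stated assumptions (probabilities as inputs, `None` marking an unannotated class, a simple focusing factor on negatives in the spirit of asymmetric loss); the function name and exponent are hypothetical:

```python
import math

def masked_asymmetric_bce(probs, labels, gamma_neg=2.0):
    """BCE that skips unannotated classes and down-weights easy negatives.

    labels: 1 (positive), 0 (negative), or None (unannotated, excluded
    from the loss). Negative terms are scaled by p**gamma_neg so that
    confidently-correct negatives contribute little.
    """
    eps = 1e-7
    total, n = 0.0, 0
    for p, y in zip(probs, labels):
        if y is None:          # unannotated: contributes nothing
            continue
        p = min(max(p, eps), 1 - eps)
        if y == 1:
            total += -math.log(p)
        else:
            total += -(p ** gamma_neg) * math.log(1 - p)
        n += 1
    return total / max(n, 1)
```

Masking (`None`) realizes the "ignore the unannotated classes" option from [12]; the `gamma_neg` factor is the asymmetric twist that keeps negatives usable when some of them are actually missing positives.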
“…Prior works consider single-positive labels in the single-label setting [8,54], as a combination of single-label learning [37,11,20] and positive-unlabeled learning [10,1]. [8] propose to go beyond label smoothing [42,47] to deal with the label noise introduced by false negative labels: their regularized online label estimation (ROLE) method estimates the missing labels in an online fashion, jointly optimizing a label estimator and an image classifier, with the output of the former serving as ground truth for the latter.…”
Section: Related Work
confidence: 99%
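ROLE's joint optimization can be illustrated at toy scale. The sketch below is not the authors' code: it uses a single "image" with three classes, raw logits in place of a network, and hand-written gradient steps; the real method also adds regularization (e.g. on the expected number of positives) to steer the estimator away from degenerate solutions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Single-positive toy example: only class 0 is observed positive.
observed = [1, None, None]
z = [0.0, 0.0, 0.0]   # classifier logits (stand-in for a network output)
u = [0.0, 0.0, 0.0]   # online label-estimate logits

lr = 0.5
for _ in range(200):
    for c in range(3):
        # Classifier target: the observed label if present, else the
        # estimator's current belief.
        target = observed[c] if observed[c] is not None else sigmoid(u[c])
        # Gradient of BCE(sigmoid(z), target) w.r.t. z is sigmoid(z) - target.
        z[c] -= lr * (sigmoid(z[c]) - target)
        # Symmetrically, the classifier's output supervises the estimator.
        est_target = observed[c] if observed[c] is not None else sigmoid(z[c])
        u[c] -= lr * (sigmoid(u[c]) - est_target)
```

After training, both the classifier and the estimator are confident on the observed positive, while the unobserved classes remain at the mutual fixed point they started from — which is exactly why the full method needs a regularizer on top of this mutual-supervision loop.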