2020
DOI: 10.48550/arxiv.2006.07831
Preprint

Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels

Abstract: Label noise is ubiquitous in the era of big data. Deep learning algorithms can easily fit the noise and thus cannot generalize well without properly modeling the noise. In this paper, we propose a new perspective on dealing with label noise called "Class2Simi". Specifically, we transform the training examples with noisy class labels into pairs of examples with noisy similarity labels and propose a deep learning framework to learn robust classifiers directly with the noisy similarity labels. Note that a class label…
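As a rough illustration of the transformation the abstract describes, the sketch below shows one way to map a batch of class labels to pairwise similarity labels: two examples are labeled "similar" when they carry the same class label, so noise in the class labels propagates into the similarity labels. This is a minimal PyTorch sketch under that reading; the helper name `class_to_simi` is hypothetical, not the authors' released code.

```python
# Minimal sketch, assuming a PyTorch setting; `class_to_simi` is a
# hypothetical helper name, not code from the paper.
import torch

def class_to_simi(labels: torch.Tensor) -> torch.Tensor:
    """Turn a batch of (possibly noisy) class labels into pairwise
    similarity labels: 1 if two examples carry the same class label,
    0 otherwise."""
    # Broadcast-compare every label against every other label in the batch.
    return (labels.unsqueeze(0) == labels.unsqueeze(1)).long()

# Example: labels [0, 1, 0] yield a 3x3 pairwise similarity matrix.
labels = torch.tensor([0, 1, 0])
print(class_to_simi(labels))
# tensor([[1, 0, 1],
#         [0, 1, 0],
#         [1, 0, 1]])
```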

Cited by 3 publications (2 citation statements)
References 20 publications (38 reference statements)
“…Generally, the algorithms of learning with noisy labels can be divided into two categories: statistically inconsistent algorithms and statistically consistent algorithms. Methods in the first category are heuristic, such as selecting reliable examples to train the model [15,48,50,14,30,35,18], correcting labels [29,19,38,34], and adding regularization [13,12,41,40,26,24,43]. Those methods empirically perform well.…”
Section: Introduction
confidence: 99%
“…Generally, the algorithms for combating noisy labels can be categorized into statistically inconsistent algorithms and statistically consistent algorithms. The statistically inconsistent algorithms are heuristic, such as selecting possible clean examples to train the classifier [9,51,53,11,26,33,14], re-weighting examples to reduce the effect of noisy labels [33,21], correcting labels [25,15,36,32], or adding regularization [10,8,39,38,19,17,41]. These approaches empirically work well, but there is no theoretical guarantee that the learned classifiers can converge to the optimal ones learned from clean data.…”
Section: Introduction
confidence: 99%
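To make the first, heuristic category concrete, here is a minimal sketch of the small-loss selection trick that underlies many of the cited sample-selection methods (Co-teaching-style training): examples with the smallest per-example loss are treated as likely clean and used for the parameter update. The function name and `keep_ratio` are illustrative assumptions, not details from any of the cited papers.

```python
# Minimal sketch of small-loss sample selection; names and the
# keep_ratio value are illustrative, not from the cited works.
import torch
import torch.nn.functional as F

def select_small_loss(logits: torch.Tensor,
                      noisy_labels: torch.Tensor,
                      keep_ratio: float = 0.7) -> torch.Tensor:
    """Return indices of the examples with the smallest per-example
    cross-entropy loss; these are treated as likely clean."""
    # Per-example losses rather than the batch mean.
    losses = F.cross_entropy(logits, noisy_labels, reduction="none")
    # Keep the fraction of the batch with the smallest loss.
    n_keep = max(1, int(keep_ratio * losses.numel()))
    return torch.argsort(losses)[:n_keep]
```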