2022
DOI: 10.1587/transinf.2021edp7127

Consistency Regularization on Clean Samples for Learning with Noisy Labels

Abstract: In recent years, deep learning has achieved significant results in various areas of machine learning. Deep learning requires a huge amount of data to train a model, and data collection techniques such as web crawling have been developed. However, these data collection techniques carry a risk of generating incorrect labels. If a deep learning model for image classification is trained on a dataset with noisy labels, its generalization performance decreases significantly. This problem is called Learning with Noisy Labels (LNL).

Cited by 1 publication (1 citation statement)
References 10 publications
“…DivideMix [7] is a pioneering method that combines sample selection and SSL for LNL; it has achieved state-of-the-art performance in recent years by dividing the training data into a set of labeled and a set of unlabeled samples and assigning pseudo-labels to the unlabeled samples, a procedure called co-divide. The development of advanced state-of-the-art methods based on DivideMix remains an active research topic [8]. However, when DivideMix divides the training data using a model that has memorized noisy labels, the split can become biased: the number of samples assigned to certain classes increases or decreases according to the errors in the memorized labels.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
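The co-divide step quoted above can be illustrated with a minimal sketch. This is not the authors' code: it assumes the standard DivideMix recipe of fitting a two-component Gaussian mixture to per-sample cross-entropy losses and treating the low-loss component as probably clean; the names model, loader, and the 0.5 threshold are illustrative choices.

import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def co_divide(model, loader, device="cpu", threshold=0.5):
    """Split a dataset into probably-clean (labeled) and probably-noisy (unlabeled) indices.

    Sketch of the co-divide idea: per-sample losses against the given (possibly
    noisy) labels are modeled with a 2-component GMM; low-loss samples are kept
    as labeled, high-loss samples are sent to the unlabeled set for pseudo-labeling.
    """
    model.eval()
    losses = []
    with torch.no_grad():
        for images, labels in loader:
            logits = model(images.to(device))
            # per-sample cross-entropy w.r.t. the given labels (no reduction)
            loss = F.cross_entropy(logits, labels.to(device), reduction="none")
            losses.append(loss.cpu())
    losses = torch.cat(losses).numpy().reshape(-1, 1)
    # normalize losses to [0, 1] before fitting the mixture
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(losses)
    # posterior probability of the low-mean (clean) component for each sample
    clean_prob = gmm.predict_proba(losses)[:, gmm.means_.argmin()]
    clean_idx = np.where(clean_prob > threshold)[0]   # treated as labeled samples
    noisy_idx = np.where(clean_prob <= threshold)[0]  # to receive pseudo-labels
    return clean_idx, noisy_idx, clean_prob

In DivideMix itself, two networks are trained and each performs this split for the other network's data; the sketch shows only the single-network division.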