2019
DOI: 10.1007/978-3-030-31756-0_7

Deep Neural Networks for Corrupted Labels

Cited by 45 publications (78 citation statements)
References 28 publications

“…There have been investigations [37]-[39] into learning discriminative models from noisily labeled datasets. The small-loss approach, which updates the network using only the samples that incur a small loss [40], can reduce the impact of noisy samples in DANN [16] for UDA. Another strategy [41]-[43] detects noisy annotations by using multiple networks that provide different views of each sample; these methods have produced better results.…”
Section: Methods (mentioning)
confidence: 99%
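
The small-loss heuristic quoted above is simple to sketch. Below is a minimal, illustrative PyTorch version that ranks per-sample losses in each batch and backpropagates only through the fraction with the smallest loss; `model`, `optimizer`, `loader`, and `keep_ratio` are assumed placeholder names, not identifiers from the cited works.

```python
import torch
import torch.nn.functional as F

def small_loss_update(model, optimizer, loader, keep_ratio=0.7):
    """One epoch of small-loss training: update only on low-loss samples."""
    model.train()
    for inputs, labels in loader:
        logits = model(inputs)
        # Per-sample losses (no reduction), so samples can be ranked.
        losses = F.cross_entropy(logits, labels, reduction="none")
        # Keep the fraction of the batch with the smallest loss; the
        # heuristic is that noisy labels tend to incur large loss early on.
        num_keep = max(1, int(keep_ratio * losses.numel()))
        _, keep_idx = torch.topk(losses, num_keep, largest=False)
        loss = losses[keep_idx].mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In co-teaching-style multi-network variants, each network typically selects its small-loss samples for a peer network rather than for itself, which keeps the two networks' selection errors from reinforcing each other.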
“…Learning from noisy labels is an important and well-studied problem with a vast literature (Bar et al., 2021; Frénay & Verleysen, 2013; Gamberger et al., 1999; Jiang et al., 2018; Kumar & Amid, 2021; Liu & Tao, 2015; Majidi et al., 2021; Natarajan et al., 2013; Pleiss et al., 2020; Ren et al., 2018); see (Song et al., 2022) for a recent survey. The fundamental difference between our setting and papers in this literature is that the noise introduced by the teacher is structured, and this is a crucial observation we utilize in our design.…”
Section: Learning With Noisy Data Techniques (mentioning)
confidence: 99%
“…This line of work regulates the importance of each sample in the parameter estimation to reduce the effect of noise [11, 28-31]. It can be implemented from the perspective of label re-weighting or sample-pair re-weighting, depending on the modeling.…”
Section: Learning Via Sample Re-weighting (mentioning)
confidence: 99%
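
As a rough illustration of sample re-weighting, the sketch below scales each sample's contribution to the loss by a per-sample weight. The weighting rule used here (a softmax over negative losses, so high-loss samples are down-weighted) is only an assumption for illustration; the methods cited above learn or schedule the weights in more principled ways.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, labels, temperature=1.0):
    """Illustrative re-weighted loss: down-weight likely-noisy samples."""
    # Per-sample losses, kept unreduced so each can be weighted separately.
    losses = F.cross_entropy(logits, labels, reduction="none")
    # Toy weighting rule: softmax over negative (detached) losses, so samples
    # with larger loss, which are more likely noisy, receive smaller weight.
    weights = torch.softmax(-losses.detach() / temperature, dim=0)
    return (weights * losses).sum()
```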
“…Wang et al. [33] utilized the local intrinsic dimensionality to dynamically adapt the weight between the label and the prediction in Bootstrapping [10]. Jiang et al. [29, 30] designed a curriculum to measure the quality of each sample and added the learned weights to the loss function to reduce the effect of label noise. Shu et al. [34] followed a similar motivation but constructed an explicit mapping to re-weight the training samples under label noise.…”
Section: Learning Via Sample Re-weighting (mentioning)
confidence: 99%
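
For concreteness, a minimal sketch of the soft bootstrapping loss [10] follows: the training target mixes the given (possibly noisy) one-hot label with the model's own prediction. Wang et al. [33] adapt the mixing weight dynamically from the local intrinsic dimensionality; here `beta` is a fixed hyperparameter, an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def soft_bootstrap_loss(logits, labels, beta=0.8):
    """Cross-entropy against a mix of the given label and the prediction."""
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp().detach()                     # model prediction p
    one_hot = F.one_hot(labels, logits.size(1)).float()  # given label y
    # Soft bootstrapping target: beta * y + (1 - beta) * p.
    target = beta * one_hot + (1.0 - beta) * probs
    return -(target * log_probs).sum(dim=1).mean()
```

Setting `beta = 1` recovers the standard cross-entropy on the given labels, which is what makes the mixing weight a natural quantity to anneal or adapt during training.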