2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.240

Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach

Abstract: We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework.
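The two corrections the abstract refers to reduce to simple linear algebra on the model's per-class losses. A minimal NumPy sketch, assuming a known transition matrix T with T[i, j] = P(noisy label = j | clean label = i); all names here are illustrative, not taken from the authors' code:

```python
import numpy as np

def backward_corrected_loss(probs, noisy_label, T):
    """Backward correction: weight the vector of per-class cross-entropy
    losses by the inverse of the noise transition matrix T."""
    per_class_loss = -np.log(probs)             # loss if each class were the label
    T_inv = np.linalg.inv(T)                    # the "at most a matrix inversion"
    return T_inv[noisy_label] @ per_class_loss  # unbiased estimate of the clean loss

def forward_corrected_loss(probs, noisy_label, T):
    """Forward correction: push the prediction through T (a matrix
    multiplication) and score it against the observed noisy label."""
    noisy_probs = T.T @ probs                   # predicted distribution over noisy labels
    return -np.log(noisy_probs[noisy_label])

# Toy usage: 3 classes with 20% symmetric label noise (hypothetical T).
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
probs = np.array([0.7, 0.2, 0.1])               # softmax output of some network
print(backward_corrected_loss(probs, 0, T))
print(forward_corrected_loss(probs, 0, T))
```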

Cited by 1,161 publications (1,388 citation statements) | References 21 publications
“…The result shows that the performance of these models is not much affected when small parts of the dataset are not precisely labeled [54]. Aiming at an automatic labelling procedure, we identified that historical weather data are suitable indicators of whether a tweet is relevant to rainfall events, as pluvial floods are directly caused by heavy rainfall and fast storms.…”
Section: Pre-processing and Training Preparation (mentioning)
confidence: 95%
“…For example, this can be represented as a noise or confusion matrix between the clean and the noisy labels, as explained in Section 3. Rooted in statistics (Dawid and Skene, 1979), this idea and its variants have recently been studied in NLP (Fang and Cohn, 2016; Hedderich and Klakow, 2018; Paul et al., 2019), image classification (Mnih and Hinton, 2012; Sukhbaatar et al., 2015; Dgani et al., 2018) and general machine learning settings (Bekker and Goldberger, 2016; Patrini et al., 2017; Hendrycks et al., 2018). None of these methods, however, takes into account the features used to represent the instances during classification.…”
Section: Related Work (mentioning)
confidence: 99%
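For intuition on the noise or confusion matrix mentioned in the snippet above: when a subset carrying both clean and noisy labels is available, the matrix can be estimated by simple counting. A hypothetical sketch (function and variable names are illustrative):

```python
import numpy as np

def empirical_noise_matrix(clean_labels, noisy_labels, num_classes):
    """Row-normalized confusion matrix C with C[i, j] ~ P(noisy = j | clean = i),
    counted on a subset where both label versions are known."""
    C = np.zeros((num_classes, num_classes))
    for c, n in zip(clean_labels, noisy_labels):
        C[c, n] += 1.0
    row_sums = C.sum(axis=1, keepdims=True)
    return C / np.maximum(row_sums, 1.0)  # guard against classes with no samples

# Toy usage: class 1 is sometimes flipped to class 2.
print(empirical_noise_matrix([0, 1, 1, 2], [0, 1, 2, 2], num_classes=3))
```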
“…In general, the existing methods for handling noisy labels can be divided into four categories: noise transition estimation, noise-robust loss design, label adjustment, and sample selection. Specifically, the methods from the first category mainly focus on estimating the noise transition matrix. In particular, Goldberger et al. proposed using a noise adaptation layer to model the noise transition matrix.…”
Section: Introduction (mentioning)
confidence: 99%
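One common reading of Goldberger et al.'s noise adaptation layer is a learnable stochastic matrix appended after the softmax, trained end to end with the base network. The PyTorch sketch below is an illustration under that reading, not the authors' code:

```python
import torch
import torch.nn as nn

class NoiseAdaptationLayer(nn.Module):
    """Learnable stochastic matrix mapping the clean-label distribution
    to the distribution over observed (noisy) labels."""

    def __init__(self, num_classes):
        super().__init__()
        # Initialized near the identity so training starts from "no noise".
        self.theta = nn.Parameter(torch.eye(num_classes) * 5.0)

    def forward(self, clean_probs):
        T = torch.softmax(self.theta, dim=1)  # rows form P(noisy = j | clean = i)
        return clean_probs @ T                # batch of noisy-label distributions

# Toy usage on a batch of two 3-class predictions.
layer = NoiseAdaptationLayer(num_classes=3)
print(layer(torch.tensor([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])))
```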
“…Specifically, the methods from the first category mainly focus on estimating the noise transition matrix [29-32]. In particular, Goldberger et al. [29] proposed using a noise adaptation layer to model the noise transition matrix. Patrini et al. [30] proposed estimating the noise transition matrix through a two-step strategy.…”
Section: Introduction (mentioning)
confidence: 99%
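The two-step strategy attributed to Patrini et al. first trains a network on the noisy data and then reads the transition matrix off the network's most confident predictions ("anchor points"). A simplified sketch; the paper uses a percentile rather than a hard argmax for robustness, and names here are illustrative:

```python
import numpy as np

def estimate_transition_matrix(softmax_outputs):
    """After training on noisy data, take for each class i the sample the
    network is most confident about (its "anchor point") and use that
    sample's predicted distribution as row i of the estimated T."""
    num_classes = softmax_outputs.shape[1]
    T_hat = np.empty((num_classes, num_classes))
    for i in range(num_classes):
        anchor = np.argmax(softmax_outputs[:, i])  # most confident sample for class i
        T_hat[i] = softmax_outputs[anchor]         # rows already sum to 1
    return T_hat
```

The estimated matrix can then be plugged into either the backward or the forward correction sketched after the abstract, closing the end-to-end loop the paper describes.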