2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00906
Iterative Learning with Open-set Noisy Labels

Abstract: Large-scale datasets possessing clean label annotations are crucial for training Convolutional Neural Networks (CNNs). However, labeling large-scale data can be very costly and error-prone, and even high-quality datasets are likely to contain noisy (incorrect) labels. Existing works usually employ a closed-set assumption, whereby the samples associated with noisy labels possess a true class contained within the set of known classes in the training data. However, such an assumption is too restrictive for many a…

Cited by 298 publications (232 citation statements) · References 28 publications (44 reference statements)
“…Other approaches include jointly modeling labels and worker quality [15], creating a robust method for learning in open-set noisy label situations [34], and attempting to prune the correct samples [6,2,24]. Ding et al. [2] suggested pruning the correct samples based on softmax outputs.…”
Section: Related Work (mentioning)
confidence: 99%
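The softmax-based pruning the excerpt attributes to Ding et al. [2] can be illustrated with a minimal sketch: keep only the samples whose predicted probability for their observed label is high, and treat the rest as potentially mislabeled. The threshold value, the warm-up model producing the probabilities, and the function name are illustrative assumptions, not the exact procedure of [2].

```python
import numpy as np

def prune_by_softmax(probs, labels, threshold=0.7):
    """Keep only samples whose softmax probability for their observed
    (possibly noisy) label exceeds a threshold; the rest are treated as
    likely mislabeled and withheld from further training.

    probs  : (N, C) softmax outputs from a warm-up model
    labels : (N,)   observed integer labels
    """
    # Probability the model assigns to each sample's given label.
    label_confidence = probs[np.arange(len(labels)), labels]
    return label_confidence >= threshold  # boolean keep-mask

# Illustrative usage (all names here are assumptions):
# keep_mask = prune_by_softmax(warmup_probs, noisy_labels, threshold=0.7)
# clean_subset = train_indices[keep_mask]
```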
“…Regarding noisy data, while PL provides the CNN with the wrong information (red balloon), NL has a higher chance of providing the CNN with the correct information (blue balloon), because a dog is clearly not a bird. [Existing] approaches address this problem by applying a number of techniques and regularization terms along with Positive Learning (PL), a typical supervised learning method that trains CNNs on the assertion "the input image belongs to this label" [6,2,34,20,3,39,26,30,22,33,21]. However, when the CNN is trained with images and mismatched labels, wrong information is provided to the CNN.…”
Section: Introduction (mentioning)
confidence: 99%
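A minimal sketch of the Negative Learning (NL) idea the excerpt describes: rather than asserting "the input image belongs to this (possibly noisy) label" as in Positive Learning, the loss asserts "the input image does not belong to this complementary label". Drawing one complementary label per example uniformly at random, and the small epsilon for numerical stability, are assumptions of this sketch rather than details taken from the cited work.

```python
import torch
import torch.nn.functional as F

def negative_learning_loss(logits, noisy_labels, num_classes, eps=1e-7):
    """Negative Learning (NL) loss: for each sample, draw one complementary
    label different from the observed (possibly wrong) label and minimize
    -log(1 - p_complementary), i.e. assert "the image is NOT this class"."""
    probs = F.softmax(logits, dim=1)
    # Sample a complementary label, then shift any accidental clash with the
    # observed label so the two are guaranteed to differ.
    comp = torch.randint(0, num_classes, noisy_labels.shape, device=logits.device)
    clash = comp == noisy_labels
    comp[clash] = (comp[clash] + 1) % num_classes
    p_comp = probs.gather(1, comp.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + eps).mean()
```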
“…The network is much more robust against labeling errors when they are randomly distributed, compared to the biased labeling that results from using a deterministic pre-labeling. Training networks with erroneous labels is studied further in [188]–[191]. The impact of weak or erroneous labels on the performance of deep-learning-based semantic segmentation is investigated in [192], [193].…”
Section: Data Quality and Alignment (mentioning)
confidence: 99%
“…Compared to the previous graph-based classifiers [18], [20], [21], [23], [29], [40], [41], [45], [46], by adopting edge convolution, iteratively updating the graph, and applying GLR, we learn a deeper feature representation and assign the degrees of freedom needed for learning the underlying data structure. Given noisy training labels, in contrast to the classical robust DNN-based classifiers [4], [7], [15], [16], [35], [39], we bring together the regularization benefits of GLR and the benefits of the proposed loss functions to perform more robust deep metric learning. We further adopt a rank-sampling strategy to find the training samples with high predictive performance, which benefits inference.…”
Section: Novelty With Respect To Reviewed Literature (mentioning)
confidence: 99%
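The graph Laplacian regularization (GLR) mentioned in the excerpt can be sketched as a smoothness penalty x^T L x on a classifier signal defined over a similarity graph built from deep features. The dense Gaussian-kernel graph below is an assumption for illustration; the cited work's graph construction, edge convolution, and loss functions are not reproduced here.

```python
import numpy as np

def graph_laplacian_regularizer(features, signal, sigma=1.0):
    """Graph Laplacian regularizer (GLR): penalize a classifier signal that
    varies sharply across edges of a similarity graph built on deep features.

    features : (N, D) feature embeddings
    signal   : (N,)   per-sample classifier signal (e.g. a soft label in [-1, 1])
    """
    # Dense Gaussian-kernel adjacency from pairwise feature distances
    # (for illustration only; large N would call for a sparse k-NN graph).
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W     # combinatorial graph Laplacian
    return float(signal @ L @ signal)  # x^T L x: small when the signal is smooth
```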
“…$\Theta$ is an edge attention activation function (see (7) for the particular function we used) that estimates how much attention should be given to each edge, and $\mathcal{Y}^r = \{\dot{Y}^r = [-1,1]^M, [-1,1]^{N-M}\}$ is the restored classifier signal obtained via (3), starting from the classifier signal of the previous iteration, $\mathcal{Y}^{r-1}$. $\pi_{\psi_i,\psi_j}$ is the amount of attention, i.e., the edge loss weight, assigned to the edge connecting vertices $\psi_i$ and $\psi_j$.…”
Section: B. W-Net (mentioning)
confidence: 99%
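As a rough, hypothetical sketch of how per-edge attention weights (the pi in the excerpt) might modulate the restoration of a classifier signal on a graph, the step below smooths the previous iteration's signal over an attention-weighted Laplacian. This is not the update rule (3) or the attention function (7) of the cited paper; the function name, the gradient-step form, and the step size alpha are all assumptions.

```python
import numpy as np

def attention_weighted_restore(Y_prev, W, attention, alpha=0.5):
    """One hypothetical restoration step: per-edge attention weights scale the
    base adjacency before the classifier signal is smoothed on the graph.

    Y_prev    : (N,)   classifier signal from the previous iteration, in [-1, 1]
    W         : (N, N) base adjacency matrix
    attention : (N, N) per-edge attention weights, in [0, 1]
    """
    W_att = W * attention                    # attention-modulated edge weights
    L = np.diag(W_att.sum(axis=1)) - W_att   # Laplacian of the attended graph
    Y = Y_prev - alpha * (L @ Y_prev)        # gradient step toward graph smoothness
    return np.clip(Y, -1.0, 1.0)
```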