2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00013
NGC: A Unified Framework for Learning with Open-World Noisy Data

Cited by 45 publications (41 citation statements)
References 20 publications
“…Recently, Khosla et al (2020) propose supervised contrastive learning (SCL), an approach that aggregates data from the same class as the positive set and obtains improved performance on various supervised learning tasks. The success of SCL has motivated a series of works to apply CL to a number of weakly supervised learning tasks, including noisy label learning (Li et al, 2021a; Wu et al, 2021), semi-supervised learning (Zhang et al, 2021), etc. Despite promising empirical results, however, these works lack theoretical understanding.…”
Section: Related Work
confidence: 99%
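For context on the SCL objective referred to above, the following is a minimal PyTorch sketch of a supervised contrastive loss in the spirit of Khosla et al. (2020), where every same-class sample in the batch is treated as a positive for the anchor. The temperature value, function name, and tensor shapes are illustrative assumptions, not taken from any cited implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: positives are all same-class batch samples.

    features: (N, D) embeddings; labels: (N,) integer class ids.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature          # (N, N) scaled cosine similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, -1e9)               # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: same label as the anchor, excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)         # avoid division by zero
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss.mean()
```

In this formulation the loss reduces to the standard (self-supervised) contrastive loss when each anchor has only one positive, which is why SCL is often described as a drop-in generalization of it.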
“…Second, the network is fine-tuned on a reliable dataset. Additionally, the methods ProtoMix [31] and NGC [51] work in one stage. They jointly perform the generation of pseudo labels and Sup-CL to combat noisy labels.…”
Section: Contrastive Learning
confidence: 99%
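A minimal sketch of the one-stage "pseudo labels plus supervised contrastive learning" idea mentioned in the statement above, assuming standard PyTorch and reusing the `supervised_contrastive_loss` sketch shown earlier; the confidence threshold and function names are illustrative, not the recipe of ProtoMix or NGC.

```python
import torch

def pseudo_label_scl_loss(logits, features, given_labels, threshold=0.9):
    """Replace given labels with confident predictions, then apply Sup-CL."""
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    # Trust the classifier where it is confident; otherwise keep the given (possibly noisy) label.
    labels = torch.where(conf > threshold, pred, given_labels)
    return supervised_contrastive_loss(features, labels)
```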
“…Another linear layer following the feature extractor is used as the classifier. When minimizing L_pc, we apply mixup [31] to improve generalization, which has been shown to be effective for learning with noisy labels [29].…”
Section: Example Reweighting
confidence: 99%
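As a concrete illustration of the mixup step mentioned above, here is a minimal PyTorch sketch that mixes both inputs and one-hot targets with a Beta-sampled coefficient. The alpha value, variable names, and soft-target cross-entropy are illustrative choices rather than the cited papers' exact settings.

```python
import torch
import torch.nn.functional as F

def mixup_step(model, x, y, num_classes, alpha=1.0):
    """One training step with mixup applied to inputs and one-hot targets."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]                 # convex combination of inputs
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]   # convex combination of labels
    logits = model(x_mix)
    # Cross-entropy against the soft (mixed) targets.
    loss = -(y_mix * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return loss
```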
“…long-tailed learning) have been studied for many years. When dealing with label noise, the most popular approach is sample selection, where correctly-labeled examples are identified by capturing the training dynamics of DNNs [11,29]. When dealing with class imbalance, many existing works propose to reweight examples or design unbiased loss functions that take into account the class distribution of the training set [26,3,8].…”
Section: Introduction
confidence: 99%
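A minimal sketch of the two ideas summarized in that statement, assuming standard PyTorch: (i) small-loss sample selection, the common way training dynamics are used to pick likely-clean examples, and (ii) inverse-frequency class reweighting for imbalance. The keep ratio and weighting scheme are illustrative choices, not the cited papers' exact recipes.

```python
import torch
import torch.nn.functional as F

def select_small_loss(logits, labels, keep_ratio=0.7):
    """Keep the fraction of samples with the smallest per-sample loss (likely clean)."""
    losses = F.cross_entropy(logits, labels, reduction='none')
    k = max(1, int(keep_ratio * labels.size(0)))
    return losses.topk(k, largest=False).indices

def class_balanced_weights(labels, num_classes):
    """Per-class weights proportional to inverse class frequency."""
    counts = torch.bincount(labels, minlength=num_classes).float().clamp(min=1)
    return counts.sum() / (num_classes * counts)   # pass as `weight=` to F.cross_entropy
```

The selected indices would typically gate the classification loss each iteration, while the class weights are computed once from the training-set label histogram.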