2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00515
Jo-SRC: A Contrastive Approach for Combating Noisy Labels

Cited by 112 publications (71 citation statements). References 26 publications.
“…For FL, we select four algorithms covering representative FL strategies related to FedNoiL: (i) FedAvg [24], (ii) personalized FL: APFL [8], (iii) robust FL: Krum [5], and CFL [30]. For NLL, we consider two SoTA methods: (i) DMix [20] and (ii) JoSRC [40]. We denote these baselines in the form of "FL" + "NLL".…”
Section: Methods
confidence: 99%
“…Co-teaching [12] lets two networks select clean training data for each other. JoCoR [36] selects small-loss (cross-entropy and co-regularization loss) samples as clean data, while JoSRC [40] selects data based on the Jensen-Shannon divergence. Another line of work [20,25] applies semi-supervised learning [4,32] that treats wrongly labeled samples as unlabeled and assigns them pseudo labels.…”
Section: Related Work
confidence: 99%
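As context for the Jensen-Shannon-divergence-based selection described in the excerpt above, here is a minimal sketch of how divergence-based clean-sample selection can be implemented. This is not the authors' code; the function names, the fixed threshold, and the normalization by log 2 are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    # Per-sample Jensen-Shannon divergence between two class distributions.
    m = 0.5 * (p + q)
    kl_pm = (p * ((p + eps).log() - (m + eps).log())).sum(dim=1)
    kl_qm = (q * ((q + eps).log() - (m + eps).log())).sum(dim=1)
    return 0.5 * (kl_pm + kl_qm)

def select_clean(logits, noisy_labels, num_classes, threshold=0.5):
    # Samples whose prediction/label divergence is small are treated as "clean";
    # the rest can be relabeled or down-weighted, depending on the method.
    probs = F.softmax(logits, dim=1)
    targets = F.one_hot(noisy_labels, num_classes).float()
    jsd = js_divergence(probs, targets) / torch.log(torch.tensor(2.0))  # scale to [0, 1]
    return jsd < threshold, jsd
```

A small-loss criterion, as in Co-teaching or JoCoR, differs only in the selection statistic: samples would be ranked by their loss values and a fixed fraction with the smallest losses kept, rather than thresholding a divergence.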
“…Method | Acc.
Standard CE | 84.03
CleanNet [16] | 83.95
Decoupling [23] | 85.53
Co-teaching [9] | 61.91
Co-teaching+ [43] | 81.61
JoCoR [36] | 77.94
Jo-SRC [41] | 86.66
Co-learning [30] | 87.57
Tripartite | 88.34…”
Section: Food-101N Methods
confidence: 99%
“…Accuracy comparison on Food-101N. The result of Jo-SRC is from [41]; the rest of the results are from [30].…”
Section: Food-101N Methods
confidence: 99%
“…This problem is even more crucial in the medical field, given that high-quality annotation requires great expertise. Therefore, understanding, modeling, and learning with noisy labels have gained great momentum in recent research efforts [5,23,8,17,20,11,28,40,16,34,47,41,49,36,48].…”
Section: Introduction
confidence: 99%