2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00994
FedCorr: Multi-Stage Federated Learning for Label Noise Correction

Cited by 40 publications (31 citation statements). References 8 publications.
“…Moreover, in the settings of pair-flipping noise and asymmetric noise, the test accuracies of FedLNL are close to those of FedAvg over the clean data (denoted as FedAvg (clean)), indicating that FedLNL nearly achieves the upper bound of its test accuracy. For CIFAR-100, FedLNL improves the test accuracy by up to 25.98% compared with the second-best scheme, FedCorr (Xu et al. 2022).…”
Section: Discussion
confidence: 98%
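The pair-flipping and asymmetric settings mentioned above are standard synthetic label-noise models used in these benchmarks. A minimal sketch of how such noise is typically injected (the function name and signature are hypothetical, not taken from either paper):

```python
import numpy as np

def inject_label_noise(labels, num_classes, noise_rate, mode="symmetric", seed=0):
    """Corrupt roughly a `noise_rate` fraction of integer class labels.

    symmetric: a corrupted label is redrawn uniformly from the other classes.
    pairflip:  class c is deterministically flipped to (c + 1) % num_classes.
    Hypothetical helper; the cited papers' exact noise protocols may differ.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    noisy = labels.copy()
    flip_mask = rng.random(len(labels)) < noise_rate
    for i in np.flatnonzero(flip_mask):
        if mode == "pairflip":
            noisy[i] = (labels[i] + 1) % num_classes
        else:  # symmetric: shift by a uniform nonzero offset
            noisy[i] = (labels[i] + rng.integers(1, num_classes)) % num_classes
    return noisy
```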
“…Most of them need an additional clean dataset to guide the selection, yet such a clean dataset is not always available. The second category utilizes a label-correction mechanism to relabel noisy samples based on representations extracted from the training data, e.g., the nearest neighbors in the embedding space (Tsouvalas et al. 2022) or the predictions of the global model (Xu et al. 2022). The third category (e.g., RoFL (Yang et al. 2022b) and FedLSR (Jiang et al. 2022)) leverages self-supervised learning to obtain more robust representations.…”
Section: Label-Noise Learning Schemes
confidence: 99%
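The prediction-based relabeling idea in the second category (the one FedCorr falls under, per the quote above) can be sketched in a few lines. This is a hedged illustration with hypothetical names (relabel_with_global_model, threshold), not FedCorr's actual pipeline, which additionally estimates per-client noise levels before correcting:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def relabel_with_global_model(global_model, inputs, labels, threshold=0.9):
    # Relabel only the samples on which the shared global model
    # confidently disagrees with the stored (possibly noisy) label.
    global_model.eval()
    probs = F.softmax(global_model(inputs), dim=1)
    conf, pred = probs.max(dim=1)
    corrected = labels.clone()
    mask = (conf > threshold) & (pred != labels)
    corrected[mask] = pred[mask]
    return corrected, mask
```

Raising the confidence threshold trades correction coverage for precision: fewer labels change, but those that do change are more likely to be genuine noise.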
“…For example, research has shown that label noise can negatively impact model performance (Ke et al., 2022). Additionally, methods have been proposed to correct labels that have been corrupted by noise (Fang & Ye, 2022; Xu et al., 2022). Recently, Jothimurugesan et al. (2022) also investigated the concept-shift problem under the assumption that clients have no concept shifts at the beginning of training.…”
Section: A Proof of EM Steps
confidence: 99%
“…In F-LNL, a global neural network model is fine-tuned via distributed learning across multiple local clients holding noisy samples. Like (Xu et al. 2022), here we also assume that some local clients have noisy labels (namely, noisy clients), while the others have only clean labels (namely, clean clients).…”
Section: Introduction
confidence: 99%
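For context, the distributed fine-tuning loop underlying F-LNL is plain federated averaging over all clients, clean and noisy alike. A bare-bones sketch of one round follows (hypothetical names; no noise handling, which is precisely what F-LNL methods add on top):

```python
import copy
import torch
import torch.nn.functional as F

def fedavg_round(global_model, client_loaders, local_steps=1, lr=0.01):
    # Each client trains a copy of the global model on its own
    # (possibly noisy) labels; the server then averages the weights.
    states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        local.train()
        for _ in range(local_steps):
            for x, y in loader:
                opt.zero_grad()
                F.cross_entropy(local(x), y).backward()
                opt.step()
        states.append(local.state_dict())
    avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```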