2022
DOI: 10.48550/arxiv.2204.04677
Preprint

FedCorr: Multi-Stage Federated Learning for Label Noise Correction

Abstract: Federated learning (FL) is a privacy-preserving distributed learning paradigm that enables clients to jointly train a global model. In real-world FL implementations, client data could have label noise, and different clients could have vastly different label noise levels. Although there exist methods in centralized learning for tackling label noise, such methods do not perform well on heterogeneous label noise in FL settings, due to the typically smaller sizes of client datasets and data privacy requirements in…

Cited by 2 publications (6 citation statements) | References 13 publications
“…In this method, client selection is performed according to a random selection criterion. Additionally, the Power-of-Choice [39] method has been used to manage biased client selection and achieves better convergence rates than random client selection. Faster convergence and more consistent management are made possible by methods such as arbitrary client selection [44] and adaptive client selection [45], which assess and update the probabilities pertaining to the client relationships.…”
Section: Node Selection and Dropping Methods
confidence: 99%
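
The biased-selection idea this statement refers to fits in a few lines. The sketch below is a minimal illustration of Power-of-Choice-style selection under stated assumptions (the inputs `clients`, `local_losses`, and `data_sizes` are hypothetical names, and `d <= len(clients)` candidates are assumed), not the cited implementation:

```python
import random

def power_of_choice(clients, local_losses, data_sizes, d, m):
    """Biased client selection: sample a candidate set of d clients with
    probability proportional to local data size, then keep the m
    candidates whose current local loss is highest."""
    weights = [data_sizes[c] for c in clients]
    candidates = set()
    while len(candidates) < d:  # assumes d <= len(clients)
        candidates.add(random.choices(clients, weights=weights, k=1)[0])
    # Favoring high-loss candidates is what biases the selection and yields
    # the faster convergence reported relative to uniform random selection.
    return sorted(candidates, key=lambda c: local_losses[c], reverse=True)[:m]
```
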
“…With the help of this framework, local data efficiency is seen to improve and the communication cost is reduced, while end-to-end performance improves overall. Another framework, FedCorr [39], is a multi-stage selection framework that manages the label noise of local clients while treating each client independently; it has been noted for increasing the fairness of the federated learning process by adjusting the client selection strategies, thereby improving the efficiency of the results. The DRFL [40] method is used to improve the weight accorded to each client and thereby increase the fairness of the result.…”
Section: Node Selection and Dropping Methods
confidence: 99%
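
As a hedged illustration of the per-client noise management described above, one plausible ingredient is estimating each client's noise level from its per-sample loss distribution and down-weighting noisy clients during selection. FedCorr's actual pipeline is richer (it also uses local intrinsic dimensionality scores and a dedicated label-correction stage), so the thresholds and function names below are assumptions for illustration only:

```python
import numpy as np

def estimate_noise_level(per_sample_losses):
    """Illustrative estimate: flag samples whose loss exceeds the client's
    mean loss by one standard deviation, and report the flagged fraction
    as that client's estimated label-noise level."""
    losses = np.asarray(per_sample_losses)
    return float((losses > losses.mean() + losses.std()).mean())

def selection_probabilities(noise_levels):
    """Adjusted selection strategy: the probability of picking a client
    decays exponentially with its estimated noise level, so cleaner
    clients are sampled more often."""
    scores = np.exp(-np.asarray(noise_levels))
    return scores / scores.sum()
```
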
“…Although DNNs are prone to overfitting noisy-label samples, the memorization effect can still be leveraged: during the initial stage of training, DNNs learn simple patterns from clean samples before gradually memorizing the noisy ones [1]. Based on this effect, various studies [34,13,18,33] have filtered out noisy samples based on their loss values. However, these methods overlook the overfitting property of DNNs, which severely decreases the discriminatory capability of the empirical loss [1,36].…”
Section: Loss Transformer
confidence: 99%
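
The loss-based filtering mentioned in this statement is commonly the "small-loss trick": because DNNs fit clean samples first, the samples with the smallest current loss are treated as clean. A minimal sketch, with the keep ratio as an assumed parameter rather than a value from the cited works:

```python
import numpy as np

def small_loss_filter(losses, keep_ratio=0.7):
    """Return the indices of the keep_ratio fraction of samples with the
    smallest loss -- the ones most likely to carry clean labels during
    the early, pre-memorization stage of training."""
    losses = np.asarray(losses)
    k = max(1, int(keep_ratio * len(losses)))
    return np.argsort(losses)[:k]
```

In practice the keep ratio is often annealed downward as training proceeds, precisely because the discriminatory capability of the empirical loss degrades once the network begins memorizing noisy samples.
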
“…However, they are not directly transferable to FL, due to two fundamental issues: 1) the highly heterogeneous noise distribution among clients causes local model divergence that cannot be addressed by robust loss functions [22]; 2) sample selection approaches designed for the centralized setting are infeasible with the limited training data on each client, where they overfit the noisy data and lead to unstable performance. Recently, several methods have been proposed to mitigate the problem of noisy labels in FL [26,6,35,34,33]. These methods mainly focus on obtaining data with clean labels by modeling the noise probability or exploiting the memorization effect of deep neural networks (DNNs).…”
Section: Introduction
confidence: 99%
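
One common way to "model the noise probability" is to fit a two-component Gaussian mixture to the per-sample losses and read the posterior of the low-mean component as the probability that a label is clean. The sketch below is a generic version of this idea, not the implementation of any one of the cited methods:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def clean_label_probability(losses):
    """Fit a 2-component 1-D GMM to normalized per-sample losses; the
    component with the smaller mean models clean samples, so its
    posterior gives each sample's probability of being correctly
    labeled."""
    losses = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    losses = (losses - losses.min()) / (np.ptp(losses) + 1e-8)
    gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(losses)
    clean = int(np.argmin(gmm.means_.ravel()))
    return gmm.predict_proba(losses)[:, clean]
```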