2022
DOI: 10.3389/fnbot.2022.1041553

DWFed: A statistical-heterogeneity-based dynamic weighted model aggregation algorithm for federated learning

Abstract: Federated Learning is a distributed machine learning framework that aims to train a globally shared model while keeping data local, and previous research has empirically demonstrated the strong performance of federated learning methods. However, recent research has identified the challenge of statistical heterogeneity caused by non-independent and identically distributed (non-IID) data, which leads to a significant decline in the performance of federated learning because of the model divergence caused by non-IID dat…

Cited by 5 publications (2 citation statements)
References 31 publications
“…Aggregation in FL: The first study that proposed FL also highlighted the importance of the non-IID data problem and suggested a weighted-averaging aggregation method (FedAvg) based on clients' dataset sizes to overcome this challenge; many later studies were inspired by this work and tried to enhance it. For instance, the work in [50] delayed the aggregation process by sending the model back to some clients for further training, while the study in [105] based the weight calculation on indices of statistical heterogeneity instead of the client dataset size alone. The study in [131] adds a regularization term to FedAvg to lower the excess risk, and the study in [133] also uses regularization in its aggregation scheme to penalize diverging models.…”
Section: Discussion
confidence: 99%
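The weighted-averaging scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: FedAvg weights each client's parameters by its local dataset size, and approaches like the one in [105] replace those weights with heterogeneity-based indices. The function and variable names are hypothetical.

```python
import numpy as np

def weighted_aggregate(client_models, weights):
    """Combine client parameter vectors by a normalized weighted average."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so the weights sum to 1
    return sum(w * m for w, m in zip(weights, client_models))

# FedAvg-style weighting: each client's weight is its local dataset size.
client_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
dataset_sizes = [100, 300]  # client 2 holds 3x as much data as client 1
global_model = weighted_aggregate(client_models, dataset_sizes)
# 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

A heterogeneity-aware scheme would call the same `weighted_aggregate` with weights derived from statistical-heterogeneity indices (e.g., a divergence measure between local and global distributions) instead of `dataset_sizes`; how those indices are computed is specific to each method.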
“…, 𝐷 to represent the federated dataset. The goal of the clients is to collaboratively train a global model without compromising local data [8]. As shown in Fig.…”
Section: Federated Learning-PCA
confidence: 99%