Federated Learning: Challenges, Methods, and Future Directions
2020 · DOI: 10.1109/msp.2020.2975749


Cited by 2,823 publications (1,460 citation statements). References 16 publications.

Citation statements, ordered by relevance:
“…On top of these methods, one can further enhance communication efficiency, by additionally quantizing gradients [191], removing insignificant gradients, i.e., sparsification [192], opportunistic uploading based on gradient magnitudes [193], and adaptively adjusting the communication intervals [31], [194]. For more details on state dimensionality reduction and advanced FL and FD frameworks in both supervised learning and RL, the readers are encouraged to check [186], [195], and [196].…”
Section: Each Head Device Updates Its Primal Variables As
Mentioning confidence: 99%
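
To make the compression ideas in this excerpt concrete, here is a minimal sketch of gradient quantization and top-k sparsification in Python with NumPy. The function names and the uniform quantizer are illustrative assumptions, not the specific schemes of [191] or [192]:

```python
import numpy as np

def quantize_gradient(grad: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Uniformly quantize gradient entries to 2**(num_bits-1) - 1 levels per sign.

    Illustrative only: practical schemes (e.g., stochastic quantization)
    transmit the integer codes plus the scale, not the dequantized floats.
    """
    scale = float(np.max(np.abs(grad)))
    if scale == 0.0:
        return grad  # all-zero gradient, nothing to quantize
    levels = 2 ** (num_bits - 1) - 1
    codes = np.round(grad / scale * levels)       # integer codes in [-levels, levels]
    return codes * scale / levels                 # dequantized approximation

def sparsify_top_k(grad: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries and zero out the rest."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of the top-k magnitudes
    out[idx] = grad[idx]
    return out

# Example: compress a local gradient before uploading it to the server.
g = np.random.randn(1000)
g_compressed = sparsify_top_k(quantize_gradient(g, num_bits=4), k=50)
```

Both steps shrink the uplink payload; in practice they are combined with error-feedback or accumulation tricks so that the discarded residual is not lost across rounds.
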
“…The growing computational power of edge devices allows us to leave the data decentralized and push the network computation to the client, which is also ideal from a privacy aspect. The expanding area of federated learning [20], [37], [38] explores developing methods to achieve the goal of learning from highly distributed and heterogeneous data through aggregating locally trained models on remote devices, such as smartphones and wearables. In this case, the intention is to minimize the following objective [37]:…”
Section: Federated Learning
Mentioning confidence: 99%
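
The objective truncated at the end of this excerpt is, in the standard formulation cited as [37], a sample-size-weighted average of the devices' local empirical risks. A sketch in common FL notation (the symbols here are generic, not necessarily those of the quoted paper):

```latex
\min_{w} \; f(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w),
\qquad
F_k(w) = \frac{1}{n_k} \sum_{i \in \mathcal{P}_k} \ell(w; x_i, y_i)
```

where $K$ is the number of devices, $\mathcal{P}_k$ is the data partition held by device $k$, $n_k = |\mathcal{P}_k|$, and $n = \sum_k n_k$.
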
“…To produce a global model, Federated Averaging algorithm [20] is typically used to accumulate client updates after every round of local training t as with Equation 3. The research interest in federated learning revolves around improving communication efficiency [39], personalization [40], fault tolerance [41], privacy preservation [42] as well as looking into the theoretical underpinning of the federated optimization [37]. Specifically, the recent work deals with learning a unified model to solve a single as well as multiple tasks [43].…”
Section: Federated Learning
Mentioning confidence: 99%
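
The accumulation step this excerpt refers to as "Equation 3" is, in Federated Averaging [20], a sample-size-weighted average of the locally trained parameters. A minimal sketch, with illustrative variable names:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: w_global = sum_k (n_k / n) * w_k,
    where n_k is client k's local sample count and n = sum_k n_k."""
    n = float(sum(client_sizes))
    return sum((n_k / n) * w_k
               for w_k, n_k in zip(client_weights, client_sizes))

# Example round: three clients return updated parameter vectors.
w1, w2, w3 = (np.random.randn(10) for _ in range(3))  # stand-ins for local models
w_global = federated_average([w1, w2, w3], client_sizes=[100, 250, 50])
```

Weighting by sample count keeps the global model consistent with minimizing the weighted objective above rather than treating every client equally regardless of data volume.
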
“…Federated Learning (FL) is a DAI model where multi-agents collaboratively share their local knowledge for faster convergence and to make better decisions. FL concept, applications, challenges, and methods have been reviewed in [ 38 , 39 ]. Smith and Hollinger [ 40 ] developed a distributed robotic system that collaboratively shares their knowledge for a single goal (environment exploration).…”
Section: Background and Related Work
Mentioning confidence: 99%