2020
DOI: 10.1109/mcom.001.1900461
Federated Learning for Wireless Communications: Motivation, Opportunities, and Challenges

Cited by 482 publications (203 citation statements)
References 4 publications
“…Note that due to closed-loop exchanges (a locally trained model update followed by a globally aggregated model update triggering the next iteration of local training), the delay in completing GFL training may sometimes reach several minutes (10 or more), as recorded recently for the Google keyboard programme [11]. This is not permissible in the UAV applications considered in this work.…”
Section: Federated Learning With Blockchain
confidence: 99%
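The closed-loop round structure described above (local training, upload, global aggregation, next round) can be sketched as a toy FedAvg-style loop. This is a minimal illustrative sketch, not code from the cited paper; the 1-D "model" and all names (`local_update`, `federated_round`, `clients`) are assumptions for illustration.

```python
# Toy sketch of one global federated-learning (GFL) round: each client trains
# locally, then the server aggregates the local models (FedAvg-style mean).
# The scalar "model" and client data below are illustrative assumptions.

def local_update(global_model, client_data, lr=0.1):
    """One step of local training: nudge the model toward the client's data mean."""
    grad = global_model - sum(client_data) / len(client_data)
    return global_model - lr * grad

def federated_round(global_model, clients):
    """Closed-loop exchange: local updates followed by global aggregation."""
    local_models = [local_update(global_model, data) for data in clients]
    return sum(local_models) / len(local_models)  # FedAvg: simple average

clients = [[1.0, 2.0], [3.0, 5.0], [10.0]]
model = 0.0
for _ in range(50):  # each round costs one full up/down exchange over the network
    model = federated_round(model, clients)
```

Each iteration of the outer loop is one of the closed-loop exchanges the quote refers to, so total training latency grows linearly with the number of rounds; this is where the minutes-long delays accumulate.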
“…In [24], [37], [39], [41], [87]–[90], it was discussed that FL is a promising technique for future intelligent networks due to its superior performance and added benefits. Edge caching solutions based on FL algorithms can provide smart models, reduced content-delivery latency, improved content-acquisition reliability, and improved energy efficiency, all while preserving personal data privacy and security.…”
Section: F. Edge Caching
confidence: 99%
“…However, traditional DL schemes are cloud-centric: they require streams of raw training data to be sent to and processed in a centralized server [39], [74]. Sending streams of raw training data to a centralized server can create several challenges, including slow response to real-time events in latency-sensitive applications, excessive consumption of network communication resources, increased network traffic, high energy consumption, and reduced privacy of training data [39], [43], [79], [87], [124], [137], [138]. Therefore, traditional DL frameworks may not be suitable in application scenarios with large-scale data that require low latency, efficiency, and scalability [87], [137].…”
Section: DL-based Edge Caching
confidence: 99%
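A back-of-envelope calculation makes the trade-off above concrete: cloud-centric DL ships raw data once, while FL ships model updates every round, so which approach consumes less communication depends on data volume, model size, and round count. All figures below are illustrative assumptions, not measurements from any cited work.

```python
# Illustrative communication-cost comparison (all numbers are assumptions):
# cloud-centric DL uploads every raw sample once; federated learning instead
# exchanges model updates (uplink + downlink) on every training round.
num_devices = 1_000
samples_per_device = 10_000
bytes_per_sample = 3 * 224 * 224      # e.g. one uncompressed 224x224 RGB image
model_params = 1_000_000              # a small model with float32 weights
bytes_per_param = 4
rounds = 100                          # number of federated rounds

centralized_bytes = num_devices * samples_per_device * bytes_per_sample
federated_bytes = num_devices * rounds * 2 * model_params * bytes_per_param

ratio = centralized_bytes / federated_bytes  # ratio > 1 favors federated learning
```

Under these assumptions FL moves roughly half the bytes of the cloud-centric approach; the gap widens with more data per device or a smaller model, and can flip with many rounds and a large model, which is why the latency and traffic costs noted in the passage still matter.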
“…Deep reinforcement learning aims at maximizing the reward when the agent takes an action in a particular state. […] to the central machine learning processors [74]–[78]. Therefore, federated learning is a decentralized machine learning approach that keeps data at its point of generation; only the locally trained models are transmitted to the central processor.…”
Section: Recommendation Via Q-Learning
confidence: 99%
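The section title and the first sentence of the quote refer to Q-learning-style reward maximization. A minimal tabular Q-learning sketch shows the update being described; the toy chain environment and all parameter values are illustrative assumptions, not taken from the cited paper.

```python
# Minimal tabular Q-learning on a toy 3-state chain (illustrative sketch):
# the agent learns action values that maximize expected discounted reward.
import random

random.seed(0)
n_states, n_actions = 3, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    """Toy chain: action 1 moves right, action 0 stays; reaching the end pays 1."""
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for _ in range(500):                           # training episodes
    s = 0
    for _ in range(50):                        # cap episode length
        if s == n_states - 1:                  # terminal state reached
            break
        if random.random() < eps:              # epsilon-greedy exploration
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

In a federated variant of this idea, each device would run this local loop on its own experience and transmit only the learned table (or network weights) rather than the raw state-action data, matching the decentralization described in the quote.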