2019 IEEE Global Communications Conference (GLOBECOM)
DOI: 10.1109/globecom38437.2019.9013160

Performance Optimization of Federated Learning over Wireless Networks

Abstract: In this paper, the convergence time of federated learning (FL), when deployed over a realistic wireless network, is studied. In particular, a wireless network is considered in which wireless users transmit their local FL models (trained using their locally collected data) to a base station (BS). The BS, acting as a central controller, generates a global FL model using the received local FL models and broadcasts it back to all users. Due to the limited number of resource blocks (RBs) in a wireless network, only…
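The aggregation step described in the abstract can be sketched as a weighted average of the received local models (a generic FedAvg-style update, shown here for illustration; the paper's exact aggregation rule is not given in this excerpt):

```python
import numpy as np

def aggregate_global_model(local_models, sample_counts):
    """Weighted average of local model parameter vectors at the BS.

    local_models: list of 1-D numpy arrays, one per selected user
    sample_counts: number of local training samples per user
    """
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

# Example: three users send their local FL models to the BS.
local_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sample_counts = [10, 20, 10]
global_model = aggregate_global_model(local_models, sample_counts)
# 0.25*[1,2] + 0.5*[3,4] + 0.25*[5,6] = [3.0, 4.0]
```

The BS would then broadcast `global_model` back to all users for the next round of local training.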

Cited by 87 publications (45 citation statements)
References 40 publications
“…The proposed integrated GRU and CNN predictive model builds the relationship between the output I_{T+1} and the input time series of historical illumination distributions I_1, I_2, …, I_t, …, I_T using the weight parameters. To build this relationship, a batch gradient descent approach is used to train the weight matrices, which are initially generated randomly via a uniform distribution [29].…”
Section: Integrated GRU and CNN Predictive Model Training
confidence: 99%
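The training procedure this statement describes can be sketched in a few lines: weight parameters drawn from a uniform distribution, then updated by full-batch gradient descent (an illustrative linear model stands in for the cited GRU/CNN here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression batch standing in for the illumination time series.
X = rng.normal(size=(64, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w

# Weights initially generated randomly via a uniform distribution.
w = rng.uniform(-0.1, 0.1, size=3)

lr = 0.1
for _ in range(200):
    # Batch gradient of the mean-squared error over the full batch.
    grad = 2.0 / len(X) * X.T @ (X @ w - y)
    w -= lr * grad

# After training, w closely approximates true_w.
```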
“…Theorem 1: For problem (29), the optimal user association u_{ij,T+1} and transmit power P_{i,T+1} can be respectively expressed as:…”
Section: B User Association and Power Efficiency With Fixed UAV Deployment
confidence: 99%
“…In stark contrast to conventional machine learning methods that run in a data center, FL usually operates at the network edge and brings the models directly to the devices for training; only the resulting parameters are sent to the edge servers residing at an access point (AP). This salient feature of on-device training eliminates large communication overheads and preserves data privacy, making FL particularly relevant for mobile applications [6][7][8][9][10]. However, since the AP must serve a large number of user equipments (UEs) over a limited spectrum, only a portion of the UEs can be selected to access the radio channel and send their trained updates in each global aggregation [5,11].…”
Section: Introduction
confidence: 99%
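The partial-participation constraint described above can be sketched as selecting a subset of UEs in each aggregation round, one resource block per selected UE (random selection is used here as a placeholder; the actual scheduling policy is scheme-specific):

```python
import random

def select_ues(num_ues, num_rbs, rng):
    """Pick at most num_rbs UEs out of num_ues to transmit this round,
    assigning one resource block per selected UE."""
    k = min(num_rbs, num_ues)
    return sorted(rng.sample(range(num_ues), k))

rng = random.Random(42)
rounds = [select_ues(num_ues=10, num_rbs=4, rng=rng) for _ in range(3)]
# Each round, only 4 of the 10 UEs access the channel and upload updates.
```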
“…Test performance of the cases with different numbers of subchannels and users: (M, N) = (4, 2), (8, 4), (10, 5).…”
confidence: 99%