2020
DOI: 10.1109/tcomm.2019.2944169

Scheduling Policies for Federated Learning in Wireless Networks

Abstract: Motivated by the increasing computational capacity of wireless user equipments (UEs), e.g., smart phones, tablets, or vehicles, as well as the increasing concerns about sharing private data, a new machine learning model has emerged, namely federated learning (FL), that allows a decoupling of data acquisition and computation at the central unit. Unlike centralized learning taking place in a data center, FL usually operates in a wireless edge network where the communication medium is resource-constrained and unr…


Cited by 516 publications (285 citation statements); references 32 publications.
Citation statement breakdown: 4 supporting, 281 mentioning, 0 contrasting.
“…Then, we give a brief definition of centralized learning. Definition 3 (Centralized learning problem under the expectation-based model): Given the proposed local loss function in (13), the global loss function can be written as…”
Section: B Convergence Analysis (mentioning)
confidence: 99%
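For context on the quoted definition: equation (13) of the citing paper is not reproduced in this report, but the global loss in federated learning is commonly written as a data-weighted average of the local losses. The following is a generic LaTeX sketch with assumed notation (global model w, K devices, local dataset D_k with n_k samples, per-sample loss ℓ), not the citing paper's exact formulation:

F(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w),
\qquad
F_k(w) = \frac{1}{n_k} \sum_{i \in \mathcal{D}_k} \ell(w; x_i, y_i),
\qquad
n = \sum_{k=1}^{K} n_k .

Under this form, minimizing F(w) at a central unit with access to all data (centralized learning) and minimizing it by aggregating local updates (federated learning) target the same objective.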
“…A line of research on reducing the overhead of the update step transmits compressed gradient vectors obtained through quantization schemes [11], [12]. Another line of work schedules the edge devices so as to save transmission bandwidth [13]-[17]. Specifically, novel update rules were devised that allow only the edge devices with significant training improvement [14], or the fastest-responding devices [15], to transmit their gradient vectors in each upload round.…”
Section: Introduction (mentioning)
confidence: 99%
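To make the two ideas in the quoted passage concrete, here is a minimal Python sketch of (i) gradient compression by 1-bit quantization and (ii) scheduling a bandwidth-limited subset of devices by gradient norm as a proxy for training improvement. It is a generic illustration under assumed toy data, not the specific schemes of [11]-[17] or of the surveyed paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 10 edge devices, each holding a local gradient of dimension 5.
local_gradients = [rng.normal(size=5) for _ in range(10)]

def sign_quantize(grad):
    """1-bit (sign) quantization: keep only the sign per coordinate, scaled by the mean magnitude."""
    scale = np.mean(np.abs(grad))
    return scale * np.sign(grad)

def schedule_by_norm(grads, budget):
    """Select the `budget` devices with the largest gradient norms (proxy for training improvement)."""
    norms = [np.linalg.norm(g) for g in grads]
    return sorted(range(len(grads)), key=lambda k: norms[k], reverse=True)[:budget]

# Only the scheduled devices upload, and they upload quantized gradients.
selected = schedule_by_norm(local_gradients, budget=3)
uploads = {k: sign_quantize(local_gradients[k]) for k in selected}
global_update = np.mean(list(uploads.values()), axis=0)
print("scheduled devices:", selected)
print("aggregated update:", global_update)

The bandwidth saving comes from two places: only 3 of 10 devices transmit per round, and each transmitted coordinate needs roughly one bit plus a shared scale instead of a full-precision value.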
“…The work in [23], [29] promoted the use of the network slicing technique to allocate network resources effectively and provide performance guarantees for URLLC services. In [9], [24], [25], [27], [28], [30] it was discussed that 5G networks would experience a shift from the conventional cloud computing setting to edge computing systems. Edge computing systems deploy computational power at the network edge to meet the requirements of low latency and high reliability, as well as to support resource-constrained nodes that are reachable only over unreliable network connections.…”
Section: A Motivation (mentioning)
confidence: 99%
“…In Line 7, the agent interacts with the environment and the AP to obtain the next state information and the reward, respectively, and keeps updating its local policy θ_i. In Line 10, the agents exchange their θ_i values so that other agents can learn an optimal policy and new agents can learn faster based on the experience gathered from other agents [23, 24], as shown in Fig. 5.…”
Section: Agent Learning (mentioning)
confidence: 99%
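The exchange step described in this quote can be pictured with a short Python sketch. The concrete update rule of the cited algorithm is not given in this report, so the example below only shows the generic pattern under assumed toy parameters: each agent holds a local policy parameter vector θ_i, the vectors are broadcast to peers, and a newly joining agent warm-starts from the average of the received parameters instead of learning from scratch.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: 4 agents, each with an 8-dimensional policy parameter vector theta_i.
num_agents, dim = 4, 8
thetas = {i: rng.normal(size=dim) for i in range(num_agents)}

def exchange(thetas):
    """Broadcast all local policy parameters; return what each agent receives from its peers."""
    return {i: [thetas[j] for j in thetas if j != i] for i in thetas}

received = exchange(thetas)

# A new agent bootstraps from the average of the shared parameters (warm start),
# which is one simple way to "learn faster based on experience gathered from other agents".
new_agent_theta = np.mean(list(thetas.values()), axis=0)
print("peers seen by agent 0:", len(received[0]))
print("new agent initial policy:", new_agent_theta)

Averaging is only one possible aggregation choice; the quoted work may instead select the best-performing peer policy or mix parameters with learned weights.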