2019
DOI: 10.48550/arxiv.1909.11875
Preprint

Federated Learning in Mobile Edge Networks: A Comprehensive Survey


Cited by 31 publications (27 citation statements)
References 132 publications
“…This sub-section discusses the problem formulation, whose purpose is to jointly minimize computation time and PER. From (13), it is clear that the PER has a proportional effect on the DFL model error. Therefore, we introduce C_p to reflect the PER of the global DFL model in our system model.…”
Section: Problem Formulation
Citation type: mentioning
confidence: 99%
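Read as a weighted-sum formulation, the joint objective sketched in this statement could look like the following. This is only an illustrative sketch: the symbols T_comp, C_p, the weights α, β, and the variable x are assumptions of this editor, not the citing paper's notation, and its equation (13) is not reproduced here.

```latex
% Illustrative weighted-sum objective: trade off computation time
% against the PER penalty C_p. Notation is assumed, not taken from
% the citing paper.
\min_{\mathbf{x}} \;\; \alpha\, T_{\mathrm{comp}}(\mathbf{x})
  \;+\; \beta\, C_p(\mathbf{x})
\quad \text{s.t.} \quad \mathbf{x} \in \mathcal{X},
\qquad \alpha, \beta \ge 0
```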
“…• Privacy leakage: A malicious end-device/aggregation server can infer sensitive information about devices from the learning-model updates [12], [13]. Therefore, we must ensure complete privacy preservation in FL.…”
Citation type: mentioning
confidence: 99%
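One common mitigation for this kind of leakage is to clip and noise each client's update before it leaves the device, in the spirit of differentially private FL. The quoted work does not specify this mechanism; the sketch below is an illustrative assumption, and clip_norm and noise_std are hypothetical hyperparameters.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client's model update and add Gaussian noise before upload.

    Illustrative differential-privacy-style mechanism; clip_norm and
    noise_std are assumed hyperparameters, not values from the paper.
    """
    rng = rng or np.random.default_rng()
    update = np.asarray(update, dtype=float)
    # Bound the update's influence by clipping its L2 norm.
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)
    # Add noise calibrated to the clipping bound before the update
    # is sent to the aggregation server.
    return update + rng.normal(0.0, noise_std, size=update.shape)
```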
“…The focus of the works above is DNN inference at the edge. For edge learning that considers DNN training, many existing works target fast and cost-efficient federated learning schemes in order to train a commonly shared model across multiple devices [32]. Along a different line, we consider fast model learning for a specific end device and leverage a multitude of device-edge-cloud resources to accelerate training.…”
Section: B. Edge-based
Citation type: mentioning
confidence: 99%
“…The server builds the global model by averaging all gradients across the network [8]. Then, the coordinating server broadcasts the newly updated model to all clients [9]. Each client uploads its local model to the server and then downloads the global model to perform on-device inference using a cloud-distributed model.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
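The averaging-and-broadcast step described in this statement corresponds to a federated-averaging (FedAvg)-style aggregation rule. A minimal sketch, assuming dataset-size weighting; function and variable names are illustrative, not taken from the cited papers.

```python
import numpy as np

def federated_round(client_models, client_sizes):
    """One aggregation round: weighted-average client models.

    Illustrative FedAvg-style sketch; the weighting scheme is an
    assumption, not the exact procedure of the cited papers.
    """
    total = sum(client_sizes)
    # Weighted average of client models, weighted by local dataset size.
    new_global = sum(
        (n / total) * np.asarray(m, dtype=float)
        for m, n in zip(client_models, client_sizes)
    )
    # "Broadcast": every client starts the next round from new_global.
    return new_global

# Toy usage: three clients with two-parameter "models".
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 30]
print(federated_round(clients, sizes))  # -> [3.666..., 4.666...]
```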