2020
DOI: 10.1109/twc.2020.3003744
HFEL: Joint Edge Association and Resource Allocation for Cost-Efficient Hierarchical Federated Edge Learning

Abstract: Federated Learning (FL) has been proposed as an appealing approach to handle the data privacy issue of mobile devices, in contrast to conventional machine learning at a remote cloud, which requires uploading raw user data. By leveraging edge servers as intermediaries that perform partial model aggregation in proximity to devices and relieve core-network transmission overhead, it holds great potential for low-latency and energy-efficient FL. Hence we introduce a novel Hierarchical Federated Edge Learning (HFEL) framework in which model ag…
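For intuition, here is a minimal sketch of the two-level aggregation pattern the abstract describes: device models are first averaged at their edge server, and the edge models are then averaged at the cloud. The names and the FedAvg-style data-size weighting are illustrative assumptions, not the paper's actual HFEL algorithm, which additionally covers edge association and resource allocation.

```python
import numpy as np

def weighted_average(models, sizes):
    """FedAvg-style aggregation: data-size-weighted average of weight vectors."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(models, sizes))

def hierarchical_round(edge_clusters):
    """One global round of two-level aggregation.

    edge_clusters: one list per edge server, each holding
    (model_vector, num_samples) pairs for its associated devices.
    """
    edge_models, edge_sizes = [], []
    for cluster in edge_clusters:
        models, sizes = zip(*cluster)
        edge_models.append(weighted_average(models, sizes))  # edge aggregation
        edge_sizes.append(sum(sizes))
    return weighted_average(edge_models, edge_sizes)  # cloud aggregation

# Toy usage: two edge servers, two devices each, 1-D "models".
clusters = [
    [(np.array([1.0]), 100), (np.array([3.0]), 100)],
    [(np.array([5.0]), 200), (np.array([7.0]), 200)],
]
print(hierarchical_round(clusters))  # [4.666...]
```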

Cited by 296 publications (175 citation statements) · References 31 publications

Citation statements:
“…Fig. 8 shows the convergence of the algorithm in the final model loss, total $\gamma_i^k \leftarrow \gamma_i^*$ as defined in (19).…”
Section: Simulation Results (mentioning)
confidence: 99%
“…The selected users are allowed to determine the number of data samples used for the model training. Two-level aggregation for FL is proposed in [19], in which intermediate model aggregation is performed at the edge server while the final model aggregation is performed at the cloud server. The joint optimization of radio and computing resource management in MEC has been studied thoroughly in previous works.…”
Section: A. Related Work, 1) Resource Management in FL (mentioning)
confidence: 99%
“…The proposed PerFit framework leverages edge computing to augment the computing capability of individual devices via computation offloading, mitigating the straggler effect. If we further conduct local model aggregation at the edge server, it also helps to reduce communication overhead by preventing massive numbers of devices from directly communicating with the cloud server over expensive backbone network bandwidth [15]. Moreover, by performing personalization, we can deploy lightweight personalized models on resource-limited devices (e.g., by model pruning or transfer learning).…”
Section: Cloud-Edge Framework for Personalized Federated Learning (mentioning)
confidence: 99%
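As a concrete illustration of the pruning option mentioned in this statement, the sketch below keeps only the largest-magnitude weights of a global model to produce a lightweight personalized model. This is a generic magnitude-pruning sketch under an assumed NumPy weight representation, not PerFit's actual mechanism.

```python
import numpy as np

def prune_by_magnitude(weights, keep_ratio):
    """Keep only the largest-magnitude fraction of weights; zero the rest."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]  # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Toy usage: derive a sparse personalized model from dense global weights.
rng = np.random.default_rng(0)
global_weights = rng.standard_normal((4, 4))
personal_weights, mask = prune_by_magnitude(global_weights, keep_ratio=0.25)
print(f"kept {mask.mean():.0%} of weights")  # ~25% nonzero (barring ties)
```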
“…where L is the number of local iterations, and $\alpha_k/2$ denotes the effective capacitance coefficient of the k-th UD's computing chipset [21].…”
Section: System Model (mentioning)
confidence: 99%
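For readers without access to the citing paper, the computation-energy model that this snippet truncates commonly takes the following form in the FL literature. This is a hedged reconstruction under assumed notation: the symbols $C_k$ (CPU cycles per sample), $D_k$ (local data size), and $f_k$ (CPU frequency) are assumptions, not quoted from the citing paper.

```latex
% Common per-round local computation energy of the k-th user device (UD),
% reconstructed under assumed notation (not a quote from [21]):
% L: local iterations, \alpha_k/2: effective capacitance coefficient,
% C_k: CPU cycles per sample, D_k: local data size, f_k: CPU frequency.
E_k^{\mathrm{cmp}} = L \cdot \frac{\alpha_k}{2} \, C_k D_k f_k^{2}
```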