2021
DOI: 10.1109/twc.2021.3088910
Energy-Efficient Resource Management for Federated Edge Learning With CPU-GPU Heterogeneous Computing

Abstract: Edge machine learning involves the deployment of learning algorithms at the network edge to leverage massive distributed data and computation resources to train artificial intelligence (AI) models. Among others, the framework of federated edge learning (FEEL) is popular for its data-privacy preservation. FEEL coordinates global model training at an edge server and local model training at edge devices that are connected by wireless links. This work contributes to the energy-efficient implementation of FEEL in wi…

Cited by 102 publications (57 citation statements)
References 36 publications
“…In DL inference scenarios, each operator before partitioning needs an exclusive resource (supported by TVM), and all operators run based on common default settings of mainstream DL frameworks [15,23,27,29,30]. After partitioning, some operators can be partitioned into two sub-operators on edge servers that can improve DL inference performance.…”
Section: Challenge Analysis
confidence: 99%
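The statement above describes partitioning a single DL operator into two sub-operators that can run on separate edge resources. As a minimal, hypothetical sketch (the function name and the split strategy are my own, not from the cited work), a dense matmul operator can be split along its output dimension into two sub-operators whose stitched results match the original operator exactly:

```python
import numpy as np

# Hypothetical sketch: partition a dense (matmul) operator into two
# sub-operators by splitting the weight matrix along its output
# columns, so each part could run on a different edge resource
# (e.g. CPU and GPU) and be concatenated afterwards.
def partition_matmul(x, W, split):
    """Run x @ W as two sub-operators and stitch the results."""
    W1, W2 = W[:, :split], W[:, split:]   # split output columns
    y1 = x @ W1                           # sub-operator 1 (e.g. CPU)
    y2 = x @ W2                           # sub-operator 2 (e.g. GPU)
    return np.concatenate([y1, y2], axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 6))
# The partitioned operator is numerically identical to the full one.
assert np.allclose(partition_matmul(x, W, 3), x @ W)
```

Column-wise splitting is chosen here because it requires no reduction across devices; a row-wise split of `W` would instead need a sum of partial results.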
“…EOP improves DL inference performance on edge servers by: 1) accurately estimating operator execution on heterogeneous resources, and 2) efficiently partitioning key operators. Note that a lot of work are mainly on edge-cloud collaborations [11,29,30] and EOP is non-intrusion with them. As such, we discuss the related work from these two research aspects.…”
Section: Related Work
confidence: 99%
“…While current federated ML algorithms mostly aim at improving the prediction accuracy of obtained models, a few researchers have considered the carbon footprint of the training process. For example, Zeng et al [13] investigated the energy consumption minimization problem by joint management of the computation and communication resources. Their algorithm aims to achieve an equilibrium among the four dimensions of resources, i.e., bandwidth, CPU-GPU operating period, CPU-GPU frequency, and CPU-GPU workload partition.…”
confidence: 99%
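The citation above summarizes the paper's joint management of frequency and workload partition for energy minimization. A hedged sketch of the commonly used dynamic-energy model behind such trade-offs (the constants and function names here are illustrative assumptions, not the paper's exact formulation): per-cycle energy scales with the square of the clock frequency, so lowering frequency saves energy at the cost of a longer CPU-GPU operating period, and the workload split decides how many cycles each processor executes.

```python
# Illustrative energy model (assumed, not the authors' exact one):
# dynamic energy = kappa * cycles * f^2, time = cycles / f.
def compute_energy(cycles, freq, kappa=1e-27):
    """Energy (J) to execute `cycles` at clock `freq` (Hz)."""
    return kappa * cycles * freq ** 2

def compute_time(cycles, freq):
    """Wall-clock time (s) for `cycles` at clock `freq` (Hz)."""
    return cycles / freq

def split_energy(total_cycles, rho, f_cpu, f_gpu,
                 k_cpu=1e-27, k_gpu=5e-28):
    """Total energy when a fraction `rho` of the workload runs on
    the CPU and the rest on the GPU (hypothetical parameters)."""
    e_cpu = compute_energy(rho * total_cycles, f_cpu, k_cpu)
    e_gpu = compute_energy((1 - rho) * total_cycles, f_gpu, k_gpu)
    return e_cpu + e_gpu
```

With this model, halving the frequency quarters the energy per cycle but doubles the operating period, which is exactly the equilibrium among frequency, period, and workload partition that the cited algorithm balances (alongside bandwidth for the communication side).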