2021
DOI: 10.36227/techrxiv.16732651.v1
Preprint

Lyapunov-based Optimization of Edge Resources for Energy-Efficient Adaptive Federated Learning

Abstract: The aim of this paper is to propose a novel dynamic resource allocation strategy for energy-efficient adaptive federated learning at the wireless network edge, with latency and learning performance guarantees. We consider a set of devices collecting local data and uploading processed information to an edge server, which runs stochastic gradient-based algorithms to perform continuous learning and adaptation. Hinging on Lyapunov stochastic optimization tools, we dynamically optimize radio parameters (e.g., set o…
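For orientation, the Lyapunov machinery the abstract refers to is the standard drift-plus-penalty recipe: each long-term average constraint is mapped to a virtual queue, and each slot greedily minimizes a weighted sum of instantaneous power and queue backlogs. A generic sketch of that step, with illustrative symbols not taken from the paper (Z_t: virtual queue for an average constraint E{g_t} ≤ g^avg; x_t: per-slot resource decision; p_t: instantaneous power; V ≥ 0: tradeoff weight):

\[
  Z_{t+1} = \max\{\, Z_t + g_t(x_t) - g^{\mathrm{avg}},\; 0 \,\}, \qquad
  x_t \in \arg\min_{x \in \mathcal{X}_t} \; V\, p_t(x) + Z_t\, g_t(x).
\]

A larger V drives the average power closer to its minimum at the price of a larger queue backlog, i.e., slower constraint satisfaction (the classical O(1/V) optimality gap versus O(V) backlog tradeoff).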

Cited by 1 publication (4 citation statements) | References 9 publications

“…The aim of this paper is to jointly allocate radio (i.e., set of transmitting devices, powers, quantization bits and RISs reflectivity parameters) and computation (i.e., CPU cycles at devices and at server) resources to minimize the long-term average system power consumption in (7), with constraints on the average learning performance and the average latency in (6). Let G_t and α_t be task-dependent learning performance and convergence rate metrics, respectively, which will be explained in the sequel (see, e.g., (9)). Then, the problem can be cast as:…”
Section: RIS-aided Adaptive Federated Learning
mentioning confidence: 99%
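The quotation above is truncated at "cast as:". For concreteness, a long-term design problem of this kind, written in the generic form used in Lyapunov optimization, would look as follows; the symbols x_t (the pooled radio and computation decisions), L_t (latency), and the thresholds G^avg, α^avg, L^avg are placeholders, not taken from the paper:

\[
  \min_{\{x_t\}_t} \; \lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{ p_t(x_t) \}
  \quad \text{s.t.} \quad
  \lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{ G_t \} \le G^{\mathrm{avg}},
\]
\[
  \lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{ \alpha_t \} \le \alpha^{\mathrm{avg}},
  \qquad
  \lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{ L_t \} \le L^{\mathrm{avg}},
\]

with p_t the instantaneous system power from the quoted equation (7) and the latency constraint from the quoted equation (6).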
“…The rationale for the selection of the surrogates comes from the assumption that the true performance metrics G_t and α_t typically show a non-increasing behavior with respect to the quantization bits {b_{i,t}}_{i∈S_t} and the batch size B_t (examples will be given in the sequel). In other words, a finer representation of the data typically leads to better learning performance [8,9]. Thus, G_t and α_t only have to be non-increasing functions of the quantization bits and the batch size. After some algebraic manipulations (omitted due to lack of space), the method requires solving the following deterministic problem at each time slot t:…”
Section: Algorithmic Design Via Stochastic Optimization
mentioning confidence: 99%
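Because the surrogates are non-increasing in the quantization bits and the batch size while power grows with both, the per-slot deterministic problem mentioned above reduces, over a small discrete feasible set, to a search that brute force solves exactly. A minimal sketch of that drift-plus-penalty step in Python; the models power(), G_surrogate(), alpha_surrogate() and the queue weights Z_G, Z_alpha are hypothetical stand-ins, not the paper's expressions:

import itertools

def power(b, B):
    # Toy power model (hypothetical): transmit power grows with the
    # quantization bits b, compute power with the batch size B.
    return 0.5 * b + 0.1 * B

def G_surrogate(b, B):
    # Surrogate learning-performance metric: non-increasing in b and B,
    # matching the monotonicity assumption in the quoted passage.
    return 1.0 / (b * B)

def alpha_surrogate(b, B):
    # Surrogate convergence-rate metric, also non-increasing in b and B.
    return 1.0 / (1.0 + 0.2 * b + 0.05 * B)

def per_slot_decision(V, Z_G, Z_alpha, bits=(2, 4, 8), batches=(8, 16, 32)):
    # Drift-plus-penalty objective: V * power + queue-weighted surrogates.
    # Exhaustive search is exact here because the feasible set is tiny.
    return min(
        itertools.product(bits, batches),
        key=lambda x: V * power(*x)
                      + Z_G * G_surrogate(*x)
                      + Z_alpha * alpha_surrogate(*x),
    )

# Example: pick (b_t, B_t) for one slot given current virtual-queue weights.
print(per_slot_decision(V=1.0, Z_G=5.0, Z_alpha=2.0))

Swapping in the paper's actual power and surrogate expressions, and iterating this per-slot decision together with the virtual-queue updates, would yield the full adaptive scheme.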