2022
DOI: 10.1109/mwc.006.2100699

Actions at the Edge: Jointly Optimizing the Resources in Multi-Access Edge Computing

Cited by 22 publications (8 citation statements)
References 12 publications
“…Then, EPSL performs last-layer activations' gradient aggregation on the dimension of client devices. In other words, each client device employs $\lceil \varphi b \rceil$ out of $b$ of its last-layer activations' gradients for aggregation with other client devices, after which the aggregated activations' gradients go [footnote 2: The server can execute the FP process for multiple client devices in either a serial or parallel fashion, where the latency in (16) would not be affected.] Let $\mu_j(\varpi_{\ell_c + \ell_s - 1} - \varpi_j)$ denote the computation workload of the server-side BP process for each data sample (excluding the last layer), where $\varpi_j$ is the BP computation workload of propagating the first $j$ layers of the neural network for one data sample.…”
Section: A Training Latency Computation
confidence: 99%
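The partial aggregation step quoted above, where each client contributes $\lceil \varphi b \rceil$ of its $b$ last-layer activation gradients and the server averages them across clients, can be sketched as follows. This is a minimal illustration; the function name, array shapes, and use of NumPy are assumptions, not details from the cited paper:

```python
import math
import numpy as np

def aggregate_partial_gradients(client_grads, phi):
    """Average the first ceil(phi * b) last-layer activation gradients
    across clients (hypothetical sketch of the aggregation step).

    client_grads: list of arrays, each of shape (b, d) -- b per-sample
                  last-layer activation gradients from one client
    phi:          fraction in (0, 1] of each client's gradients to use
    """
    b = client_grads[0].shape[0]
    k = math.ceil(phi * b)  # ceil(phi * b) gradients taken per client
    # Stack the selected slice from every client, then average over the
    # client dimension, as described in the excerpt.
    selected = np.stack([g[:k] for g in client_grads])  # (num_clients, k, d)
    return selected.mean(axis=0)                        # (k, d)
```

With two clients holding constant gradients of 1 and 3 and $\varphi = 0.5$, the aggregated result is the client-wise mean (2) over the first half of each batch.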
“…Although training models for a reasonable number of client devices might not be a crucial issue for a sufficiently powerful cloud computing center, it can easily overwhelm a resource-constrained edge server as the number of served client devices increases. In particular, edge computing servers in 5G and beyond are often (small) base stations and access points equipped with limited capabilities [15,16]. On the other hand, communication latency is a limiting factor due to the large volume of cut-layer data and model exchange involved in SL.…”
Section: Introduction
confidence: 99%
“…Minimizing the number of update rounds, using spectrum efficiently, and keeping communication cost-effective are a few of the challenges of federated learning. Multi-access edge computing (MEC) is an emerging technology [423] with the potential to ensure optimal coordination between spectral, storage, and computing resources within a limited power and latency budget. By sharing communication resources in proximity, MEC has the potential to meet the spectral demand of data collection and of distributed training and inference.…”
Section: K Increased Demand of Communication Resources
confidence: 99%
“…Multi-access edge computing (MEC) has been identified as a promising architecture for computing services that aims to provide real-time or low-latency services to end users in close proximity [1], [2]. One of the primary techniques utilized in MEC is computation offloading, which enables computing tasks to be processed locally or offloaded to an edge server (ES) depending on the availability of local computing resources and on transmission conditions [3].…”
Section: Introduction
confidence: 99%
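The local-versus-offload decision described in the excerpt above can be illustrated with a latency-only rule. This is a hedged sketch: the function name, the symbols, and the simple additive latency model (uplink transmission time plus edge execution time) are assumptions for illustration, not the cited paper's formulation:

```python
def should_offload(task_cycles, data_bits, local_cps, edge_cps, uplink_bps):
    """Decide whether to offload a task to the edge server (ES) based on
    a latency comparison (illustrative rule only).

    task_cycles: CPU cycles required by the task
    data_bits:   input size to transmit when offloading (bits)
    local_cps:   local CPU speed (cycles/s)
    edge_cps:    edge-server CPU speed (cycles/s)
    uplink_bps:  uplink transmission rate (bits/s)
    """
    # Latency of executing the task on the local device
    local_latency = task_cycles / local_cps
    # Offloading latency = time to transmit the input + edge execution time
    offload_latency = data_bits / uplink_bps + task_cycles / edge_cps
    return offload_latency < local_latency
```

A compute-heavy task with a small input favors offloading to the faster edge server, while a task whose input is large relative to the uplink rate is better kept local; real offloading schemes also weigh energy, queueing, and downlink costs.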