2022
DOI: 10.1109/jiot.2022.3162581
Energy-Efficient Service Placement for Latency-Sensitive Applications in Edge Computing

Abstract: Edge computing is a promising solution to host artificial intelligence (AI) applications that enable real-time insights on user-generated and device-generated data. This requires edge computing resources (storage and compute) to be widely deployed close to end devices. Such edge deployments require a large amount of energy to run as edge resources are typically overprovisioned to flexibly meet the needs of time-varying user demand with a low latency. Moreover, AI applications rely on deep neural network (DNN) …

Cited by 9 publications (3 citation statements)
References 38 publications (56 reference statements)
“…𝐷(π‘ π‘™π‘Ž 𝑖.π‘Ž ) = 𝑏 π‘Ž * π‘ π‘™π‘Ž 𝑖.π‘Ž + 𝑣 π‘Ž (32) The parameters 𝑏 π‘Ž and 𝑣 π‘Ž , which are negative and positive, respectively, are parametrically defined for each service [18].…”
Section: A Simulation Settingmentioning
confidence: 99%
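The quoted linear delay model can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the cited paper; the function name `delay` and the example parameter values are assumptions. It shows why b_a must be negative and v_a positive: a looser SLA value yields a lower delay bound, with v_a as the intercept.

```python
def delay(sla: float, b: float, v: float) -> float:
    """Linear delay model D(sla_{i,a}) = b_a * sla_{i,a} + v_a.

    b is expected to be negative and v positive, so the modeled
    delay decreases linearly as the SLA value sla increases.
    """
    return b * sla + v


# Hypothetical per-service parameters: b_a = -0.5 (negative), v_a = 3.0 (positive).
print(delay(2.0, -0.5, 3.0))  # -0.5 * 2.0 + 3.0 = 2.0
```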
See 1 more Smart Citation
“…𝐷(π‘ π‘™π‘Ž 𝑖.π‘Ž ) = 𝑏 π‘Ž * π‘ π‘™π‘Ž 𝑖.π‘Ž + 𝑣 π‘Ž (32) The parameters 𝑏 π‘Ž and 𝑣 π‘Ž , which are negative and positive, respectively, are parametrically defined for each service [18].…”
Section: A Simulation Settingmentioning
confidence: 99%
“…In this field, only a limited number of studies have tackled the challenges of service placement or user traffic routing, specifically focusing on certain aspects of the heterogeneity of fog nodes [32][33][34][35][36][37][38]. However, none of the existing research has addressed the problem of user traffic routing and service placement in the fog computing layer with the aim of minimizing the service provisioning cost, particularly in the presence of ad hoc and fixed (dedicated) fog computing nodes.…”
Section: Introductionmentioning
confidence: 99%
“…In recent years, Deep Neural Networks (DNN) have become one of the most commonly used machine learning (ML) algorithms in a wide range of AI applications. The trend toward deeper neural networks brings higher computational requirements, which makes it difficult to deploy large models on resource-constrained edge devices while meeting computing speed, memory, and energy-cost requirements [1]. To overcome these challenges, pruning the network structure and quantizing floating-point precision have become among the most promising offline solutions for reducing the computational cost of inference with acceptable accuracy loss [2].…”
Section: Introductionmentioning
confidence: 99%
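The pruning and quantization techniques named in the statement above can be sketched concretely. This is a generic illustration of magnitude-based weight pruning and symmetric int8 quantization, not the specific method of the cited works; the function names and the sparsity/bit-width choices are assumptions.

```python
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of smallest-magnitude weights.

    Ties at the threshold may zero slightly more weights than requested.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned


def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric linear quantization of float weights to int8.

    Returns the int8 tensor and the scale needed to dequantize
    (approximate float value = q * scale).
    """
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale


w = np.array([0.1, -0.02, 0.5, -0.9])
print(magnitude_prune(w, 0.5))   # two smallest-magnitude entries zeroed
q, scale = quantize_int8(w)
print(q, scale)                   # int8 codes and dequantization scale
```

Both transforms shrink the inference-time footprint offline: pruning removes multiply-accumulates, and quantization cuts memory and arithmetic cost to 8-bit, at the price of a small, controllable accuracy loss.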