2021
DOI: 10.1109/jiot.2020.3042428
Operating Latency Sensitive Applications on Public Serverless Edge Cloud Platforms

Cited by 37 publications (8 citation statements). References 20 publications.
“…Vertical offloading is usually motivated by seeking a node with higher computational resources (hardware and/or network bandwidth) in a superior tier [57,72]. For example, some microservice architectures that employ neural networks through Deep Learning offload the training model to run in a fog or cloud node seeking a higher processing power, and the inference model stays on the edge node [85]. Vertical offloading usually works in explicit mode, where the microservice moves to another host.…”
Section: RQ4 - How Does the Microservice Offloading Process Work? (mentioning)
confidence: 99%
“…• We use Euclidean distance as our difference measure because it is the most extensively utilized technique. For example, if we have two points (α₁, β₁) and … [5] Ensemble Learning Methods. This is a recent data-driven technique that builds a set of ML algorithms and then classifies new data points by choosing the best from the previous models.…”
Section: Implementation Steps (mentioning)
confidence: 99%
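The Euclidean distance measure mentioned in the excerpt above can be sketched in a few lines. This is an illustrative implementation, not code from the cited paper; the point names follow the (α₁, β₁) notation in the excerpt.

```python
import math

def euclidean_distance(p, q):
    """Euclidean distance between two equal-length points:
    sqrt of the sum of squared coordinate differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# e.g. the distance between (1, 2) and (4, 6) is sqrt(3^2 + 4^2)
print(euclidean_distance((1, 2), (4, 6)))  # → 5.0
```

Python's standard library also provides `math.dist`, which computes the same quantity directly.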
“…To solve these problems, it is necessary to compute the data nearer to the IoT device, which is implemented by edge computing (5-8). The benefits of IoT with edge computing are low latency, location awareness, real-time processing, more proficient data management, resilience, and scalability.…”
Section: Introduction (mentioning)
confidence: 99%
“…The pricing model usually depends on the memory, duration, and number of executions of a sequence/workflow of functions. The authors of [203] adapt the cloud-native approach and related operating techniques for latency-sensitive IoT applications operated on public serverless platforms. They argue that solely adding cloud resources to the edge is not enough, and that other mechanisms and operation layers are required to achieve the desired level of quality.…”
Section: F. Economic Aspects of Edge Placement (mentioning)
confidence: 99%
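The serverless pricing model described in the excerpt above (cost driven by memory, duration, and execution count) can be sketched as a simple estimator. The default rates below are illustrative placeholders, not any specific provider's actual pricing, and the function name is hypothetical.

```python
def serverless_cost(executions, memory_mb, duration_ms,
                    price_per_gb_s=0.0000166667,
                    price_per_request=0.0000002):
    """Estimate the cost of a serverless function as the sum of a
    compute charge (GB-seconds consumed) and a per-request charge.
    Rates are example values for illustration only."""
    gb_seconds = executions * (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * price_per_gb_s + executions * price_per_request

# e.g. one million invocations of a 128 MB function running 100 ms each
print(round(serverless_cost(1_000_000, 128, 100), 2))
```

For a workflow of chained functions, the total would be the sum of this estimate over each function in the sequence, which is why the excerpt notes that the pricing depends on the whole sequence/workflow.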