“…Extensions. This paper extends our early work [18] on application placement in a static multilayer Fog in three areas. 1) We model the evolving Fog as a dynamic multilayer graph based on the incremental network changes and device availability; 2) We design a new dynamic placement algorithm based on an incremental multilayer resource partitioning method considering infrastructure changes; 3) We compare our results against three state-of-the-art methods using two applications running on a real testbed.…”
Fog computing platforms have become essential for deploying low-latency applications at the network's edge. However, placing and managing time-critical applications over a Fog infrastructure with many heterogeneous and resource-constrained devices across a dynamic network is challenging. This paper proposes an incremental multilayer resource-aware partitioning (M-RAP) method that minimizes resource wastage and maximizes service placement and deadline satisfaction in a dynamic Fog with many application requests. M-RAP represents the heterogeneous Fog resources as a multilayer graph, partitions it based on the network structure and resource types, and constantly updates it upon dynamic changes in the underlying Fog infrastructure. Finally, it identifies the device partitions for placing the application services according to their resource requirements, which must overlap in the same low-latency network partition. We evaluated M-RAP through extensive simulation and two applications executed on a real testbed. The results show that M-RAP places 1.6 times as many services, satisfies deadlines for 43 % more applications, lowers their response time by up to 58 %, and reduces resource wastage by up to 54 % compared to three state-of-the-art methods.
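To make the partitioning idea concrete, here is a minimal sketch in the spirit of the abstract, not the authors' implementation: each demanded resource type forms one layer of eligible devices, and a service is placed only where all layers overlap inside one low-latency network partition. The names `Device`, `eligible_layers`, and `place` are hypothetical and chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    free: dict        # free capacity per resource type, e.g. {"cpu": 4, "ram": 8}
    partition: int    # id of the low-latency network partition

def eligible_layers(devices, demand, partition):
    """One layer per demanded resource type: the set of devices in the
    given network partition whose free capacity covers the demand."""
    return {r: {d.name for d in devices
                if d.partition == partition and d.free.get(r, 0) >= a}
            for r, a in demand.items()}

def place(devices, demand, partition):
    """Place one service on a device lying in the intersection of all
    resource layers, then reserve its resources (a stand-in for the
    paper's incremental update of the multilayer graph)."""
    layers = eligible_layers(devices, demand, partition)
    candidates = set.intersection(*layers.values()) if layers else set()
    for d in devices:
        if d.name in candidates:
            for r, a in demand.items():
                d.free[r] -= a      # incremental state update
            return d.name
    return None                     # no overlap: placement fails

fog = [Device("edge-1", {"cpu": 2, "ram": 4}, 0),
       Device("fog-1", {"cpu": 8, "ram": 16}, 0)]
print(place(fog, {"cpu": 4, "ram": 8}, 0))  # -> fog-1
```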
“…Researchers studied various areas such as application placement, security, improving response time, load balancing, request management, resource management, network optimization, etc. Spinnewyn et al. [7], Brogi et al. [8], Cao et al. [9], Mouradian et al. [10], Kim et al. [11], Mahmud et al. [12], [13], Baranwal et al. [14], Kayal et al. [15], Xia et al. [16], Mann [17], and Smani et al. [18] consider application placement.…”
Section: Related Work
“…Mann [17] performs application placement for individual Fog colonies, reducing the scalability problem. Smani et al. [18] propose a resource-aware multilayer partitioning method to minimize resource wastage and maximize service placement and the deadline satisfaction rate in a Fog environment. Sofia et al. [19] use a Non-dominated Sorting Genetic Algorithm (NSGA-II) and an artificial neural network (ANN) to predict virtual machines based on task and resource characteristics, with the goal of effectively controlling energy consumption.…”
Today, Fog computing plays an essential role in human life. One of the challenges in the combined Fog and Cloud environment is the hierarchical service process: requests are first sent to the Fog and, if the Fog cannot serve them, forwarded to the Cloud, which is time-consuming. This paper presents a framework that determines, when a request is sent, in which environment it can be serviced, and provides administrative interfaces to manage nodes, domains, and the servicing of requests. In these interfaces, the most suitable domain is determined using the SAW method of game theory together with user expectations for placing the application. The gateway of the selected domain then suggests the most appropriate node using the PSO algorithm. Because application placement is driven by user expectations, it increases the QoE. The proposed method is implemented in iFogSim and evaluated against published works, showing better performance, faster service, and a significant improvement in service response time compared to state-of-the-art approaches.
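As a rough illustration of the domain-selection step, the sketch below ranks candidate domains with Simple Additive Weighting (SAW): each criterion is normalized and combined into a weighted score. The criteria, weights, and the `saw_rank` helper are assumptions for illustration, not the paper's actual interface.

```python
def saw_rank(domains, weights, benefit):
    """Rank candidate domains with Simple Additive Weighting (SAW):
    normalize each criterion column, then compute a weighted sum.
    benefit[c] is True when larger values of criterion c are better."""
    scores = {}
    for name, crit in domains.items():
        s = 0.0
        for c, w in weights.items():
            col = [d[c] for d in domains.values()]
            if benefit[c]:                 # larger is better (e.g. free CPU)
                norm = crit[c] / max(col)
            else:                          # smaller is better (e.g. latency)
                norm = min(col) / crit[c]
            s += w * norm
        scores[name] = s
    return sorted(scores, key=scores.get, reverse=True)

domains = {"domain-A": {"latency": 20, "cpu": 8},
           "domain-B": {"latency": 5,  "cpu": 4}}
weights = {"latency": 0.6, "cpu": 0.4}
benefit = {"latency": False, "cpu": True}
print(saw_rank(domains, weights, benefit))  # domain-B wins on latency
```

In the paper's framework, a node-level PSO search would then run inside the winning domain; the SAW step only narrows the search space.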
“…4) Data transfer reduction: Najafabadi Samani [11] proposed a multilayer partitioning method that minimizes the resource wastage of the Fog infrastructure by selecting devices in resource partitions closest to the end user. However, this work focuses on latency-sensitive workflows in the Fog and Edge, isolated from the Cloud.…”
Processing rapidly growing data encompasses complex workflows that utilize the Cloud for high-performance computing and the Fog and Edge devices for low-latency communication. For example, autonomous driving applications require inspection, recognition, and classification of road signs for safety assessments, especially on crowded roads. Such applications are among the prominent research and industrial topics in computer vision and machine learning. In this work, we design a road sign inspection workflow consisting of 1) encoding and framing tasks for video streams captured by camera sensors embedded in vehicles, and 2) convolutional neural network (CNN) training and inference models for accurate visual object recognition. We explore a matching-theoretic algorithm named CODA [1] to place the workflow on the computing continuum, targeting workflow processing time, data transfer intensity, and energy consumption as objectives. Evaluation results on a real computing continuum testbed federated among four Cloud, Fog, and Edge providers reveal that CODA achieves 50 %-60 % lower completion time, 33 %-59 % lower CO2 emissions, and 19 %-45 % lower data transfer intensity compared to two state-of-the-art methods.
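The matching-theoretic flavor of such placement can be sketched with a deferred-acceptance loop: tasks propose to providers in preference order, and each provider provisionally keeps its best proposals up to capacity. This is a generic Gale-Shapley-style sketch, not CODA itself; the task names, capacities, and ranking lists below are invented for illustration.

```python
def match_tasks(task_prefs, provider_caps, provider_rank):
    """Deferred-acceptance matching between workflow tasks and compute
    providers. Tasks propose in preference order; a provider keeps its
    best-ranked proposals up to capacity and rejects the rest."""
    free = list(task_prefs)                 # tasks still unmatched
    nxt = {t: 0 for t in task_prefs}        # next provider each task tries
    held = {p: [] for p in provider_caps}   # provisional assignments
    while free:
        t = free.pop()
        if nxt[t] >= len(task_prefs[t]):
            continue                        # task exhausted its list
        p = task_prefs[t][nxt[t]]
        nxt[t] += 1
        held[p].append(t)
        held[p].sort(key=lambda x: provider_rank[p].index(x))
        if len(held[p]) > provider_caps[p]:
            free.append(held[p].pop())      # evict the worst-ranked task
    return held

prefs = {"encode": ["edge", "cloud"], "train": ["cloud", "edge"],
         "infer": ["edge", "cloud"]}
caps  = {"edge": 1, "cloud": 2}
rank  = {"edge": ["infer", "encode", "train"],
         "cloud": ["train", "encode", "infer"]}
print(match_tasks(prefs, caps, rank))
# -> {'edge': ['infer'], 'cloud': ['train', 'encode']}
```

In a real CODA-style setting, the preference lists would be derived from the stated objectives (processing time, data transfer intensity, energy) rather than hard-coded.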