Abstract: Driven by the increasing popularity of the microservice architecture, we see an increase in services with unknown demand patterns located in the edge network. Predeployed instances of such services would be idle most of the time, which is economically infeasible. In addition, the finite storage capacity limits the number of deployed instances we can offer. Instead, we present an on-demand deployment scheme using the Docker platform. In Docker, service images consist of layers, each layer adding specific functionality.
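The layer structure described above is what makes on-demand deployment attractive: layers already cached on an edge node need not be downloaded again. A minimal sketch of that cost model, with hypothetical layer names and sizes (none taken from the paper):

```python
# Hypothetical sketch: the download cost of deploying a service on demand is
# the total size of its Docker layers NOT already cached on the edge node.
# Layer names and sizes below are illustrative only.

def download_cost(service_layers, cached_layers, layer_sizes):
    """Bytes to fetch: only the layers missing from the node's cache."""
    return sum(layer_sizes[l] for l in service_layers if l not in cached_layers)

# Two services sharing a common base and runtime layer.
layer_sizes = {"base-os": 80, "python-runtime": 120, "app-a": 30, "app-b": 25}
cached = {"base-os", "python-runtime"}  # left behind by an earlier deployment

cost_a = download_cost(["base-os", "python-runtime", "app-a"], cached, layer_sizes)
cost_b = download_cost(["base-os", "python-runtime", "app-b"], cached, layer_sizes)
# Only the service-specific top layers (30 and 25) must be fetched.
```

Because shared layers dominate image size in practice, caching popular base layers can shrink deployment latency far more than caching whole images.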
“…First, edge servers cannot host all possible services due to their resource constraints. Second, demand patterns are non-stationary and not known a priori, which means demand is subject to change over time and space as the locations of mobile users change [11]. Service placement decisions should therefore be dynamic and change over time, because both demand and consumer proximity to server locations change [12].…”
Section: A MEC Environment
confidence: 99%
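The snippet above argues that placement must track shifting demand. A toy sketch of that idea, with hypothetical service names, demand values, and capacity: periodically re-rank services by observed demand and keep only as many as the edge server can hold.

```python
# Toy sketch of dynamic service placement: at each epoch, rank services by
# currently observed demand and host only the top ones that fit the server's
# capacity (here measured in whole service slots). All values are illustrative.

def place(demand, capacity):
    """Return the set of services to host: highest demand first."""
    ranked = sorted(demand, key=demand.get, reverse=True)
    return set(ranked[:capacity])

demand_t0 = {"maps": 90, "ar": 40, "gaming": 10}
demand_t1 = {"maps": 15, "ar": 70, "gaming": 80}  # users moved; demand shifted

placement_t0 = place(demand_t0, capacity=2)  # {"maps", "ar"}
placement_t1 = place(demand_t1, capacity=2)  # {"ar", "gaming"}
```

Even this greedy rule illustrates the core tension: the optimal set at one epoch can be stale at the next, so the placement decision itself must be recomputed over time.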
“…Unlike VMs, containers have a strong dependency on the host operating system kernel. They share many resources of the host operating system [11], [39], such as embedded libraries and the local file system. On the one hand, sharing common resources gives them a smaller footprint than VMs, allowing hundreds of containers to be hosted on a single physical machine.…”
The advent of new cloud-based applications such as mixed reality, online gaming, autonomous driving, and healthcare has introduced infrastructure management challenges to the underlying service network. Multi-access edge computing (MEC) extends the cloud computing paradigm and leverages servers near end-users at the network edge to provide a cloud-like environment. The optimal placement of services on edge servers plays a crucial role in the performance of such service-based applications. The dynamic service placement problem addresses the adaptive configuration of application services at edge servers to facilitate end-users and those devices that need to offload computation tasks. While reported approaches in the literature shed light on this problem from a particular perspective, a panoramic study of this problem reveals the research gaps in the big picture. This paper introduces the dynamic service placement problem and outlines its relations to other problems such as task scheduling, resource management, and caching at the edge. We also present a systematic literature review of existing dynamic service placement methods for MEC environments from networking, middleware, application, and evaluation perspectives. In the first step, we review different MEC architectures and their enabling technologies from a networking point of view. We also introduce different cache deployment solutions in network architectures and discuss their design considerations. The second step investigates dynamic service placement methods from a middleware viewpoint. We review different service packaging technologies and discuss their trade-offs. We also survey the methods and identify eight research directions that researchers follow. Our study categorises the research objectives into six main classes, proposing a taxonomy of design objectives for the dynamic service placement problem. We also investigate the reported methods and devise a solutions taxonomy comprising six criteria.
In the third step, we concentrate on the application layer and introduce the applications that can take advantage of dynamic service placement. The fourth step investigates evaluation environments used to validate the solutions, including simulators and testbeds. We introduce real-world datasets, such as edge server locations, mobility traces, and service requests, used to evaluate the methods. In the last step, we compile a list of open issues and challenges, categorised by these viewpoints.
“…Yuzhou Huang et al. [43] proposed intelligent edge computing that trains the model in the cloud and offloads it to the edge using Docker, enabling the prediction model to operate on the edge platform. Piet Smet et al. [44] proposed a mechanism to deploy specific functionalities to layers in edge computing using Docker. Jihun Ha et al. [45] proposed a mechanism for deploying web services to the edge platform based on Docker for managing services in a smart factory.…”
Leveraging the edge computing paradigm, computing resources are deployed at the network edge to provide heterogeneous services. Edge computing delivers sensing and actuating services to the Internet from constrained Internet of Things (IoT) devices. Meanwhile, management of the various elements is provided by offloading sufficient computing and storage to the edge of the network for IoT environments such as homes, factories, and private spaces, without cloud servers. In this paper, we propose an enhanced service framework based on microservice management and a client support provider for an efficient user experience in the edge computing environment. To provide edge computing services and management at the network edge, this paper presents an edge computing architecture that offers various functions through microservice modules on the edge platform engine. Through the microservices, interfaces are provided to the client to access devices, data, and additional services. Using Docker, the microservice modules are deployed on the edge platform to provide the services. However, the services and management functions need to be presented to clients through user-friendly interfaces. To provide clients with user interfaces for the services and the Docker engine, the client support service provider is developed and deployed at the network edge on top of the edge platform. The proposed edge platform therefore provides services and management to users for accessing resources and functions through visualised interfaces in the IoT environment based on edge computing. The performance of our proposed system can be checked through the test result screen and the delay time. Compared to controlling edge computing with a command-line tool, our system makes it easy for general users who are not computer savvy to access edge services through a graphical user interface. Moreover, measuring the delay time and comparing execution times shows that the proposed system operates faster.
“…We consider the pure mathematical approaches first, a vast number of which are aimed at determining the best allocation of VMs/containers under various constraints. A fairly comprehensive and recent model suited to placing Docker containers on edge nodes is [17], which uses an Integer Linear Programming (ILP) problem formulation. Modeling the edge network as a network of queues is also found in some works, e.g.…”
Section: Edge Computing Modeling and Simulation
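To make the ILP formulation mentioned above concrete, here is a toy instance solved by exhaustive search rather than a real ILP solver (which [17] would use); the container demands, node capacities, and costs are all made up for illustration. The constraints and objective, however, mirror the usual shape of such formulations: respect per-node capacity, minimise total placement cost.

```python
# Toy exhaustive-search version of an ILP-style container placement:
# assign each container to an edge node so that node capacities hold and
# total placement cost is minimal. All numbers are illustrative.
from itertools import product

containers = {"c1": 2, "c2": 3, "c3": 1}   # CPU demand per container
nodes = {"n1": 4, "n2": 4}                 # CPU capacity per node
cost = {("c1", "n1"): 1, ("c1", "n2"): 3,  # cost of placing container on node
        ("c2", "n1"): 2, ("c2", "n2"): 1,
        ("c3", "n1"): 3, ("c3", "n2"): 1}

best, best_cost = None, float("inf")
for assign in product(nodes, repeat=len(containers)):
    mapping = dict(zip(containers, assign))
    # Capacity constraint: total demand on each node must not exceed capacity.
    load = {n: 0 for n in nodes}
    for c, n in mapping.items():
        load[n] += containers[c]
    if any(load[n] > nodes[n] for n in nodes):
        continue
    # Objective: minimise total placement cost over feasible assignments.
    total = sum(cost[(c, n)] for c, n in mapping.items())
    if total < best_cost:
        best, best_cost = mapping, total
```

Exhaustive search is exponential in the number of containers, which is exactly why the literature reaches for ILP solvers or heuristics at realistic scale.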
Edge computing has been proposed to cope with the challenging requirements of future applications, like mobile augmented reality, since it significantly shortens the distance, and hence the latency, between end users and processing servers. On the other hand, serverless computing is emerging among cloud technologies to respond to the need for highly scalable, event-driven execution of stateless tasks. In this paper, we first investigate the convergence of the two to enable very low-latency execution of short-lived stateless tasks, whose computation is offloaded from the user terminal to servers hosted by or close to edge devices. In particular, we tackle the research challenge of selecting the best executor, based on real-time measurements and simple, yet effective, prediction algorithms. Second, we propose a performance evaluation framework specifically designed for an accurate assessment of algorithms and protocols in edge computing environments, where the nodes may have very heterogeneous networking and processing capabilities. The proposed framework relies on real components running on lightweight virtualization, mixed with simulated computation, and is well suited to the analysis of several applications and network environments. Using our framework, we evaluate our proposed architecture and algorithms in small- and large-scale edge computing scenarios, showing that our solution achieves similar or better delay performance than a centralized solution, with far less network utilization.
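The executor-selection challenge described in the abstract above can be sketched with a simple prediction scheme. This is not the paper's algorithm; it is a hypothetical stand-in using an exponentially weighted moving average (EWMA) of measured delays, with illustrative executor names, alpha value, and samples.

```python
# Hypothetical executor selection: keep an EWMA of measured task delays per
# executor and dispatch each new task to the executor with the lowest
# predicted delay. Names, alpha, and delay samples are illustrative.

class ExecutorSelector:
    def __init__(self, executors, alpha=0.3):
        self.alpha = alpha
        self.est = {e: None for e in executors}  # predicted delay per executor

    def observe(self, executor, delay):
        """Fold a new delay measurement into the executor's EWMA."""
        prev = self.est[executor]
        self.est[executor] = delay if prev is None else (
            self.alpha * delay + (1 - self.alpha) * prev)

    def pick(self):
        """Choose an executor: probe unmeasured ones first, else lowest EWMA."""
        for e, v in self.est.items():
            if v is None:
                return e
        return min(self.est, key=self.est.get)

sel = ExecutorSelector(["edge-1", "edge-2"])
sel.observe("edge-1", 12.0)
sel.observe("edge-2", 30.0)
sel.observe("edge-2", 8.0)   # edge-2 EWMA: 0.3 * 8 + 0.7 * 30 = 23.4
```

The design choice here is recency weighting: a single fast measurement is not enough to flip the decision, which dampens oscillation between executors under noisy delay samples.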
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.