Abstract—Edge computing has emerged as a new paradigm that brings cloud applications closer to users for improved performance. ISPs have the opportunity to deploy private edge-clouds in their infrastructure to generate additional revenue by offering ultra-low-latency applications to local users. We envision a rapid increase in the number of such applications for "edge" networks in the near future: virtual/augmented reality (VR/AR), networked gaming, wearable cognitive assistance, autonomous driving, and IoT analytics have already been proposed for edge-clouds rather than central clouds to improve performance. This raises new challenges, as the resource allocation problem for multiple services with latency deadlines (i.e., which service to place at which node of the edge-cloud so as to satisfy the latency constraints) becomes significantly more complex. In this paper, we propose a set of practical, uncoordinated strategies for service placement in edge-clouds. Through extensive simulations using both synthetic and real-world trace data, we demonstrate that uncoordinated strategies can perform comparably to the optimal placement solution, which satisfies the maximum number of user requests.
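To make the idea of uncoordinated placement concrete, the sketch below shows one plausible purely local strategy: each edge node independently hosts the services that are most popular with its own users, within its capacity. The function name, inputs, and greedy popularity-per-unit ranking are illustrative assumptions, not the specific strategies evaluated in the paper.

```python
def place_services_uncoordinated(local_demand, service_size, capacity):
    """Greedy, purely local placement at a single edge node.

    local_demand: dict service -> observed request rate at this node
    service_size: dict service -> resource units the service occupies
    capacity:     total resource units available at this node
    Returns the set of services this node chooses to host.
    No coordination with other nodes is required.
    """
    placed, used = set(), 0
    # Rank services by local popularity per resource unit consumed.
    ranked = sorted(local_demand,
                    key=lambda s: local_demand[s] / service_size[s],
                    reverse=True)
    for svc in ranked:
        if used + service_size[svc] <= capacity:
            placed.add(svc)
            used += service_size[svc]
    return placed
```

Because every node decides from local observations alone, the scheme scales trivially, at the cost of possible duplication of popular services across neighbouring nodes.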
New and emerging applications in the entertainment (e.g., virtual/augmented reality), IoT, and automotive domains will soon demand response times an order of magnitude smaller than the current "client-to-cloud" network model can achieve. Edge and fog computing have been proposed as promising ways to serve such extremely latency-sensitive applications: computing resources are made available at the edge of the network, where applications run their virtualised instances. We assume a distributed computing environment in which In-Network Computing Providers (INCPs) deploy and lease edge resources, while Application Service Providers (AppSPs) have the opportunity to rent those resources to meet their applications' latency demands. We build an auction-based resource allocation and provisioning mechanism that produces a map of application instances in the edge computing infrastructure (hence the name Edge-MAP). Edge-MAP takes into account users' mobility (i.e., users connecting to different cell stations over time) and the limited computing resources available in edge micro-clouds when allocating resources to bidding applications. On the micro-level, Edge-MAP relies on Vickrey-English-Dutch (VED) auctions to perform robust resource allocation, while on the macro-level it fosters competition among neighbouring INCPs. In contrast to related studies in the area, Edge-MAP can scale to any number of applications, adapt rapidly to dynamic network conditions, and reallocate resources in polynomial time. Our evaluation demonstrates Edge-MAP's ability to address the inherent challenges of the provisioning problem we consider.
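The VED auctions used by Edge-MAP are more elaborate than can be shown here, but the core idea of auction-based VM allocation can be conveyed with a simpler multi-unit Vickrey (uniform-price) sketch, in which the highest bidders win and each pays the highest losing bid. This is an illustrative stand-in, not Edge-MAP's actual mechanism.

```python
def multi_unit_vickrey(bids, num_vms):
    """Allocate num_vms identical VMs to the highest bidders.

    bids: dict bidder -> bid value.
    Every winner pays the highest losing bid (uniform clearing price),
    which keeps truthful bidding a dominant strategy for unit-demand
    bidders. Returns (winners, clearing_price).
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winners = set(ranked[:num_vms])
    # Price = first excluded bid; zero if supply exceeds demand.
    price = bids[ranked[num_vms]] if len(ranked) > num_vms else 0
    return winners, price
```

The truthfulness property matters in this setting because AppSPs bid repeatedly as users move between cell stations, and a manipulable auction would reward strategic underbidding.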
An increasing number of Low Latency Applications (LLAs) in the entertainment, IoT, and automotive domains require response times that challenge traditional application provisioning from distant data centres. The fog computing paradigm extends cloud computing to the edge and middle-tier locations of the network, providing response times an order of magnitude smaller than those achievable by the current "client-to-cloud" network model. Here, we address the challenges of provisioning heavily stateful LLAs in a setting where the fog infrastructure consists of third-party computing resources, i.e., cloudlets, which come in the form of "data centres in a box". We introduce FogSpot, a charging mechanism for on-path, on-demand application provisioning. In FogSpot, cloudlets offer their resources in the form of Virtual Machines (VMs) via markets, collocated with the cloudlets, that interact in real time with forwarded users' application requests for VMs. FogSpot associates each cloudlet with a price based on application demand. The mechanism's design takes into account the characteristics of cloudlets' resources, such as their limited elasticity, and LLAs' attributes, such as their expected QoS gain and engagement duration. Lastly, FogSpot guarantees truthfulness of end users' requests while maximising either each cloudlet's revenue or its resource utilisation.
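A demand-based charging mechanism of this kind can be sketched as follows: the per-VM price rises with the cloudlet's utilisation, and a forwarded request is admitted only if its bid covers the current price. The linear pricing formula and function names here are assumptions for illustration, not FogSpot's actual pricing rule.

```python
def spot_price(base_price, total_vms, used_vms):
    """Price grows with utilisation; doubles when the cloudlet is full."""
    utilisation = used_vms / total_vms
    return base_price * (1 + utilisation)

def admit_request(bid, base_price, total_vms, used_vms):
    """Admit a VM request if capacity remains and the bid meets the price.

    Cloudlets have limited elasticity: once all VMs are in use, no bid
    can be admitted regardless of its value.
    """
    if used_vms >= total_vms:
        return False
    return bid >= spot_price(base_price, total_vms, used_vms)
```

Charging every admitted request the posted price (rather than its bid) is one simple way to avoid rewarding strategic underbidding, consistent with the truthfulness goal stated above.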
Edge computing has emerged as a new paradigm that brings cloud applications closer to users for improved performance. Unlike back-end cloud systems, which consolidate their resources in a centralized data-center location with virtually unlimited capacity, edge-clouds comprise distributed resources at various "computation spots", each with very limited capacity. In this paper, we consider Function-as-a-Service (FaaS) edge-clouds where application providers deploy latency-critical functions that process user requests with strict response-time deadlines. In this setting, we investigate the problem of resource provisioning and allocation. After formulating the optimal solution, we propose resource allocation and provisioning algorithms across the spectrum from fully centralized to fully decentralized. We evaluate the performance of these algorithms in terms of their ability to utilize CPU resources and meet request deadlines under various system parameters. Our results indicate that practical decentralized strategies, which require no coordination among computation spots, achieve performance close to that of the optimal fully-centralized strategy, which incurs coordination overheads.
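One simple decentralized strategy in this spirit can be sketched as a deadline-aware admission check: a function request is dispatched to the nearest computation spot that can still meet its response-time deadline, accounting for network round-trip time and the spot's current queue backlog. The spot list, timing model, and nearest-first ordering are illustrative assumptions, not the specific algorithms evaluated in the paper.

```python
def choose_spot(spots, exec_ms, deadline_ms):
    """Pick the first computation spot that can meet the deadline.

    spots:       list of (name, rtt_ms, backlog_ms), ordered nearest-first;
                 backlog_ms is the spot's current queued work.
    exec_ms:     CPU time the function needs.
    deadline_ms: end-to-end response-time deadline.
    Returns the chosen spot's name, or None if no spot can meet the
    deadline (e.g., fall back to the central cloud or reject).
    """
    for name, rtt_ms, backlog_ms in spots:
        if rtt_ms + backlog_ms + exec_ms <= deadline_ms:
            return name
    return None
```

Because each request is resolved using only the advertised state of nearby spots, this check needs no coordination among computation spots, mirroring the decentralized end of the spectrum described above.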