The delivery of advanced Internet of Things (IoT) services has improved impressively in recent years by placing such services at the extreme Edge of the network. There are, however, specific Quality of Service (QoS) trade-offs that must be considered, particularly in situations where workloads vary over time or where IoT devices dynamically change their geographic position. This article proposes an innovative capillary computing architecture, which benefits from mainstream Fog and Cloud computing approaches and relies on a set of new services, including an Edge/Fog/Cloud Monitoring System and a Capillary Container Orchestrator. All necessary Microservices are implemented as Docker containers, and their orchestration is performed from the Edge computing nodes up to Fog and Cloud servers in the geographic vicinity of moving IoT devices. A car equipped with a Motorhome Artificial Intelligence Communication Hardware (MACH) system as an Edge node connected to several Fog and Cloud computing servers was used for testing. Compared to using a fixed centralized Cloud provider, the service response time achieved by our proposed capillary computing architecture was almost four times faster according to the 99th percentile value, along with a significantly smaller standard deviation, which represents high QoS.
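The core idea of orchestrating containers "in the geographic vicinity" of a moving device can be illustrated by a minimal, hypothetical sketch (not the paper's actual Capillary Container Orchestrator): among the candidate Fog/Cloud nodes, pick the one with the lowest measured round-trip time. The node names and RTT figures below are illustrative assumptions.

```python
import statistics

def pick_nearest_node(rtt_samples_ms):
    """Return the candidate node with the lowest median round-trip time.

    rtt_samples_ms: dict mapping node name -> list of RTT samples in milliseconds.
    The median is used so that a single outlier sample does not dominate.
    """
    return min(rtt_samples_ms, key=lambda node: statistics.median(rtt_samples_ms[node]))

# Illustrative measurements from a moving Edge node to three hosting options.
samples = {
    "fog-a": [12.1, 14.0, 11.8],
    "fog-b": [35.2, 33.9, 36.1],
    "cloud-central": [80.4, 79.9, 82.3],
}
print(pick_nearest_node(samples))  # -> fog-a
```

As the device moves and the RTT measurements change, re-running the selection naturally migrates workload placement toward nearer infrastructure.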
Time-critical applications, such as early warning systems or live event broadcasting, present particular challenges. They impose hard Quality of Service (QoS) constraints that must be maintained despite network fluctuations and varying load peaks. Consequently, such applications must adapt elastically on demand, and so must be capable of reconfiguring themselves, along with the underlying cloud infrastructure, to satisfy their constraints. Software engineering tools and methodologies currently do not support such a paradigm. In this paper, we describe a framework that has been designed to meet these objectives as part of the EU SWITCH project. SWITCH offers a flexible co-programming architecture that provides an abstraction layer and an underlying infrastructure environment, which can help to both specify and support the life cycle of time-critical cloud-native applications. We describe the architecture, design and implementation of the SWITCH components and show how these tools are applied to three time-critical real-world use cases.
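The notion of a hard QoS constraint that must hold despite load peaks can be made concrete with a small sketch (an assumed check, not SWITCH's actual API): verify that a high quantile of observed response times stays under a fixed limit, using the nearest-rank method.

```python
def meets_slo(response_times_ms, limit_ms, quantile=0.99):
    """True if the given quantile of response times stays under limit_ms.

    Uses the nearest-rank method on the sorted samples, so a single
    tail spike can already breach the constraint.
    """
    ordered = sorted(response_times_ms)
    idx = min(len(ordered) - 1, int(quantile * len(ordered)))
    return ordered[idx] <= limit_ms

# Nine healthy samples plus one load spike: the 99th percentile catches the spike.
times = [40, 42, 45, 43, 41, 44, 46, 48, 47, 120]
print(meets_slo(times, limit_ms=100))  # -> False
```

A framework such as the one described here would react to a failed check by triggering reconfiguration, rather than merely reporting it.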
The landscape of online services such as Internet of Things (IoT) applications has evolved impressively over recent years: they are becoming more and more time-sensitive, are maintained at decentralized locations, and are easily affected by changing workload intensity at runtime. As a consequence, an up-and-coming trend has emerged, shifting previously centralized computation toward distributed edge computing in order to address these new concerns. The goal of the present paper is therefore twofold: first, to analyze modern types of edge computing applications and their auto-scaling challenges in offering desirable performance under dynamically changing workloads; second, to present a new taxonomy of auto-scaling applications. This taxonomy thoroughly considers the edge computing paradigm and its complementary technologies, such as container-based virtualization. CCS Concepts: • Computer systems organization → Cloud computing • Computing methodologies → Distributed computing methodologies.
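The auto-scaling challenge surveyed above can be grounded with a minimal sketch of the most common baseline: threshold-based horizontal scaling. The thresholds and replica bounds below are illustrative assumptions, not values from the taxonomy.

```python
def scale_decision(cpu_utilization, replicas,
                   high=0.8, low=0.3, min_replicas=1, max_replicas=10):
    """Threshold-based horizontal auto-scaling rule.

    Add a replica when average CPU utilization exceeds `high`, remove one
    when it falls below `low`, and otherwise keep the current count,
    always respecting the configured replica bounds.
    """
    if cpu_utilization > high and replicas < max_replicas:
        return replicas + 1
    if cpu_utilization < low and replicas > min_replicas:
        return replicas - 1
    return replicas

print(scale_decision(0.9, replicas=3))  # -> 4 (scale out under load)
print(scale_decision(0.2, replicas=3))  # -> 2 (scale in when idle)
```

Rules like this react only after utilization has already drifted, which is precisely why dynamically changing edge workloads motivate the more refined auto-scaling approaches the taxonomy classifies.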
Many new Internet of Things (IoT) applications, such as disaster early warning systems, video streaming, automated driving and similar, are increasingly being built using advanced component-based software engineering approaches. Software components can include various executable images, such as container or Virtual Machine images, scripts and others. Achieving adequate Quality of Service (QoS) for such applications is still a challenging issue due to runtime variations in running conditions intrinsic to cloud, edge and fog environments. These types of systems should therefore be continuously monitored and hence adapted at various levels, including the infrastructure, container and application levels. In this work, we present an adaptation method using a new Incremental Learning approach based on Multi-Level Monitoring data. The method dynamically generates a set of rules representing a performance prediction model that allows us to find potential performance bottlenecks and then propose suitable application adaptation actions. Adaptation possibilities in this work include (1) live migration of application components (such as containers) from the current infrastructure to another one with different characteristics, such as CPU, memory, disk or bandwidth capacity, and (2) dynamic horizontal or vertical scaling of container-based application instances to offer better-fitted resource capacities.
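The rule-driven adaptation loop described above can be sketched as follows. The rule predicates, metric names, thresholds and action labels are illustrative assumptions, not the paper's learned model: each rule maps multi-level monitoring data to a proposed adaptation action (live migration or scaling).

```python
# Ordered rule set: the first matching predicate wins, so more specific
# bottleneck patterns (e.g. disk I/O pressure) are checked before generic ones.
RULES = [
    (lambda m: m["disk_io_wait"] > 0.5, "migrate:faster-disk-host"),
    (lambda m: m["cpu"] > 0.9 and m["mem"] > 0.9, "scale-vertical:bigger-instance"),
    (lambda m: m["cpu"] > 0.8, "scale-horizontal:+1-replica"),
]

def propose_adaptation(metrics):
    """Return the first matching adaptation action, or None if no bottleneck."""
    for predicate, action in RULES:
        if predicate(metrics):
            return action
    return None

print(propose_adaptation({"cpu": 0.95, "mem": 0.95, "disk_io_wait": 0.1}))
# -> scale-vertical:bigger-instance
```

In the incremental-learning setting described in the abstract, such a rule set would not be hand-written but regenerated at runtime as new monitoring data arrives.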