Sensor web systems, cyber-physical systems, and the so-called Internet of Things are concepts that share a set of common characteristics. Such systems are highly dynamic and heterogeneous, and issues such as interoperability, energy consumption, and resource management must be properly handled to ensure that applications operate within the required quality-of-service level. In this context, base technologies such as component-based software engineering and Service-Oriented Architecture can play a central role. Model-driven development and middleware technologies also aid in the design, development, and operation of such systems. This paper presents a middleware solution that provides runtime support for the complete lifecycle management of a system consisting of several concurrent applications running over a set of distributed infrastructure nodes. The middleware builds on top of a general-purpose component model and is driven by a quality-of-service-aware self-configuration algorithm that provides stateful reconfiguration capabilities in the face of both internal (application-triggered) and external (application-unaware) reconfiguration events. The platform has been deployed over an automated warehouse supervision system that serves as a case study.
Elevator software requires maintenance over many years to incorporate new functionality, bug fixes, and legislation changes. Test oracles are necessary to validate this software automatically. A typical approach in industry is to use regression oracles, which execute the test input both in the software version under test and in a previous software version. This practice has several issues when simulation is used to test elevator dispatching algorithms at the system level, including long test execution times and the impossibility of reusing test oracles at different test levels and in operation. To deal with these issues, we propose DARIO, a test oracle that relies on regression learning algorithms to predict the Quality of Service of the system. The regression learning algorithms of this oracle are trained using data from previously tested versions. An empirical evaluation with an industrial case study demonstrates the feasibility of using our approach in practice. A total of five regression learning algorithms were validated, showing that the regression tree algorithm performed best; with this algorithm, DARIO's accuracy when predicting verdicts ranged from 79% to 87%.
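The abstract above describes an oracle that learns expected Quality of Service from previously tested versions and flags a new version when its observed QoS deviates from the prediction. A minimal sketch of that idea, using scikit-learn's regression tree on synthetic data, is shown below; the feature names, the QoS metric (average waiting time), the tolerance threshold, and the training data are all illustrative assumptions, not details from DARIO itself.

```python
# Hedged sketch of a regression-learning test oracle: a regression tree is
# trained on (test input -> QoS) pairs from previously tested versions, and
# the verdict for a new version compares observed QoS against the prediction.
# Features, QoS metric, thresholds, and data are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic "historical" data: features might be e.g. passenger count and
# number of requested floors; the target is average waiting time in seconds.
X_train = rng.uniform([1, 2], [50, 30], size=(500, 2))
y_train = 0.8 * X_train[:, 0] + 1.5 * X_train[:, 1] + rng.normal(0, 1, 500)

oracle = DecisionTreeRegressor(max_depth=8).fit(X_train, y_train)

def verdict(test_input, observed_qos, tolerance=10.0):
    """Pass if the observed QoS is within `tolerance` of the predicted QoS."""
    predicted = oracle.predict([test_input])[0]
    return "pass" if abs(observed_qos - predicted) <= tolerance else "fail"
```

In this sketch the tolerance is a fixed constant; a production oracle would more plausibly derive it from the prediction error observed during training.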
Software systems embedded in autonomous Cyber-Physical Systems (CPSs) usually have a long life-cycle, spanning both development and maintenance. This software evolves over its life-cycle to incorporate new requirements and bug fixes and to deal with hardware obsolescence. The current process for developing and maintaining this software is very fragmented, which makes developing new software versions and deploying them in the CPSs extremely expensive. In other domains, such as web engineering, the development and operation phases are tightly connected, making it possible to easily perform software updates and to obtain operational data that engineers can analyze at development time. However, although the rise of new communication technologies (e.g., 5G) provides an opportunity to adopt Design-Operation Continuum Engineering methods in the context of CPSs, many complex issues still need to be addressed, such as those related to hardware-software co-design. Therefore, Design-Operation Continuum Engineering for CPSs requires substantial changes with respect to the current fragmented software development process. In this paper, we build a taxonomy for Design-Operation Continuum Engineering of CPSs based on case studies from two industrial domains involving CPSs (elevation and railway). This taxonomy is then used to elicit requirements from these two case studies in order to present a blueprint for adopting Design-Operation Continuum Engineering in any organization developing CPSs.
Sensory environments for healthcare are commonplace nowadays. A patient monitoring system in such an environment deals with sensor data capture, transmission, and processing in order to provide on-the-spot support for monitoring vulnerable and critical patients. A fault in such a system can be hazardous to the health of the patient. Therefore, such a system must be dependable and ensure reliability, fault tolerance, safety, and other critical aspects before it can be deployed in a real scenario. In addition, the infrastructure resources must be managed efficiently, and any system reconfiguration must be performed reliably. This paper addresses some of these issues and proposes a component platform with specific support for several QoS aspects, namely fault tolerance, safe inter-component communication, and resource management. The platform adopts the Service Component Architecture (SCA) model and defines a Data Distribution Service (DDS) binding, which provides the fault tolerance and the required safety-ensuring techniques and measures, as defined in the relevant IEC standard.
The fast growth in the number of connected devices with computing capabilities in recent years has enabled the emergence of a new computing layer at the Edge. Although resource-constrained compared with cloud servers, Edge devices offer lower latencies than those achievable with Cloud computing. The combination of the Cloud and Edge computing paradigms can provide a suitable infrastructure for complex applications' quality-of-service requirements that cannot easily be met by either paradigm alone. These requirements can differ widely across applications, from achieving time sensitivity or assuring data privacy to storing and processing large amounts of data. Therefore, orchestrating these applications in the Cloud–Edge continuum raises new challenges that must be solved in order to take full advantage of this layered infrastructure. This paper proposes an architecture that enables the dynamic orchestration of applications in the Cloud–Edge continuum. It focuses on the applications' quality of service by providing the scheduler with input that is commonly used by modern scheduling algorithms. The architecture uses a distributed scheduling approach that can be customized on a per-application basis, ensuring that it scales properly even in setups with a high number of nodes and complex scheduling algorithms. The architecture has been implemented on top of Kubernetes and evaluated to assess its viability to enable more complex scheduling algorithms that take into account the quality of service of applications.
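The per-application scheduling customization described above can be illustrated with a node-scoring function whose weights differ per application: a latency-sensitive application favors nearby Edge nodes, while a compute-hungry one favors the Cloud. The sketch below is a minimal, self-contained illustration of this idea; the node attributes, weights, and names are assumptions for the example and do not reproduce the paper's Kubernetes-based implementation.

```python
# Hedged sketch of per-application node scoring for Cloud-Edge scheduling.
# Node attributes, weights, and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float        # latency from this node to the app's clients
    free_cpu_millicores: int # spare CPU capacity on this node

def score(node: Node, latency_weight: float, cpu_weight: float) -> float:
    """Higher is better: reward spare CPU, penalize latency."""
    return cpu_weight * node.free_cpu_millicores - latency_weight * node.latency_ms

def schedule(app_weights: tuple[float, float], nodes: list[Node]) -> str:
    """Pick the best node for an application using its own weight profile."""
    lw, cw = app_weights
    return max(nodes, key=lambda n: score(n, lw, cw)).name

nodes = [
    Node("edge-1", latency_ms=5, free_cpu_millicores=500),
    Node("cloud-1", latency_ms=60, free_cpu_millicores=8000),
]
```

With these example nodes, a latency-weighted profile such as `(100.0, 0.1)` selects `edge-1`, while a CPU-weighted profile such as `(0.0, 1.0)` selects `cloud-1`; in Kubernetes terms this scoring step would correspond to the scheduler's node-scoring phase, customized per application.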