Models can help software engineers reason about design-time decisions before implementing a system. This paper focuses on models that deal with non-functional properties, such as reliability and performance. To build such models, one must rely on numerical estimates of various parameters, provided by domain experts or extracted from other, similar systems. Unfortunately, such estimates are seldom accurate. In addition, in dynamic environments the values of parameters may change over time. We discuss an approach that addresses these issues by keeping models alive at run time and feeding a Bayesian estimator with data collected from the running system, which produces updated parameters. The updated model provides an increasingly accurate representation of the system. By analyzing the updated model at run time, it is possible to detect or predict whether a desired property is, or will be, violated by the running implementation. Requirement violations may trigger automatic reconfigurations or recovery actions aimed at guaranteeing the desired goals. We illustrate a working framework supporting our methodology and apply it to an example in which an orchestrated Web service composition is modeled as a discrete-time Markov chain. Numerical simulations show the effectiveness of the approach.
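As a concrete illustration of the idea, the following minimal Python sketch shows one standard way to realize such a Bayesian estimator: a Dirichlet-multinomial update that treats design-time estimates as priors and refines them with monitored transitions. The class name, state space, and prior values are illustrative assumptions, not the paper's actual framework.

```python
# Sketch (not the authors' implementation): a Dirichlet-multinomial
# Bayesian estimator that refines the transition probabilities of a
# discrete-time Markov chain from run-time observations.
import numpy as np

class DtmcBayesEstimator:
    def __init__(self, prior_counts):
        # prior_counts[i][j]: Dirichlet prior for transitions i -> j,
        # encoding the designer's initial estimates (hypothetical values).
        self.counts = np.array(prior_counts, dtype=float)

    def observe(self, src, dst):
        # Each monitored transition of the running system adds one count.
        self.counts[src, dst] += 1.0

    def transition_matrix(self):
        # Posterior mean of each row of the DTMC transition matrix.
        return self.counts / self.counts.sum(axis=1, keepdims=True)

# Example: 3-state chain with a weak prior reflecting design-time estimates.
est = DtmcBayesEstimator([[1, 8, 1], [1, 1, 8], [1, 1, 1]])
for s, d in [(0, 1), (0, 1), (0, 2), (1, 2)]:  # monitored executions
    est.observe(s, d)
print(est.transition_matrix())
```

As more observations accumulate, the posterior means drift from the prior toward the empirically observed frequencies, which is exactly the "increasingly accurate representation" the abstract describes.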
Unpredictable changes continuously affect software systems and may have a severe impact on their quality of service, potentially jeopardizing the system's ability to meet the desired requirements. Changes may occur in critical components of the system, clients' operational profiles, requirements, or deployment environments. The adoption of software models and model-checking techniques at run time may support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model-checking techniques and tools cannot simply be applied as-is at run time, since they hardly meet the constraints imposed by on-the-fly analysis in terms of execution time and memory usage. This paper addresses precisely this issue, focusing on reliability models given in terms of Discrete-Time Markov Chains and on probabilistic model checking. It develops a mathematical framework for run-time probabilistic model checking that, given a reliability model and a set of requirements, statically generates a set of expressions that can be efficiently evaluated at run time to verify system requirements. An experimental comparison of our approach with existing probabilistic model checkers shows its practical applicability to run-time verification.
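The following sketch illustrates the design-time/run-time split on a toy two-parameter chain (an assumed example, not the paper's framework or case study): the reachability equations are solved symbolically once, and the resulting closed-form expression is evaluated cheaply whenever freshly monitored parameter values arrive.

```python
# Sketch of the static-generation idea on an assumed 2-state example:
# reachability probabilities of a parametric DTMC are solved symbolically
# at design time, then evaluated at run time in constant time.
import sympy as sp

p, q = sp.symbols('p q', positive=True)
# x_i = probability of eventually reaching the "success" state from state i.
x0, x1 = sp.symbols('x0 x1')
solution = sp.solve(
    [sp.Eq(x0, p * 1 + (1 - p) * x1),   # s0: succeed with p, else go to s1
     sp.Eq(x1, q * 1 + (1 - q) * 0)],   # s1: succeed with q, else fail
    [x0, x1])
reliability = sp.simplify(solution[x0])  # closed form: p + q - p*q

# Design time: compile the expression once.
check = sp.lambdify((p, q), reliability)
# Run time: plug in freshly monitored probabilities, compare to requirement.
assert check(0.9, 0.8) >= 0.95  # requirement R >= 0.95 (illustrative)
```

The expensive symbolic step happens once; on-the-fly verification reduces to evaluating an arithmetic expression, which is what makes the approach viable under run-time constraints.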
Modern software systems are increasingly required to adapt to changes in the environment in which they are embedded. Moreover, adaptation often needs to be performed automatically, through self-managed reactions enacted by the application at run time. Off-line, human-driven changes should be requested only if self-adaptation cannot be achieved successfully. To support this kind of autonomic behavior, software systems must be empowered with a rich run-time support layer that can monitor the relevant phenomena of the surrounding environment to detect changes, analyze the collected data to understand the possible consequences of changes, reason about the ability of the application to continue providing the required service, and finally react if an adaptation is needed. This paper focuses on non-functional requirements, which constitute an essential component of the quality that modern software systems need to exhibit. Although the proposed approach is quite general, it is mainly exemplified in the context of service-oriented systems, where the quality of service (QoS) is regulated by contractual obligations between the application provider and its clients. We analyze the case where an application, exported as a service, is built as a composition of other services. Non-functional requirements, such as reliability and performance, heavily depend on the environment in which the application is embedded; thus, changes in the environment may ultimately compromise QoS satisfaction. We illustrate an approach and supporting tools that enable a holistic view of the design and run-time management of adaptive software systems. The approach is based on formal (probabilistic) models that are used at design time to reason about the dependability of the application in quantitative terms. The models continue to exist at run time to enable continuous verification and the detection of changes that require adaptation.
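The monitor-analyze-reason-react cycle described above can be pictured as a classic autonomic control loop. The sketch below is a deliberately minimal rendering of that loop; the function names, escalation comment, and polling period are illustrative assumptions, not the paper's tooling.

```python
# Minimal sketch of the monitor/analyze/plan/execute loop described above
# (all names and the polling period are illustrative).
import time

def autonomic_loop(monitor, analyze, plan, execute, period_s=5.0):
    """Perpetual loop: observe the environment, check the model against
    the requirements, and adapt when a violation is detected or predicted."""
    while True:
        data = monitor()              # collect run-time observations
        violation = analyze(data)     # update model, verify requirements
        if violation:
            execute(plan(violation))  # self-adaptation; off-line, human-driven
                                      # change is requested only if this fails
        time.sleep(period_s)
```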
Formal verification is used to establish the compliance of software and hardware systems with important classes of requirements. System compliance with functional requirements is frequently analysed using techniques such as model checking and theorem proving. In addition, a technique called quantitative verification supports the analysis of the reliability, performance, and other quality-of-service (QoS) properties of systems that exhibit stochastic behaviour. In this paper, we extend the applicability of quantitative verification to the common scenario in which the probabilities of transition between some or all states of the Markov models analysed by the technique are unknown, but observations of these transitions are available. To this end, we introduce a theoretical framework and a tool chain that establish confidence intervals for the QoS properties of a software system modelled as a Markov chain with uncertain transition probabilities. We use two case studies from different application domains to assess the effectiveness of the new quantitative verification technique. Our experiments show that disregarding this source of uncertainty may significantly affect the accuracy of the verification results, leading to wrong decisions and low-quality software systems.
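One standard way to obtain such intervals is sketched below: a Wilson score interval for each observed transition probability, propagated through a QoS expression that is monotone in its parameters so that interval endpoints bound the property. The two-step reliability formula and the observation counts are illustrative assumptions, not taken from the paper's case studies or tool chain.

```python
# Sketch (illustrative): Wilson score confidence intervals for observed
# transition probabilities, propagated through a monotone QoS expression.
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    """95% confidence interval for a Bernoulli transition probability."""
    phat = successes / trials
    denom = 1 + z**2 / trials
    center = (phat + z**2 / (2 * trials)) / denom
    half = z * sqrt(phat * (1 - phat) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# QoS property of a toy 2-step chain: R(p, q) = p + (1 - p) * q,
# monotonically increasing in both p and q, so endpoints give bounds.
reliability = lambda p, q: p + (1 - p) * q

p_lo, p_hi = wilson_interval(successes=93, trials=100)  # observed data
q_lo, q_hi = wilson_interval(successes=46, trials=50)
print('R in [%.3f, %.3f]' % (reliability(p_lo, q_lo), reliability(p_hi, q_hi)))
# Verification against "R >= 0.95" should pass only if the whole interval does;
# using the point estimate alone would hide this source of uncertainty.
```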
Several application domains involve detecting complex situations and reacting to them. This calls for a Complex Event Processing (CEP) engine specifically designed to process low-level event notifications in a timely manner and to identify higher-level composite events according to a set of user-defined rules. Several CEP engines and accompanying rule languages have been proposed. Their primary focus on performance has often led to an oversimplified model of the external world in which events happen, one that cannot satisfy the demands of real-life applications. In particular, they are unable to consider, model, and propagate the uncertainty that exists in most scenarios. Moving from this premise, we present CEP2U (Complex Event Processing under Uncertainty), a novel model for dealing with uncertainty in CEP. We apply CEP2U to an existing CEP language, TESLA, showing how it seamlessly integrates with modern rule languages by supporting all the operators they commonly offer. Moreover, we implement CEP2U on top of the T-Rex CEP engine and perform a detailed study of its performance, measuring a limited overhead that demonstrates its practical applicability. The discussion presented in this paper, together with the experiments we conducted, shows how CEP2U provides a valuable combination of expressiveness, efficiency, and ease of use.
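To make the uncertainty-propagation idea concrete, the sketch below shows a toy probabilistic rule in the spirit of such models. The event types, the independence assumption, and the firing threshold are all illustrative; CEP2U and TESLA define richer semantics than this.

```python
# Sketch of uncertainty propagation in CEP (illustrative only): each
# primitive event carries a probability, and a composite event's
# probability is derived from its constituents, here assuming independence.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    payload: dict
    prob: float  # probability that the event actually occurred

def fire_rule(window, threshold=0.8):
    """Composite 'Fire' event: Smoke and HighTemp observed in the window."""
    smoke = [e for e in window if e.kind == 'Smoke']
    heat = [e for e in window if e.kind == 'HighTemp']
    for s in smoke:
        for h in heat:
            p = s.prob * h.prob  # independence assumption
            if p >= threshold:
                yield Event('Fire', {'area': s.payload.get('area')}, p)

window = [Event('Smoke', {'area': 'A'}, 0.95),
          Event('HighTemp', {'area': 'A'}, 0.9)]
print(list(fire_rule(window)))  # composite Fire event with prob 0.855
```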
Modern software-intensive systems often interact with an environment whose behavior changes over time, frequently unpredictably. Such changes may jeopardize the systems' ability to meet the desired requirements. It is therefore desirable to design software so that it can self-adapt to changes with limited, or even no, human intervention. Self-adaptation can be achieved by bringing software models and model checking to run time, to support perpetual automatic reasoning about changes. Once a change is detected, the system itself can predict whether requirement violations may occur and enable appropriate counter-actions. However, existing mainstream model-checking techniques and tools were not conceived for run-time usage; hence they hardly meet the constraints imposed by on-the-fly analysis in terms of execution time and memory usage. This paper addresses this issue, focusing on the perpetual satisfaction of non-functional requirements such as reliability and energy consumption. Its main contribution is a mathematical framework for run-time-efficient probabilistic model checking. Our approach statically generates a set of verification conditions that can be efficiently evaluated at run time as soon as changes occur. The proposed approach also supports sensitivity analysis, which enables reasoning about the effects of changes and can drive effective adaptation strategies.
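A simple way to picture the sensitivity-analysis step is to differentiate the pre-computed verification expression with respect to each parameter; the sketch below does this for an assumed two-parameter reliability formula, not the paper's actual verification conditions.

```python
# Sketch of the sensitivity-analysis idea (assumed example): partial
# derivatives of a pre-computed verification expression reveal which
# parameter change impacts the requirement most.
import sympy as sp

p, q = sp.symbols('p q', positive=True)
reliability = p + (1 - p) * q       # closed form from design-time analysis

sens = {s: sp.diff(reliability, s) for s in (p, q)}
values = {p: 0.9, q: 0.8}           # freshly monitored estimates
for s, d in sens.items():
    print(s, '->', float(d.subs(values)))
# dR/dp = 1 - q = 0.2, dR/dq = 1 - p = 0.1: the property is twice as
# sensitive to p, so an adaptation strategy should target that service.
```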