2017
DOI: 10.1007/978-3-319-66197-1_24

Towards Inverse Uncertainty Quantification in Software Development (Short Paper)

Abstract: With the purpose of delivering more robust systems, this paper revisits the problem of Inverse Uncertainty Quantification, which concerns the discrepancy between data measured at runtime (while the system executes) and the formal specification (i.e., a mathematical model) of the system under consideration, as well as the calibration of the values of unknown parameters in the model. We foster an approach to quantify and mitigate system uncertainty during the development cycle by combining Bayesian reasoning and online …
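
To make the calibration idea concrete, here is a minimal sketch (our illustration, not code from the paper) of Bayesian calibration of a single unknown model parameter from runtime observations; the parameter `p`, the Beta prior, and the observation trace are all assumed for the example.

```python
# Minimal sketch (not the authors' implementation): inverse uncertainty
# quantification via Bayesian calibration of one unknown model parameter.
# The uncertain parameter is the success probability `p` of some operation,
# modeled with a conjugate Beta prior that is updated from runtime
# observations (1 = success, 0 = failure).

alpha, beta = 1.0, 1.0          # uninformative Beta(1, 1) prior over p

def observe(outcome: int) -> None:
    """Fold one runtime measurement into the posterior."""
    global alpha, beta
    alpha += outcome            # count of observed successes
    beta += 1 - outcome         # count of observed failures

def posterior_mean() -> float:
    """Calibrated point estimate of the unknown parameter."""
    return alpha / (alpha + beta)

for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:   # hypothetical runtime trace
    observe(outcome)

print(f"calibrated p = {posterior_mean():.3f}")
```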

Cited by 11 publications (11 citation statements), published 2018–2023 · References 15 publications

“…The proposed strategy maximizes the probability of reaching the uncertain components of the system under test (SUT) during test case generation. The online MBT activity uses an Inverse Uncertainty Quantification (IUQ) approach [5], [6] to assess the discrepancy between data measured at run-time (while the system executes) and a Markov Decision Process (MDP) model describing the expected behavior (including uncertainty) of the SUT. To this purpose, tests feed a Bayesian inference calibrator that continuously learns from test data to tune the uncertain components of the system model.…”
Section: Introduction
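
The "Bayesian inference calibrator" quoted above can be pictured as follows; this sketch is our illustration under assumed structures (a Dirichlet prior per uncertain (state, action) transition, hypothetical state and action names), not code from the cited papers.

```python
# Sketch of a Bayesian calibrator for uncertain MDP transitions: each
# uncertain (state, action) pair gets Dirichlet pseudo-counts, and every
# executed test step updates the matching counts. As a simplification, the
# uniform prior mass is only materialized for next states seen so far.

from collections import defaultdict

# Dirichlet pseudo-counts per (state, action): {next_state: count}.
counts: dict = defaultdict(lambda: defaultdict(lambda: 1.0))

def record_transition(state, action, next_state) -> None:
    """Update the posterior with one observed test step."""
    counts[(state, action)][next_state] += 1.0

def calibrated_distribution(state, action) -> dict:
    """Posterior mean of the transition distribution for (state, action)."""
    row = counts[(state, action)]
    total = sum(row.values())
    return {s: c / total for s, c in row.items()}

# Hypothetical test trace exercising an uncertain transition of the SUT model.
for step in [("s0", "send", "s1"), ("s0", "send", "s1"), ("s0", "send", "err")]:
    record_transition(*step)

print(calibrated_distribution("s0", "send"))
```
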
“…They aim at achieving better quality of service and ensuring the required functionality in a fail-soft manner, even under hostile or error conditions, realizing the so-called self-* properties [4], such as self-optimization (when operating conditions change), self-reconfiguration (when a goal changes), self-healing (the system perceives that it is not operating correctly and autonomously makes the adjustments needed to restore normal operation), and so forth. Another main concern in the construction of self-adaptive software systems is the uncertainty [5,6,7] underlying the knowledge used for decision making. In fact, due to unpredictable changes, for example in the environment, a self-adaptive system may have no control over new, unexpected processes that influence the environment and the system's own organization, which (especially in a distributed setting) may fluctuate dynamically.…”
Section: Introduction
“…Our major objective is to show the effectiveness of METRIC in statistical hypothesis testing (rather than functional testing) of uncertain software systems by measuring both the accuracy and the effort of the inference process. We present a comparative evaluation between our approach and traditional pseudorandom model-based test case generation algorithms, thus showing the advantages of METRIC. This approach was introduced by Camilli et al. [5,9,10] through a preliminary sketch of a testing method under uncertainty, supported by a prototypal software implementation. Here, we provide an extended presentation of the approach as part of a comprehensive methodology.…”
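
For intuition, the contrast drawn above between pseudorandom generation and uncertainty-aware generation can be sketched as two selection rules over a toy model; the model, the uncertainty scores, and all names here are invented for illustration and are not METRIC's actual algorithms.

```python
# Baseline: pseudorandom model-based test generation, i.e., a random walk
# over a toy MDP-like model. The alternative rule biases action choice
# toward transitions whose posterior is still uncertain.

import random

# Toy model: state -> {action: next_state} (deterministic for brevity).
MODEL = {"s0": {"a": "s1", "b": "s2"}, "s1": {"a": "s0"}, "s2": {"b": "s0"}}

# Hypothetical per-(state, action) uncertainty scores (e.g., posterior variance).
UNCERTAINTY = {("s0", "a"): 0.02, ("s0", "b"): 0.30,
               ("s1", "a"): 0.01, ("s2", "b"): 0.25}

def pseudorandom_test(length: int, seed: int = 0) -> list:
    rng, state, trace = random.Random(seed), "s0", []
    for _ in range(length):
        action = rng.choice(sorted(MODEL[state]))   # uniform pseudorandom choice
        trace.append((state, action))
        state = MODEL[state][action]
    return trace

def uncertainty_driven_test(length: int) -> list:
    state, trace = "s0", []
    for _ in range(length):
        # Greedily steer toward the most uncertain outgoing transition.
        action = max(MODEL[state], key=lambda a: UNCERTAINTY[(state, a)])
        trace.append((state, action))
        state = MODEL[state][action]
    return trace

print(pseudorandom_test(4))
print(uncertainty_driven_test(4))
```
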
“…The specific methods and techniques used to achieve our goal of uncertainty quantification compose a methodology called METRIC‡. The methodology follows an inverse uncertainty quantification (IUQ) approach [4,5], meaning that it reasons mainly at runtime (on the discrepancy between evidence and uncertain design-time assumptions) and then propagates the posterior knowledge back to calibrate the initially uncertain design-time model. To this purpose, METRIC combines Bayesian inference [6] with (on-the-fly) model-based test case generation based on infinite-horizon optimization algorithms [7].…”
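
The IUQ loop described in this statement (prior belief, runtime evidence, posterior, back-propagated calibration) can be condensed as follows; the simulated SUT, the Beta prior, and all numbers are assumptions for illustration, not the authors' implementation.

```python
# Condensed sketch of the IUQ loop: reason at runtime on the discrepancy
# between evidence and a design-time assumption, then propagate the
# posterior back into the model as a calibrated parameter value.

import random

DESIGN_TIME_P = 0.9          # uncertain design-time assumption about p
TRUE_P = 0.7                 # unknown runtime behavior (simulated SUT)

alpha, beta = 9.0, 1.0       # Beta prior encoding the design-time belief

rng = random.Random(42)
for _ in range(200):                                     # on-the-fly testing loop
    outcome = 1 if rng.random() < TRUE_P else 0          # execute one test step
    alpha, beta = alpha + outcome, beta + (1 - outcome)  # Bayesian update

calibrated_p = alpha / (alpha + beta)        # posterior mean
discrepancy = abs(calibrated_p - DESIGN_TIME_P)
print(f"design-time p = {DESIGN_TIME_P}, calibrated p = {calibrated_p:.3f}, "
      f"discrepancy = {discrepancy:.3f}")
```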