Managing Trade-Offs in Adaptable Software Architectures 2017
DOI: 10.1016/b978-0-12-802855-1.00013-7

An Overview on Quality Evaluation of Self-Adaptive Systems

Cited by 18 publications (11 citation statements)
References 10 publications
“…Raibulet et al. have recently presented an overview of existing approaches for the evaluation of self-adaptive systems [13]. According to their analysis of the published works, when quality attributes (such as performance and reliability) are considered, these attributes are evaluated at runtime in 95% of the cases and at design time in 5%. The authors also pointed out that none of these approaches associates the evaluation with a tool, which may hamper the evaluation process.…”
Section: Motivation and Challenges
confidence: 99%
“…For example, an SAS can be evaluated based on its failure density, cost, or time to adapt. Raibulet et al. proposed a taxonomy for evaluating SAS [108] that covers evaluation scope, evaluation time, evaluation mechanism, evaluation perspective, and evaluation type. The study by Chen et al. [18] shows that using a genetic algorithm for predicting Quality of Service (QoS) yielded higher-accuracy results.…”
Section: Related Work
confidence: 99%
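The excerpt names the five dimensions of the Raibulet et al. taxonomy without enumerating their values. Below is a minimal sketch of how a classification record along those dimensions might be encoded; only the five dimension names come from the excerpt, the design-time/runtime split is supported by the first citation statement above, and every other member value is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum

class EvaluationTime(Enum):
    DESIGN_TIME = "design time"  # split mentioned in the Raibulet et al. overview
    RUNTIME = "runtime"

class EvaluationScope(Enum):
    WHOLE_SYSTEM = "whole system"          # assumed placeholder value
    ADAPTATION_LOGIC = "adaptation logic"  # assumed placeholder value

@dataclass
class EvaluationApproach:
    """One evaluated approach, classified along the five taxonomy dimensions."""
    name: str
    scope: EvaluationScope
    time: EvaluationTime
    mechanism: str    # e.g. metrics or simulation (assumed example values)
    perspective: str  # e.g. developer vs. end-user view (assumed example values)
    type: str         # e.g. qualitative vs. quantitative (assumed example values)

# Example classification record; all field values are illustrative.
example = EvaluationApproach(
    name="QoS-prediction-based evaluation",
    scope=EvaluationScope.WHOLE_SYSTEM,
    time=EvaluationTime.RUNTIME,
    mechanism="metrics",
    perspective="developer",
    type="quantitative",
)
```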
“…While playing back a recording of a real failure trace as input for a simulated system improves the credibility of simulation-based experiments, the output of a single recorded real failure trace lacks generality. Employing a single recorded real failure trace as input for SHS evaluation supports only a single experiment run and does not support any claim about qualitative evaluation metrics such as resilience, reliability, and robustness [3]. While such a trace contains realistic characteristics of failure occurrences, it captures only one possible future for the simulated SHS and fails to cover a large, representative spectrum of the input space.…”
Section: Recorded Real Failure Trace
confidence: 99%
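The excerpt contrasts replaying one recorded trace, which fixes a single future, with covering a representative spectrum of the failure input space. The sketch below illustrates that contrast under stated assumptions: the function names, component names, and the Poisson-style failure model are hypothetical and are not taken from [3].

```python
import random

def replay_trace(trace):
    """A single recorded real failure trace: one fixed sequence of failures,
    so every simulation run explores exactly the same future."""
    for timestamp, component in trace:
        yield timestamp, component

def sample_traces(components, horizon, failure_rate, n_runs, seed=0):
    """Hypothetical alternative: sample many synthetic traces so repeated
    simulation runs cover a broader spectrum of the failure input space.
    Assumes independent, exponentially distributed inter-failure times."""
    rng = random.Random(seed)
    for _ in range(n_runs):
        trace, t = [], 0.0
        while True:
            t += rng.expovariate(failure_rate)
            if t > horizon:
                break
            trace.append((t, rng.choice(components)))
        yield trace

# One recorded trace yields one possible future; sampled traces yield many.
recorded = [(2.5, "sensorA"), (7.1, "actuatorB")]
runs = list(sample_traces(["sensorA", "actuatorB"], horizon=10.0,
                          failure_rate=0.3, n_runs=100))
```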
“…The system is also subject to change in unforeseen ways as a result of adaptation [2]. On the other hand, these systems are designed to operate in highly dynamic environments and therefore require continuous monitoring of their behavior and execution environment [3].…”
Section: Introduction
confidence: 99%
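As an illustration of the continuous-monitoring requirement mentioned above, here is a minimal sketch of a periodic monitoring loop; the function names, metric keys, and threshold scheme are assumptions for illustration, not the design of the cited works.

```python
import time

def monitor_loop(read_metrics, thresholds, adapt, period_s=1.0, max_cycles=None):
    """Illustrative continuous-monitoring loop (an assumption, not the cited
    works' design): periodically observe behavior/environment metrics and
    trigger adaptation whenever an observed value exceeds its threshold."""
    cycle = 0
    while max_cycles is None or cycle < max_cycles:
        metrics = read_metrics()  # e.g. {"latency_ms": 120.0, "cpu": 0.85}
        violations = {name: value for name, value in metrics.items()
                      if value > thresholds.get(name, float("inf"))}
        if violations:
            adapt(violations)     # hand the violations to the adaptation logic
        time.sleep(period_s)
        cycle += 1

# Usage with stubbed sensors and adaptation logic:
monitor_loop(read_metrics=lambda: {"latency_ms": 150.0},
             thresholds={"latency_ms": 100.0},
             adapt=lambda v: print("adapting due to", v),
             period_s=0.01, max_cycles=3)
```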