“…To explicitly consider latency, similarly to , the latency of each repair rule needs to be estimated and then added to the execution time of the rule.…”
Footnote 4: Examples of such work are: Camara and de Lemos [2012]; Ehlers et al [2011]; Haupt [2012]; Neti and Mueller [2007]; Qun et al [2005]; Salehie and Tahvildari [2006]; Schmitt et al [2011].
Footnote 5: Such approaches either use observed and manually adjusted failure traces (e.g., Garlan and Schmerl [2002]; Haesevoets et al [2009]; Ippoliti and Zhou [2012]), probabilistic or simple random failure traces (e.g., Anaya et al [2014]; Chan and Bishop [2009]; Piel et al [2011]), or deterministic failure traces (e.g., Angelopoulos et al [2014]; Carzaniga et al [2008]; Casanova et al [2013]; Di Marco et al [2013]; Griffith et al [2009]; Hassan et al [2015]; Magalhaes and Silva [2015]; Perino [2013]).
Section: Related Work
mentioning, confidence: 99%
“…Single model. To consider a naive failure profile model that, to the best of our knowledge, has been used in much existing work on self-healing (e.g., Angelopoulos et al [2014]; Carzaniga et al [2008]; Casanova et al [2013]; Di Marco et al [2013]; Magalhaes and Silva [2015]; Perino [2013]), we construct the single failure profile model. In this model, failures are not correlated, so they arrive individually and not in groups.…”
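A minimal sketch of the single (uncorrelated) failure profile described above, assuming exponentially distributed inter-arrival times, i.e., a homogeneous Poisson process; the function name and parameters are illustrative and not taken from the cited works:

```python
import random

def single_failure_trace(rate, horizon, seed=0):
    """Generate a trace of uncorrelated, individually arriving failures.

    Inter-arrival gaps are drawn from an exponential distribution
    (a homogeneous Poisson process), so failures never arrive in groups.
    """
    rng = random.Random(seed)
    t, trace = 0.0, []
    while True:
        t += rng.expovariate(rate)  # memoryless gap to the next failure
        if t >= horizon:
            break
        trace.append(t)  # each event is exactly one failed component
    return trace

# Roughly one failure every 10 time units over a horizon of 1000.
trace = single_failure_trace(rate=0.1, horizon=1000.0)
```

Because the process is memoryless, the trace exhibits no bursts; every failure is independent of the previous one, which is precisely the naivety this model embodies.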
Section: Compares The
mentioning, confidence: 99%
“…Examples of approaches that use simulation to evaluate self-healing systems are: Anaya et al [2014]; Angelopoulos et al [2014]; Camara and de Lemos [2012]; Carzaniga et al [2008]; Casanova et al [2013]; Chan and Bishop [2009]; Di Marco et al [2013]; Ehlers et al [2011]; Garlan and Schmerl [2002]; Griffith et al [2009]; Haesevoets et al [2009]; Hassan et al [2015]; Haupt [2012]; Ippoliti and Zhou [2012]; Magalhaes and Silva [2015]; Neti and Mueller [2007]; Perino [2013]; Piel et al [2011]; Qun et al [2005]; Salehie and Tahvildari [2006]; Schmitt et al [2011]. ACM Trans. Autonom.…”
Self-adaptation can be realized in various ways. Rule-based approaches prescribe the adaptation to be executed if the system or environment satisfies certain conditions. They result in scalable solutions but often with merely satisfying adaptation decisions. In contrast, utility-driven approaches determine optimal decisions by using an often costly optimization, which typically does not scale for large problems. We propose a rule-based and utility-driven adaptation scheme that achieves the benefits of both directions such that the adaptation decisions are optimal, whereas the computation scales by avoiding an expensive optimization. We use this adaptation scheme for architecture-based self-healing of large software systems. For this purpose, we define the utility for large dynamic architectures of such systems based on patterns that define issues the self-healing must address. Moreover, we use pattern-based adaptation rules to resolve these issues. Using a pattern-based scheme to define the utility and adaptation rules allows us to compute the impact of each rule application on the overall utility and to realize an incremental and efficient utility-driven self-healing. In addition to formally analyzing the computational effort and optimality of the proposed scheme, we thoroughly demonstrate its scalability and optimality in terms of reward in comparative experiments with a static rule-based approach as a baseline and a utility-driven approach using a constraint solver. These experiments are based on different failure profiles derived from real-world failure logs. We also investigate the impact of different failure profile characteristics on the scalability and reward to evaluate the robustness of the different approaches.
S. Ghahremani et al.
…approaches [Kephart and Das 2007] combine both phases. Adaptation is executed for specific events and under specific conditions by adaptation rules. In such approaches, events trigger the rules, which subsequently check their conditions. If the conditions are fulfilled, the actions of the rules are applied and result in the envisioned changes. Thus, the applicable rules are identified (matched) and executed to adapt the system configuration at runtime. The main strengths of such approaches are the readability, elegance, and efficient processing of the rules. The drawbacks are that the adaptation decisions are often only satisfying and that rules have limited expressiveness, since rules typically just relate events to actions [Fleurey and Solberg 2009] without defining or performing any further computation for analysis and planning (e.g., to identify optimal actions). On the other hand, utility-driven approaches [Esfahani et al. 2013; Kephart and Walsh 2004] determine optimal adaptation decisions by using optimization techniques for planning that are guided by a utility function. A utility function determines how valuable each possible system configuration is, and the optimization aims at identifying optimal configurations. However, the optimization usually prevents the ...
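The event-condition-action cycle described above, together with a utility function scoring configurations, can be sketched as follows. This is a hypothetical toy model (the `Rule` class, the `component_failed` event, and the restart action are all made up for illustration), not the authors' implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """An event-condition-action adaptation rule."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

def handle(event: str, config: dict, rules: list) -> None:
    """An event triggers matching rules; each checks its condition, then fires."""
    for rule in rules:
        if rule.event == event and rule.condition(config):
            rule.action(config)

def utility(config: dict) -> float:
    """Toy utility: fraction of components currently up (illustrative only)."""
    comps = config["components"]
    return sum(comps.values()) / len(comps)

# One hypothetical repair rule: on a failure event, restart any down component.
rules = [Rule(
    event="component_failed",
    condition=lambda c: not all(c["components"].values()),
    action=lambda c: c["components"].update(
        {name: True for name, up in c["components"].items() if not up}),
)]

config = {"components": {"a": True, "b": False}}
before = utility(config)                  # utility with "b" down
handle("component_failed", config, rules)
after = utility(config)                   # utility after the repair rule fired
```

Note how the rule itself never consults `utility`: it relates an event directly to an action, which illustrates the limited expressiveness the text attributes to rule-based approaches, in contrast to utility-driven planning that would search configurations for the one maximizing `utility`.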
“…Single is a failure model generating failure traces in which failures are not correlated and arrive individually, not in groups. Our SLR in Section 3 revealed that most of the existing work investigating the performance of SHS employs naive failure traces similar to the traces generated by the Single failure model (see [29, 31–33, 35, 72]).…”
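For contrast with the Single model, a correlated failure profile, in which failures arrive in bursts, can be sketched as a compound Poisson process. The sketch below is illustrative and not taken from the cited works; the parameters and jitter value are assumptions:

```python
import random

def burst_failure_trace(burst_rate, p, horizon, seed=0):
    """Correlated failure trace: bursts arrive via a Poisson process, and
    each burst brings a geometric number of near-simultaneous failures
    (a compound Poisson process), unlike the Single model where every
    arrival is exactly one failure.
    """
    rng = random.Random(seed)
    t, trace = 0.0, []
    while True:
        t += rng.expovariate(burst_rate)  # gap to the next burst
        if t >= horizon:
            break
        size = 1
        while rng.random() < p:           # geometric burst size, mean 1/(1-p)
            size += 1
        for i in range(size):
            trace.append(t + i * 1e-3)    # failures in a burst are almost simultaneous
    return trace

# Bursts about every 20 time units, averaging 4 failures each (p = 0.75).
trace = burst_failure_trace(burst_rate=0.05, p=0.75, horizon=1000.0)
```

Feeding both trace types to the same self-healing system is one way to expose the performance prediction errors the authors attribute to naive, uncorrelated inputs.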
Section: Probabilistic Failure Model Fitted To Real Data
mentioning
Evaluating the performance of self-adaptive systems is challenging due to their interactions with often highly dynamic environments. In the specific case of self-healing systems, the performance evaluation of self-healing approaches and their parameter tuning rely on the considered characteristics of failure occurrences and the resulting interactions with the self-healing actions. In this paper, we first study the state of the art for evaluating the performance of self-healing systems by means of a systematic literature review. We provide a classification of different input types for such systems and analyse the limitations of each input type. A main finding is that the employed inputs are often not sophisticated regarding the considered characteristics of failure occurrences. To further study the impact of the identified limitations, we present experiments demonstrating that wrong assumptions regarding the characteristics of the failure occurrences can result in large performance prediction errors, disadvantageous design-time decisions concerning the selection of alternative self-healing approaches, and disadvantageous deployment-time decisions concerning parameter tuning. Furthermore, the experiments indicate that employing multiple alternative input characteristics can help reduce the risk of premature disadvantageous design-time decisions.
“…A class of security solutions that has attracted researchers over the last twenty years consists of enabling software applications to become immune against attacks [1–4]. This is a challenging area as it integrates several domains including anomaly-based intrusion detection and prevention [5–11], application partitioning and sandboxing [12–15], automatic error detection and patching [3, 4], as well as collaborative application communities [1, 16, 17]. To this end, not only can these fields be leveraged and combined in different ways, but they can also be approached from different perspectives and using different techniques.…”
As cyber threats are permanently jeopardizing individuals’ privacy and organizations’ security, there have been several efforts to empower software applications with built-in immunity. In this paper, we present our approach to immune applications through application-level, unsupervised, outlier-based intrusion detection and prevention. Our framework allows tracking application domain objects throughout the processing lifecycle. It also leverages the application business context and learns from production data, without creating any training burden on the application owner. Moreover, as our framework uses runtime application instrumentation, it incurs no additional cost for the application provider. We build a fine-grained and rich-feature application behavioral model that gets down to the method level and its invocation context. We define features to be independent of the variable structure of method invocation parameters and returned values, while preserving security-relevant information. We implemented our framework in a Java environment and evaluated it on a widely used, enterprise-grade, open-source ERP. We tested several unsupervised outlier detection algorithms and distance functions. Our framework achieved the best results in terms of effectiveness using the Local Outlier Factor algorithm and the Clark distance, while the average instrumentation overhead per intercepted call remains acceptable.
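As an illustration of distance-based outlier scoring with the Clark distance mentioned above: the sketch below defines the Clark distance and a simplified k-nearest-neighbour score. The full Local Outlier Factor additionally uses reachability densities; this simplification, the function names, and the toy data are all assumptions for illustration, not the paper's implementation:

```python
import math

def clark_distance(x, y):
    """Clark distance: sqrt(sum(((x_i - y_i) / (x_i + y_i))**2)).

    Defined here for non-negative feature vectors; a 0/0 term counts as 0.
    """
    s = 0.0
    for a, b in zip(x, y):
        if a + b != 0:
            s += ((a - b) / (a + b)) ** 2
    return math.sqrt(s)

def knn_outlier_scores(points, k=2):
    """Score each point by its mean Clark distance to its k nearest
    neighbours; larger scores indicate likelier outliers. (A simplified
    distance-based stand-in for LOF, for illustration only.)
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(clark_distance(p, q)
                       for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# Three similar "call profiles" plus one that behaves very differently.
normal = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
points = normal + [[10.0, 0.1]]
scores = knn_outlier_scores(points, k=2)
```

The Clark distance's per-feature normalization by `x_i + y_i` is what makes it robust to features of very different magnitudes, which plausibly matters for heterogeneous method-invocation features.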