Abstract: Modern enterprise applications have to satisfy increasingly stringent Quality-of-Service requirements. To ensure that a system meets its performance requirements, the ability to predict its performance under different configurations and workloads is essential. Architecture-level performance models describe performance-relevant aspects of software architectures and execution environments, allowing different usage profiles as well as system deployment and configuration options to be evaluated. However, build…
“…It is a mature modeling language for model-driven quality analysis of component-based software architectures [3] and has been used in a number of industry-relevant case studies [19], [20], [21], [22], [23], [24]. Currently supported quality predictions include performance, reliability, and maintenance costs, however, in this paper the focus is on performance modeling.…”
“…The system is implemented as a Java Enterprise application deployed on two servers using an Oracle database. Resource demands have been determined using an estimation technique based on measured response times and resource utilization [22].…”
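The estimation technique quoted above derives per-request resource demands from measured utilization and throughput. A minimal sketch of one standard way to do this is the Service Demand Law (D = U/X) from operational analysis; this is a common approximation, not necessarily the exact technique of the cited paper, and all numbers below are illustrative.

```python
# Sketch: resource-demand estimation via the Service Demand Law (D = U/X).
# This is a standard operational-analysis relation; it is shown here as an
# assumption-laden illustration, not the cited paper's exact method.

def estimate_service_demand(utilization, throughput):
    """Estimate per-request service demand (seconds) at one resource.

    utilization: fraction of time the resource is busy (0..1)
    throughput:  completed requests per second
    """
    if throughput <= 0:
        raise ValueError("throughput must be positive")
    return utilization / throughput

# Illustrative example: a CPU measured at 60% utilization while the
# system completes 30 requests/second implies roughly 0.02 s of CPU
# time demanded per request.
cpu_demand = estimate_service_demand(0.60, 30.0)
print(f"Estimated CPU demand per request: {cpu_demand:.3f} s")
```

Such demand estimates are what parameterize the architecture-level performance models discussed in the abstracts above.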
Abstract-During the last decade, researchers have proposed a number of model transformations enabling performance predictions. These transformations map performance-annotated software architecture models into stochastic models solved by means of analytical or numerical analysis or by system simulation. However, so far, a detailed quantitative evaluation of the accuracy and efficiency of different transformations is missing, making it hard to select an adequate transformation for a given context. This paper provides an in-depth comparison and quantitative evaluation of representative model transformations to, e.g., Queueing Petri Nets and Layered Queueing Networks. The semantic gaps between typical source model abstractions and the different analysis techniques are revealed. The accuracy and efficiency of each transformation are evaluated by considering four case studies representing systems of different size and complexity. The presented results and insights gained from the evaluation help software architects and performance engineers to select the appropriate transformation for a given context, thus significantly improving the usability of model transformations for performance prediction.
“…These privacy checks build on the utilization of runtime models that reflect application components, their interactions, and deployments. Existing runtime model approaches (e.g., [3], [6], [13], [18]) do not provide this information. Moreover, providing the required runtime model is challenging as cloud solutions grant limited access to their internals.…”
Cloud providers as well as cloud customers are obliged to comply with privacy regulations. In particular, these regulations prescribe compliance to geo-location policies that define at which geographical locations personal data may be stored or processed. However, cloud elasticity dynamically adapts computing resources to workload changes by replicating and migrating components as well as included data among data centers. As a result, data might be moved to different geographical locations, thereby violating geo-location policies. Current approaches for cloud monitoring and compliance fall short in detecting relevant cases of such policy violations, particularly cases that involve data transfers among data centers. We address this gap by exploiting runtime models for the analysis of privacy violations during runtime. In this paper, we introduce architectural runtime models that reflect information about application components, their interactions, and their cloud deployments. We combine push-based heartbeat monitoring with event processing, and graph grammars for efficiently updating those models. An empirical evaluation based on a prototypical implementation applied to Amazon EC2 and the CoCoME case study indicates that the runtime model approach accurately and efficiently reflects changes of cloud applications.
“…Examples are the size of the memory, and the processor speed. Many prevalent techniques ( [6], [7]) that evaluate QoS and the trust of service compositions not only lack the consideration of an explicit context in their evaluations, but also operate at later phases (such as testing or maintenance phases) of the system development lifecycle. These techniques are used to perform post-analysis of the software design, identify its faults, and improve the design in the subsequent iterations.…”
The emergence of ubiquitous computing and the wide adoption of smartphones over the past few years require many Web Services to function in a context-aware manner. In such services, not only the functional attributes, but also the QoS attributes (e.g., response time) and the trust (i.e., the degree of compliance of a service to its specification) depend on the context of the services. Hence, when designing systems composed of such services, it is important to consider the entire system execution context in addition to its functional and non-functional requirements. Achieving this goal requires considering the context-to-trust-QoS dependencies and the interaction patterns between individual services. In this work, we tackle this challenge by first proposing a model that uses the context-QoS dependency information of individual services and inter-service interaction patterns to predict the QoS and trust of compositions at the design phase. We then apply the prediction model to select the optimum set of services for creating the composed system. Our approach enables better design and implementation decisions about composed systems in the early phases of the software lifecycle, thereby reducing cost, time, and effort. Preliminary results show that the proposed model provides more accurate predictions and yields better-optimized composed systems than prevalent approaches.
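The abstract above predicts composition-level QoS from inter-service interaction patterns. A hedged sketch of how such aggregation commonly works uses the standard rules from the QoS-composition literature (sequence: sum; parallel fork-join: max; probabilistic branch: expected value); these rules are an assumption for illustration, not the paper's exact prediction model, and all timings are made up.

```python
# Sketch: aggregating response-time QoS over common interaction patterns.
# These are the textbook aggregation rules (sequence/parallel/branch),
# shown as an illustration of the idea, not the cited paper's model.

def seq_response_time(times):
    """Sequential invocation: response times add up."""
    return sum(times)

def par_response_time(times):
    """Parallel fork-join: the composition waits for the slowest branch."""
    return max(times)

def branch_response_time(probs_times):
    """Probabilistic branch: expected value over the alternatives."""
    return sum(p * t for p, t in probs_times)

# Illustrative workflow: service A, then B and C in parallel, then a
# 70/30 probabilistic branch between D and E (all times in seconds).
rt = seq_response_time([
    0.05,                                               # A
    par_response_time([0.10, 0.08]),                    # B || C
    branch_response_time([(0.7, 0.02), (0.3, 0.12)]),   # D or E
])
print(f"Predicted end-to-end response time: {rt:.3f} s")
```

Context dependence would enter such a model by making the per-service times functions of the execution context rather than constants.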