Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering 2013
DOI: 10.1145/2479871.2479937

Model-Based Performance Testing in the Cloud Using the MBPeT Tool

Abstract: We present an approach for performance testing of software services. We use Probabilistic Timed Automata to model the workload of the system by describing how different user types interact with it. We use these models to generate load in real time and measure different performance indicators. An in-house tool, MBPeT, supports our approach. We exemplify the approach with an auction web service case study and show how performance information about the system under test can be collected.

Cited by 16 publications (10 citation statements)
References 4 publications
“…Compared to the Markov Chain and the Stochastic Form-oriented models, Probabilistic Timed Automata are an abstraction which provides support for user action modeling as well as timing delays [108], [109], [110]. Similar to the Markov chain model, a Probabilistic Timed Automaton contains a set of states and transition probabilities between states.…”
Section: Testing Loads Derived Using Probabilistic Timed Automata
confidence: 99%
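The description above (a set of states, transition probabilities between them, and timing delays) can be sketched as a small user-session simulator. This is a hedged illustration of the modeling idea, not the MBPeT implementation; every state name, probability, and think time below is invented:

```python
import random

# Minimal sketch of a probabilistic-timed-automaton-style workload model:
# each state lists (probability, think_time_s, next_state) transitions,
# combining Markov-chain-like transition probabilities with timing delays.
# All values here are hypothetical.
MODEL = {
    "browse": [(0.6, 1.0, "search"), (0.3, 2.0, "bid"), (0.1, 0.0, "exit")],
    "search": [(0.5, 1.5, "browse"), (0.5, 0.5, "bid")],
    "bid":    [(0.7, 2.5, "browse"), (0.3, 0.0, "exit")],
}

def simulate_session(start="browse", rng=None):
    """Random walk through the model; returns the visited actions and the
    total modeled think time for one virtual user session."""
    rng = rng or random.Random(42)
    state, trace, total_wait = start, [start], 0.0
    while state != "exit":
        r, acc = rng.random(), 0.0
        # Pick the next transition by cumulative probability; if round-off
        # prevents a match, the loop falls through to the last transition.
        for prob, wait, nxt in MODEL[state]:
            acc += prob
            if r <= acc:
                break
        total_wait += wait
        state = nxt
        trace.append(state)
    return trace, total_wait
```

In a real load generator, each visited state would issue an actual request to the system under test, sleeping for the think time between requests.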
“…However, efficiency was specified in 13 requirements, from which we conclude that efficiency is difficult to document and quantify. We found few examples of quantified efficiency requirements: (1) the external server data store containing RLCS status for use by external systems shall be updated once per minute, and (2) the system must complete 90% of transactions in less than 1 second. These examples show that it is possible to quantify efficiency.…”
Section: Discussion
confidence: 99%
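A quantified requirement like the one quoted ("90% of transactions in less than 1 second") is directly checkable against measured response times. A minimal sketch, with a hypothetical helper name:

```python
# Hypothetical check of a quantified efficiency requirement: given a list
# of measured response times in seconds, verify that at least the required
# fraction of transactions stayed under the latency threshold.
def meets_requirement(response_times_s, quantile=0.90, threshold_s=1.0):
    """True if at least `quantile` of the samples are below `threshold_s`."""
    if not response_times_s:
        return False
    within = sum(1 for t in response_times_s if t < threshold_s)
    return within / len(response_times_s) >= quantile
```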
“…In the SMS, there were threats related to the data extraction methods. (1) We may have missed some papers because we did not have access to two of the databases used by Dias Neto et al. [19]. To keep this risk to a minimum, we made sure to use the SCOPUS database, which indexes publications from different technical publishers.…”
Section: Threats to Validity
confidence: 99%
“…In addition, test inputs for load testing can be generated simultaneously while the real SUT executes (online) or independently (offline). In the offline approach, test loads are designed from the source code using static analysis techniques such as data flow analysis and symbolic execution, using the operational profile (workload characterization), or using design models annotated with statistics derived from the operational profile and past data, such as Unified Modeling Language (UML) use-case diagrams, Markov chains, and probabilistic timed automata. In the online approach, there is feedback from the real system to the test generation process to refine the test loads.…”
Section: Literature Review
confidence: 99%
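The offline approach described above can be illustrated by sampling a fixed request sequence from a Markov chain whose transition probabilities would, in practice, come from an operational profile. The pages and probabilities below are invented for illustration:

```python
import random

# Sketch of offline test-load derivation: a Markov chain over page requests,
# with transition probabilities that would be estimated from past usage data.
# All names and numbers here are hypothetical.
CHAIN = {
    "home":   {"list": 0.7, "detail": 0.3},
    "list":   {"detail": 0.6, "home": 0.4},
    "detail": {"home": 0.5, "list": 0.5},
}

def derive_load(length=10, start="home", seed=1):
    """Sample a request sequence of `length` steps from the chain."""
    rng = random.Random(seed)
    state, seq = start, [start]
    while len(seq) < length:
        targets = list(CHAIN[state])
        weights = [CHAIN[state][t] for t in targets]
        state = rng.choices(targets, weights=weights)[0]
        seq.append(state)
    return seq
```

Because the sequence is generated before the test run, it can be replayed unchanged against the SUT, which is what distinguishes the offline approach from online generation with feedback.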