2020
DOI: 10.1002/smr.2276
Towards reducing the time needed for load testing

Abstract: The performance of large‐scale systems must be thoroughly tested under various levels of workload, as load‐related issues can have a disastrous impact on the system. However, load testing often requires a large amount of time, running from hours to even days. In our prior work, we reduced the execution time of a load test by detecting repetitiveness in individual performance metric values, such as CPU utilization, that are observed during the test. However, as we explain in this paper, disregarding combination…
Cited by 10 publications (9 citation statements)
References 27 publications
“…JMH 1 is the de-facto standard framework for writing and executing software microbenchmarks (in the following simply called benchmarks) for Java. Benchmarks operate on the same level of granularity as unit tests, i.e., statement/method level, and are similarly defined in code and configured through annotations.…”
Section: Java Microbenchmark Harness (JMH)
Mentioning confidence: 99%
“…To lower the time spent in performance testing activities, previous research applied techniques to select which commits to test [24,45] or which tests to run [3,14], to prioritize tests that are more likely to expose slowdowns [39], and to stop load tests once they become repetitive [1,2] or no longer improve result accuracy [20]. However, none of these approaches is tailored to the characteristics of software microbenchmarks, and none enables running full benchmark suites with a reduced overall runtime while maintaining the same result quality.…”
Section: Introduction
Mentioning confidence: 99%
“…Other approaches aim to reduce the execution time for application benchmarks: AlGhamdi et al. (2016, 2020) proposed to stop the benchmark run when the system reaches a repetitive performance state, and He et al. (2019) devised a statistical approach based on kernel density estimation to stop once a benchmark is unlikely to produce a different result with more repetitions. Such approaches can only be combined with our analysis and optimization under certain conditions.…”
Section: Related Work
Mentioning confidence: 99%
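The stopping criteria summarized above share one idea: keep collecting performance measurements and halt once new samples stop changing the picture. The sketch below illustrates that idea in miniature; it is not the actual algorithm of AlGhamdi et al. or He et al., and the window size, tolerance, and `run_until_repetitive` name are illustrative assumptions. It compares the means of two consecutive sample windows and stops once they agree within a tolerance.

```python
import random
import statistics

def windows_agree(window_a, window_b, tol=0.05):
    """Heuristic repetitiveness check (assumption, not the papers' method):
    two consecutive windows 'agree' when their means differ by less than
    tol (relative)."""
    ma, mb = statistics.mean(window_a), statistics.mean(window_b)
    return abs(ma - mb) <= tol * max(abs(ma), abs(mb), 1e-9)

def run_until_repetitive(sample, window=30, max_iters=10_000):
    """Collect metric samples until two consecutive windows look alike,
    mimicking the 'stop when the performance state repeats' idea."""
    values = []
    while len(values) < max_iters:
        values.append(sample())
        if len(values) >= 2 * window and windows_agree(
            values[-2 * window : -window], values[-window:]
        ):
            break  # further measurements are unlikely to change the result
    return values

random.seed(0)
# Simulated CPU-utilization metric that fluctuates around 60%.
samples = run_until_repetitive(lambda: random.gauss(60, 2))
print(len(samples))  # far fewer than the max_iters budget
```

A real implementation would compare full distributions (e.g., via kernel density estimates, as He et al. do) rather than window means, since load-test metrics are rarely well summarized by a mean alone.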
“…Traditionally, research on performance testing focussed mostly on load testing, such as identifying problems and reporting on case studies (Weyuker and Vokolos 2000; Menascé 2002; Jiang and Hassan 2015). More recent work focussed on industrial applicability (Nguyen et al. 2014; Foo et al. 2015; Chen et al. 2019) and reducing the time spent in load testing activities (AlGhamdi et al. 2016; AlGhamdi et al. 2020; He et al. 2019).…”
Section: Performance Testing
Mentioning confidence: 99%