2016 IEEE International Conference on Software Maintenance and Evolution (ICSME)
DOI: 10.1109/icsme.2016.46
An Automated Approach for Recommending When to Stop Performance Tests

Cited by 27 publications (34 citation statements)
References 27 publications
“…364 (49%) benchmark suites run for an hour or less, which is probably acceptable, even in CI environments. However, 110 (15%) suites take longer than 3 hours, with 22 projects (3%) exceeding 12 hours runtime. For example, the popular collections library eclipse/eclipse-collections has a total benchmark suite runtime of over 16 days, executing 515 benchmarks with 2,575 parameter combinations.…”
[2] https://console.cloud.google.com/bigquery?project=fh-bigquery&page=dataset&d=github_extracts&p=fh-bigquery
[3] VectorGroupByOperatorBench.testAggCount
Section: Results (confidence: 99%)
“…To lower the time spent in performance testing activities, previous research applied techniques to select which commits to test [24,45] or which tests to run [3,14], to prioritize tests that are more likely to expose slowdowns [39], and to stop load tests once they become repetitive [1,2] or no longer improve result accuracy [20]. However, none of these approaches is tailored to the characteristics of software microbenchmarks, nor do they enable running full benchmark suites with reduced overall runtime while maintaining the same result quality.…”
Section: Introduction (confidence: 99%)
“…Performance testing is a time-consuming task [ASSH16]. However, our approach requires multiple iterations of conducting performance tests.…”
Section: Discussion (confidence: 99%)
“…We monitor the CPU usage during the workload every 10 seconds. In particular, similar to prior research [SSJH17, ASSH16], the CPU percentage of the monitored process between two timestamps is calculated as the CPU usage of the corresponding workload during that period.…”
Section: Experimental Environment (confidence: 99%)
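A sampling loop of this kind is straightforward to reproduce. Below is a minimal sketch using the psutil library; the function name, the 10-second default interval, and the overall duration are assumptions for illustration and are not taken from the cited papers.

```python
# Minimal sketch of interval-based CPU monitoring for a single process.
# psutil's cpu_percent(interval=None) reports utilisation since the
# previous call, i.e. the CPU percentage between two timestamps.
import time
import psutil

def monitor_cpu(pid, interval_s=10, duration_s=600):
    """Sample the process's average CPU percentage every `interval_s` seconds."""
    proc = psutil.Process(pid)
    proc.cpu_percent(interval=None)  # prime the counter; first reading is discarded
    samples = []
    for elapsed in range(interval_s, duration_s + 1, interval_s):
        time.sleep(interval_s)
        samples.append((elapsed, proc.cpu_percent(interval=None)))
    return samples
```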
“…Performance Testing: Performance testing has traditionally received research attention in the form of system testing such as load and stress testing [14,34], with more recent research focusing on industrial applicability [8] and on reducing the execution time [1,10]. Academic studies on software microbenchmarking, the unit-test equivalent for performance testing, have not received as much attention as studies on load testing, though Stefan et al. [30] and Leitner and Bezemer [19] recently studied microbenchmarking practices in Java OSS quantitatively and qualitatively.…”
Section: Related Work (confidence: 99%)