2017 IEEE International Conference on Software Testing, Verification and Validation (ICST) 2017
DOI: 10.1109/icst.2017.17

Perphecy: Performance Regression Test Selection Made Simple but Effective

Cited by 30 publications (35 citation statements)
References 14 publications
“…The closest studies are the ones from De Oliveira et al [25] and Albert et al [26], which fit the context of this study. However, both studies leveraged only a subset of the proxies investigated in this paper and focused on different testing problems and techniques.…”
Section: Performance Proxies (supporting)
confidence: 68%
See 1 more Smart Citation
“…The closest studies are the ones from De Oliveira et al [25] and Albert et al [26], which fit the context of this study. However, both studies leveraged only a subset of proxies investigated in this paper, focused on different testing problems and techniques.…”
Section: Performance Proxiessupporting
confidence: 68%
“…However, both studies leveraged only a subset of the proxies investigated in this paper and focused on different testing problems and techniques. De Oliveira et al [25] investigated performance proxies in the context of regression testing. Albert et al [26] proposed three performance proxies for symbolic execution and showed their benefits on example programs.…”
Section: Performance Proxies (mentioning)
confidence: 99%
“…SMBs are different from unit tests, e.g., they test the "usual" path of a program rather than the exceptional one, are highly parameterized, which results in multiple SMBs having the same call path, and have a result distribution rather than a binary outcome. Research has focused on whether a commit should be performance tested at all [13,28], or it employed performance impact prediction as a driver for prioritization and selection [5,23]. It is unclear, though, how traditional TCP/RTS techniques perform for SMBs and whether performance impact is the only important goal to optimize for.…”
Section: RQ 3: Reducing Benchmark Execution Time (mentioning)
confidence: 99%
“…Performance test regression selection and prioritization research has so far explored testing only performance-critical commits [13], or focused on particular types such as collection-intensive software [23] and concurrent classes [25]. De Oliveira et al [5] propose selection of individual benchmarks based on static and dynamic data that assess whether a code change affects the performance of each benchmark. In my research, I want to explore other properties that are essential in performance test prioritization and selection, such as importance and result quality.…”
Section: Related Work (mentioning)
confidence: 99%
“…Performance test regression selection research has so far explored testing only performance-critical commits [2,20], or focused on particular types such as collection-intensive software [36] and concurrent classes [40]. [10] propose selection of individual benchmarks based on static and dynamic data that assess whether a code change affects the performance of each benchmark. [7] tackle performance-regression testing through stochastic performance logic (SPL), which lets developers describe performance assertions in hypothesis-test-style logical equations.…”
Section: Related Work (mentioning)
confidence: 99%