Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering 2020
DOI: 10.1145/3324884.3416573
Identifying software performance changes across variants and versions

Cited by 20 publications (12 citation statements)
References 33 publications
“…We execute each benchmark for 3 trials consisting of 20 iterations of 1 s duration. This configuration is in line with other recent performance engineering works, e.g., Blackburn et al. (2016), Chen et al. (2020), and Mühlbauer et al. (2020). Nonetheless, it does not ensure that the measurements are stable, i.e., that measurement variability is low.…”
Section: Internal Validity (supporting)
confidence: 66%
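The 3-trials-of-20-iterations setup quoted above can be sketched as follows. This is a minimal illustration, not the cited authors' harness: `run_benchmark` and its parameters are hypothetical names, and the coefficient of variation across trial means is used as one simple proxy for the measurement variability the quote mentions:

```python
import statistics
import time

def run_benchmark(workload, trials=3, iterations=20, duration_s=1.0):
    """Run `workload` repeatedly for `trials` trials, each consisting of
    `iterations` fixed-duration iterations, and report per-trial mean
    throughput plus the coefficient of variation (CV) of those means.
    A high CV signals unstable measurements."""
    trial_means = []
    for _ in range(trials):
        iteration_throughputs = []
        for _ in range(iterations):
            ops = 0
            end = time.perf_counter() + duration_s
            # Execute the workload as often as possible within one iteration.
            while time.perf_counter() < end:
                workload()
                ops += 1
            iteration_throughputs.append(ops / duration_s)
        trial_means.append(statistics.mean(iteration_throughputs))
    cv = statistics.stdev(trial_means) / statistics.mean(trial_means)
    return trial_means, cv
```

As the quote notes, a fixed trial/iteration budget alone does not guarantee stability; checking a dispersion measure such as the CV afterwards is one way to detect when it was insufficient.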
“…Mühlbauer et al. [28] investigated the history of software performance to isolate when a performance shift happens over time. While we know that evolution can impact the performance of configurable software, we do not actually know whether, and by how much, it can impact a performance prediction model.…”
Section: Impacts Of Evolution On Configuration Performance (mentioning)
confidence: 99%
“…Existing works [7], [11], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28] have attempted to investigate the transfer challenge. However, they investigated it in the context of homogeneous feature spaces and not for the case of system code evolving across different versions, i.e.…”
Section: Introduction (mentioning)
confidence: 99%
“…These values are in line with best practice, i.e., Georges et al. (2007) suggest 30 iterations, and cover the range that previous research used for its performance measurements. For example, 5 iterations are used by Blackburn et al. (2004), Jangda et al. (2019), and Mühlbauer et al. (2020); 10 iterations are used by Selakovic and Pradel (2016), Song and Lu (2017), and Kaltenecker et al. (2019); 20 iterations are used by Laaber and Leitner (2018) and were the default for Java Microbenchmark Harness (JMH) benchmarks (Shipilev 2018); and 30 iterations are used by Curtsinger and Berger (2013), Blackburn et al. (2016), and Chen et al. (2020).…”
Section: Number Of Benchmark Iterations (mentioning)
confidence: 99%
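The preference for more iterations, as in Georges et al. (2007), rests on a standard statistical fact: the standard error of the mean shrinks with the square root of the iteration count. A minimal sketch (the helper name and the sample timings are illustrative, not data from any cited work):

```python
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stdev / sqrt(n).
    More iterations give a smaller standard error, i.e., a tighter
    estimate of the true mean execution time."""
    n = len(samples)
    return statistics.stdev(samples) / n ** 0.5

# Synthetic iteration timings in milliseconds, for illustration only.
timings = [98.2, 101.5, 99.8, 100.9, 97.6, 102.3, 99.1, 100.4,
           98.7, 101.0, 100.2, 99.5, 97.9, 101.8, 100.6]

# Using all 15 samples yields a smaller standard error than the first 5.
se_few = standard_error(timings[:5])
se_all = standard_error(timings)
```

This is why 30 iterations give noticeably tighter confidence intervals than 5, at the cost of longer measurement campaigns.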