2019
DOI: 10.48550/arxiv.1911.12643
Preprint

Predicting Performance of Software Configurations: There is no Silver Bullet

Abstract: Many software systems offer configuration options to tailor their functionality and non-functional properties (e.g., performance). Often, users are interested in the (performance-)optimal configuration, but struggle to find it, due to missing information on influences of individual configuration options and their interactions. In the past, various supervised machine-learning techniques have been used to predict the performance of all configurations and to identify the optimal one. In the literature, there is a …

Cited by 5 publications (7 citation statements)
References 57 publications
“…The software engineering community has also invested significant research into finding techniques for optimal application-level configurations. Among the most recent work is an excellent empirical study by Grebhahn et al. [27]. The authors evaluated several popular techniques in Software Engineering, including k-Nearest Neighbours, Support Vector Regression, and Random Forests for creating surrogate performance models.…”
Section: E. Search-Based Software Engineering (mentioning)
confidence: 99%
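As a rough illustration of the surrogate-modelling setup named in the quote above, here is a minimal sketch that fits k-Nearest Neighbours, Support Vector Regression, and Random Forest regressors to a synthetic configuration/performance dataset. The data, hyperparameters, and error metric are illustrative assumptions, not taken from the cited study.

```python
# Minimal sketch: surrogate performance models over software configurations.
# Synthetic data; all option names, hyperparameters, and metrics are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 8)).astype(float)            # 8 binary options
y = 5 + 3 * X[:, 0] + 2 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 200)  # synthetic runtime

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "kNN": KNeighborsRegressor(n_neighbors=5),
    "SVR": SVR(kernel="rbf", C=10.0),
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    err = mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(f"{name}: MAPE = {err:.2%}")
```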
“…We propose to use the Mann-Whitney U (MWU) test or the Wilcoxon rank-sum test. The choice of this test is motivated by the widespread support of the Wilcoxon rank-sum test for these types of studies [30], [27]. For our study, we chose the significance threshold α = 0.01.…”
Section: A. Distribution of Samples and Significance Tests (mentioning)
confidence: 99%
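A minimal sketch of the significance-testing step described in the quote, assuming two synthetic groups of measurements and the stated threshold α = 0.01; scipy's mannwhitneyu implements the Mann-Whitney U / Wilcoxon rank-sum test.

```python
# Minimal sketch: Mann-Whitney U / Wilcoxon rank-sum test at alpha = 0.01.
# The two samples are synthetic and purely illustrative.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
errors_a = rng.normal(loc=10.0, scale=1.0, size=30)   # e.g., prediction errors of learner A
errors_b = rng.normal(loc=11.0, scale=1.0, size=30)   # e.g., prediction errors of learner B

alpha = 0.01
stat, p_value = mannwhitneyu(errors_a, errors_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
print("significant difference" if p_value < alpha else "no significant difference")
```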
“…1 Global performance-influence models are typically built by measuring execution time under different configurations [64]. The models can be built using white-box techniques [71, 72, 76], machine-learning approaches [20, 22-24, 35], or a brute-force approach. Details on these techniques are beyond the scope of this paper.…”
Section: Identifying Influencing Options (mentioning)
confidence: 99%
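To make the notion of a global performance-influence model concrete, here is a minimal sketch that learns a linear model over options and their pairwise interactions from synthetic measurements. This is only one of the learning approaches the quote alludes to, and all names, data, and thresholds are illustrative assumptions.

```python
# Minimal sketch: a performance-influence model as a linear model over
# options and pairwise interactions, learned from synthetic measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(300, 5)).astype(float)            # 5 binary options
y = 2 + 4 * X[:, 0] + 3 * X[:, 1] * X[:, 3] + rng.normal(0, 0.05, 300)

# Expand features with pairwise interaction terms (o_i * o_j).
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_inter = poly.fit_transform(X)

model = LinearRegression().fit(X_inter, y)
names = poly.get_feature_names_out([f"o{i}" for i in range(5)])
for term, coef in zip(names, model.coef_):
    if abs(coef) > 0.5:        # report only influential options/interactions
        print(f"{term}: {coef:+.2f}")
```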
“…On the other hand, to mitigate the limitation of the small sample, both SPLConqueror [42] and DECART [12] incorporate several sampling heuristics to select a set of representative configurations for training. However, it takes extra time and effort to determine the appropriate sampling strategy for each system since there is no universally optimal sampling heuristic [10]. Instead, PerLasso [14] and DeepPerf [13] focus on restricting the performance model to be sparse with L1 regularization.…”
Section: Introduction (mentioning)
confidence: 99%
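A minimal sketch of the L1-regularization idea mentioned in the quote: with few samples and an interaction-expanded feature space, a Lasso penalty keeps the learned performance model sparse. The data, alpha value, and option names are illustrative assumptions, not the PerLasso or DeepPerf implementations.

```python
# Minimal sketch: sparse performance model via L1 regularization (Lasso)
# on an interaction-expanded feature space; synthetic data, illustrative alpha.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(60, 10)).astype(float)            # small sample, 10 options
y = 1 + 5 * X[:, 2] + 2 * X[:, 4] * X[:, 7] + rng.normal(0, 0.05, 60)

# With pairwise interactions the feature count (55) rivals the sample size (60),
# so the L1 penalty is what keeps the model sparse and identifiable.
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_inter = poly.fit_transform(X)

lasso = Lasso(alpha=0.05).fit(X_inter, y)
names = poly.get_feature_names_out([f"o{i}" for i in range(10)])
selected = [(n, round(c, 2)) for n, c in zip(names, lasso.coef_) if abs(c) > 1e-3]
print(f"{len(selected)} of {len(names)} terms kept:", selected)
```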