2022 IEEE Congress on Evolutionary Computation (CEC)
DOI: 10.1109/cec55065.2022.9870439
Identifying minimal set of Exploratory Landscape Analysis features for reliable algorithm performance prediction

Cited by 4 publications (4 citation statements) | References 24 publications
“…Additionally, the availability of instances allows for the evaluation of AutoML approaches, e.g., algorithm selection, by enabling a cross-validation approach across instances. This generally leads to better results than cross-validation on a function-level, since the functions themselves have been designed specifically to contain different challenges to the optimizer, so leaving a function out of the training set generally makes the transfer between train and test challenging [13,23].…”
Section: The BBOB Problem Suite (mentioning, confidence: 99%)
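The instance-level protocol described in this excerpt can be illustrated with a short sketch on synthetic stand-in data (the arrays X, y, and iid, their sizes, and the RandomForestRegressor model are illustrative assumptions, not the cited paper's setup): each fold holds out one instance of every BBOB function, so every function is still represented in training.

```python
# Minimal sketch of instance-level cross-validation on synthetic
# stand-in data; all names and sizes here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_functions, n_instances, n_features = 24, 5, 8
X = rng.normal(size=(n_functions * n_instances, n_features))  # ELA features
y = rng.normal(size=n_functions * n_instances)                # performance targets
iid = np.tile(np.arange(n_instances), n_functions)            # instance ID per row

# Hold out one instance of every function per fold: the model has seen
# each function during training, just not the held-out instance of it.
errors = []
for train, test in LeaveOneGroupOut().split(X, y, groups=iid):
    model = RandomForestRegressor(random_state=0).fit(X[train], y[train])
    errors.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
print(f"instance-level CV mean squared error: {np.mean(errors):.3f}")
```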
“…However, the challenge with using BBOB for algorithm selection lies in the evaluation of the results. One method is a leave-one-function-out technique [23], which uses 23 functions for training and the remaining one for testing. This approach tends to show poor performance since each problem has been designed to represent different high-level challenges for the optimization algorithm.…”
Section: Introduction (mentioning, confidence: 99%)
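For contrast, a minimal sketch of the leave-one-function-out protocol mentioned in the excerpt, again on synthetic stand-in data (array names, sizes, and the regression model are illustrative assumptions): all instances of one BBOB function form the test fold, and the remaining 23 functions form the training set.

```python
# Minimal leave-one-function-out sketch on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_functions, n_instances, n_features = 24, 5, 8
X = rng.normal(size=(n_functions * n_instances, n_features))  # ELA features
y = rng.normal(size=n_functions * n_instances)                # performance targets
fid = np.repeat(np.arange(n_functions), n_instances)          # function ID per row

# Each of the 24 folds tests on a function the model has never seen,
# which is exactly why transfer between train and test is hard here.
errors = []
for train, test in LeaveOneGroupOut().split(X, y, groups=fid):
    model = RandomForestRegressor(random_state=0).fit(X[train], y[train])
    errors.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
print(f"leave-one-function-out mean squared error: {np.mean(errors):.3f}")
```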
“…Three randomly selected Differential Evolution (DE) configurations are included in the analysis. Their hyper-parameters are set as presented in [13]. The population size of the algorithm is set equal to the problem dimension (10).…”
Section: Experimental Design (mentioning, confidence: 99%)
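A minimal sketch of a rand/1/bin Differential Evolution loop with the population size set equal to the problem dimension (10), as in the quoted setup. The function de_rand_1_bin and the hyper-parameter values F and CR are illustrative assumptions, not the configurations presented in [13]:

```python
# Sketch of rand/1/bin DE with population size = problem dimension.
import numpy as np

def de_rand_1_bin(f, dim=10, bounds=(-5.0, 5.0), F=0.5, CR=0.9,
                  budget=10_000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop_size = dim                           # population size = dimension (10)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    evals = pop_size
    while evals + pop_size <= budget:        # one generation per iteration
        for i in range(pop_size):
            # three distinct donors, all different from the target i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # at least one mutant coordinate
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            evals += 1
            if f_trial <= fit[i]:            # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

best_x, best_f = de_rand_1_bin(lambda x: float(np.sum(x ** 2)))  # sphere test
print(f"best f-value found: {best_f:.3e}")
```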
“…The LOPO performance prediction can be stated as an ML task known as zero-shot learning [10]–[12]. Nikolikj et al. introduced RF+clust for LOPO algorithm performance prediction [13]. The approach calibrates the prediction obtained by a Random Forest (RF) model [14] for a given test problem with a weighted mean of the algorithm performance on problems from the training data that are the most similar to the test problem, based on their feature representation.…”
Section: Introduction (mentioning, confidence: 99%)
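Based on that description, the RF+clust idea can be sketched as blending a Random Forest prediction for a held-out problem with a similarity-weighted mean of the algorithm performance on its nearest training problems in feature space. The data, the number of neighbours k, and the blending weight alpha below are illustrative assumptions, not the settings of [13]:

```python
# Sketch of an RF+clust-style calibrated prediction on stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.default_rng(0)
X_train = rng.normal(size=(23, 8))   # ELA features of 23 training problems
y_train = rng.normal(size=23)        # algorithm performance on them
x_test = rng.normal(size=(1, 8))     # held-out (zero-shot) test problem

rf = RandomForestRegressor(random_state=0).fit(X_train, y_train)
rf_pred = rf.predict(x_test)[0]

# Similarity-weighted mean over the k most similar training problems,
# measured by distance in the feature representation.
k, alpha = 3, 0.5                    # illustrative values
d = euclidean_distances(x_test, X_train)[0]
nearest = np.argsort(d)[:k]
w = 1.0 / (d[nearest] + 1e-12)       # closer problems weigh more
clust_pred = np.average(y_train[nearest], weights=w)

# Calibrate the RF prediction toward the neighbour-based estimate.
calibrated = alpha * rf_pred + (1 - alpha) * clust_pred
print(f"RF: {rf_pred:.3f}  neighbour mean: {clust_pred:.3f}  "
      f"calibrated: {calibrated:.3f}")
```

The convex combination is one simple way to realize the "calibration" the excerpt describes; the weighting scheme and how the two estimates are combined are design choices of the sketch.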