Proceedings of the Genetic and Evolutionary Computation Conference Companion 2021
DOI: 10.1145/3449726.3463167

Tuning as a means of assessing the benefits of new ideas in interplay with existing algorithmic modules

Abstract: Introducing new algorithmic ideas is a key part of the continuous improvement of existing optimization algorithms. However, when introducing a new component into an existing algorithm, assessing its potential benefits is a challenging task. Often, the component is added to a default implementation of the underlying algorithm and compared against a limited set of other variants. This assessment ignores any potential interplay with other algorithmic ideas that share the same base algorithm, which is critical in …

Cited by 37 publications (14 citation statements)
References 34 publications
“…The selected CMA-ES configuration has the following hyper-parameters: Active update = FALSE, Elitism = TRUE, Orthogonal Sampling = TRUE, Sequential selection = FALSE, Threshold Convergence = TRUE, Step Size Adaptation = tpa, Mirrored Sampling = mirrored, Quasi-Gaussian Sampling = halton, Recombination Weights = default, Restart Strategy = BIPOP. [17] contains additional information about the hyper-parameters of the modular CMA-ES. Only one randomly selected configuration has been presented as a proof of concept about the analysis, but in our GitHub repository [23], there are results for another 14 CMA-ES configurations, which makes this analysis a personalized task.…”
Section: Data
confidence: 99%
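For readability, the configuration quoted above can be written out as a plain key-value mapping. This is a sketch only: the keys restate the module names from the citation statement and are not guaranteed to match the keyword arguments of any particular modular CMA-ES implementation (e.g. the modcma package referenced as [17]).

```python
# Sketch: the quoted CMA-ES configuration as a plain Python dict.
# Key names mirror the citation statement, not any specific package API.
cma_es_configuration = {
    "active_update": False,
    "elitism": True,
    "orthogonal_sampling": True,
    "sequential_selection": False,
    "threshold_convergence": True,
    "step_size_adaptation": "tpa",        # two-point step-size adaptation
    "mirrored_sampling": "mirrored",
    "quasi_gaussian_sampling": "halton",  # Halton low-discrepancy sampler
    "recombination_weights": "default",
    "restart_strategy": "BIPOP",
}
```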
“…Even the self-adaptive algorithms such as Evolution Strategies (ES) [4] depend on settings such as learning rates and population size. Also, for the hybrid algorithms and modular algorithm frameworks such as the modular CMA-ES [13,34], the selection of operators has significant impact on the algorithms' performance.…”
Section: Related Work 2.1 Algorithm Configuration
confidence: 99%
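The point about operator selection can be made concrete with a small illustration (not taken from the cited papers): even a modular framework exposing only a handful of module options spans dozens of distinct algorithm variants, which is why systematic configuration and tuning matter.

```python
# Illustrative sketch of a small modular-algorithm configuration space.
# Option names are examples in the spirit of the modular CMA-ES; the exact
# option sets of any given framework may differ.
import itertools

configuration_space = {
    "step_size_adaptation": ["csa", "tpa", "msr"],
    "mirrored_sampling": ["off", "mirrored", "mirrored-pairwise"],
    "elitism": [False, True],
    "restart_strategy": ["off", "IPOP", "BIPOP"],
}

variants = [
    dict(zip(configuration_space, values))
    for values in itertools.product(*configuration_space.values())
]
print(len(variants))  # 3 * 3 * 2 * 3 = 54 distinct configurations
```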
“…The logger is integrated into a wide range of existing tools for benchmarking, including problem suites such as PBO (Doerr et al, 2020) and the W-model (Weise et al, 2020) for discrete optimization and COCO's BBOB (Hansen et al, 2021) for the continuous case. On the algorithm side, IOHexperimenter has been connected to several modular algorithm frameworks, such as modular GA (Ye et al, 2021) and modular CMA-ES (de Nobel et al, 2021). Additionally, output generated by the included loggers is compatible with the IOHanalyzer module (Wang et al, 2020) for interactive performance analysis.…”
Section: Functionality
confidence: 99%
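As an illustration of the workflow described in this statement, the sketch below attaches an Analyzer logger to a BBOB problem through IOHexperimenter's Python bindings and runs a trivial random search, producing output readable by IOHanalyzer. Function and argument names follow the ioh package as commonly documented; exact signatures may differ between versions.

```python
# Hedged sketch: logging a random search on a BBOB problem with the "ioh"
# Python package (IOHexperimenter). Argument names may vary across versions.
import numpy as np
import ioh

# BBOB Sphere function, instance 1, dimension 5
problem = ioh.get_problem("Sphere", instance=1, dimension=5)

# Analyzer logger writes performance data compatible with IOHanalyzer
logger = ioh.logger.Analyzer(
    root="ioh_data",
    folder_name="random_search_run",
    algorithm_name="random-search",
)
problem.attach_logger(logger)

rng = np.random.default_rng(42)
for _ in range(1000):
    x = rng.uniform(problem.bounds.lb, problem.bounds.ub)
    problem(x)  # each evaluation is logged automatically

problem.reset()  # finish the run and flush the logger
```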