2021
DOI: 10.1109/access.2021.3066135
How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior?

Abstract: Comparing various metaheuristics based on an equal number of objective function evaluations has become standard practice. Many contemporary publications use a specific number of objective function evaluations prescribed by the benchmark-set definitions. Furthermore, many publications deal with the recurrent theme of late stagnation, which may give the impression that continuing the optimization process would be a waste of computational resources. But is it? Recently, many challenges, issues, and questions have …

Cited by 23 publications (6 citation statements)
References 51 publications
“…Despite the importance of theoretical studies, empirical comparisons between evolutionary algorithms seem to be more popular than theoretical ones, even though they are always limited by the scale of problems, setting of comparison rules, and algorithms chosen for the competition. A relatively wide-scale comparison between up to 30 various evolutionary algorithms has been presented in numerous papers [29], [30], [31], [54], [55]. In addition, each year multiple novel algorithms compete in different competitions on Evolutionary Computations (e.g.…”
Section: Literature Review
confidence: 99%
“…The impact of versatile other control parameters on the performance of specific kinds of Evolutionary Algorithms has also been addressed multiple times [64], [65], [66], [67]. As shown in numerous comparison papers, the performance of specific Evolutionary Algorithms would also depend on the number of allowed function calls [35], [55], [68], [69]. It is also known that the choice of the specific statistical test may affect the choice of the best algorithms [70], [71], [72], [73].…”
Section: Literature Review
confidence: 99%
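The point made in this excerpt — that which algorithm looks best can depend on the number of allowed function calls — can be illustrated with a minimal, hypothetical sketch. The benchmark (sphere function), the pure random search, and the bare-bones (1+1) evolution strategy below are illustrative stand-ins, not taken from the cited papers:

```python
import random

def sphere(x):
    """Sphere benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(f, dim, budget, rng):
    """Sample points uniformly in [-5, 5]^dim; return the best f found."""
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(best, f(x))
    return best

def one_plus_one_es(f, dim, budget, rng, sigma=0.5):
    """Minimal (1+1) evolution strategy with a fixed mutation step size."""
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(budget - 1):  # the initial point already cost one evaluation
        y = [v + rng.gauss(0, sigma) for v in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
    return fx

# Compare both under two very different evaluation budgets.
for budget in (100, 10_000):
    rs = random_search(sphere, 10, budget, random.Random(1))
    es = one_plus_one_es(sphere, 10, budget, random.Random(1))
    print(f"budget={budget}: random search -> {rs:.4f}, (1+1)-ES -> {es:.4f}")
```

Under a tiny budget the two methods can look comparable, while under a large budget the local-search strategy pulls far ahead — exactly the budget-dependence the excerpt describes.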
“…We present several plot representations of our Bayesian Optimisation in the supplementary information. In the plots, the number of function evaluations relates to the iteration number of the objective function, the min objective is the minimum value that the objective function has reached up to the current iteration, and the estimated minimum objectives are the mean values of the posterior distribution of the Gaussian process model of the objective function [25]. We also map the hyper-parameter variables to the classification performance metrics to determine the optimal hyper-parameters.…”
Section: E. Hyperparameter Optimisation
confidence: 99%
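The "min objective" trace described in this excerpt — the minimum value the objective function has reached up to the current iteration — can be sketched with a toy example. The objective below is a hypothetical stand-in, and the candidate points come from a fixed grid rather than from a real Gaussian-process acquisition step:

```python
import math

def objective(x):
    """Toy 1-D objective (hypothetical stand-in for a model's validation loss)."""
    return math.sin(3 * x) + (x - 0.7) ** 2

# Points evaluated in some order; a real Bayesian optimiser would choose
# each one by maximising an acquisition function over the GP posterior.
candidates = [i / 10 for i in range(-20, 21)]

running_min = []
best = float("inf")
for x in candidates:
    best = min(best, objective(x))
    running_min.append(best)

# running_min is non-increasing by construction: it is the "min objective"
# curve plotted against the number of function evaluations.
print(f"after 1 evaluation: {running_min[0]:.4f}")
print(f"after {len(running_min)} evaluations: {running_min[-1]:.4f}")
```

The "estimated minimum objective" curve the excerpt also mentions would come from the Gaussian-process posterior mean, which this sketch does not model.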
“…With the increasing number of nature-inspired algorithms, various benchmarking tests have been developed to examine their performance [46]. These include testing the algorithms on different types of functions [47], [48], and checking the number of objective function evaluations they use [49].…”
Section: B. Nature-inspired Algorithms
confidence: 99%