2019
DOI: 10.1016/j.swevo.2018.10.002

Benchmarking evolutionary algorithms for single objective real-valued constrained optimization – A critical review

Abstract: Benchmarking plays an important role in the development of novel search algorithms as well as for the assessment and comparison of contemporary algorithmic ideas. This paper presents common principles that need to be taken into account when considering benchmarking problems for constrained optimization. Current benchmark environments for testing Evolutionary Algorithms are reviewed in the light of these principles. Along with this line, the reader is provided with an overview of the available problem domains i…

Cited by 53 publications (32 citation statements)
References 35 publications
“…Nowadays, in order to compare several algorithms it is crucial to ensure that improvements in results are not due to stochastic differences between runs. Thus, it is mandatory to apply statistical tests to clarify the statistical significance of the performance gaps found among algorithms [343]. However, the majority of benchmarks in Table 1 do not consider the application of statistical testing, neither in their experimental setting (the number of runs is lower than recommended for such tests) nor in the comparison criterion established by the organizers.…”
Section: Dynamic Optimization (mentioning)
confidence: 99%
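As a concrete illustration of the comparison practice described in the statement above, the following is a minimal sketch of a non-parametric significance test between two algorithms, assuming each array holds the final objective values of 25 independent runs. The algorithm names and run data are placeholders, not results from the paper.

```python
# Minimal sketch: testing whether the gap between two algorithms is
# statistically significant rather than a stochastic difference between runs.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(seed=1)
results_a = rng.normal(loc=0.10, scale=0.02, size=25)  # hypothetical algorithm A
results_b = rng.normal(loc=0.12, scale=0.02, size=25)  # hypothetical algorithm B

# Two-sided Wilcoxon rank-sum (Mann-Whitney U) test at alpha = 0.05.
stat, p_value = mannwhitneyu(results_a, results_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The observed performance gap is statistically significant.")
else:
    print("The gap may be due to stochastic differences between runs.")
```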
“…These competitions include various types of benchmark problem suites, such as single-objective, large-scale, noisy, multi-objective, and constrained optimization. The CEC competitions provide specific test environments for detailed algorithm assessment and comparison [26]. These test environments are especially popular for benchmarking EAs.…”
Section: The CEC Function Benchmark Suites (mentioning)
confidence: 99%
“…These conditions can also be further extended to limit the maximum number of iterations without improvement or to stop once the desired objective function value is reached [54]. Apart from the budget limitation, some benchmark recommendations also suggest an additional termination condition of an error lower than a predefined threshold, for example set to 1E−8 [55].…”
Section: Is There Any Standard Evaluations Practice? (mentioning)
confidence: 99%
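The termination conditions mentioned above can be combined into a single check. Below is a minimal sketch assuming an evaluation budget, a stagnation limit, and the 1E−8 error threshold cited in the statement; the constants and function names are illustrative, not prescribed by the paper.

```python
# Minimal sketch of combined termination checks: an evaluation budget,
# a stagnation limit, and an error threshold of 1e-8 (all values illustrative).
MAX_EVALS = 10_000 * 30        # e.g. 10,000 evaluations per dimension, D = 30
MAX_STALLED_ITERS = 500        # iterations allowed without improvement
ERROR_THRESHOLD = 1e-8         # |f(x_best) - f(x*)| below this counts as solved

def should_terminate(evals_used, stalled_iters, best_f, optimum_f):
    """Return True if any of the termination conditions is met."""
    if evals_used >= MAX_EVALS:
        return True                          # evaluation budget exhausted
    if stalled_iters >= MAX_STALLED_ITERS:
        return True                          # too many iterations without improvement
    if abs(best_f - optimum_f) < ERROR_THRESHOLD:
        return True                          # error below the predefined threshold
    return False

# Example: 150,000 evaluations used, no stagnation, error of about 3e-9 -> terminate.
print(should_terminate(evals_used=150_000, stalled_iters=0,
                       best_f=100.000000003, optimum_f=100.0))
```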
“…The performance measures often include the mean statistical error of the best, worst, mean and median solutions, and their standard deviations. However, as the No Free Lunch theorem [55] states, there is no "universal" best-performing algorithm for every possible problem, so purely performance-oriented experiments usually cannot lead to general conclusions. An outright win of one algorithm on one set of problems does not mean that the algorithm would be usable on a different set.…”
Section: Is There Any Standard Evaluations Practice? (mentioning)
confidence: 99%
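To make the reported performance measures concrete, here is a minimal sketch that computes the usual per-problem summary statistics over a set of runs; the run errors are synthetic placeholders, not data from the paper.

```python
# Minimal sketch of the summary statistics typically reported per problem:
# best, worst, mean, median and standard deviation of the error over runs.
import numpy as np

# Hypothetical final errors |f(x_best) - f(x*)| from 25 independent runs.
rng = np.random.default_rng(seed=2)
errors = np.abs(rng.normal(loc=1e-4, scale=5e-5, size=25))

summary = {
    "best":   errors.min(),
    "worst":  errors.max(),
    "mean":   errors.mean(),
    "median": np.median(errors),
    "std":    errors.std(ddof=1),
}
for name, value in summary.items():
    print(f"{name:>6}: {value:.3e}")
```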