2019
DOI: 10.3390/math7030232

What Can We Learn from Multi-Objective Meta-Optimization of Evolutionary Algorithms in Continuous Domains?

Abstract: Properly configuring Evolutionary Algorithms (EAs) is a challenging task made difficult by many different details that affect EAs’ performance, such as the properties of the fitness function, time and computational constraints, and many others. EAs’ meta-optimization methods, in which a metaheuristic is used to tune the parameters of another (lower-level) metaheuristic which optimizes a given target function, most often rely on the optimization of a single property of the lower-level method. In this paper, we …
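As a rough illustration of the two-level structure described in the abstract, the sketch below uses plain random search as the upper-level (meta) optimizer to tune two parameters of a toy lower-level EA minimizing a sphere function. All names (sphere, inner_ea, meta_optimize), parameter ranges, and budgets are illustrative assumptions, not the paper's actual setup.

```python
import random

def sphere(x):
    """Toy target function for the lower-level EA (global minimum at 0)."""
    return sum(xi * xi for xi in x)

def inner_ea(mutation_sigma, pop_size, generations=50, dim=5):
    """A deliberately simple (1+lambda)-style evolutionary loop whose final
    result depends on the two parameters tuned by the meta level."""
    best = [random.uniform(-5.0, 5.0) for _ in range(dim)]
    best_fit = sphere(best)
    for _ in range(generations):
        for _ in range(pop_size):
            child = [xi + random.gauss(0.0, mutation_sigma) for xi in best]
            fit = sphere(child)
            if fit < best_fit:
                best, best_fit = child, fit
    return best_fit

def meta_optimize(trials=30, repeats=3):
    """Upper-level 'metaheuristic' (plain random search here) looking for a
    configuration of the inner EA that minimizes its mean best fitness."""
    best_cfg, best_score = None, float("inf")
    for _ in range(trials):
        cfg = {"mutation_sigma": random.uniform(0.01, 1.0),
               "pop_size": random.randint(2, 20)}
        # Average over a few repetitions to damp the stochastic noise of the EA.
        score = sum(inner_ea(**cfg) for _ in range(repeats)) / repeats
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

if __name__ == "__main__":
    cfg, score = meta_optimize()
    print("best configuration found:", cfg, "mean best fitness:", score)
```

Note that this single-objective sketch tunes only final fitness; the paper's point is precisely that such a scalar criterion is a narrow view of EA performance.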

Cited by 10 publications (8 citation statements) · References 37 publications
“…This initial overhead is not only aimed at letting the retrieval system achieve the best possible final performance, but also, and more importantly, at finding a “standard” setting for the GA that can both directly guarantee good performance and, being a good “starting guess”, minimize the fine-tuning time when transferred to other benchmarks. Our expectation, finally confirmed by the results achieved, was supported by the literature on meta-optimization, which shows that the performance of an optimizer depends less critically on the parameters of a meta-optimizer than the final outcome of the optimization depends on the parameters of the optimizer [45]. If this sort of “vanishing gradient” effect did not occur as the level of meta-optimization increased, i.e., if tuning a meta-optimizer were as critical a problem as tuning the optimizer, the literature on meta-optimization would not be as extensive as it actually is.…”
Section: Results (supporting)
confidence: 79%
“…GAs and, more generally, EAs have often been used for parameter tuning in their most obvious role as quality-function optimizers of direct solutions to complex design problems, described by many variables with unknown dependencies among their parameters. However, they have also been used as meta-optimizers to tune either general methods applicable in different contexts (for example, the design of neural networks or classifiers [42,43,44]) or even other EAs [45].…”
Section: Related Work (mentioning)
confidence: 99%
“…Meta-optimization refers to the use of a metaheuristic to tune the parameters of another metaheuristic [25]. It has been used both in single-objective optimization problems [25,26] and in MOPs [27,28]. A related concept in the literature is that of hyper-heuristics [29], a further extension of the meta-optimization framework.…”
Section: Meta-optimization and Hyper-heuristics (mentioning)
confidence: 99%
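To make the multi-objective reading of meta-optimization mentioned in this excerpt concrete, here is a minimal sketch in which each candidate configuration of a toy inner EA is scored on two objectives (final best fitness and fitness evaluations spent) and a simple non-dominance filter keeps the Pareto-optimal configurations. Function names, the sphere test function, and all numeric settings are assumptions for illustration only, not the procedure used in the paper.

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def run_config(mutation_sigma, pop_size, target=1e-3, max_gens=200, dim=5):
    """Run a toy inner EA once and return two objectives to be minimized:
    the best fitness reached and the number of fitness evaluations spent."""
    best = [random.uniform(-5.0, 5.0) for _ in range(dim)]
    best_fit, evals = sphere(best), 1
    for _ in range(max_gens):
        for _ in range(pop_size):
            child = [xi + random.gauss(0.0, mutation_sigma) for xi in best]
            fit = sphere(child)
            evals += 1
            if fit < best_fit:
                best, best_fit = child, fit
        if best_fit < target:  # stop early once the target quality is reached
            break
    return best_fit, evals

def pareto_front(results):
    """Keep the configurations whose (fitness, evaluations) pair is not
    dominated by any other configuration in the sample."""
    front = []
    for cfg, (f, e) in results:
        dominated = any(f2 <= f and e2 <= e and (f2, e2) != (f, e)
                        for _, (f2, e2) in results)
        if not dominated:
            front.append((cfg, (f, e)))
    return front

if __name__ == "__main__":
    configs = [{"mutation_sigma": random.uniform(0.01, 1.0),
                "pop_size": random.randint(2, 20)} for _ in range(20)]
    results = [(cfg, run_config(**cfg)) for cfg in configs]
    for cfg, objectives in pareto_front(results):
        print(cfg, "->", objectives)
```

Returning a front of configurations rather than a single "best" one is what distinguishes the multi-objective setting from the single-objective tuning loop sketched earlier.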
“…There are also some studies that use a metaheuristic optimization algorithm to optimize the parameters of DE, such as [34,35].…”
Section: Related Work on Parameter Control Strategy for DE (mentioning)
confidence: 99%