In this paper we present a conceptual framework for parameter tuning, provide a survey of tuning methods, and discuss related methodological issues. The framework is based on a three-tier hierarchy of a problem, an evolutionary algorithm (EA), and a tuner. Furthermore, we distinguish problem instances, parameters, and EA performance measures as major factors, and discuss how tuning can be directed to algorithm performance and/or robustness. For the survey part we establish different taxonomies to categorize tuning methods and review existing work. Finally, we elaborate on how tuning can improve methodology by facilitating well-founded experimental comparisons and algorithm analysis.
Abstract—Tuning the parameters of an evolutionary algorithm (EA) to a given problem at hand is essential for good algorithm performance. Optimizing parameter values is, however, a non-trivial problem, beyond the limits of human problem solving. In this light it is odd that no parameter tuning algorithms are widely used in evolutionary computing. This paper is meant to be a stepping stone towards better practice by discussing the most important issues related to tuning EA parameters, describing a number of existing tuning methods, and presenting a modest experimental comparison among them. The paper is concluded by suggestions for future research, hopefully inspiring fellow researchers to further work.

Index Terms—evolutionary algorithms, parameter tuning

I. BACKGROUND AND OBJECTIVES

Evolutionary Algorithms (EAs) form a rich class of stochastic search methods that share the basic principle of incrementally improving the quality of a set of candidate solutions by means of variation and selection [7], [5]. Algorithms in this class are all based on the same generic framework whose details need to be specified to obtain a particular EA. It is customary to call these details EA parameters, and designing an EA for a given application amounts to selecting good values for these parameters. Setting EA parameters is commonly divided into two cases: parameter tuning and parameter control [6]. In the case of parameter control, the parameter values change during an EA run; one then needs initial parameter values and suitable control strategies, which in turn can be deterministic, adaptive, or self-adaptive. Parameter tuning is easier in that the parameter values do not change during a run, hence only a single value per parameter is required. Nevertheless, even the problem of tuning an EA for a given application is hard, because there is a large number of options but only little knowledge about the effect of EA parameters on EA performance.
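The tuning/control distinction above can be illustrated with a minimal sketch. The function and parameter names below (`evolve`, `control`) are illustrative assumptions, not from the paper, and the (1+1)-style EA is deliberately simplistic:

```python
import random

def evolve(fitness, mutation_rate=None, control=None,
           generations=50, dim=10, seed=0):
    """Toy (1+1)-EA. With a fixed `mutation_rate` the parameter is
    tuned (constant for the whole run); with a `control` schedule it
    changes from generation to generation (parameter control)."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = fitness(best)
    for gen in range(generations):
        rate = control(gen) if control else mutation_rate
        # Mutate each gene with the current per-gene mutation rate.
        child = [x + rng.gauss(0, 1) if rng.random() < rate else x
                 for x in best]
        f = fitness(child)
        if f < best_f:
            best, best_f = child, f
    return best_f

sphere = lambda xs: sum(x * x for x in xs)

# Parameter tuning: one fixed value, chosen before the run.
tuned_result = evolve(sphere, mutation_rate=0.2)

# Deterministic parameter control: the rate decays during the run.
controlled_result = evolve(sphere, control=lambda g: 0.5 * (1 - g / 50))
```

Either way the EA minimizes the same fitness function; the difference is only in whether the parameter value is a constant found off-line or a quantity that varies on-line.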
EA users mostly rely on conventions (mutation rate should be low), ad hoc choices (why not use uniform crossover), and experimental comparisons on a limited scale (testing combinations of three different crossover rates and three different mutation rates).

The main objective of this paper is to illustrate the feasibility of using tuning algorithms, thereby motivating their usage. To this end, we describe three different approaches to algorithmic parameter tuning (meta-EA, meta-EDA, SPO) and show their (dis)advantages when tuning EA parameters for solving the Rastrigin function. While the limited scale (one single fitness landscape and one algorithm to be tuned) prevents general conclusions, we do obtain a convincing showcase and some very interesting insights whose generalization requires much more experimental research.

II. PARAMETERS, TUNERS, AND UTILITY LANDSCAPES

Intuitively, there is a difference between choosing a good crossover operator and choosing a good value for the related crossover rate p_c. This differ...
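The "limited-scale experimental comparison" criticized above can be made concrete. The sketch below, with illustrative names (`run_ea`, `grid_tune`) not taken from the paper, shows a naive grid-search tuner for a toy EA on the Rastrigin function; it is the baseline that dedicated tuners such as meta-EAs or SPO aim to beat:

```python
import itertools
import math
import random

def rastrigin(xs):
    # Rastrigin function: global minimum 0 at the origin, many local optima.
    return 10 * len(xs) + sum(x * x - 10 * math.cos(2 * math.pi * x)
                              for x in xs)

def run_ea(mutation_rate, sigma, seed, dim=5, evals=300):
    """One run of a toy (1+1)-EA; returns the best Rastrigin value found."""
    rng = random.Random(seed)
    best = [rng.uniform(-5.12, 5.12) for _ in range(dim)]
    best_f = rastrigin(best)
    for _ in range(evals):
        child = [x + rng.gauss(0, sigma) if rng.random() < mutation_rate
                 else x for x in best]
        f = rastrigin(child)
        if f < best_f:
            best, best_f = child, f
    return best_f

def grid_tune(rates, sigmas, repeats=3):
    """Exhaustive tuner: utility of a parameter vector is the mean best
    fitness over repeated independent EA runs (lower is better)."""
    utility = {(pm, s): sum(run_ea(pm, s, seed) for seed in range(repeats))
               / repeats
               for pm, s in itertools.product(rates, sigmas)}
    return min(utility, key=utility.get)

best_params = grid_tune([0.1, 0.3, 0.5], [0.1, 0.5, 1.0])
```

Even this tiny grid already costs 3 x 3 x 3 = 27 EA runs; the cost of exhaustive search explodes with more parameters and finer grids, which is precisely what motivates smarter tuners.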
In this chapter we discuss the notion of Evolutionary Algorithm (EA) parameters and propose a distinction between EAs and EA instances, based on the type of parameters used to specify their details. Furthermore, we consider the most important aspects of the parameter tuning problem and give an overview of existing parameter tuning methods. Finally, we elaborate on the methodological issues involved and provide recommendations for further development.

Background and Objectives

Finding appropriate parameter values for evolutionary algorithms (EAs) is one of the persisting grand challenges of the evolutionary computing (EC) field. In general, EC researchers and practitioners all acknowledge that good parameter values are essential for good EA performance. However, very little effort is spent on studying the effect of EA parameters on EA performance and on tuning them. In practice, parameter values are mostly selected by conventions (mutation rate should be low), ad hoc choices (why not use uniform crossover), and experimental comparisons on a limited scale (testing combinations of three different crossover rates and three different mutation rates). Hence, there is a striking gap between the widely acknowledged importance of good parameter values and the widely exhibited ignorance concerning principled approaches to tuning EA parameters. To this end, it is important to recall that the problem of setting EA parameters is commonly divided into two cases: parameter tuning and parameter control [14].
Abstract. We present an empirical study on the impact of different design choices on the performance of an evolutionary algorithm (EA). Four EA components are considered (parent selection, survivor selection, recombination, and mutation), and for each component we study the impact of choosing the right operator and of tuning its free parameter(s). We tune 120 different combinations of EA operators on 4 different classes of fitness landscapes and measure the cost of tuning. We find that the components differ greatly in importance. Typically, the choice of operator for parent selection has the greatest impact, and mutation needs the most tuning. For individual EAs, however, the impact of design choices for one component depends on the choices for the other components, as well as on the amount of resources available for tuning.
Abstract. Finding appropriate parameter values for Evolutionary Algorithms (EAs) is one of the persistent challenges of Evolutionary Computing. In recent publications we showed how the REVAC (Relevance Estimation and VAlue Calibration) method is capable of finding good EA parameter values for single problems. Here we demonstrate that REVAC can also tune an EA to a set of problems (a whole test suite). Hereby we obtain robust, rather than problem-tailored, parameter values and an EA that is a 'generalist', rather than a 'specialist'. The optimized parameter values prove to be different from problem to problem and also different from the values of the generalist. Furthermore, we compare the robust parameter values optimized by REVAC with the supposedly robust conventional values and see great differences. This suggests that traditional settings might be far from optimal, even if they are meant to be robust.

Key words: parameter tuning, algorithm design, test suites, robustness

Background and Objectives

Finding appropriate parameter values for evolutionary algorithms (EAs) is one of the persisting grand challenges of the evolutionary computing (EC) field. As explained by Eiben et al. in [8], this challenge can be addressed before the run of the given EA (parameter tuning) or during the run (parameter control). In this paper we focus on parameter tuning, that is, we seek good parameter values off-line and use these values for the whole EA run. In today's practice, this tuning problem is usually 'solved' by conventions (mutation rate should be low), ad hoc choices (why not use uniform crossover), and experimental comparisons on a limited scale (testing combinations of three different crossover rates and three different mutation rates). Until recently, there were not many workable alternatives.
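The specialist/generalist distinction above can be sketched in a few lines. This is not the REVAC algorithm itself; it is a minimal illustration, with assumed names (`utility`, `suite`), of the underlying idea: a specialist minimizes utility on one landscape, a generalist minimizes mean utility over a whole test suite:

```python
import math
import random

# A tiny 'test suite' of two fitness landscapes (both minimized at 0).
sphere = lambda xs: sum(x * x for x in xs)
rastrigin = lambda xs: 10 * len(xs) + sum(
    x * x - 10 * math.cos(2 * math.pi * x) for x in xs)
suite = {"sphere": sphere, "rastrigin": rastrigin}

def utility(problem, sigma, repeats=3, dim=5, evals=200):
    """Mean best fitness of a toy (1+1)-EA with mutation step size sigma."""
    total = 0.0
    for seed in range(repeats):
        rng = random.Random(seed)
        best = [rng.uniform(-5, 5) for _ in range(dim)]
        best_f = problem(best)
        for _ in range(evals):
            child = [x + rng.gauss(0, sigma) for x in best]
            f = problem(child)
            if f < best_f:
                best, best_f = child, f
        total += best_f
    return total / repeats

candidates = [0.05, 0.2, 0.8]

# Specialist: the best parameter per problem (tuned to one landscape).
specialist = {name: min(candidates, key=lambda s: utility(f, s))
              for name, f in suite.items()}

# Generalist: one parameter minimizing mean utility over the whole suite.
generalist = min(candidates,
                 key=lambda s: sum(utility(f, s) for f in suite.values()))
```

As the paper observes for real tuners, the per-problem optima need not coincide with each other or with the generalist value; the generalist trades peak performance on any single problem for robustness across the suite.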
However, thanks to the developments of the last couple of years, there are now a number of tuning methods and corresponding software packages that enable EA practitioners to perform tuning without much effort. In particular, REVAC [10, 13] and SPOT [3, 5, 4] are well developed and documented.

The main objective of this paper is to illustrate the advantage of using tuning algorithms in terms of improved EA performance. To this end, we will select a set