Abstract—Tuning the parameters of an evolutionary algorithm (EA) to a given problem at hand is essential for good algorithm performance. Optimizing parameter values is, however, a non-trivial problem, beyond the limits of human problem solving. In this light it is odd that no parameter tuning algorithms are used widely in evolutionary computing. This paper is meant to be a stepping stone towards a better practice by discussing the most important issues related to tuning EA parameters, describing a number of existing tuning methods, and presenting a modest experimental comparison among them. The paper is concluded by suggestions for future research, hopefully inspiring fellow researchers to further work.

Index Terms—evolutionary algorithms, parameter tuning
I. BACKGROUND AND OBJECTIVES

Evolutionary Algorithms (EAs) form a rich class of stochastic search methods that share the basic principle of incrementally improving the quality of a set of candidate solutions by means of variation and selection [7], [5]. Algorithms in this class are all based on the same generic framework whose details need to be specified to obtain a particular EA. It is customary to call these details EA parameters, and designing an EA for a given application amounts to selecting good values for these parameters.

Setting EA parameters is commonly divided into two cases: parameter tuning and parameter control [6]. In the case of parameter control, the parameter values change during an EA run. One then needs initial parameter values and suitable control strategies, which in turn can be deterministic, adaptive, or self-adaptive. Parameter tuning is easier in that the parameter values do not change during a run, hence only a single value per parameter is required. Nevertheless, even the problem of tuning an EA for a given application is hard, because there are a large number of options but only little knowledge about the effect of EA parameters on EA performance. EA users mostly rely on conventions ("mutation rate should be low"), ad hoc choices ("why not use uniform crossover"), and experimental comparisons on a limited scale (e.g., testing combinations of three different crossover rates and three different mutation rates).

The main objective of this paper is to illustrate the feasibility of using tuning algorithms, thereby motivating their usage. To this end, we describe three different approaches to algorithmic parameter tuning (meta-EA, meta-EDA, SPO) and show their advantages and disadvantages when tuning EA parameters for solving the Rastrigin function.
While the limited scale (one single fitness landscape and one algorithm to be tuned) prevents general conclusions, we do obtain a convincing showcase and some very interesting insights whose generalization requires much more experimental research.

(Authors' affiliation: Vrije Universiteit Amsterdam, The Netherlands, {sksmit, gusz}@cs.vu.nl)
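To make the tuning setup concrete, the following sketch shows the basic structure that all tuners in this setting share: an inner EA whose performance on the Rastrigin function serves as the utility of a parameter vector, and an outer tuner that searches the parameter space. This is not one of the three methods compared in the paper (meta-EA, meta-EDA, SPO); it is a minimal random-search tuner over two illustrative parameters (mutation step size and population size), with all function names, ranges, and budgets chosen for the example only.

```python
import math
import random

def rastrigin(x):
    # Rastrigin test function; global minimum 0 at the origin.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def run_ea(mutation_sigma, pop_size, dim=5, generations=50, seed=0):
    # A toy (mu+lambda) EA with Gaussian mutation only; returns the best
    # fitness found, which the tuner treats as the utility of (sigma, pop_size).
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.12, 5.12) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        children = [[xi + rng.gauss(0, mutation_sigma) for xi in p] for p in pop]
        pop = sorted(pop + children, key=rastrigin)[:pop_size]
    return rastrigin(pop[0])

def random_search_tuner(budget=20, seed=1):
    # Outer loop: sample parameter vectors, keep the one with the best utility.
    rng = random.Random(seed)
    best_params, best_utility = None, float("inf")
    for _ in range(budget):
        params = (rng.uniform(0.01, 1.0), rng.randint(5, 30))
        utility = run_ea(*params)
        if utility < best_utility:
            best_params, best_utility = params, utility
    return best_params, best_utility

best_params, best_utility = random_search_tuner()
```

Note that `run_ea` is evaluated with a fixed seed, i.e., a single repetition per parameter vector; because EA performance is stochastic, a real tuner would average utility over several independent runs, which is exactly the extra cost that makes tuning expensive.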
II. PARAMETERS, TUNERS, AND UTILITY LANDSCAPES

Intuitively, there is a difference between choosing a good crossover operator and choosing a good value for the related crossover rate p_c. This differ...