Over the last decade, a variety of evolutionary algorithms (EAs) have been proposed for solving multi-objective optimization problems. Recent multi-objective evolutionary algorithms (MOEAs) in particular have been shown to be efficient and superior to earlier approaches. The development of new MOEAs aims at ever better-performing algorithms. An important question, however, is whether we can expect such improvements to converge onto a single, most efficient MOEA that behaves best on a large variety of problems. The best MOEAs to date behave similarly, or each is individually preferable with respect to a different performance indicator. In this paper, we argue that the development of new MOEAs cannot converge onto a single most efficient MOEA, because the performance of MOEAs itself exhibits the characteristics of a multi-objective problem. While we point out the most important aspects of designing competent MOEAs, we also highlight the inherent trade-off in multi-objective optimization between proximity and diversity preservation, and we discuss the impact of this trade-off on the concepts and design of exploration and exploitation operators. We further present a general framework for competent MOEAs and show how current state-of-the-art MOEAs can be obtained by making specific choices within this framework. Finally, we show an example of how non-domination selection pressure can be separated from diversity-preservation selection pressure, and we discuss the impact of changing the ratio between these two components.
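To make the last point concrete, below is a minimal sketch of a binary tournament in which a single ratio parameter `rho` mixes non-domination pressure with diversity-preservation pressure. This is an illustration under our own assumptions, not the paper's framework: the function names, the domination-count proximity proxy, and the nearest-neighbour diversity measure are all illustrative choices.

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def domination_counts(pop):
    """Proximity proxy: how many individuals dominate each one (lower is better)."""
    return [sum(dominates(q, p) for q in pop) for p in pop]

def nearest_neighbour_distances(pop):
    """Diversity proxy: Euclidean distance in objective space to the
    closest other individual (higher is better)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [min(dist(p, q) for j, q in enumerate(pop) if j != i)
            for i, p in enumerate(pop)]

def tournament(pop, rho=0.7, rng=random):
    """Binary tournament: with probability rho, compare on proximity
    (non-domination pressure); otherwise, compare on diversity.
    rho is the ratio between the two selection-pressure components."""
    dom = domination_counts(pop)
    div = nearest_neighbour_distances(pop)
    i, j = rng.sample(range(len(pop)), 2)
    if rng.random() < rho:
        return i if dom[i] < dom[j] else j   # proximity pressure
    return i if div[i] > div[j] else j       # diversity pressure
```

In this sketch, setting rho = 1 recovers selection on non-domination alone, while rho = 0 selects purely on diversity; intermediate values trade one pressure off against the other.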
Learning the optimal probabilities for applying an exploration operator, chosen from a set of alternatives, can be done through self-adaptation or through adaptive allocation rules. In this paper we consider the latter option. The allocation strategies discussed in the literature essentially belong to the class of probability-matching algorithms: they adapt the operator probabilities so that they match the reward distribution. We introduce an alternative adaptive allocation strategy, called the adaptive pursuit method, and compare it with the probability-matching approach in a non-stationary environment. Calculations and experimental results show the superior performance of the adaptive pursuit algorithm. If the reward distributions remain stationary for some time, the adaptive pursuit method converges rapidly and accurately to an operator probability distribution that yields a much higher probability of selecting the currently optimal operator, and a much higher average reward, than the probability-matching strategy. Most importantly, the adaptive pursuit scheme remains sensitive to changes in the reward distributions and reacts swiftly to non-stationary shifts in the environment.
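For concreteness, here is a minimal sketch of the two update rules contrasted above: probability matching sets each operator probability proportional to its estimated quality (above a floor), while adaptive pursuit pushes the currently best operator's probability toward a maximum and all others toward a minimum. The parameter values (K, P_MIN, the learning rates ALPHA and BETA) are illustrative assumptions, not values taken from the paper.

```python
import random

K = 4                        # number of operators (illustrative)
P_MIN = 0.1                  # minimum selection probability per operator
P_MAX = 1 - (K - 1) * P_MIN  # maximum, so probabilities still sum to 1
ALPHA, BETA = 0.8, 0.8       # quality and probability learning rates (assumed)

def probability_matching(p, q, chosen, reward):
    """Probabilities track the relative quality estimates."""
    q[chosen] += ALPHA * (reward - q[chosen])
    total = sum(q)
    for i in range(K):
        share = q[i] / total if total > 0 else 1 / K
        p[i] = P_MIN + (1 - K * P_MIN) * share

def adaptive_pursuit(p, q, chosen, reward):
    """Pursue the operator with the highest quality estimate: move its
    probability toward P_MAX and all others toward P_MIN."""
    q[chosen] += ALPHA * (reward - q[chosen])
    best = max(range(K), key=lambda i: q[i])
    for i in range(K):
        target = P_MAX if i == best else P_MIN
        p[i] += BETA * (target - p[i])

# One interaction step: sample an operator, observe a reward, update.
p = [1 / K] * K
q = [1.0] * K
op = random.choices(range(K), weights=p)[0]
r = random.random()          # stand-in for the observed operator reward
adaptive_pursuit(p, q, op, r)
```

Because adaptive pursuit's probabilities chase a moving winner-take-most target rather than the full quality profile, it concentrates probability on the current best operator more aggressively, yet the P_MIN floor keeps every operator sampled, which is what lets it detect and react to non-stationary shifts.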