2018
DOI: 10.1017/s0963548318000275

Drift Analysis and Evolutionary Algorithms Revisited

Abstract: One of the easiest randomized greedy optimization algorithms is the following evolutionary algorithm which aims at maximizing a boolean function f : {0,1}^n → ℝ. The algorithm starts with a random search point ξ ∈ {0,1}^n, and in each round it flips each bit of ξ with probability c/n independently at random, where c > 0 is a fixed constant. The thus created offspring ξ′ replaces ξ if and only if f(ξ′) ≥ f(ξ). The analysis of the runtime of this simple algorithm for monotone and for linear functions tur…
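
For concreteness, here is a minimal Python sketch of the algorithm the abstract describes (the OneMax objective and all parameter values are illustrative choices, not from the paper):

```python
import random

def one_plus_one_ea(f, n, c=1.0, steps=10_000):
    """Minimal (1+1)-EA from the abstract: start from a uniformly random
    search point, flip each bit independently with probability c/n, and
    keep the offspring iff its f-value is at least as large."""
    xi = [random.randint(0, 1) for _ in range(n)]
    f_xi = f(xi)
    for _ in range(steps):
        child = [1 - b if random.random() < c / n else b for b in xi]
        f_child = f(child)
        if f_child >= f_xi:  # elitist acceptance: f(xi') >= f(xi)
            xi, f_xi = child, f_child
    return xi, f_xi

# Illustrative run on OneMax (the number of one-bits), a linear function:
best, value = one_plus_one_ea(sum, n=50, c=1.0)
print(value)
```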

Cited by 50 publications (50 citation statements)
References 19 publications
“…In later proofs we will define potential functions on best-so-far solutions and prove bounds on the drift; these bounds then translate to expected run times with the use of the drift theorems from this section. We use formulations from [13] because they do not require finite search spaces, and they do not require that the potential forms a Markov chain. Instead, we will have random variables Z_t (the current GP-tree) that follow a Markov chain, and the potential is some function of Z_t.…”
Section: Drift Theorems and Preliminaries (mentioning)
confidence: 99%
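
To illustrate the pattern this quote describes, here is a hypothetical toy example (not from the cited papers): a Markov chain Z_t with potential α(Z_t) = Z_t whose drift toward 0 is exactly δ, so the additive-drift bound E[T] ≤ α(Z_0)/δ on the expected hitting time is tight:

```python
import random

def hitting_time(s0=100, delta=0.3):
    """Toy Markov chain: Z_t drops by 1 with probability delta, else
    stays put, so the drift of the potential alpha(Z_t) = Z_t is delta."""
    z, t = s0, 0
    while z > 0:
        if random.random() < delta:
            z -= 1
        t += 1
    return t

runs = [hitting_time() for _ in range(1_000)]
print(sum(runs) / len(runs))  # empirically about 333
print(100 / 0.3)              # additive-drift bound s0/delta = 333.33...
```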
“…We start with a theorem for additive drift. Theorem 3.2 (Additive Drift [7], formulation of [13]). Let (Z_t)_{t∈ℕ₀} be random variables describing a Markov process with state space Z, and with a potential function α : Z → S ⊆ [0, ∞), and assume α(Z_0) = s_0.…”
Section: Drift Theorems and Preliminaries (mentioning)
confidence: 99%
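
The quoted statement breaks off before the drift condition and the conclusion; for context, the standard additive drift theorem (going back to He and Yao) concludes, in the quote's notation:

```latex
\[
  T := \min\{\, t \ge 0 : \alpha(Z_t) = 0 \,\}, \qquad
  \mathbb{E}\bigl[\alpha(Z_t) - \alpha(Z_{t+1}) \,\big|\, T > t\bigr] \ge \delta
  \;\Longrightarrow\;
  \mathbb{E}[T] \le \frac{s_0}{\delta}.
\]
```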
“…This suggests trying to generate the offspring in a way that the expected fitness gain over the best-so-far solution is maximized. Clearly, this is not a successful idea for each and every problem, as easily demonstrated by examples like the distance and the trap functions [DJW02], where the fitness leads the algorithm into a local optimum, or the difficult-to-optimize monotonic functions constructed in [DJS+13, LS18, Len18], where the fitness leads to the optimum, but via a prohibitively long trajectory. Still, one might hope that for problems with a good fitness-distance correlation (and OM has the perfect fitness-distance correlation), maximizing the expected fitness gain is a good approach.…”
Section: Maximizing Drift Is Near-optimal (mentioning)
confidence: 99%
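
To make "expected fitness gain" concrete, here is a short Monte-Carlo sketch (a hypothetical setup, using standard bit mutation on OneMax) that estimates the one-step drift of the elitist (1+1)-EA at a parent with k one-bits, for several mutation parameters c:

```python
import random

def onemax_drift(k, n, c, samples=20_000):
    """Estimate E[max(f(xi') - f(xi), 0)] for standard bit mutation with
    rate c/n on OneMax, given a parent with k one-bits; the max(.., 0)
    reflects elitist acceptance."""
    total = 0
    for _ in range(samples):
        lost = sum(random.random() < c / n for _ in range(k))     # 1-bits flipped to 0
        won = sum(random.random() < c / n for _ in range(n - k))  # 0-bits flipped to 1
        total += max(won - lost, 0)
    return total / samples

for c in (0.5, 1.0, 2.0):
    print(c, onemax_drift(k=90, n=100, c=c))
```

Maximizing a quantity like this over the offspring distribution is the heuristic the quote discusses.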
“…For the (1+1)-EA, while for any constant c < 1 it is easy to see that the algorithm needs time O(n log n) to find the optimum of any monotone function [9], it was shown in a sequence of papers [9, 10, 18] that for c > 2.13…”
Section: Introduction (mentioning)
confidence: 99%
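
The slowdown for large c established in the cited papers appears on specially constructed monotone functions; OneMax itself remains easy for every constant c. Purely as a sketch of the measurement setup (all parameters assumed, OneMax standing in for a generic monotone function), one can time the (1+1)-EA for different c:

```python
import random

def runtime(n, c, cap=1_000_000):
    """Steps until the (1+1)-EA with mutation rate c/n first hits the
    OneMax optimum (the all-ones string), capped at `cap` iterations."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    for t in range(1, cap + 1):
        y = [1 - b if random.random() < c / n else b for b in x]
        fy = sum(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            return t
    return cap  # did not finish within the cap

for c in (0.5, 1.0, 2.5):
    print(c, sum(runtime(100, c) for _ in range(20)) / 20)
```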