2012
DOI: 10.1016/j.tcs.2012.01.010
The use of tail inequalities on the probable computational time of randomized search heuristics

Cited by 18 publications (17 citation statements)
References 31 publications

“…e²nµ² = (1 + o(1)) e²n³µ²/M² and p_min = min{M/(enµ), 1/(eµ)} = M/(enµ), noting that M = o(n) and M ≤ n. Apart from a small improvement in the constants, stemming from slightly more careful estimates, the result on the expected runtime is the same as the one in [Wit06], which is E[T] ≤ 3en max{µ ln(en), n}, see Theorem 1 of [Wit06] and recall that we count the number of iterations, that is, we ignore the µ fitness evaluations of the initial individuals. The tail bound, as discussed earlier, is stronger than the one in [ZLLH12] due to the stronger Chernoff bounds for sums of geometric random variables which are available now.…”
Section: Performance of the (µ+1) EA on LeadingOnes
confidence: 88%
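
The quoted statement concerns the (µ+1) EA with standard bit mutation on LeadingOnes. As a minimal, self-contained sketch (not the construction used in the cited analysis), the following Python snippet runs that algorithm, counts iterations as in the quote (ignoring the µ initial fitness evaluations), and prints the quoted guarantee 3en·max{µ ln(en), n} next to the empirical average; the parameter choices n = 50, µ = 5 and the first-index tie-breaking in survivor selection are assumptions made only for this illustration.

```python
import math
import random

def leading_ones(x):
    """LeadingOnes fitness: length of the maximal all-ones prefix of the bit string."""
    count = 0
    for bit in x:
        if bit == 1:
            count += 1
        else:
            break
    return count

def mu_plus_one_ea(n, mu, rng):
    """One run of a (mu+1) EA with standard bit mutation on LeadingOnes.
    Returns the number of iterations until an optimum enters the population
    (the mu initial fitness evaluations are not counted, as in the quote)."""
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    fits = [leading_ones(x) for x in pop]
    iterations = 0
    while max(fits) < n:
        iterations += 1
        parent = pop[rng.randrange(mu)]
        # standard bit mutation: flip each bit independently with probability 1/n
        child = [1 - b if rng.random() < 1.0 / n else b for b in parent]
        pop.append(child)
        fits.append(leading_ones(child))
        # elitist survivor selection: delete one individual of worst fitness
        # (ties broken by first index, an arbitrary choice for this sketch)
        worst = min(range(mu + 1), key=lambda i: fits[i])
        pop.pop(worst)
        fits.pop(worst)
    return iterations

if __name__ == "__main__":
    rng = random.Random(2012)
    n, mu, runs = 50, 5, 20
    times = [mu_plus_one_ea(n, mu, rng) for _ in range(runs)]
    bound = 3 * math.e * n * max(mu * math.log(math.e * n), n)
    print(f"mean iterations over {runs} runs: {sum(times) / runs:.0f}")
    print(f"quoted bound 3en*max(mu*ln(en), n): {bound:.0f}")
```
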
“…What comes closest to this work is the paper [ZLLH12], which also tries to establish runtime analysis beyond expectations in a formalized manner. The notion proposed in [ZLLH12], called probable computational time L(δ), is the smallest time T such that the algorithm under investigation within the first T fitness evaluations finds an optimal solution with probability at least 1 − δ.…”
Section: Related Work
confidence: 99%
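
To illustrate the definition of the probable computational time L(δ) in purely empirical terms, the sketch below uses the (1+1) EA on OneMax as a stand-in algorithm (an assumption of this example, not the setting of [ZLLH12]) and estimates L(δ) as the smallest budget within which a 1 − δ fraction of independent runs reaches the optimum.

```python
import math
import random

def one_max(x):
    """OneMax fitness: number of one-bits."""
    return sum(x)

def one_plus_one_ea(n, rng):
    """(1+1) EA on OneMax; returns the number of fitness evaluations until the optimum."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, evals = one_max(x), 1
    while fx < n:
        # standard bit mutation with rate 1/n, then elitist acceptance
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        fy = one_max(y)
        evals += 1
        if fy >= fx:
            x, fx = y, fy
    return evals

def empirical_probable_time(runtimes, delta):
    """Smallest budget T such that at least a (1 - delta) fraction of the
    observed runs found the optimum within T fitness evaluations."""
    ordered = sorted(runtimes)
    index = math.ceil((1 - delta) * len(ordered)) - 1
    return ordered[index]

if __name__ == "__main__":
    rng = random.Random(0)
    n, runs, delta = 50, 200, 0.1
    times = [one_plus_one_ea(n, rng) for _ in range(runs)]
    print(f"empirical estimate of L({delta}): {empirical_probable_time(times, delta)} evaluations")
```
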
“…The fitness-level method has also been applied to other elitist optimization methods, including elitist ant colony optimizers [12,27] and a binary particle swarm optimizer [37]. It gives rise to powerful tail inequalities [40] and it can be used to prove lower bounds as well, when combined with additional knowledge on transition probabilities [35]. Finally, Lehre [22] recently showed that the fitness-level method can be extended towards non-elitist EAs with additional mild conditions on transition probabilities and the population size.…”
Section: Theorem 2 (Fitness-level Method) For Two Sets
confidence: 99%
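
To make the quoted method concrete: in its basic form, the fitness-level method bounds the expected optimization time by the sum of the reciprocals of lower bounds p_i on the per-iteration probability of leaving fitness level i towards a better level. The sketch below evaluates this bound for the textbook case of the (1+1) EA on OneMax; the levels and probabilities are the standard ones for that example, not those of the publications cited above.

```python
import math

def fitness_level_upper_bound(level_probs):
    """Fitness-level upper bound on the expected optimization time:
    if p_i is a lower bound on the probability of leaving fitness level i
    towards a strictly better level in one iteration, then E[T] <= sum_i 1/p_i."""
    return sum(1.0 / p for p in level_probs)

if __name__ == "__main__":
    n = 100
    # Textbook levels for the (1+1) EA on OneMax: from a search point with i ones,
    # flipping exactly one of the n - i zero-bits (and nothing else) happens with
    # probability at least (n - i) / (e * n), a valid lower bound for leaving level i.
    probs = [(n - i) / (math.e * n) for i in range(n)]
    print(f"fitness-level bound for OneMax, n = {n}: {fitness_level_upper_bound(probs):.1f}")
    # The bound equals e * n * H_n, i.e. O(n log n).
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    print(f"e * n * H_n = {math.e * n * harmonic:.1f}")
```
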
“…There were slight variations on this, considering either the number of iterations/generations or the number of fitness evaluations. Jansen and Zarges [7] and Zhou et al [18] pointed out that there is a gap between the empirical results and the theoretical results obtained on the optimization time. Theoretical research most often yields asymptotic results on finding the global optimum, while practitioners are more concerned with achieving a good result within a reasonable time budget.…”
Section: Introduction
confidence: 99%