2010
DOI: 10.1007/978-3-642-12239-2_47

A New Selection Ratio for Large Population Sizes

Abstract: Motivated by parallel optimization, we study the Self-Adaptation algorithm for large population sizes. We first show that the current version of this algorithm does not reach the theoretical bounds; we then propose a very simple modification in the selection part of the evolution process. We show that this simple modification leads to a large improvement of the speed-up when the population size is large.
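The paper's exact selection modification is not spelled out on this page. As a rough illustration only, here is a minimal self-adaptive (μ/μ, λ)-ES sketch in Python using the capped rule μ = min{λ/4, d} quoted by the citing works below; the sphere objective, the learning rate τ, and all constants are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of a self-adaptive (mu/mu, lambda)-ES with a capped
# selection ratio mu = min(lambda/4, d). Illustrative only: the objective,
# learning rate, and constants are assumptions, not the paper's settings.
import numpy as np

def sphere(x):
    # placeholder objective: f(x) = ||x||^2
    return float(np.dot(x, x))

def sa_es(d=10, lam=256, iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    mu = min(lam // 4, d)              # capped selection ratio (assumed rule)
    tau = 1.0 / np.sqrt(2.0 * d)       # common self-adaptation learning rate
    x = rng.standard_normal(d)
    sigma = 1.0
    for _ in range(iterations):
        # each offspring first mutates its own step-size, then its position
        sigmas = sigma * np.exp(tau * rng.standard_normal(lam))
        offspring = x + sigmas[:, None] * rng.standard_normal((lam, d))
        fitness = np.array([sphere(z) for z in offspring])
        best = np.argsort(fitness)[:mu]       # truncation selection
        x = offspring[best].mean(axis=0)      # intermediate recombination
        sigma = float(sigmas[best].mean())    # recombine step-sizes as well
    return x

if __name__ == "__main__":
    print("final fitness:", sphere(sa_es()))
```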

Cited by 6 publications (13 citation statements) · References 6 publications (12 reference statements)
“…We have investigated in particular large values of λ. Our results suggest that the optimal μ is monotonically increasing in λ, as opposed to the rule μ = min{λ/4, d} proposed in [9], but that this latter rule nevertheless gives a convergence rate close to the optimal one. We have confirmed as well that for the rules μ = λ/4 and μ = λ/2, the convergence rate does not scale linearly in ln(λ) and is thus sub-optimal.…”
Section: Numerical Experiments (contrasting)
confidence: 59%
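A quick numeric check (not taken from the cited papers; d = 10 is an arbitrary choice) shows why the two rules diverge for large λ: λ/4 grows without bound, while the capped rule saturates at the dimension d.

```python
# Compare the two parent-number rules as lambda grows (d = 10, arbitrary):
# lambda/4 keeps increasing, while min(lambda/4, d) saturates at d.
d = 10
print("lambda  lambda/4  min(lambda/4, d)")
for lam in (8, 32, 128, 512, 2048):
    print(f"{lam:6d}  {lam // 4:8d}  {min(lam // 4, d):16d}")
```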
“…2 (left), we plotted the normalized optimal convergence rates and the normalized convergence rates relative to the rule μ = min{λ/4, d}, as a function of λ (log-scale for λ), for dimensions 2, 10, 30 and 100 (from top to bottom). Right: plots of the values μ_th (solid lines with markers) giving the optimal μ relative to the quadratic approximation (9), together with the extremities of the range of μ values (shown with markers) giving convergence rates within 0.2 of the optimal numerical value. The dimensions represented are 2, 10, 30 and 100 (from bottom to top).…”
Section: Numerical Experiments (mentioning)
confidence: 98%
“…In order to check the efficiency of the on-line tuning of c_c, c_1, c_μ done by Self-CMA-ES, it should be compared to the off-line tuning of the same parameters (e.g., using SMAC, see Section 2.1) on the plain CMA-ES. However, because it was demonstrated in [3,21] that the performance of CMA-ES (or other Evolution Strategies) with a large λ is highly dependent on μ and the adaptation of σ, and also because SMAC experiments are very costly, it was decided to run one single SMAC campaign, tuning μ and σ_0 (the initial value of σ) for both algorithms (using the adaptation scheme advocated in [3,21] is left for further work), and c_c, c_1, c_μ for CMA-ES. Table 1 describes the experimental conditions.…”
Section: On-line vs Off-line Tuning of c_c, c_1, c_μ (mentioning)
confidence: 99%
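SMAC itself is not reproduced here. As a hedged stand-in for the off-line tuning step just described, a plain random search over (μ, σ_0) can play the same role; `run_es` below is a hypothetical wrapper that runs one ES campaign with the given parameters and returns the best fitness reached.

```python
# Illustrative stand-in for off-line parameter tuning: a random search over
# (mu, sigma0). SMAC is not reproduced; `run_es` is a hypothetical callable
# that runs one costly ES campaign and returns its best fitness.
import random

def tune_offline(run_es, lam, budget=50, seed=1):
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(budget):
        mu = rng.randint(1, lam // 2)          # parent number to try
        sigma0 = 10.0 ** rng.uniform(-2, 1)    # initial step-size, log-uniform
        score = run_es(mu=mu, sigma0=sigma0)   # one full ES run
        if score < best_score:
            best_params, best_score = (mu, sigma0), score
    return best_params, best_score
```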
“…And increasing λ without any further parameter tuning has been experimentally demonstrated to perform poorly for CMA-ES and other types of Evolution Strategies: [3] proposes a new update strategy for the global step-size; [21,22] suggest modifying the ratio between the number of parents and the number of offspring. This paper investigates another approach to improving the performance of CMA-ES in a distributed setting: assuming some given number of cores, the use of computing resources is optimized by fixing the population size λ to this number of cores.…”
Section: Introduction (mentioning)
confidence: 99%
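As a hedged sketch of that distribution scheme (standard-library Python only; the ES update loop is elided and the objective is a placeholder), one generation's λ evaluations can be mapped onto λ cores like this:

```python
# Sketch of "lambda = number of cores": one offspring evaluation per core per
# generation. The objective and the surrounding ES loop are placeholders.
import os
import random
from concurrent.futures import ProcessPoolExecutor

def fitness(x):
    return sum(v * v for v in x)       # placeholder sphere objective

def evaluate_generation(offspring, lam):
    # dispatch the lam evaluations of one generation across lam workers
    with ProcessPoolExecutor(max_workers=lam) as pool:
        return list(pool.map(fitness, offspring))

if __name__ == "__main__":
    lam = os.cpu_count() or 1          # population size fixed to core count
    offspring = [[random.gauss(0.0, 1.0) for _ in range(5)] for _ in range(lam)]
    print(evaluate_generation(offspring, lam))
```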