2023
DOI: 10.48550/arxiv.2303.15543
Preprint

The Impact of Asynchrony on Parallel Model-Based EAs

Abstract: In a parallel EA one can strictly adhere to the generational clock and wait for all evaluations in a generation to be done. However, this idle time limits the throughput of the algorithm and wastes computational resources. Alternatively, an EA can be made asynchronous parallel. However, EAs using classic recombination and selection operators (GAs) are known to suffer from an evaluation time bias, which also influences the performance of the approach. Model-Based Evolutionary Algorithms (MBEAs) are more scalable…
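To make the synchronous/asynchronous distinction in the abstract concrete, here is a minimal sketch (not from the paper; the toy fitness function, population handling, and mutation operator are illustrative assumptions) contrasting a generational parallel loop, which blocks until every evaluation in the generation finishes, with a steady-state asynchronous loop that submits a new candidate as soon as any worker becomes free:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def evaluate(x):
    """Toy fitness: sphere function whose evaluation time grows with |x|.
    (Illustrative stand-in for an expensive simulation.)"""
    time.sleep(0.01 * (1 + abs(x)))  # slower evaluations for larger |x|
    return x * x

def synchronous_generation(pop, pool):
    # Generational clock: block until *all* evaluations are done,
    # leaving workers idle once they finish their own candidate.
    return list(pool.map(evaluate, pop))

def asynchronous_steady_state(pool, n_evals, mutate):
    # Steady state: as soon as any evaluation returns, immediately
    # submit a new candidate, so no worker ever idles. Note that
    # fast-to-evaluate candidates complete (and hence reproduce) more
    # often, which is the evaluation time bias the abstract refers to.
    population = [random.uniform(-1, 1) for _ in range(4)]
    pending = {pool.submit(evaluate, x): x for x in population}
    done_count = 0
    best = (float("inf"), None)
    while done_count < n_evals:
        finished, _ = wait(pending, return_when=FIRST_COMPLETED)
        for fut in finished:
            parent = pending.pop(fut)
            done_count += 1
            best = min(best, (fut.result(), parent))
            child = mutate(parent)  # refill the freed evaluation slot
            pending[pool.submit(evaluate, child)] = child
    return best

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as pool:
        pop = [random.uniform(-1, 1) for _ in range(8)]
        print("sync fitnesses:", synchronous_generation(pop, pool))
        mutate = lambda x: x + random.gauss(0, 0.1)
        print("async best:", asynchronous_steady_state(pool, 32, mutate))
```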

Cited by 1 publication (1 citation statement)
References 19 publications
“…Evidence of this effect can be seen in Figure 6, where the mean amount of time spent per generation is significantly lower for CoDeepNEAT-AES than for CoDeepNEAT. While asynchronous EAs are known to have such an evaluation bias, and efforts have been developed to avoid it (Guijt et al., 2023), it is not undesirable in the case of neuroevolution. Discovering DNNs that are faster to train is often a secondary goal of many architecture search algorithms, and as Figures 5 and 7 show, CoDeepNEAT-AES is able to achieve the same quality of solutions as CoDeepNEAT while taking much less time.…”
Section: Discussion and Future Work
Citation type: mentioning
confidence: 99%
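The quoted point, that asynchronous evaluation systematically favors fast-to-evaluate candidates, can be illustrated with a small event-driven simulation (a hypothetical toy model, not the cited papers' experimental setup): two genotype families of equal fitness but different evaluation times compete for reproduction slots in a steady-state loop, and the fast family completes far more evaluations, and so gets far more reproduction opportunities, per unit of wall-clock time.

```python
import heapq

def simulate_async_evals(eval_times, workers=4, horizon=100.0):
    """Event-driven simulation of asynchronous steady-state evaluation.

    eval_times: dict mapping genotype family -> evaluation duration.
    Each time a worker finishes, the finished family immediately
    reproduces (offspring inherit the family's evaluation time).
    Returns completed-evaluation counts per family within `horizon`.
    """
    counts = {family: 0 for family in eval_times}
    families = list(eval_times)
    events = []  # min-heap of (finish_time, family)
    # Seed the workers round-robin with the available families.
    for i in range(workers):
        family = families[i % len(families)]
        heapq.heappush(events, (eval_times[family], family))
    while events:
        t, family = heapq.heappop(events)
        if t > horizon:
            break  # heap is time-ordered, so all remaining events are later
        counts[family] += 1
        # The freed worker immediately evaluates this family's offspring.
        heapq.heappush(events, (t + eval_times[family], family))
    return counts

# Equal fitness, but "fast" evaluates 5x quicker than "slow":
print(simulate_async_evals({"fast": 1.0, "slow": 5.0}))
# The fast family completes ~5x as many evaluations within the horizon,
# i.e. ~5x the reproduction opportunities despite identical fitness.
```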