2022
DOI: 10.1007/978-3-031-19787-1_3

EAGAN: Efficient Two-Stage Evolutionary Architecture Search for GANs

Abstract: Generative adversarial networks (GANs) have proven successful in image generation tasks. However, GAN training is inherently unstable. Although many works try to stabilize it by manually modifying the GAN architecture, doing so requires much expertise. Neural architecture search (NAS) has become an attractive way to search for GAN architectures automatically. Early NAS-GAN methods search only the generator to reduce search complexity, but this leads to a sub-optimal GAN. Some recent works try to search both the generator (G) and the discriminator (D), …
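Going by the title and the truncated abstract alone, the core idea appears to be decoupling the search into two stages: evolve generator architectures first, then discriminator architectures. The toy sketch below illustrates only that control flow; the operation set, mutation scheme, and the `toy_fitness` stand-in are invented placeholders, not EAGAN's actual search space or evaluation procedure.

```python
import random

# Hypothetical candidate operations; EAGAN's real search space differs.
OPS = ["conv3x3", "conv5x5", "deconv", "nearest_up", "bilinear_up"]

def sample_arch(n_cells=3):
    # One operation choice per cell of the network.
    return [random.choice(OPS) for _ in range(n_cells)]

def mutate(arch):
    # Resample the operation of one randomly chosen cell.
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def toy_fitness(gen_arch, disc_arch):
    # Stand-in for real GAN training plus IS/FID scoring of the (G, D) pair.
    return (hash(tuple(gen_arch + disc_arch)) % 1000) / 1000.0

def evolve(evaluate, generations=10, pop_size=8):
    # (mu + lambda)-style loop: mutate, evaluate, keep the fittest.
    scored = [(a, evaluate(a)) for a in (sample_arch() for _ in range(pop_size))]
    for _ in range(generations):
        offspring = [(c, evaluate(c)) for c in (mutate(a) for a, _ in scored)]
        scored = sorted(scored + offspring, key=lambda p: p[1], reverse=True)[:pop_size]
    return scored[0][0]

# Stage 1: search the generator against a fixed discriminator architecture.
fixed_D = sample_arch()
best_G = evolve(lambda g: toy_fitness(g, fixed_D))

# Stage 2: search the discriminator with the stage-1 generator fixed.
best_D = evolve(lambda d: toy_fitness(best_G, d))
print("best G:", best_G)
print("best D:", best_D)
```

Splitting the search this way means each stage optimizes one network against a fixed partner, which sidesteps the instability of evolving both sides of the adversarial game at once.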

Cited by 9 publications (10 citation statements)
References 32 publications
“…According to the optimizer used, existing NAS algorithms can be broadly grouped into three categories: reinforcement learning (RL) based NAS methods [68], [69], gradient-based NAS methods [39], and evolutionary computation-based NAS (ENAS) methods [59], [70]. The RL-based methods demand high computational resources because tens of actions are needed to obtain a positive reward, often requiring thousands of graphics processing units (GPUs) running for several days even on a medium-scale dataset [42]. The gradient-based methods are more efficient than the RL-based methods.…”
Section: B. Neural Architecture Search
confidence: 99%
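The statement above names gradient-based NAS only as a category. As an assumed illustration (not taken from reference [39] or the citing paper) of why such methods avoid costly RL rollouts, here is a minimal DARTS-style continuous-relaxation sketch: each layer blends all candidate operations with softmax-normalized architecture parameters, so the architecture choice itself receives ordinary gradients.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    # All candidate ops run in parallel; their outputs are blended by
    # softmax-normalized architecture parameters (alphas).
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

mixed = MixedOp(channels=8)
x = torch.randn(2, 8, 16, 16)
loss = mixed(x).pow(2).mean()
loss.backward()
# Architecture choices get ordinary gradients -- no RL rollouts needed.
print(mixed.alpha.grad)
```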
“…Furthermore, existing NAS methods train each discovered architecture individually to obtain its performance, resulting in substantial computational overhead. Ying [42] proposed a weight-sharing strategy to enhance search efficiency by constructing a large computational graph in which each subgraph represents a neural network architecture. Consequently, all sub-network architectures can be evaluated without separate training by sharing weights within the large network.…”
Section: B. Neural Architecture Search
confidence: 99%
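A minimal sketch of the weight-sharing idea described above, assuming a simple path-style supernet (the layout and names are illustrative, not the construction from [42]): one large network holds weights for every candidate operation, and each sub-architecture is a path through it that can be scored without retraining.

```python
import random
import torch
import torch.nn as nn

class SharedLayer(nn.Module):
    # One supernet layer: every candidate op keeps its own shared weights.
    def __init__(self, channels):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])

    def forward(self, x, choice):
        # A sub-network commits to exactly one candidate per layer.
        return self.candidates[choice](x)

class SuperNet(nn.Module):
    def __init__(self, channels=8, depth=4):
        super().__init__()
        self.layers = nn.ModuleList(SharedLayer(channels) for _ in range(depth))

    def forward(self, x, arch):
        for layer, choice in zip(self.layers, arch):
            x = layer(x, choice)
        return x

supernet = SuperNet()
x = torch.randn(2, 8, 16, 16)
# Score several sub-architectures using the *same* shared weights:
for _ in range(3):
    arch = [random.randrange(3) for _ in supernet.layers]
    score = supernet(x, arch).mean().item()  # stand-in for a validation metric
    print(arch, score)
```

Every call to `supernet(x, arch)` reuses the same parameters, which is precisely what removes the per-architecture training cost the quoted passage describes.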