2021
DOI: 10.48550/arxiv.2102.08578
Preprint

Evolving GAN Formulations for Higher Quality Image Synthesis

Abstract: Generative Adversarial Networks (GANs) have extended deep learning to complex generation and translation tasks across different data modalities. However, GANs are notoriously difficult to train: mode collapse and other instabilities in the training process often degrade the quality of the generated results, such as images. This paper presents a new technique called TaylorGAN for improving GANs by discovering customized loss functions for each of a GAN's two networks. The loss functions are parameterized as Taylor …
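The idea the abstract describes can be illustrated with a minimal sketch: each network's loss is a truncated Taylor expansion whose coefficient vector serves as the searchable genome. The function and variable names below are illustrative assumptions, not the paper's actual code, and the coefficient values are placeholders.

```python
import torch

def taylor_loss(d_out: torch.Tensor, coeffs: list, center: float = 0.0) -> torch.Tensor:
    """Loss defined as a truncated Taylor expansion around `center`:
    L(x) = sum_k coeffs[k] * (x - center) ** k, averaged over a batch
    of discriminator outputs `d_out`."""
    x = d_out - center
    total = torch.zeros_like(x)
    power = torch.ones_like(x)  # (x - center) ** 0
    for c in coeffs:
        total = total + c * power
        power = power * x
    return total.mean()

# Hypothetical genomes: one coefficient vector per network. An evolutionary
# search would operate on these vectors; the numbers are placeholders, not
# values from the paper.
gen_coeffs = [0.0, -1.0, 0.1]
disc_coeffs = [0.0, 1.0, -0.05]
fake_scores = torch.randn(8)  # stand-in discriminator outputs
g_loss = taylor_loss(fake_scores, gen_coeffs)
```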

Cited by 2 publications (3 citation statements)
References 30 publications

“…A large number of experiments on real image datasets showed that EMOCGAN is superior to advanced methods in terms of visual fidelity of appearance and target prominence. Gonzalez et al. (2021) developed TaylorGAN, a technique that uses loss-function learning to improve GANs. The aim is to improve the quality of the generated images by enhancing GANs through Taylor expansions of the loss functions of the generator and discriminator networks.…”
Section: Evolved GANs in Multi-Objective Optimization
confidence: 99%
“…Other approaches propose improving GANs by discovering customized loss functions for each of a GAN's networks. For example, TaylorGAN [9] treats the GAN losses as Taylor expansions and optimizes custom definitions through multi-objective evolution. These losses were meant to act as an alternative to traditional GAN losses such as the Wasserstein loss [1] or the minimax loss.…”
Section: Related Work
confidence: 99%
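The multi-objective evolution mentioned in the statement above can be sketched as follows. This is a generic Pareto-front loop with Gaussian mutation, assumed for illustration rather than the selection scheme the paper may actually use, and `evaluate` is a random stub standing in for the expensive GAN training and scoring step.

```python
import random

def evaluate(genome):
    """Stub for the expensive step: train a GAN with the candidate loss
    coefficients and score it on two objectives to minimize (e.g. an image
    quality metric and a training stability metric). Random values here."""
    return (random.random(), random.random())

def dominates(a, b):
    """Pareto dominance for minimization: `a` is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(pop_size=20, n_coeffs=3, generations=10):
    pop = [[random.gauss(0.0, 1.0) for _ in range(n_coeffs)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(g, evaluate(g)) for g in pop]
        # Non-dominated (Pareto-front) survivors ...
        front = [g for g, f in scored
                 if not any(dominates(f2, f) for _, f2 in scored)]
        # ... plus Gaussian-mutated offspring to refill the population.
        pop = front + [[c + random.gauss(0.0, 0.1) for c in random.choice(front)]
                       for _ in range(pop_size - len(front))]
    return pop

pareto_candidates = evolve()
```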
“…While evolutionary computation (EC) has been utilized before to help optimize and search for successful GAN architectures [28] and loss functions [9], we could not find an evolutionary system that can perform neural architecture search for a SimGAN, adjust custom loss functions, train and evaluate models across multiple objectives, and optimize hyperparameters simultaneously while being customizable to fit any data problem, including 1D data.…”
Section: Introduction
confidence: 99%