2020
DOI: 10.1007/978-3-030-58574-7_33

AE-OT-GAN: Training GANs from Data Specific Latent Distribution

Abstract: Though generative adversarial networks (GANs) are prominent models for generating realistic and crisp images, they are unstable to train and suffer from mode collapse/mixture. These problems stem from approximating an intrinsically discontinuous distribution transform map with continuous DNNs. The recently proposed AE-OT model addresses the discontinuity problem by explicitly computing the discontinuous optimal transport map in the latent space of an autoencoder. Though free of mode collapse/mixture, the …
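To make the abstract's pipeline concrete, the sketch below is a minimal, runnable PyTorch rendering of the idea: an autoencoder learns a data-specific latent distribution, new latent codes are produced by an explicit sampling step outside any network, and the decoder doubles as the GAN generator. Everything here is a hypothetical simplification; in particular, the pairwise interpolation standing in for the semi-discrete optimal transport map is ours, not the authors' implementation.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, n = 8, 64, 256

# Autoencoder halves; the decoder will also serve as the GAN generator.
encoder = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
disc = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

x = torch.randn(n, data_dim)  # stand-in for a batch of real images

# 1) Autoencoder step: embed the data into a data-specific latent distribution.
z = encoder(x)
recon_loss = ((decoder(z) - x) ** 2).mean()

# 2) Explicit latent sampling. In AE-OT this is the discontinuous
#    semi-discrete OT map computed outside any network; here we crudely
#    mimic it by interpolating between random pairs of latent codes.
pairs = torch.randint(0, n, (n, 2))
t = torch.rand(n, 1)
z_new = t * z[pairs[:, 0]] + (1 - t) * z[pairs[:, 1]]

# 3) Adversarial step: the decoder, acting as the generator, is refined
#    against a discriminator on the decoded latent samples.
fake = decoder(z_new.detach())
d_loss = -(torch.sigmoid(disc(x)).log().mean()
           + (1 - torch.sigmoid(disc(fake))).log().mean())
print(float(recon_loss), float(d_loss))
```

Because the generator starts from trained decoder weights rather than from scratch, the adversarial phase begins near the data manifold, which is how the combined model aims to avoid the instability described above.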

Cited by 13 publications (5 citation statements) · References 13 publications
“…Thus, Dist-GAN slows down the convergence of the discriminator and effectively alleviates gradient vanishing. An et al. [48] designed AE-OT-GAN, which consists of an autoencoder and a discriminator, to alleviate the mode collapse problem in image generation. In AE-OT-GAN, the AE and the GAN share the same module, which plays the role of the decoder in the AE and of the generator in the GAN.…”
Section: Combination of GAN and AE
confidence: 99%
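The weight sharing described in this statement fits in a few lines. Below is a hypothetical toy illustration (not the AE-OT-GAN code): a single module G receives gradients from the reconstruction loss in its decoder role and from the adversarial loss in its generator role.

```python
import torch
import torch.nn as nn

E = nn.Linear(64, 8)   # encoder
G = nn.Linear(8, 64)   # the shared module: decoder == generator
D = nn.Linear(64, 1)   # discriminator

opt = torch.optim.Adam(list(E.parameters()) + list(G.parameters()), lr=1e-4)

x = torch.randn(32, 64)   # stand-in batch of real images
z = torch.randn(32, 8)    # stand-in for OT-mapped latent codes

recon = ((G(E(x)) - x) ** 2).mean()         # G in its decoder role
adv = -torch.sigmoid(D(G(z))).log().mean()  # G in its generator role

opt.zero_grad()
(recon + adv).backward()  # both losses update G's weights
opt.step()
```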
“…This formulation leads to the generative model [132]. (Moreover, a few models with some technical variations [4,5,98] are related to this category, but for simplicity we do not describe them here.)…”
Section: Combination
confidence: 99%
“…Optimal transport (OT) is a powerful tool for computing the Wasserstein distance between probability measures and is widely used to model natural and social phenomena in economics (Galichon 2016), optics (Glimm and Oliker 2003), biology (Schiebinger et al. 2019), physics (Jordan, Kinderlehrer, and Otto 1998), and other scientific fields. Recently, OT has been successfully applied in machine learning and statistics, for example in parameter estimation for Bayesian non-parametric models (Nguyen 2013), computer vision (Arjovsky, Chintala, and Bottou 2017; Courty et al. 2017; Tolstikhin et al. 2018; An et al. 2020a; Lei et al. 2020; An et al. 2020b), and natural language processing (Kusner et al. 2015; Yurochkin et al. 2019). In these areas, complex probability measures are approximated by summations of Dirac measures supported on the samples.…”
Section: Introduction
confidence: 99%
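The last sentence of this statement is easy to make concrete: when two measures are approximated by uniform sums of Dirac masses on samples, the optimal transport problem reduces to an optimal matching between the two sample sets. A minimal sketch, using SciPy's Hungarian solver in place of a dedicated OT library (the Gaussian samples are invented for illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(100, 2))  # samples of measure mu
y = rng.normal(2.0, 1.0, size=(100, 2))  # samples of measure nu

cost = cdist(x, y, metric="sqeuclidean")  # pairwise transport costs
row, col = linear_sum_assignment(cost)    # optimal one-to-one coupling
w2_sq = cost[row, col].mean()             # empirical squared 2-Wasserstein
print(f"W2^2(mu, nu) ~ {w2_sq:.2f}")      # roughly 8: the squared distance
                                          # between the two Gaussian means
```

With equal weights, the optimal coupling of the discrete OT problem is a permutation matrix (an extreme point of the Birkhoff polytope), which is why an assignment solver suffices here; general weights require a linear-programming or Sinkhorn solver.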