2017
DOI: 10.48550/arxiv.1707.04385
Preprint

f-GANs in an Information Geometric Nutshell

Abstract: Nowozin et al. showed last year how to extend the GAN principle to all f-divergences. The approach is elegant but falls short of a full description of the supervised game, and says little about the key player, the generator: for example, what does the generator actually converge to if solving the GAN game means convergence in some space of parameters? How does that provide hints on the generator's design, and how does it compare to the flourishing but almost exclusively experimental literature on the subject? In this paper, …
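For context, the f-GAN objective the abstract refers to can be stated in its standard variational form due to Nowozin et al. (2016) (reproduced here from general knowledge of that work, not from this page):

```latex
\min_{\theta} \max_{\omega} \;
\mathbb{E}_{x \sim P}\!\left[ T_\omega(x) \right]
\;-\;
\mathbb{E}_{x \sim Q_\theta}\!\left[ f^{\star}\!\big( T_\omega(x) \big) \right]
```

where $P$ is the data distribution, $Q_\theta$ the generator's distribution, $T_\omega$ the discriminator (critic), and $f^{\star}$ the Fenchel conjugate of the convex generator $f$ of the divergence; at the inner maximum the value lower-bounds (and, with an unrestricted critic, equals) $D_f(P \,\|\, Q_\theta)$.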

Cited by 2 publications (2 citation statements). References 24 publications.

“…Hu et al. (2017) provide a new formulation of GANs and variational autoencoders (VAEs), thus unifying the two most popular methods for training deep generative models. We would also like to mention other recent interesting research on GANs, e.g., (Guo et al., 2017; Sinn & Rawat, 2017; Nock et al., 2017; Mescheder et al., 2017; Tolstikhin et al., 2017; Heusel et al., 2017).…”
Section: Related Work
confidence: 97%
“…The first is to use stochastic variational inference (Kingma & Welling, 2013; Kingma et al., 2014) to optimize a lower bound on the data likelihood. The other is to use samples as a proxy to minimize the divergence between the model distribution and the real distribution through a two-player game (Goodfellow et al., 2014; Salimans et al., 2016), maximum mean discrepancy (Li et al., 2015; Dziugaite et al., 2015; Li et al., 2017b), f-divergence (Nowozin et al., 2016; Nock et al., 2017), or, most recently, the Wasserstein distance (Gulrajani et al., 2017).…”
Section: Introduction
confidence: 99%
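As a concrete illustration of the f-divergence route mentioned in the citation above, the following is a minimal numerical sketch (not code from the paper; the critic `T_star` and all names are illustrative) of the variational lower bound that f-GAN discriminators maximize, here for the KL divergence between two Gaussians where the optimal critic is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D Gaussians: P = N(0, 1), Q = N(1, 1).  True KL(P || Q) = 0.5.
p_samples = rng.normal(0.0, 1.0, 100_000)
q_samples = rng.normal(1.0, 1.0, 100_000)

def fgan_kl_lower_bound(T, xs_p, xs_q):
    """Variational lower bound on KL(P || Q):
    E_P[T(x)] - E_Q[exp(T(x) - 1)] <= KL(P || Q), for any critic T,
    since f*(t) = exp(t - 1) is the Fenchel conjugate of f(u) = u log u."""
    return T(xs_p).mean() - np.exp(T(xs_q) - 1.0).mean()

# The optimal critic for KL is T*(x) = 1 + log p(x)/q(x);
# for these two Gaussians, log p(x)/q(x) = 0.5 - x.
T_star = lambda x: 1.0 + (0.5 - x)

bound = fgan_kl_lower_bound(T_star, p_samples, q_samples)
print(bound)  # close to the true KL of 0.5
```

In an actual f-GAN, `T_star` would be a neural network trained by gradient ascent on this bound while the generator descends on it; the closed-form critic here just verifies that the bound is tight at the optimum.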