2018
DOI: 10.48550/arxiv.1802.06132
Preprint

Interaction Matters: A Note on Non-asymptotic Local Convergence of Generative Adversarial Networks

Cited by 34 publications (23 citation statements). References: 0 publications.

“…In this paper, we only consider the statistical problem of how well GANs learn density, assuming the optimization, say (3.2), can be done to sufficient accuracy. Admittedly, computation of GANs is a considerably harder question (Mescheder, Nowozin, and Geiger, 2017; Daskalakis, Ilyas, Syrgkanis, and Zeng, 2017; Liang and Stokes, 2018; Arbel, Sutherland, Bińkowski, and Gretton, 2018; Lucic, Kurach, Michalski, Gelly, and Bousquet, 2017), which we leave as future work.…”
Section: Conclusion and Discussion
confidence: 99%
“…Despite its popularity, this algorithm fails to converge even for simple bilinear zero-sum games [41, 39, 14, 2, 32]. This failure was fixed by adding negative momentum or by using primal-dual methods proposed by [22, 21, 8, 13, 15, 33].…”
Section: Introduction
confidence: 99%
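
The bilinear failure mode quoted above is easy to reproduce numerically. Below is a minimal sketch (our own illustration, not code from the cited works): simultaneous gradient descent ascent on f(x, y) = x * y spirals away from the unique equilibrium (0, 0), while the extragradient method (one primal-dual-style fix) contracts toward it.

# Minimal numerical sketch (our illustration, not from the cited works)
# of the failure quoted above: on the bilinear zero-sum game
# f(x, y) = x * y, simultaneous gradient descent ascent (GDA) spirals
# away from the unique equilibrium (0, 0), while the extragradient
# method contracts toward it.

def gda_bilinear(x, y, lr=0.1, steps=1000):
    """Simultaneous GDA on f(x, y) = x * y: x descends, y ascends."""
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy
    return x, y

def extragradient_bilinear(x, y, lr=0.1, steps=1000):
    """Extragradient on f(x, y) = x * y: take a look-ahead half step,
    then update with the gradients evaluated at the look-ahead point."""
    for _ in range(steps):
        xh, yh = x - lr * y, y + lr * x    # look-ahead point
        x, y = x - lr * yh, y + lr * xh    # update from look-ahead grads
    return x, y

print(gda_bilinear(1.0, 1.0))            # norm grows by sqrt(1 + lr**2) per step
print(extragradient_bilinear(1.0, 1.0))  # norm shrinks toward (0, 0)

With lr = 0.1, each GDA step multiplies the distance to the origin by sqrt(1.01) > 1, while each extragradient step multiplies it by sqrt(0.9901) < 1, so the two runs separate cleanly after a few hundred steps.
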
“…Most existing theories focus on the statistical properties of GANs at the global optimum [15, 16, 20, 87]. On the training side, however, gradient descent ascent enjoys efficient convergence to a global optimum only when the loss function is convex-concave, and only efficient convergence to a critical point in general settings [37, 38, 48, 53, 71, 73, 75, 77, 78]. Due to the extreme non-linearity of the networks in both the generator and the discriminator, it is highly unlikely that the training objective of GANs is convex-concave.…”
Section: Introduction
confidence: 99%
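
By contrast, the benign regime this quote points to is easy to exhibit. The sketch below (again our own illustration, on a hypothetical quadratic objective not drawn from the cited references) shows plain simultaneous GDA contracting to the saddle point when the objective is strongly convex in x and strongly concave in y, a condition strictly stronger than convex-concavity: the bilinear game above is convex-concave, yet GDA diverges on it.

# Minimal sketch (hypothetical objective, chosen for illustration):
# f(x, y) = x**2 / 2 + x * y - y**2 / 2 is strongly convex in x and
# strongly concave in y, and simultaneous GDA contracts to its saddle
# point (0, 0).

def gda_strongly_convex_concave(x, y, lr=0.1, steps=200):
    for _ in range(steps):
        gx = x + y                        # df/dx
        gy = x - y                        # df/dy
        x, y = x - lr * gx, y + lr * gy   # descend in x, ascend in y
    return x, y

print(gda_strongly_convex_concave(1.0, 1.0))  # very close to (0.0, 0.0)

Here the per-step update matrix has spectral radius sqrt((1 - lr)**2 + lr**2) < 1, so the iterates contract geometrically; the strong curvature in both players is what the bilinear game lacks.
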