2019
DOI: 10.1007/978-3-030-31978-6_7
Beyond Local Nash Equilibria for Adversarial Networks

Abstract: Save for some special cases, current training methods for Generative Adversarial Networks (GANs) are at best guaranteed to converge to a 'local Nash equilibrium' (LNE). Such LNEs, however, can be arbitrarily far from an actual Nash equilibrium (NE), which implies that there are no guarantees on the quality of the found generator or classifier. This paper proposes to model GANs explicitly as finite games in mixed strategies, thereby ensuring that every LNE is an NE. With this formulation, we propose a solution …

Cited by 20 publications (12 citation statements)
References 22 publications (34 reference statements)
“…where x refers to the training data sampled from the data distribution p data (x), and z to a noise variable sampled from some prior p(z). The discriminator is thus trained to maximise V (D), while the generator aims at minimising V (D); the two networks play a minimax game until a Nash equilibrium is (hopefully) reached (Goodfellow et al, 2014;Che et al, 2016;Oliehoek et al, 2018).…”
Section: Wasserstein Generative Adversarial Network – Gradient Penalty
confidence: 99%
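The value function quoted above can be made concrete with a toy Monte-Carlo estimate. The sketch below is illustrative only, not the cited papers' implementation: the 1-D `generator` and logistic `discriminator` (and their parameters `theta`, `w`, `b`) are hypothetical stand-ins, and the estimate is V(D) = E[log D(x)] + E[log(1 − D(G(z)))], which the discriminator maximises and the generator minimises.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, theta=2.0):
    """Hypothetical 1-D generator: maps noise z ~ p(z) to samples."""
    return theta * z

def discriminator(x, w=1.0, b=0.0):
    """Hypothetical logistic discriminator: P(x is real)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def value_fn(x_real, z, w=1.0, b=0.0, theta=2.0):
    """Monte-Carlo estimate of V(D) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    d_real = discriminator(x_real, w, b)
    d_fake = discriminator(generator(z, theta), w, b)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

x_real = rng.normal(loc=2.0, scale=1.0, size=1000)  # samples from p_data(x)
z = rng.normal(size=1000)                           # noise from the prior p(z)
v = value_fn(x_real, z)                             # always negative: both terms are logs of probabilities
```

In an actual GAN, the gradient of this estimate would be ascended in the discriminator's parameters and descended in the generator's, which is exactly the minimax game the quoted passage describes.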
“…More formally, we can define a value function as follows: where x refers to the training data sampled from the data distribution p data (x), and z to a noise variable sampled from some prior p(z). The discriminator is thus trained to maximize V (D), while the generator aims at minimizing V (D); the two networks play a minimax game until a Nash equilibrium is (hopefully) reached (Goodfellow et al, 2014;Che et al, 2016;Oliehoek et al, 2018).…”
Section: Wasserstein Generative Adversarial
confidence: 99%
“…During training, both networks engage in competition until a Nash equilibrium is reached. In game theory, a Nash equilibrium is a state in which no player can improve their payoff by unilaterally deviating [40].…”
Section: The Generator and The Discriminator
confidence: 99%
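The "no player can improve by unilaterally deviating" condition can be checked mechanically in a small matrix game, which also shows why the cited paper moves to mixed strategies: some games have no pure-strategy Nash equilibrium at all. The game below (matching pennies) is a hypothetical example chosen for illustration.

```python
import numpy as np

# Zero-sum game: A[i, j] is what the row player (minimiser) pays the
# column player (maximiser). Matching pennies has no pure-strategy NE.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

def is_pure_nash(A, i, j):
    """(i, j) is a pure Nash equilibrium iff neither player can gain
    by unilaterally switching their own action."""
    row_ok = A[i, j] <= A[:, j].min() + 1e-12  # minimiser can't do better in column j
    col_ok = A[i, j] >= A[i, :].max() - 1e-12  # maximiser can't do better in row i
    return row_ok and col_ok

pure_ne = [(i, j) for i in range(2) for j in range(2) if is_pure_nash(A, i, j)]
# pure_ne is empty for matching pennies.

# In mixed strategies an equilibrium does exist: both players randomise 50/50,
# and the expected payoff p^T A q at that profile is 0.
p = np.array([0.5, 0.5])
q = np.array([0.5, 0.5])
value = p @ A @ q
```

This is the distinction the abstract draws: formulating the GAN as a finite game in mixed strategies guarantees an equilibrium exists and that local equilibria coincide with global ones.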