2020 59th IEEE Conference on Decision and Control (CDC)
DOI: 10.1109/cdc42340.2020.9304183
A game-theoretic approach for Generative Adversarial Networks

Cited by 8 publications (12 citation statements: 0 supporting, 12 mentioning, 0 contrasting)
References 11 publications
“…However, these properties are not necessarily enough to ensure convergence; hence, (quasi-)Fejér monotonicity is often used in combination with convergence results on sequences of real numbers. These technical results have been used in many theoretical and computational applications that range from stochastic Nash equilibrium seeking [11,12,14] to machine learning [17,19,20].…”
Section: Lyapunov Decrease and Fejér Monotonicity
Mentioning confidence: 99%
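
For context, quasi-Fejér monotonicity of a sequence with respect to a target set can be stated in its standard textbook form; the notation below (C, x_k, \epsilon_k) is generic and not taken from the cited works:

\[
  \|x_{k+1} - c\| \;\le\; \|x_k - c\| + \epsilon_k
  \quad \forall c \in C,\ \forall k \in \mathbb{N},
  \qquad \text{where } \sum_{k=0}^{\infty} \epsilon_k < \infty .
\]

With \epsilon_k \equiv 0 the sequence is Fejér monotone: each iterate is at least as close to every point of C as its predecessor, and combining this with elementary results on convergent real sequences yields the convergence guarantees mentioned above.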
“…Since variational inequalities are the mathematical foundations of optimization-related problems, such as Nash equilibrium seeking [13,11,21], convex optimization [13,104,23] and machine learning [105,17], many works in the literature rely on the results presented in the previous sections to prove convergence of a given algorithm to a solution of a variational equilibrium problem. Specifically, they are applied to prove that a given algorithm converges to the solution of a variational inequality or to a zero of the sum of (monotone) operators.…”
Section: Applications of Convergent Deterministic Sequences
Mentioning confidence: 99%
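
For reference, the canonical variational inequality problem mentioned here has the following standard form (generic notation, not copied from the cited survey): given a closed convex set C and a (typically monotone) operator F, find x^* \in C such that

\[
  \langle F(x^{*}),\, x - x^{*} \rangle \;\ge\; 0 \qquad \forall x \in C .
\]

Nash equilibria of convex games and minimizers of convex functions both solve a problem of this form, which is why convergence proofs for equilibrium-seeking algorithms are often stated at the level of variational inequalities.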
“…It is distinct from the extant results on distributed learning as in References [12–14,18,29,30], where only cooperation among agents exists, while learning for GANs is inherently competitive. In other words, the existing works [12–14,18] are intrinsically distributed optimisation problems (see References [31–35]) whereas training for GANs in this study is a distributed Nash equilibrium seeking problem with two coalitions (see References [21,23,28,36,37]).…”
Section: Introduction
Mentioning confidence: 99%
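
For reference, the two-player zero-sum game underlying GAN training, which the citing work recasts as a two-coalition Nash equilibrium seeking problem, is usually written in the standard minimax form of Goodfellow et al. (stated here as background, not as the specific formulation of the cited paper):

\[
  \min_{G}\,\max_{D}\; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_{z}}\!\left[\log\!\left(1 - D(G(z))\right)\right],
\]

where p_data is the data distribution and p_z the latent noise prior. In the two-coalition reading, the generator side and the discriminator side each seek a Nash equilibrium of this game rather than a minimum of a single shared objective, which is what distinguishes it from distributed optimisation.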