2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9534186
On Duality Gap as a Measure for Monitoring GAN Training

Cited by 4 publications (3 citation statements); references 7 publications.
“…7(a) and 7(c) are due to the unstable training of GANs, which is a highly non-trivial non-convex non-concave minimax optimization [50,51]. Since all settings of dynamic PDGAN and original StyleGAN2/StyleGAN3 remain identical, except for the objective function Eq. (8) for the generator, it can be guaranteed that the performance gain is brought by the introduction of the pretrained teacher discriminator D and the sample-specific strategy.…”
Section: Results
confidence: 99%
“…The spikes of FID curves in Figs. 7(a) and 7(c) are due to the unstable training of GANs, which is a highly non-trivial non-convex non-concave minimax optimization [50,51].…”
Section: Results
confidence: 99%
“…Note that duality gap is tailored towards GANs and primarily indicates a model's convergence or divergence, not the quality of generated samples [62]. Recently, Sidheekh et al. [63,64] proposed two variants called perturbed and proximal duality gap, which are more accurate than the plain version, especially in cases where the two-player game need not converge to a Nash equilibrium for the generator to model P.…”
Section: Distribution of Reconstruction Errors (DRE)
confidence: 99%
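The duality gap referenced above measures how far a point of a minimax game is from a Nash equilibrium: the value the maximizing player could still gain minus the value the minimizing player could still save, with the other player held fixed. The following is a minimal sketch of that idea on a toy bilinear game V(x, y) = x·y over [-1, 1]², not the paper's GAN-specific estimator; the names `V` and `duality_gap` and the grid search over players are illustrative assumptions.

```python
import numpy as np

def V(x, y):
    """Payoff of a toy bilinear minimax game: min over x, max over y of x * y."""
    return x * y

def duality_gap(x, y, grid=np.linspace(-1.0, 1.0, 201)):
    """Duality gap at the point (x, y):
    DG(x, y) = max_y' V(x, y') - min_x' V(x', y).
    It is non-negative, and zero exactly at a Nash equilibrium;
    here the best responses are found by brute-force grid search.
    """
    best_response_max = V(x, grid).max()   # strongest opposing maximizer
    best_response_min = V(grid, y).min()   # strongest opposing minimizer
    return best_response_max - best_response_min

# At the equilibrium (0, 0) the gap vanishes; elsewhere it is positive,
# e.g. duality_gap(0.5, -0.3) = 0.5 - (-0.3) = 0.8.
```

For GANs, `V` would be the adversarial objective and the two grid searches are replaced by a few optimizer steps on "worst-case" copies of the generator and discriminator, which is why the quoted works treat a shrinking gap as a convergence signal rather than a sample-quality score.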