2017
DOI: 10.48550/arxiv.1712.04086
Preprint

PacGAN: The power of two samples in generative adversarial networks

Cited by 42 publications (43 citation statements)
References 0 publications
“…It proposes dynamics-based approaches for avoiding mode collapse, showing local convergence to a Nash equilibrium. Recently, PacGAN [8] was introduced, which simply extends the discriminator input to be packed with multiple samples, showing state-of-the-art mode detection performance.…”
Section: Related Work
confidence: 99%
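The packing idea described in the statement above — giving the discriminator several samples at once instead of one — can be sketched as a simple reshape that concatenates groups of samples along the channel axis. This is an illustrative sketch of the mechanism only, not the authors' reference implementation; the function name and shapes are assumptions.

```python
import numpy as np

def pack_samples(samples: np.ndarray, m: int) -> np.ndarray:
    """Pack groups of m samples along the channel axis.

    `samples` has shape (n, c, h, w); the packed output has shape
    (n // m, m * c, h, w), so the discriminator classifies m samples
    jointly as all-real or all-generated.
    """
    n, c, h, w = samples.shape
    assert n % m == 0, "batch size must be divisible by the packing degree m"
    return samples.reshape(n // m, m * c, h, w)
```

With packing degree m = 2, a batch of 8 RGB images of shape (8, 3, 32, 32) becomes 4 packed inputs of shape (4, 6, 32, 32); only the discriminator's first layer needs to change to accept the wider channel dimension.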
“…We employ three metrics for quantitative evaluation: the number of modes found, the high-quality sample ratio (HQS) [13], and an additional distribution distance measurement, the Jensen-Shannon divergence (JSD). The number of Gaussian modes (8, 25, 25, and 27, respectively) found among the generated samples and the high-quality sample ratio (HQS) are counted 20 times, and the mean and std. are calculated.…”
Section: Mixture of Gaussians
confidence: 99%
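The JSD metric mentioned above compares the empirical distribution of generated samples over the Gaussian modes against the target distribution. A minimal sketch with base-2 logarithms (so the divergence lies in [0, 1]) might look as follows; the function name and the histogram-over-modes input convention are assumptions for illustration.

```python
import numpy as np

def jensen_shannon_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """JSD between two discrete distributions, e.g. histograms of
    sample counts per Gaussian mode. Inputs are normalized, so raw
    counts are accepted. Base-2 logs bound the result in [0, 1]."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a: np.ndarray, b: np.ndarray) -> float:
        # KL(a || b), skipping zero-probability entries of a.
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical distributions give 0, fully disjoint supports give 1, so severe mode collapse (all mass on a few modes) pushes the score toward 1 against a uniform target over the modes.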
“…There exists a rich set of works improving classic generative models to alleviate missing modes, especially in the framework of GANs, by altering objective functions [13,14,15,10,16,17], changing training methods [18,19], modifying neural network architectures [2,20,21,22,23], or regularizing latent space distributions [4,24]. The general philosophy behind these improvements is to reduce the statistical distance between the generated distribution and the target distribution by making the models easier to train.…”
Section: Related Work
confidence: 99%
“…In Figure 5 in Appendix J, we show example images from the three generators for the different flipping probabilities. We believe that the gain from using the proposed robust GANs will be larger when we train to higher accuracy with larger networks and extensive hyperparameter tuning, with the latest innovations in GAN architectures, for example [54,28,17,19,41].…”
Section: CIFAR-10
confidence: 99%

Robustness of Conditional GANs to Noisy Labels
Thekumparampil, Khetan, Lin et al. 2018
Preprint