2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
DOI: 10.1109/uemcon51285.2020.9298092

An Empirical Analysis of Generative Adversarial Network Training Times with Varying Batch Sizes

Cited by 8 publications (8 citation statements)
References 7 publications
“…However, we had tested ADAN on a very limited dataset. Because GANs are notoriously sensitive to hyperparameter settings (18, 22, 27), it was unclear how robust ADAN would be in practice. Another promising method, PAF, had been tested primarily in terms of two monkeys’ online iBCI performance (9).…”
Section: Discussion
confidence: 99%
“…Correct optimization of GANs is also directly linked to proper tuning of the dynamics of learning during training (27, 41), which we investigated here in depth. Given the many GAN variants, there are still no comprehensive guidelines for a particular architecture (22). Consistent with this, we found that ADAN and Cycle-GAN differ substantially in their sensitivity to learning rate and batch size hyperparameters.…”
Section: Discussion
confidence: 99%
“…For readers familiar with ordinal logistic regression, Datawig uses the same encoding to capture the … [Figure 6: Stochastic gradient descent (Ghosh et al., 2020)].…”
Section: Hyperparameters
confidence: 99%
“…Although a high-performance GP-GPU often allows for higher batch Size values, other factors must be considered when dealing with a GAN. As pointed out by [21] and [20], a value of batch Size that is too large for a GAN could significantly increase training time and affect the overall performance of the model, potentially causing a decrease of its performance. Therefore, small values are recommended to be used so as to obtain better performance of the network.…”
Section: Training Settings
confidence: 99%