2023
DOI: 10.1109/tetci.2022.3193373
A New Perspective on Stabilizing GANs Training: Direct Adversarial Training

Cited by 21 publications (7 citation statements)
References 25 publications
“…LeakyReLU activation functions are utilized in all layers except for the last deconvolutional layer, where the activation function is Tanh. Notably, spectral normalization is applied to the weight matrix of each convolutional layer in D. In doing this, the model becomes more robust against input perturbations, enhancing the overall model robustness [29].…”
Section: SNGAN-based PCT Approach
confidence: 99%
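A minimal PyTorch sketch of the SNGAN-style pair this quote describes: LeakyReLU in every layer except the generator's last deconvolutional layer (Tanh), and spectral normalization on each convolutional weight matrix in D. The 32x32 single-channel image size, latent dimension of 128, and channel widths are illustrative assumptions, not values taken from the cited paper.

import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class Generator(nn.Module):
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0),  # 1x1 -> 4x4
            nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),    # 4x4 -> 8x8
            nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),     # 8x8 -> 16x16
            nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 1, 4, 2, 1),       # 16x16 -> 32x32
            nn.Tanh(),  # Tanh only on the final deconvolutional layer
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # spectral_norm rescales each conv weight to unit spectral norm,
        # which bounds D's Lipschitz constant and makes it more robust
        # to input perturbations, as the quoted passage notes.
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(1, 64, 4, 2, 1)),
            nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv2d(64, 128, 4, 2, 1)),
            nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv2d(128, 256, 4, 2, 1)),
            nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv2d(256, 1, 4, 1, 0)),  # 4x4 -> 1x1 score
        )

    def forward(self, x):
        return self.net(x).view(-1)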
“…Unfortunately, owing to the unstable training of GANs [40, 41, 42, 43], unsuitable samples are inevitably generated. It is worth noting that the distributions of unsuitable samples do not properly match the distribution of the real data.…”
Section: Preliminaries
confidence: 99%
“…By generating virtual data that resemble actual data, the GAN enlarges the sample capacity to enhance the prediction performance. Although various improvements have been made in GANs, including alternative loss functions and training strategies, the training process remains unstable [40, 41, 42, 43]. Therefore, the quality of the generated samples remains uncertain.…”
Section: Introduction
confidence: 99%
“…To reduce this problem, previous research has proposed many methods, which can be divided into two categories. The first kind of method focuses on the optimization process and divergence metrics [21, 25, 31–35] to stabilize the training process and mitigate the mode collapse problem. Salimans et al.…”
Section: Related Work
confidence: 99%
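The "optimization process and divergence metrics" category mentioned in this quote includes gradient-penalty methods such as WGAN-GP; the sketch below shows that penalty as one representative example, picked here for illustration rather than taken from the citing paper's reference list [21, 25, 31–35].

import torch

def gradient_penalty(discriminator, real, fake):
    # WGAN-GP: push the discriminator's gradient norm toward 1 on
    # points interpolated between real and generated samples, which
    # enforces a soft Lipschitz constraint and stabilizes training.
    batch = real.size(0)
    eps = torch.rand(batch, 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = discriminator(interp)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]
    grad_norm = grads.view(batch, -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

# The penalty is added to the discriminator loss, e.g.
# loss_D = wasserstein_loss + 10.0 * gradient_penalty(D, real, fake)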
“…To reduce this problem, previous research has proposed many methods, which can be divided into two categories. The first kind of method focuses on the optimization process and divergence metrics [21, 25, 31–35] […] [39] to capture more modes of the distribution. ModeGAN [40] and VEEGAN [41] utilize additional encoder networks to enforce a bijective mapping between the input noise vectors and generated images.…”
Section: Reducing Mode Collapse
confidence: 99%
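A minimal sketch of the encoder idea behind ModeGAN/VEEGAN as summarized here: an auxiliary encoder maps generated images back to latent codes, and a reconstruction term pushes E(G(z)) toward z, so distinct noise vectors cannot all collapse onto the same image. The MSE penalty and shapes are illustrative assumptions, not the papers' exact objectives.

import torch
import torch.nn.functional as F

def latent_reconstruction_loss(generator, encoder, z):
    # Encourage a bijection between noise vectors and generated images
    # by requiring that the encoder recovers the code that produced
    # each image.
    fake = generator(z)            # G: latent -> image
    z_hat = encoder(fake)          # E: image -> latent
    return F.mse_loss(z_hat, z.view(z.size(0), -1))

# Used as an extra term in the generator/encoder objective, e.g.
# loss_G = adversarial_loss + lam * latent_reconstruction_loss(G, E, z)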