2020 · DOI: 10.1360/ssi-2019-0205

Instability analysis for generative adversarial networks and its solving techniques

Cited by 3 publications (3 citation statements) · References 6 publications
“…The generator and discriminator reached a balance after approximately 1500 steps, at which point the monitor stopped training. The stability of GAN training is closely related to the variations of the D loss and G loss and their corresponding gradient changes [33]. Therefore, with STFT preprocessing applied to the EEG signals, the iterative training stability of the DCGAN and WGAN-GP generators and discriminators is compared.…”
Section: Analysis and Experimental Results on Network Stability
Citation type: mentioning · Confidence: 99%
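The stability check this excerpt describes (tracking the D loss, G loss, and their gradient magnitudes over training steps) is straightforward to instrument. Below is a minimal PyTorch sketch of such monitoring; the names `discriminator`, `generator`, `history`, and `step` are hypothetical, and this is an illustration, not code from the cited work:

```python
import torch

def grad_norm(model: torch.nn.Module) -> float:
    """Global L2 norm of all parameter gradients (0.0 if no grads yet)."""
    total = 0.0
    for p in model.parameters():
        if p.grad is not None:
            total += p.grad.detach().pow(2).sum().item()
    return total ** 0.5

# Inside the training loop, after d_loss.backward() and g_loss.backward():
#   history.append((step, d_loss.item(), g_loss.item(),
#                   grad_norm(discriminator), grad_norm(generator)))
# Training is typically judged stable once both losses plateau and the
# gradient norms stop oscillating; the citing work observes this balance
# at roughly step 1500 and stops training there.
```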
“…Secondly, WGAN-GP is used as an unsupervised feature-learning model. It uses the Earth-Mover (EM) distance as its measure instead of the Jensen-Shannon (JS) divergence, overcoming training instability and mode collapse and ensuring the richness of the generated samples [32,33]. Lastly, a Bi-LSTM classification model is employed as the backend classifier, using a small amount of labeled STFT spectrograms to guide the prediction task.…”
Section: Introduction
Citation type: mentioning · Confidence: 99%
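For context, the EM-distance objective mentioned in this excerpt is the standard WGAN-GP critic loss: a Wasserstein estimate plus a gradient penalty that keeps the critic approximately 1-Lipschitz. A minimal PyTorch sketch of that standard formulation follows; it is not the cited paper's implementation, and the `critic` module and penalty weight λ = 10 are conventional assumptions:

```python
import torch

def wgan_gp_critic_loss(critic, real, fake, lambda_gp=10.0):
    """Wasserstein critic loss with gradient penalty (WGAN-GP)."""
    # EM-distance surrogate: E[D(fake)] - E[D(real)] (the critic minimizes this).
    w_loss = critic(fake).mean() - critic(real).mean()

    # Gradient penalty evaluated on random interpolates between real and
    # fake batches; fake is detached so only the critic receives gradients.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)
    grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    return w_loss + lambda_gp * penalty
```

Replacing the original WGAN's weight clipping with this penalty is what gives WGAN-GP its steadier gradients and richer samples.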
“…In generative adversarial networks, the number of parameters increases exponentially with the number of generator layers, which increases the amplitude of parameter variation and the probability of gradient explosion. By adding spectral normalization to both the generator and the discriminator [29], the upper bound on the function's gradient is constrained, making the function smoother, stabilizing parameter variation, and reducing the probability of gradient explosion.…”
Section: C-DCGAN for Bearing Fault Diagnosis
Citation type: mentioning · Confidence: 99%
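Spectral normalization of this kind is available off the shelf in common frameworks. A minimal PyTorch sketch wrapping each discriminator layer follows; the layer sizes and 28x28 input are illustrative assumptions, not the cited C-DCGAN architecture:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Each wrapped layer's weight is rescaled by its largest singular value at
# every forward pass, bounding the layer's Lipschitz constant (and hence
# its gradient) and smoothing the function the network represents.
discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(128 * 7 * 7, 1)),  # assumes 28x28 inputs
)
```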