2022
DOI: 10.1007/978-3-031-20050-2_18
DuelGAN: A Duel Between Two Discriminators Stabilizes the GAN Training

Cited by 9 publications (3 citation statements)
References 20 publications
“…The FID score of the DGD improves drastically when learning a more complex parameterized distribution, which lends itself naturally to our approach. When using 20 Gaussian components, the FID decreases to 27.25 ± 0.11 (from three runs with different random seeds; see Section 2), positioning it between the probabilistic autoencoder (28.0) (Böhm and Seljak 2022) and PeerGAN (21.73) (Wei et al. 2022).…”
Section: Results (mentioning)
confidence: 99%
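The FID comparisons quoted above reduce to the closed-form Fréchet distance between Gaussian fits of feature statistics: FID = ‖μ₁ − μ₂‖² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal NumPy sketch of that formula, not code from any of the cited papers (function names are hypothetical; the matrix square root uses the symmetric form Tr((Σ₁^½ Σ₂ Σ₁^½)^½) so a plain eigendecomposition suffices):

```python
import numpy as np

def matrix_sqrt_psd(a):
    # Square root of a symmetric positive semidefinite matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def fid(mu1, sigma1, mu2, sigma2):
    # Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    # ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2}).
    s1h = matrix_sqrt_psd(sigma1)
    covmean = matrix_sqrt_psd(s1h @ sigma2 @ s1h)  # symmetric equivalent of (s1 s2)^{1/2}
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

In practice the means and covariances are estimated from Inception-network activations of real and generated samples; identical statistics give an FID of 0.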
“…Although both use the analogy to learning, they differ in implementation and motivation. In addition to the above methods, some approaches reduce the risk of falling into local modes by building multiple discriminators or multiple generators, such as Dropout-GAN [24], D2GAN [25], GMAN [26], and DuelGAN [27]. These methods can significantly improve model robustness through the collaboration of multiple discriminators or generators, but they substantially increase model size and training difficulty.…”
Section: Related Work (mentioning)
confidence: 99%
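The multi-discriminator methods listed above must somehow combine feedback from several critics into one generator loss; GMAN [26], for instance, uses a softmax-weighted mixture of the individual discriminator losses. A minimal sketch of that aggregation step only, under the assumption of sigmoid discriminator outputs (function and parameter names are hypothetical; `lam` plays the role of GMAN's softmax temperature, with `lam → ∞` approaching the hardest discriminator and `lam = 0` the plain mean):

```python
import numpy as np

def aggregate_discriminator_feedback(d_scores, lam=1.0):
    # d_scores: each discriminator's probability that the generated sample is real.
    losses = -np.log(np.clip(d_scores, 1e-12, 1.0))  # per-discriminator generator loss
    w = np.exp(lam * losses)                         # softmax weights over losses
    w /= w.sum()
    return float(np.dot(w, losses))                  # weighted generator objective
```

With equal scores the result is just the common loss; with `lam` large the generator is trained mainly against its harshest discriminator, which is one way these ensembles trade extra training cost for robustness.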
“…Based on (6), the Jensen–Shannon divergence between the two distributions can be calculated using the formula. There are several varieties of GANs designed for different applications and contexts. For example, in semi-supervised learning the discriminator is updated to assign real labels to classes 1 through K−1 and a fake label to class K, while the generator tries to fool the discriminator into assigning a smaller label [16]. In this study, a modified one-sided label-smoothing method was used to improve the training of the DCGAN.…”
Section: Fig. 1. Architecture of the generative adversarial network (unclassified)
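The Jensen–Shannon divergence referenced in this excerpt (the quantity the original GAN objective minimizes at the optimal discriminator) is the standard symmetrized KL divergence, JSD(P‖Q) = ½ KL(P‖M) + ½ KL(Q‖M) with M = (P + Q)/2. A minimal discrete-case sketch in natural-log units (function name hypothetical):

```python
import numpy as np

def js_divergence(p, q):
    # Jensen-Shannon divergence between two discrete distributions,
    # JSD(P||Q) = 0.5*KL(P||M) + 0.5*KL(Q||M), with mixture M = (P+Q)/2.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0/b) contributes 0 by convention
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

It is symmetric, always finite, and bounded by log 2 (reached for distributions with disjoint support), which is why the vanilla GAN generator loss saturates when real and generated distributions do not overlap.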