Training Wasserstein GANs without gradient penalties
Preprint, 2021
DOI: 10.48550/arxiv.2110.14150

Cited by 2 publications (2 citation statements); references 4 publications.
“…Additionally, we believe that the procedure outlined in this paper could be applied to augment analytical approximations to 𝑁-body simulations (like L-PICOLA, Howlett et al 2015, or FastPM, Feng et al 2016), as well as semi-analytical models of galaxies, which, in the same vein as lognormal random fields, provide a fast approximation to hydrodynamical simulations by modelling complicated baryonic processes (White & Frenk 1991; Kauffmann et al 1993; Cole et al 1994; Somerville & Primack 1999; Lacey 2001). We further plan to explore the possibility of employing the dataset described in this work to reduce the variance in the statistics of large-scale structure observables using a small number of expensive simulations (Ding et al 2022), as well as to replace our WGAN-GP model with either a possibly more stable GAN version (Kwon et al 2021), or with a more compact model, like the one proposed in the context of Lagrangian deep learning (LDL, Dai & Seljak 2021), using graph neural networks (GNNs, see e.g. Zhou et al 2018 for a review) or through normalising flows (e.g.…”
Section: Discussion (mentioning)
confidence: 99%
“…The recently proposed auxiliary classifier GAN model [28] improved and stabilised the training of adversarial networks. Later, the WGAN (Wasserstein GAN) was proposed, which replaced the Jensen-Shannon and Kullback-Leibler divergences [19,29] with the Earth Mover's Distance [30] and largely resolved the instability of GAN training. The gradient penalty (GP) method used in WGAN-GP [31] further improves training and speeds up convergence.…”
Section: Introduction (mentioning)
confidence: 99%
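
For context on the statement above: the Earth Mover's (Wasserstein-1) distance [30] that WGAN minimises is commonly written, via the Kantorovich-Rubinstein duality, as a supremum over 1-Lipschitz critics f. A standard sketch of the objective (not quoted from the cited papers):

```latex
W(P_r, P_g) \;=\; \sup_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim P_r}\left[f(x)\right] \;-\; \mathbb{E}_{\tilde{x} \sim P_g}\left[f(\tilde{x})\right]
```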
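
And a minimal sketch of the gradient penalty term from WGAN-GP [31], the term that the paper indexed on this page proposes to dispense with. This assumes PyTorch and 4D image batches; the critic `D`, the batches `real`/`fake`, and the weight `lambda_gp` are illustrative placeholders, not names from any of the cited papers:

```python
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    # Sample random interpolation points between real and generated data.
    # (Assumes batches shaped (B, C, H, W); adjust eps for other shapes.)
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    # Critic scores at the interpolated points.
    d_hat = D(x_hat)
    # Gradient of the critic output with respect to its input.
    grads = torch.autograd.grad(
        outputs=d_hat, inputs=x_hat,
        grad_outputs=torch.ones_like(d_hat),
        create_graph=True,
    )[0]
    # Penalise deviation of the per-sample gradient norm from 1,
    # a soft version of the 1-Lipschitz constraint on the critic.
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

In a WGAN-GP critic update this term is added to the usual critic loss `D(fake).mean() - D(real).mean()`; the preprint indexed here instead trains the Wasserstein GAN without such a penalty.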