“…For the generator model, to learn the generator's distribution p_g over data x, a prior on the input noise variable p_z(z) must be defined. This mapping is represented as G(z; θ_g), where G is a differentiable function represented by a multilayer perceptron with parameters θ_g.…”

| GAN variant | Reference | Studies using it |
| --- | --- | --- |
| Generative Adversarial Networks (GAN) | [163] | [17, 27, 67, 126, 80, 87, 95] |
| Wasserstein GAN with Gradient Penalty (WGAN-GP) | [168] | [42, 63, 106, 102] |
| Variational Autoencoder GAN (VAE-GAN) | [169] | [19, 20, 98, 101] |
| Cycle-GAN | [170] | [117, 48, 124, 137] |
| Auxiliary Classifier GAN (AC-GAN) | [171] | [116, 118, 140] |
| Progressive Growing GAN (PG-GAN) | [172] | [116, 138] |
| Orthogonal GAN (O-GAN) | [173] | [29, 71] |
| Adversarial Autoencoder (AAE) | [174] | [132] |
| Balancing GAN (BGAN) | [175] | [116] |
| Energy-Based GAN (EBGAN) | [176] | [56] |
| Dual Discriminator GAN (D2GAN) | [177] | [124] |
| GAN with Quadratic Potential (GAN-QP) | [178] | [71] |
| One-Class GAN (OCGAN) | [179] | [89] |
| Patch GAN (PatchGAN) | [180] | [41] |
| Relativistic Discriminator GAN (RaSGAN) | [181] | [82] |
| Sequence GAN (SeqGAN) | [182] | [121] |
| Text GAN (TextGAN) | [183] | [68] |
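To make the quoted definition concrete, the sketch below shows a noise prior p_z(z) and a generator mapping G(z; θ_g) implemented as a small multilayer perceptron. This is a minimal illustration, not code from the survey; the dimensions, the standard-normal prior, and the tanh activations are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not taken from the survey):
noise_dim, hidden_dim, data_dim = 64, 128, 784

# theta_g: the parameters of the generator MLP G(z; theta_g)
theta_g = {
    "W1": rng.normal(0.0, 0.02, (noise_dim, hidden_dim)),
    "b1": np.zeros(hidden_dim),
    "W2": rng.normal(0.0, 0.02, (hidden_dim, data_dim)),
    "b2": np.zeros(data_dim),
}

def sample_prior(batch_size):
    """Draw z ~ p_z(z); a standard normal prior is a common choice."""
    return rng.normal(size=(batch_size, noise_dim))

def G(z, theta):
    """Differentiable mapping G(z; theta_g) from noise space to data space."""
    h = np.tanh(z @ theta["W1"] + theta["b1"])
    return np.tanh(h @ theta["W2"] + theta["b2"])

z = sample_prior(16)
x_fake = G(z, theta_g)  # samples from the generator's implicit distribution p_g
print(x_fake.shape)     # (16, 784)
```

In training, θ_g would be updated by gradients flowing back through G from the discriminator's loss, which is why G must be differentiable in z and θ_g.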