Adversarial Feature Matching for Text Generation
2017 | Preprint
DOI: 10.48550/arxiv.1706.03850

Cited by 35 publications (42 citation statements) | References 14 publications
“…For the generator model, to learn the distribution of the generator p_g over data x, a prior on the input noise variable p_z(z) must be defined. This mapping is represented as G(z; θ_g), where G is a …”

The excerpt continues with the citing paper's table of GAN variants, each listed with its reference and the works that apply it:

Standard GAN [163]: [17, 27, 67, 126, 80, 87, 95]
Wasserstein GAN with Gradient Penalty (WGAN-GP) [168]: [42, 63, 106, 102]
Variational Autoencoder GAN (VAE-GAN) [169]: [19, 20, 98, 101]
Cycle-GAN [170]: [117, 48, 124, 137]
Auxiliary Classifier GAN (AC-GAN) [171]: [116, 118, 140]
Progressive Growing GAN (PG-GAN) [172]: [116, 138]
Orthogonal GAN (O-GAN) [173]: [29, 71]
Adversarial AutoEncoder (AAE) [174]: [132]
Balancing GAN (BGAN) [175]: [116]
Energy-Based GAN (EBGAN) [176]: [56]
Dual Discriminator GAN (D2GAN) [177]: [124]
GAN with Quadratic Potential (GAN-QP) [178]: [71]
One-Class GAN (OCGAN) [179]: [89]
PatchGAN [180]: [41]
Relativistic Discriminator GAN (RaSGAN) [181]: [82]
Sequence GAN (SeqGAN) [182]: [121]
Text GAN (TextGAN) [183]: [68]
Section: Standard Generative Adversarial Network (GAN)
confidence: 99%
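The quoted passage defines the generator as a differentiable mapping G(z; θ_g) from a noise prior p_z(z) into the data space, trained against a discriminator. A minimal sketch of that setup follows; the PyTorch framing, layer sizes, and Gaussian prior are illustrative assumptions, not taken from the cited works.

```python
# Minimal GAN skeleton: G(z; theta_g) maps noise z ~ p_z(z) to data space,
# D(x) outputs the probability that x is real. All sizes are illustrative.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 784  # assumed: e.g. flattened 28x28 images

G = nn.Sequential(               # generator G(z; theta_g)
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
D = nn.Sequential(               # discriminator D(x)
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(32, LATENT_DIM)  # sample from the noise prior p_z(z)
fake = G(z)                      # generated samples in data space
p_real = D(fake)                 # discriminator's belief that fakes are real
```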
“…Its single scalar output represents the probability that x comes from the data rather than from p_g. The standard GAN optimizes the Jensen-Shannon (JS) divergence to learn the distribution of the data. Consequently, it suffers from an unstable, weak signal when the discriminator approaches a local optimum, known as the vanishing-gradient problem [183]. This can also lead to mode collapse.…”
Section: Standard Generative Adversarial Network (GAN)
confidence: 99%
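The vanishing-gradient problem the excerpt describes can be made concrete: when the discriminator confidently rejects a generated sample (its logit is strongly negative), the original minimax generator loss log(1 − D(G(z))) yields a near-zero gradient, while the commonly used non-saturating loss −log D(G(z)) does not. The sketch below is an illustration under that assumption, not code from the cited paper.

```python
# Compare gradients of the saturating and non-saturating generator losses
# as the discriminator grows confident that a sample is fake (logit << 0).
import torch

for logit_val in [0.0, -2.0, -5.0, -10.0]:
    a = torch.tensor(logit_val, requires_grad=True)
    torch.log(1 - torch.sigmoid(a)).backward()   # saturating: log(1 - D)
    g_sat = a.grad.item()

    a = torch.tensor(logit_val, requires_grad=True)
    (-torch.log(torch.sigmoid(a))).backward()    # non-saturating: -log D
    g_ns = a.grad.item()

    # g_sat -> 0 as the logit falls while g_ns stays near -1: the weak
    # signal the citing paper attributes to the JS-divergence objective.
    print(f"logit={logit_val:6.1f}  grad(saturating)={g_sat:+.5f}  "
          f"grad(non-saturating)={g_ns:+.5f}")
```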
“…Recent works on Generative Adversarial Networks (GANs) have demonstrated their capability to generate different types of data, from image generation [16,29] and text generation [82,87] to music composition [41] and time-series sensory data generation [7]. The research published in [25] employed hidden Markov models (HMMs) to generate realistic synthetic smart-home sensor data.…”
Section: Synthetic Sensor Data Generation
confidence: 99%
“…[12] and [27] apply the REINFORCE [24] algorithm for adversarial training. The third approach, including [2] and [29], learns a mapping from the raw discrete feature space to a latent real space, as well as the reverse mapping. These mapping functions are realized as, e.g., an auto-encoder.…”
Section: Related Work
confidence: 99%
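The REINFORCE-based approach mentioned in this excerpt treats token sampling as actions and the discriminator's score as a reward, so gradients reach the generator despite the non-differentiable sampling step. A minimal sketch follows; the GRU architecture, vocabulary size, and the random stand-in reward are assumptions for illustration.

```python
# REINFORCE-style generator update for discrete sequences: sample tokens,
# weight their log-probabilities by a reward (here a stand-in for the
# discriminator's score on each sequence), and backpropagate.
import torch
import torch.nn as nn

VOCAB, HIDDEN, SEQ_LEN, BATCH = 1000, 128, 20, 8  # illustrative sizes

embed = nn.Embedding(VOCAB, HIDDEN)
rnn = nn.GRUCell(HIDDEN, HIDDEN)
head = nn.Linear(HIDDEN, VOCAB)

h = torch.zeros(BATCH, HIDDEN)
tok = torch.zeros(BATCH, dtype=torch.long)        # assumed <bos> token id 0
log_probs = []
for _ in range(SEQ_LEN):
    h = rnn(embed(tok), h)
    dist = torch.distributions.Categorical(logits=head(h))
    tok = dist.sample()                           # non-differentiable step
    log_probs.append(dist.log_prob(tok))          # but log-probs carry grads

reward = torch.rand(BATCH)                        # stand-in for D's score
loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * reward).mean()
loss.backward()                                   # policy-gradient signal to G
```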
“…A few approaches to generating discrete data with GANs have recently been proposed; the most promising involve reinforcement learning [27], more specifically policy gradients [12]. Another proposed solution is to learn a mapping function from the discrete space of words to a latent real space, together with a reverse mapping [29]. [2] applies this idea to handle categorical features in EHR data, developing auto-encoders to serve as the mapping.…”
Section: Introduction
confidence: 99%
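The mapping-based approach in the last two excerpts replaces discrete tokens with a continuous code on which an ordinary GAN can operate, with an auto-encoder providing the forward and reverse mappings. Below is a minimal sketch of such an auto-encoder; the GRU encoder/decoder, dimensions, and reconstruction objective are illustrative assumptions rather than the cited papers' architectures.

```python
# Auto-encoder mapping between a discrete token space and a continuous
# latent space; a GAN could then be trained directly on the codes z.
import torch
import torch.nn as nn

VOCAB, EMB, LATENT = 1000, 64, 32  # illustrative sizes

class SeqAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.enc = nn.GRU(EMB, LATENT, batch_first=True)  # discrete -> latent
        self.dec = nn.GRU(LATENT, LATENT, batch_first=True)
        self.out = nn.Linear(LATENT, VOCAB)               # latent -> logits

    def encode(self, tokens):                # (B, T) token ids -> (B, LATENT)
        _, h = self.enc(self.embed(tokens))
        return h.squeeze(0)

    def decode(self, z, seq_len):            # feed the code z at every step
        steps = z.unsqueeze(1).repeat(1, seq_len, 1)
        out, _ = self.dec(steps)
        return self.out(out)                 # (B, T, VOCAB) logits

ae = SeqAutoEncoder()
tokens = torch.randint(0, VOCAB, (4, 12))    # a batch of discrete sequences
z = ae.encode(tokens)                        # continuous codes a GAN could model
logits = ae.decode(z, tokens.size(1))
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), tokens.reshape(-1))
loss.backward()                              # reconstruction objective
```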