2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC)
DOI: 10.1109/itoec49072.2020.9141571

Variational Auto-Encoder for text generation

Cited by 5 publications (2 citation statements). References 5 publications.
“…The network structure of the generator is shown in Figure (1), and the structure of the Transformer Encoder is shown in Figure (2). In Figure (1), s is the convolution stride, k is the convolution kernel dimension, and n is the number of output channels.…”
Section: Transformer-based Generative Model TransSRGAN
confidence: 99%
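As a sketch of how the quoted symbols map onto a convolution layer: the excerpt names s (stride), k (kernel dimension), and n (output channels), which correspond directly to the arguments of a standard convolution. The actual TransSRGAN layer sizes are not given in the excerpt, so the values below are placeholders, not the paper's architecture.

# Illustrative sketch only: maps the quoted symbols (s = stride,
# k = kernel dimension, n = output channels) onto a PyTorch Conv2d.
# Layer sizes are assumed placeholders, not taken from the paper.
import torch
import torch.nn as nn

def conv_block(in_channels: int, n: int, k: int, s: int) -> nn.Sequential:
    """Convolution block parameterized as in the citing paper's Figure (1)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, n, kernel_size=k, stride=s, padding=k // 2),
        nn.PReLU(),
    )

x = torch.randn(1, 3, 64, 64)                    # dummy input image
y = conv_block(in_channels=3, n=64, k=9, s=1)(x)
print(y.shape)                                   # torch.Size([1, 64, 64, 64])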
“…The earliest generative model is the variational autoencoder [1], based on variational inference and Bayesian theory. The variational autoencoder can generate not only images but also text [2] and audio [3]. Although the variational autoencoder is simple and effective, it tends to generate noisy data unrelated to the training set because it assumes a simple normal distribution as the original sample distribution.…”
Section: Introduction
confidence: 99%
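A minimal sketch of the VAE objective the excerpt refers to, assuming the standard Gaussian prior N(0, I) it mentions; the encoder/decoder sizes and layer choices here are illustrative assumptions, not the cited paper's model.

# Minimal VAE sketch illustrating the N(0, I) prior assumption the
# excerpt criticizes. Architecture sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 256)
        self.mu = nn.Linear(256, z_dim)       # posterior mean
        self.logvar = nn.Linear(256, z_dim)   # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I)); the fixed
    # standard-normal prior is the "simple normal distribution"
    # the excerpt points to as a limitation.
    rec = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

At generation time, one samples z from the fixed N(0, I) prior and decodes it; latent draws that fall far from where the training data was encoded are what produce the noisy, training-set-irrelevant outputs the excerpt describes.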