2019
DOI: 10.1007/978-3-030-34518-1_10

Binary Autoencoder for Text Modeling

Cited by 8 publications (9 citation statements) · References 6 publications
“…Finding noisy data in various datasets has always been important in traditional machine learning [3] and deep learning [17]. Using autoencoders is the most popular approach of all, and it has also proven useful in natural language processing [8]. To obtain the distribution of the data before the actual event, the output documents of the Sentence-BERT model are used to train the autoencoder network.…”
Section: Distributional Denoising Autoencoder
confidence: 99%
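The statement above describes training an autoencoder on Sentence-BERT document embeddings to model their distribution, so that out-of-distribution (noisy) documents can be detected. Below is a minimal PyTorch sketch of that idea; the architecture, model name (`EmbeddingAE`), and the reconstruction-error threshold are illustrative assumptions, not the cited paper's method.

```python
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

docs = ["a clean sentence", "another clean sentence", "asdf qwer zxcv"]
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim sentence embeddings
X = torch.tensor(embedder.encode(docs), dtype=torch.float32)

class EmbeddingAE(nn.Module):
    def __init__(self, dim=384, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)
    def forward(self, x):
        return self.dec(self.enc(x))

model = EmbeddingAE(X.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):  # fit the distribution of the embeddings
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)
    loss.backward()
    opt.step()

# Documents that are outliers w.r.t. the learned distribution reconstruct
# poorly; flag them by per-document reconstruction error (threshold is an
# arbitrary illustrative choice).
err = ((model(X) - X) ** 2).mean(dim=1)
noisy = err > err.mean() + 2 * err.std()
```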
“…Variational Autoencoders [27] (VAEs) play an important role in text generation tasks where a semantically consistent latent space is needed; however, VAE training generally suffers from mode collapse. The authors of [28] developed an autoencoder with a binary latent space using a straight-through estimator: experiments showed that this approach retains the main features of a VAE, e.g., semantic consistency and good latent space coverage, while not suffering from mode collapse and being much easier to train. One of the most successful uses of autoencoders is for noise removal.…”
Section: Related Work
confidence: 99%
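The key mechanism cited here, a binary latent space trained with a straight-through estimator, can be sketched in a few lines of PyTorch: the forward pass hard-thresholds the code to {0, 1}, while the backward pass routes gradients through the underlying sigmoid. Module names (`BinaryBottleneck`, `BinaryAE`) and sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BinaryBottleneck(nn.Module):
    """Hard-binarize in the forward pass; pass gradients straight through."""
    def forward(self, logits):
        probs = torch.sigmoid(logits)
        hard = (probs > 0.5).float()
        # Forward value: hard 0/1 code. Backward gradient: that of `probs`.
        return hard + probs - probs.detach()

class BinaryAE(nn.Module):
    def __init__(self, dim=256, code_bits=32):
        super().__init__()
        self.enc = nn.Linear(dim, code_bits)
        self.binarize = BinaryBottleneck()
        self.dec = nn.Linear(code_bits, dim)
    def forward(self, x):
        code = self.binarize(self.enc(x))  # discrete {0,1} latent code
        return self.dec(code), code

x = torch.randn(8, 256)
model = BinaryAE()
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)
loss.backward()  # gradients flow despite the hard threshold
```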
“…Autoencoders are another successful deep architecture, where the aim is to reconstruct a signal by learning a latent representation from a set of data. They have been used for realistic text [20] and image [21] generation; however, one of their most successful uses is noise removal. Since their introduction [22], denoising autoencoders (DAEs) have been used for a broad range of tasks, such as medical image improvement [23], speech enhancement [24], and ECG signal denoising [25].…”
Section: Related Work
confidence: 99%
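The DAE recipe referenced above is compact enough to show directly: corrupt the input, then train the network to reconstruct the clean original. This PyTorch sketch uses a Gaussian corruption and random stand-in data; the architecture and noise level are illustrative assumptions.

```python
import torch
import torch.nn as nn

dae = nn.Sequential(
    nn.Linear(128, 32), nn.ReLU(),  # encoder
    nn.Linear(32, 128),             # decoder
)
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)

clean = torch.randn(64, 128)  # stand-in for real signals (e.g., ECG windows)
for _ in range(100):
    noisy = clean + 0.3 * torch.randn_like(clean)      # corrupt the input
    opt.zero_grad()
    loss = nn.functional.mse_loss(dae(noisy), clean)   # target is the *clean* signal
    loss.backward()
    opt.step()
```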
“…proposed by [21], [22], and the authors of [23] relax the discrete latents and durations of a recurrent hidden semi-Markov model. Applications of (Gumbel-based) discrete latent variable models as described above include (among others) planning [24], syntactic parsing [25], text modelling [26], speech modelling [27], [28], paraphrase generation [29], recommender systems [30], drug-drug interaction modelling [31], and event modelling [32].…”
Section: Discrete Latent Variable Models
confidence: 99%
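The Gumbel-based relaxation this statement refers to is available off the shelf in PyTorch as `torch.nn.functional.gumbel_softmax`. A minimal sketch, with arbitrary sizes and a toy loss chosen purely for illustration:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 10, requires_grad=True)  # a 10-way categorical latent

# hard=True returns one-hot samples in the forward pass while using the
# soft relaxation for gradients (straight-through Gumbel-softmax).
z = F.gumbel_softmax(logits, tau=0.5, hard=True)

# Any downstream loss is then differentiable w.r.t. the logits.
loss = (z * torch.arange(10.0)).sum()
loss.backward()
print(logits.grad.shape)  # torch.Size([8, 10])
```

Annealing the temperature `tau` toward zero over training sharpens the samples toward truly discrete codes, at the cost of higher gradient variance.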