2020
DOI: 10.1016/j.micpro.2020.103063
Text feature extraction based on stacked variational autoencoder

Cited by 15 publications (6 citation statements) · References 6 publications
“…Due to the randomness of sample selection, each algorithm is executed 10 times to observe their identification results. According to previous studies, their model parameters are summarized as follows: (1) in the traditional deep VAE [30], the network structure is 2048–512–64–32–10, i.e., one input layer, three hidden layers and one output layer; (2) in the SDAE model [31], the network structure is 2048–200–100–80–10, again one input layer, three hidden layers and one output layer.…”
Section: Experimental Verification and Discussion
confidence: 99%
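The 2048–512–64–32–10 structure quoted above can be sketched as a stacked encoder forward pass. This is a minimal illustration only: the random weights, the tanh activation, and all variable names are assumptions, not details taken from the cited papers.

```python
import numpy as np

# Hypothetical sketch of a deep encoder stack with the quoted layer
# sizes (2048-512-64-32-10): one input layer, three hidden layers,
# one output layer. Weights and activation are illustrative.
rng = np.random.default_rng(0)
layer_sizes = [2048, 512, 64, 32, 10]

# One weight matrix and bias vector per layer transition.
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def encode(x):
    """Forward pass through the stacked encoder."""
    h = x
    for W, b in zip(weights, biases):
        h = np.tanh(h @ W + b)  # tanh is an assumed activation choice
    return h

x = rng.standard_normal(2048)  # one 2048-dimensional input sample
z = encode(x)
print(z.shape)                 # (10,)
```

The SDAE structure 2048–200–100–80–10 would follow the same pattern with a different `layer_sizes` list.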
“…8, AE is built from an encoder and a decoder, which is helpful for dimensionality reduction of data (Chen et al., 2018), feature extraction (Meng et al., 2017) and anomaly detection (Han et al., 2020). It also has variants such as the undercomplete autoencoder (Buongiorno et al., 2019), regularized autoencoder (Hong et al., 2020) and variational autoencoder (Che et al., 2020).…”
Section: Frequently-used Deep Learning Models in Economics Applications
confidence: 99%
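The encoder/decoder pairing described in the quote above can be illustrated with a minimal sketch: the encoder compresses the input to a low-dimensional code, and the decoder reconstructs it. The 20-to-4 sizes, linear decoder, and random weights here are purely illustrative assumptions.

```python
import numpy as np

# Minimal autoencoder sketch: encode to a small code (dimensionality
# reduction), then decode back to the input dimension (reconstruction).
rng = np.random.default_rng(1)
W_enc = rng.standard_normal((20, 4)) * 0.1   # 20-dim input -> 4-dim code
W_dec = rng.standard_normal((4, 20)) * 0.1   # 4-dim code -> 20-dim output

def autoencode(x):
    code = np.tanh(x @ W_enc)   # encoder: compress the input
    recon = code @ W_dec        # decoder: reconstruct from the code
    return code, recon

x = rng.standard_normal(20)
code, recon = autoencode(x)
print(code.shape, recon.shape)  # (4,) (20,)
```

In practice the weights would be trained to minimize reconstruction error; anomaly detection then flags inputs whose reconstruction error is large.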
“…Ashok et al. [15] compared GloVe with Word2vec (Word to Vector), and experiments showed that GloVe has better text feature construction capabilities. Research on text feature extraction methods mainly focuses on deep learning, such as the Stacked Variational Autoencoder (SVAE) [16], Convolutional Neural Network (CNN) [17][18] and Long Short-Term Memory network (LSTM) [19]. Many studies have demonstrated the superiority of deep learning in text feature extraction.…”
Section: Text Data Feature Extraction
confidence: 99%