2017
DOI: 10.1007/978-3-319-68612-7_5

Word Embedding Dropout and Variable-Length Convolution Window in Convolutional Neural Network for Sentiment Classification

Cited by 3 publications (2 citation statements)
References 9 publications
“…Some of the embeddings are artificially dropped at a rate of 0.2 [51]. Using dropout layers on the embedding matrix can reduce deep neural network overfitting [52]. The remaining word embeddings are scaled by 1/(1 − p_e), where p_e denotes the dropout probability [53].…”
Section: Proposed Methodology (mentioning)
confidence: 99%
“…The dropout layer arbitrarily drops some of the embeddings with a drop rate of 0.2 (Rao & Spasojevic, 2016). Utilizing the dropout layer on the embedded matrix helps to decrease the overfitting of deep neural networks (Sun & Gu, 2017). The word embeddings that have not been dropped out are scaled by 1/(1 − p_e), where p_e is the probability of embedding dropout (Gal & Ghahramani, 2016).…”
Section: Proposed CNN-LSTM Model (mentioning)
confidence: 99%
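Both citing papers describe the same inverted-dropout scheme applied at the embedding layer: each word vector is zeroed with probability p_e = 0.2 during training, and the surviving vectors are scaled by 1/(1 − p_e) so the expected magnitude of the embeddings is unchanged. A minimal NumPy sketch of that step (the function name, array shapes, and defaults are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def embedding_dropout(embedded, p_e=0.2, training=True, rng=None):
    """Drop whole word vectors with probability p_e and rescale survivors.

    embedded: float array of shape (seq_len, embed_dim), one row per token.
    Surviving rows are scaled by 1 / (1 - p_e) so each embedding has the
    same expected value with and without dropout (inverted dropout).
    """
    if not training or p_e == 0.0:
        return embedded
    rng = rng or np.random.default_rng()
    # One Bernoulli draw per token: the entire word vector is dropped,
    # not individual dimensions, as described in the citing papers.
    keep = rng.random(embedded.shape[0]) >= p_e  # shape (seq_len,)
    return embedded * keep[:, None] / (1.0 - p_e)

# Example: 5 tokens with 4-dimensional embeddings.
x = np.ones((5, 4))
print(embedding_dropout(x, p_e=0.2))  # kept rows become 1.25, dropped rows 0
```

Scaling at training time rather than at inference time (the "inverted" convention) means the model can be used unchanged at test time, with dropout simply disabled.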