2019
DOI: 10.1109/access.2019.2957192
Transformer Based Memory Network for Sentiment Analysis of Web Comments

Abstract: The boom in wireless networking technology has led to an exponential increase in the number of web comments. Sentiment analysis of web comments is therefore vital, and aspect-based sentiment analysis (ABSA) is well suited to extracting their sentiment features. Currently, context-dependent sentiment features are typically derived from recurrent neural networks (RNNs), and an average target vector usually replaces the target vector. However, web comments have become increasingly complex, and RNNs may lose…

Cited by 29 publications (18 citation statements)
References 27 publications
“…For this reason, a bidirectional GRU is adopted in the FGAOM to learn long-term dependencies from the features accumulated in the previous layer, in both the forward and backward directions. Thus, for the feature at time t, the hidden states of the forward and backward GRUs are computed as shown in equations (8)–(10).…”
Section: Bi-GRU (mentioning)
confidence: 99%
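Equations (8)–(10) are not reproduced in the snippet. As a reference point, the standard bidirectional GRU update is given below; this is the conventional formulation, not necessarily the cited paper's exact notation.

```latex
% Standard Bi-GRU hidden states at time t (conventional formulation;
% the cited paper's equations (8)-(10) may differ in notation).
\overrightarrow{h}_t = \mathrm{GRU}\!\left(x_t,\ \overrightarrow{h}_{t-1}\right) \quad\text{(forward)}
\qquad
\overleftarrow{h}_t = \mathrm{GRU}\!\left(x_t,\ \overleftarrow{h}_{t+1}\right) \quad\text{(backward)}
\qquad
h_t = \left[\,\overrightarrow{h}_t \,;\, \overleftarrow{h}_t\,\right] \quad\text{(concatenation)}
```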
“…The MHA mechanism and a convolution operation are employed to further analyze the relevance of the aspect and context words that pass through the interactive pooling layer. [8]: This model applies a transformer memory network to learn hidden contextual patterns in complex web comments via global-attention and local-attention mechanisms, constructing a fine-grained semantic representation.…”
Section: A. Compared Studies (mentioning)
confidence: 99%
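A minimal PyTorch sketch of the attention-plus-convolution pattern the statement describes: multi-head attention relating context words to aspect words, followed by a convolution over the attended sequence. Module names, dimensions, and the cross-attention wiring are assumptions for illustration, not the cited papers' actual architecture.

```python
import torch
import torch.nn as nn

class AttnConvBlock(nn.Module):
    """Illustrative MHA + convolution block (all sizes are assumptions)."""

    def __init__(self, d_model=300, n_heads=6, kernel_size=3):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, context, aspect):
        # Cross-attention: context words attend to aspect words.
        attended, _ = self.mha(query=context, key=aspect, value=aspect)
        # Conv1d expects (batch, channels, seq_len).
        out = self.conv(attended.transpose(1, 2)).transpose(1, 2)
        return torch.relu(out)

# Example: batch of 2 comments, 20 context tokens, 4 aspect tokens, dim 300.
ctx = torch.randn(2, 20, 300)
asp = torch.randn(2, 4, 300)
print(AttnConvBlock()(ctx, asp).shape)  # torch.Size([2, 20, 300])
```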
“…, where $d_e$ is the dimension of the vector. In the T-MGAN design, GloVe [40] word embeddings are adopted for the embedding layer. We obtain the specific word vector through an embedding lookup…”
Section: B. Input Word Embedding Layer (mentioning)
confidence: 99%
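A minimal sketch of the lookup described above: word vectors are fetched by row index from a pretrained embedding matrix. The toy vocabulary and the random stand-in for GloVe rows are assumptions; only the lookup mechanism comes from the statement.

```python
import numpy as np

d_e = 300  # embedding dimension
# Toy vocabulary and a random stand-in for real GloVe rows (assumptions).
vocab = {"<pad>": 0, "battery": 1, "life": 2, "great": 3}
E = np.random.randn(len(vocab), d_e).astype(np.float32)

def embed(tokens):
    """Map a token sequence to a (seq_len, d_e) matrix by row lookup."""
    ids = [vocab.get(t, vocab["<pad>"]) for t in tokens]
    return E[ids]

print(embed(["battery", "life", "great"]).shape)  # (3, 300)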
“…In the experiment, we set the initial word embeddings to 300-dimensional GloVe vectors [40] for all datasets. For words that are out of vocabulary, we randomly sample their embeddings from the uniform distribution $U(-0.01, 0.01)$.…”
Section: B. Experiment Settings (mentioning)
confidence: 99%
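A short sketch of the out-of-vocabulary initialization the statement describes: words missing from GloVe get vectors drawn from U(-0.01, 0.01). The `glove` dict is a hypothetical placeholder for vectors loaded elsewhere.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
d_e = 300
# Hypothetical {word: vector} stand-in for pretrained GloVe vectors.
glove = {"good": rng.standard_normal(d_e)}

def init_vector(word):
    if word in glove:
        return glove[word]              # in-vocabulary: pretrained vector
    return rng.uniform(-0.01, 0.01, d_e)  # OOV: uniform U(-0.01, 0.01)

print(init_vector("zzzunknown")[:3])  # small values in (-0.01, 0.01)
```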