Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 2015
DOI: 10.3115/v1/P15-1130

Predicting Polarities of Tweets by Composing Word Embeddings with Long Short-Term Memory

Abstract: In this paper, we introduce a Long Short-Term Memory (LSTM) recurrent network for Twitter sentiment prediction. With the help of gates and constant error carousels in the memory-block structure, the model can handle interactions between words through a flexible compositional function. Experiments on a public noisily labelled dataset show that our model outperforms several feature-engineering approaches, with results comparable to the current best data-driven technique. According to the evaluation on a generated …

Cited by 255 publications (105 citation statements) · References 24 publications
“…2) RNN for sentence-level classification: Wang et al. [25] proposed encoding entire tweets with an LSTM, whose final hidden state is used to predict sentiment polarity. This simple strategy proved competitive with the more complex DCNN structure of Kalchbrenner et al. [49], which was designed to endow CNN models with the ability to capture long-term dependencies.…”
Section: B. RNN Models
Confidence: 99%
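The strategy described in that citation statement, encoding a tweet with an LSTM and reading sentiment off the final hidden state, can be sketched in a few lines. The sketch below is an illustrative toy with random, untrained weights, not the authors' implementation; the class name, dimensions, and the sigmoid output layer are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal LSTM cell: input/forget/output gates plus the
    constant-error-carousel cell state mentioned in the abstract."""

    def __init__(self, d_in, d_hid):
        # One stacked matrix for the input, forget, output, candidate gates.
        self.W = rng.normal(0, 0.1, (4 * d_hid, d_in + d_hid))
        self.b = np.zeros(4 * d_hid)
        self.d_hid = d_hid

    def encode(self, embeddings):
        h = np.zeros(self.d_hid)
        c = np.zeros(self.d_hid)
        for x in embeddings:                      # one word embedding per step
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
            c = f * c + i * np.tanh(g)            # constant error carousel
            h = o * np.tanh(c)
        return h                                  # tweet representation

d_emb, d_hid = 8, 16
lstm = TinyLSTM(d_emb, d_hid)
tweet = rng.normal(size=(5, d_emb))               # 5 hypothetical word vectors
w_out = rng.normal(0, 0.1, d_hid)
p_positive = sigmoid(w_out @ lstm.encode(tweet))  # polarity probability
```

In the actual model the weights would be learned by backpropagation through time with a cross-entropy loss on the polarity labels; the sketch only shows the forward composition.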
“…Deep learning has emerged as a powerful machine learning technique and has become popular for sentiment analysis in recent years [14], [15]. Wang et al. proposed a CNN-RNN architecture [16] to analyze the sentiment of short texts, while other studies applied methods based on CNNs [17] and RNNs [18]. Their experimental results showed that the proposed method outperforms lexicon-based and regression-based approaches and achieves a clear improvement upon the state of the art.…”
Section: Literature Survey
Confidence: 99%
“…In terms of methods, we believe that deep learning [58,61,72], together with semi-supervised and distantly-supervised methods [10,67], will be the main focus of future research. We also expect more attention to be paid to linguistic structure and sentiment compositionality [62,63].…”
Section: Future Directions
Confidence: 99%
“…A popular way to solve this latter problem is self-training, a form of semi-supervised learning: a system is first trained on the available training data only, then applied to make predictions on a large unannotated set of tweets, and finally trained for a few more iterations on its own predictions. This works because parts of the network, e.g., the convolutional or LSTM layers [58,61,72], need to learn something like a language model, i.e., which word is likely to follow which; training these parts requires no labels.…”
Section: Features and Learning
Confidence: 99%
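The self-training loop quoted above (train on labelled data, pseudo-label an unannotated pool, retrain on the combined set) can be illustrated with a deliberately simple stand-in classifier. This is a toy sketch under assumed data: a nearest-centroid classifier on 2-D points takes the place of the neural sentiment model, and the features are synthetic rather than tweets.

```python
import numpy as np

rng = np.random.default_rng(1)

def centroids(X, y):
    # Class means serve as the stand-in for "training" a model.
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(cents, X):
    d0 = np.linalg.norm(X - cents[0], axis=1)
    d1 = np.linalg.norm(X - cents[1], axis=1)
    return (d1 < d0).astype(int)

# Small labelled set and a larger unannotated pool (toy features).
X_lab = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_lab = np.array([0, 0, 1, 1])
X_pool = rng.normal(0.5, 0.6, size=(200, 2))

cents = centroids(X_lab, y_lab)     # step 1: train on labelled data only
for _ in range(3):                  # steps 2-3: pseudo-label, then retrain
    pseudo = predict(cents, X_pool)
    X_all = np.vstack([X_lab, X_pool])
    y_all = np.concatenate([y_lab, pseudo])
    cents = centroids(X_all, y_all)
```

In practice the retraining step runs only "for a few more iterations", as the quote notes, since pseudo-labels are noisy and prolonged training on them lets errors reinforce themselves.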