2019
DOI: 10.48550/arxiv.1902.00753
Preprint
Word Embeddings for Sentiment Analysis: A Comprehensive Empirical Survey

Abstract: This work investigates how factors such as training method, training corpus size, and the thematic relevance of texts affect the performance of word-embedding features in sentiment analysis of tweets, song lyrics, movie reviews, and item reviews. We also explore specific training or post-processing methods that can enhance the performance of word embeddings in particular tasks or domains. Our empirical observations indicate that models trained with multithematic texts that are large and rich in vocabulary a…

Cited by 2 publications (1 citation statement)
References 20 publications
“…LSTM has abilities to recall the data for a significant stretch of time. Here's the model that was formulated: GloVe embeddings are vector representations of words and they have been trained over 300 billion unique tokens in the vocabulary [25]. So, I have used GloVe embeddings as the embeddings initializer in the LSTM model.…”
Section: Step 2: LSTMs and GloVe
Confidence: 99%
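The citing work describes initializing an LSTM's embedding layer from pretrained GloVe vectors so the network starts from meaningful word representations rather than random ones. A minimal PyTorch sketch of that setup is below; the toy vocabulary and the random `pretrained` matrix are stand-ins for real GloVe rows (in practice they would be loaded from a `glove.*.txt` file), and the class and variable names are illustrative, not from the cited paper.

```python
import torch
import torch.nn as nn

# Hypothetical toy vocabulary; a placeholder matrix stands in for GloVe rows.
vocab = {"<pad>": 0, "good": 1, "bad": 2, "movie": 3}
embed_dim = 50
pretrained = torch.randn(len(vocab), embed_dim)  # would be real GloVe vectors

class SentimentLSTM(nn.Module):
    def __init__(self, weights, hidden_dim=32, num_classes=2):
        super().__init__()
        # Initialize the embedding layer from the pretrained matrix;
        # freeze=False lets the vectors be fine-tuned during training.
        self.embedding = nn.Embedding.from_pretrained(weights, freeze=False)
        self.lstm = nn.LSTM(weights.size(1), hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(emb)      # final hidden state per layer
        return self.fc(h_n[-1])           # (batch, num_classes)

model = SentimentLSTM(pretrained)
logits = model(torch.tensor([[1, 3, 0]]))  # token ids for "good movie <pad>"
```

Feeding the final LSTM hidden state to a linear classifier, as here, is one common way to produce a sentence-level sentiment prediction; the citing paper does not specify its classifier head, so that part is an assumption.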