2021
DOI: 10.1111/coin.12478

A heterogeneous stacking ensemble based sentiment analysis framework using multiple word embeddings

Abstract: Word embedding techniques have been proposed in the literature to analyze and determine the sentiments expressed in various textual documents such as social media posts, online product reviews, and so forth. However, it is difficult to capture the entire gamut of intricate inter-dependencies among words in the textual documents using a specific word embedding technique. In this article, we aim to address this issue by proposing a computation-efficient stacking ensemble based sentiment analysis framework using …
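
A minimal sketch of the stacking idea described in the abstract (not the authors' actual pipeline): two base classifiers, each working on a different word-embedding representation of the same documents, are combined by a logistic-regression meta-learner trained on their out-of-fold predictions. The `MeanEmbeddingVectorizer` helper, the toy embedding tables, and the tiny corpus are all hypothetical, introduced only to keep the example self-contained.

```python
# Heterogeneous stacking over multiple word embeddings; illustrative sketch only.
# The embedding tables, documents, and model choices are assumptions.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline


class MeanEmbeddingVectorizer(BaseEstimator, TransformerMixin):
    """Represent each document as the mean of its word vectors."""

    def __init__(self, embeddings, dim):
        self.embeddings = embeddings  # dict: word -> vector of length `dim`
        self.dim = dim

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        out = np.zeros((len(X), self.dim))
        for i, doc in enumerate(X):
            vecs = [self.embeddings[w] for w in doc.split() if w in self.embeddings]
            if vecs:
                out[i] = np.mean(vecs, axis=0)
        return out


rng = np.random.default_rng(0)
vocab = ["good", "great", "bad", "awful", "movie", "plot"]
embedding_a = {w: rng.normal(size=50) for w in vocab}   # stand-in for, e.g., GloVe
embedding_b = {w: rng.normal(size=100) for w in vocab}  # stand-in for, e.g., Word2Vec

docs = ["good great movie", "awful bad plot", "great movie",
        "bad movie", "good plot", "awful movie"]
labels = np.array([1, 0, 1, 0, 1, 0])

# One base learner per embedding space (heterogeneous feature views).
base_models = [
    make_pipeline(MeanEmbeddingVectorizer(embedding_a, 50), LogisticRegression()),
    make_pipeline(MeanEmbeddingVectorizer(embedding_b, 100), LogisticRegression()),
]

# Level 0: out-of-fold probabilities from each base learner.
meta_features = np.column_stack([
    cross_val_predict(m, docs, labels, cv=2, method="predict_proba")[:, 1]
    for m in base_models
])

# Level 1: the meta-learner combines the base learners' predictions.
meta_learner = LogisticRegression().fit(meta_features, labels)

# Inference: refit the base learners on all data, then stack their outputs.
new_docs = ["good movie", "awful plot"]
new_meta = np.column_stack([
    m.fit(docs, labels).predict_proba(new_docs)[:, 1] for m in base_models
])
print(meta_learner.predict(new_meta))
```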

Cited by 18 publications (3 citation statements)
References 40 publications

“…Alexandridis et al [27] used various language models to represent social media texts and Greek-language text classifiers, with word embeddings implemented by the GloVe model, to detect the polarity of opinions expressed on social media. The GloVe model has also been used in sentiment analysis models, often paired with a recurrent neural network module such as long short-term memory (LSTM) or GRU [6], [28], [29].…”
Section: GloVe (mentioning)
confidence: 99%
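
The GloVe-plus-recurrent-network pattern mentioned in the statement above is typically built as an embedding layer initialized with pre-trained GloVe vectors feeding an LSTM. The Keras sketch below is a minimal, assumed illustration: the random matrix standing in for GloVe weights, the vocabulary size, and the sequence length are placeholders (in practice the matrix would be filled from a GloVe file such as glove.6B.50d.txt).

```python
# GloVe embeddings + LSTM for binary sentiment polarity; illustrative sketch.
# The random matrix below is a stand-in for real GloVe vectors.
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, max_len = 1000, 50, 30
glove_matrix = np.random.default_rng(0).normal(size=(vocab_size, embed_dim)).astype("float32")

embed = tf.keras.layers.Embedding(vocab_size, embed_dim, trainable=False)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,), dtype="int32"),  # sequences of word indices
    embed,                                            # frozen pre-trained vectors
    tf.keras.layers.LSTM(64),                         # recurrent encoder
    tf.keras.layers.Dense(1, activation="sigmoid"),   # positive vs. negative
])
embed.set_weights([glove_matrix])  # load the (stand-in) GloVe weights
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Shape check on a dummy batch of token-index sequences.
dummy = np.random.default_rng(1).integers(0, vocab_size, size=(2, max_len))
print(model.predict(dummy, verbose=0).shape)  # (2, 1)
```
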
“…In most cases, ensemble learning methods take one of three popular forms, namely bagging [17], boosting [18], and stacking [19]. Much research activity on ensemble learning has centered on homogeneous ensembles, even though heterogeneous ensembles can be more efficient when combining pre-trained models that are often readily available [20, 21].…”
Section: Introduction (mentioning)
confidence: 99%
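
For reference, the toy scikit-learn comparison below (my own illustration, not drawn from the cited works) fits each of the three ensemble families named in the statement above: a homogeneous bagging ensemble, a boosting ensemble, and a heterogeneous stacking ensemble combined by a meta-learner.

```python
# Bagging vs. boosting vs. stacking on synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Bagging: many copies of one base learner, each on a bootstrap sample.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0)

# Boosting: weak learners fitted sequentially, reweighting previous errors.
boosting = AdaBoostClassifier(n_estimators=25, random_state=0)

# Stacking: heterogeneous base learners combined by a meta-learner.
stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier()), ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging), ("boosting", boosting), ("stacking", stacking)]:
    print(name, round(model.fit(X, y).score(X, y), 3))
```
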
“…Different distributional semantics models have been developed to generate embeddings, and these have proved to adequately capture the semantic properties of words, as long as sufficiently large corpora are used (Joulin et al., 2016; Mikolov et al., 2013). Several application scenarios have been tested, including: classification of Twitter streams (Khatua, Khatua & Cambria, 2019; Zhang & Luo, 2019), plagiarism detection (Tien et al., 2019), opinion mining on social networks (Nguyen & Le Nguyen, 2018; Rida-E-Fatima et al., 2019), recommendation systems (Chamberlain et al., 2020; Baek & Chung, 2021), mapping of scientific domain keywords (Hu et al., 2019), tracking emerging scientific keywords (Dridi et al., 2019), optimization of queries for information retrieval (Roy et al., 2019; Hofstätter et al., 2019), and sentiment analysis (Santosh Kumar, Yadav & Dhavale, 2021; Subba & Kumari, 2022). However, the great majority of such models have been developed for English corpora.…”
Section: Introduction (mentioning)
confidence: 99%
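
As a small, assumed illustration of the distributional idea referenced in the statement above, the gensim sketch below trains Word2Vec skip-gram vectors on a toy corpus; the corpus and hyperparameters are arbitrary, and useful embeddings require far larger corpora, as the statement notes.

```python
# Training distributional word embeddings with Word2Vec; toy corpus only.
from gensim.models import Word2Vec

corpus = [
    ["the", "movie", "was", "good"],
    ["the", "movie", "was", "great"],
    ["the", "plot", "was", "bad"],
    ["the", "plot", "was", "awful"],
]

# Skip-gram (sg=1); min_count=1 keeps every toy token in the vocabulary.
model = Word2Vec(sentences=corpus, vector_size=50, window=2,
                 min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("good", topn=2))
```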