2017
DOI: 10.1007/978-3-319-70096-0_51
Hybrid Deep Learning for Sentiment Polarity Determination of Arabic Microblogs

Cited by 42 publications (32 citation statements)
References 18 publications
“…State-of-the-art methods used in the comparison are listed in Table 15.

CNN-base: a CNN similar to the model described in Section 3.2, trained on Twitter word embeddings (Twt-CBOW) from [58]. A random configuration is used, with the filter sizes list, number of neurons, NFCS, initialization mode, and dropout rate set to [3, 5, 7], 150, 100, uniform, and 0.7, respectively.

Combined LSTM: a model proposed by Al-Azani and El-Alfy [57], in which two long short-term memory (LSTM) networks are combined using different combination methods: summation, multiplication, and concatenation.

Stacking ensemble (eclf14): a model based on the stacking ensemble presented in [21], where several classifiers were included in the training. The ensemble-learning techniques used are stochastic gradient descent (SGD) and nu-support vector classification (NuSVC).

NuSVC: a model employed in [7] as a classifier on the AAQ dataset.

SVM (bigrams): a support vector machine classifier trained with TF-IDF as the weighting scheme over bigrams, evaluated in [52] on the AJGT dataset.…”
Section: Results (mentioning)
confidence: 99%
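The SVM (bigrams) baseline described above weights word bigrams by TF-IDF. A minimal sketch of that weighting scheme, using a toy smoothed IDF (real implementations such as scikit-learn's differ in normalization details; the documents here are invented for illustration):

```python
import math
from collections import Counter

def bigrams(tokens):
    # Consecutive word pairs, e.g. ["a", "b", "c"] -> [("a", "b"), ("b", "c")]
    return list(zip(tokens, tokens[1:]))

def tfidf_bigrams(docs):
    """Toy TF-IDF over word bigrams for a list of tokenized documents."""
    grams = [Counter(bigrams(d)) for d in docs]
    n = len(docs)
    # Document frequency of each bigram across the corpus.
    df = Counter(g for counts in grams for g in counts)
    # Smoothed IDF so bigrams present in every document still get weight 1.0.
    idf = {g: math.log(n / df[g]) + 1.0 for g in df}
    return [{g: tf * idf[g] for g, tf in counts.items()} for counts in grams]

docs = [["the", "food", "was", "great"],
        ["the", "food", "was", "bad"]]
weights = tfidf_bigrams(docs)
# ("the", "food") occurs in both documents, so its IDF is log(2/2) + 1 = 1.0,
# while ("was", "great") occurs in only one, giving it a higher weight.
```

The resulting per-document weight dictionaries would then be vectorized and fed to the SVM classifier.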
“…A random configuration is used, where parameters such as filter sizes list, number of neurons, NFCS, initialization mode, and dropout rate were set to [3, 5, 7], 150, 100, uniform, and 0.7, respectively. Combined LSTM: a model proposed by Al-Azani and El-Alfy [57], where two long short-term memory (LSTM) networks were combined using different combination methods including summation, multiplication, and concatenation. Stacking ensemble (eclf14): a model based on the stacking ensemble presented in [21], where several classifiers were included in the training.…”
Section: Results (mentioning)
confidence: 99%
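The three combination methods named for the combined-LSTM model operate on the final hidden states of the two LSTM branches. A minimal sketch, with hypothetical 4-dimensional vectors standing in for the real branch outputs:

```python
# Hypothetical final hidden states of the two LSTM branches
# (dimension 4 here purely for illustration).
h1 = [0.1, 0.2, 0.3, 0.4]
h2 = [0.5, 0.6, 0.7, 0.8]

# Summation: element-wise sum, preserves the feature dimension.
combined_sum = [a + b for a, b in zip(h1, h2)]

# Multiplication: element-wise (Hadamard) product, also dimension-preserving.
combined_mul = [a * b for a, b in zip(h1, h2)]

# Concatenation: stacks both states, doubling the feature dimension.
combined_cat = h1 + h2
```

The combined vector is then passed to the classification layer; concatenation keeps all information from both branches at the cost of a wider downstream layer.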
“…It is noted that using word embeddings with DL models helped improve the results over linear models such as SVM, as this approach is suitable for large datasets and can be computationally efficient [33,90].…”
Section: Machine Learning Approach (mentioning)
confidence: 99%
“…As a result, the obtained accuracy for ASA was improved on several datasets. In addition, a DL method for ASA was presented in [90]. The authors investigated several combinations of skip-gram and CBOW embeddings with CNN and LSTM models, evaluated on two publicly available datasets.…”
Section: Machine Learning Approach (mentioning)
confidence: 99%
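The skip-gram and CBOW objectives mentioned above differ in prediction direction: CBOW predicts a center word from its surrounding context, while skip-gram predicts each context word from the center word. A hypothetical sketch (not the cited authors' implementation) of how the two kinds of training pairs are built:

```python
def training_pairs(tokens, window=1, mode="cbow"):
    """Build (input, target) pairs for a toy CBOW or skip-gram objective.

    CBOW: (context words) -> center word.
    Skip-gram: center word -> each context word.
    """
    pairs = []
    for i, center in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window),
                                  min(len(tokens), i + window + 1))
                   if j != i]
        if mode == "cbow":
            pairs.append((tuple(context), center))
        else:  # skip-gram
            pairs.extend((center, c) for c in context)
    return pairs

sent = ["arabic", "tweets", "are", "short"]
cbow_pairs = training_pairs(sent, mode="cbow")
sg_pairs = training_pairs(sent, mode="sg")
# cbow_pairs[0] is (("tweets",), "arabic"); skip-gram yields 6 pairs for this sentence.
```

Libraries such as gensim expose the same choice via a single flag (e.g. `sg=0` for CBOW, `sg=1` for skip-gram in `gensim.models.Word2Vec`).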
“…In Al-Azani and El-Alfy [55], five architectures were investigated including CNN, CNN-LSTM, simple LSTM, stacked LSTM and combined LSTM to analyse Arabic tweets. They employed dynamic and static CBOW and SG word embeddings to train the models.…”
Section: Sentiment Analysis (mentioning)
confidence: 99%