2018
DOI: 10.1007/978-3-319-75487-1_25

Sentiment Analysis in Arabic Twitter Posts Using Supervised Methods with Combined Features

Cited by 4 publications (3 citation statements)
References 13 publications
“…Preprocessing [17]
Normalization, POS tagging [24][25][26][27]
Stemming [28][29][30][31][32][33]
Text cleaning [34][35][36][37][38][39]
Normalization, stemming, stop words removal [40][41][42]
Text cleaning, normalization, stemming, stop words removal [43][44][45]
Normalization
Text cleaning, normalization, tokenization, stemming, stop words removal [49][50][51][52]
Normalization, tokenization [53,54]
Text cleaning, normalization, tokenization [55,56]
Normalization, tokenization, POS tagging [13,[57][58][59][60][61][62][63][64]
Normalization, tokenization, stemming, stop words removal [65,66]
Normalization, tokenization, stemming, lemmatization [67,68]
Text cleaning, normalization, tokenization, stemming [69]
Text cleaning, tokenization, stemming, negation detection [70]…”
Section: Reference
mentioning confidence: 99%
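The combinations listed above (text cleaning, normalization, tokenization, stemming, stop word removal) are the usual Arabic-tweet preprocessing steps. Below is a minimal Python sketch of such a pipeline; the normalization rules, stop word list, and example tweet are illustrative placeholders, not the resources used in any of the cited papers.

import re

# Illustrative regexes: strip URLs/mentions (text cleaning) and Arabic
# diacritics plus the tatweel character (normalization).
URLS_MENTIONS = re.compile(r'https?://\S+|@\w+|#')
DIACRITICS = re.compile(r'[\u064B-\u0652\u0640]')

# Toy stop word subset; real pipelines use a full Arabic stop word list.
STOP_WORDS = {'في', 'من', 'على', 'عن', 'إلى', 'هذا', 'أن'}

def normalize(text: str) -> str:
    text = URLS_MENTIONS.sub(' ', text)
    text = DIACRITICS.sub('', text)
    text = re.sub('[إأآا]', 'ا', text)   # unify alef variants
    text = re.sub('ة', 'ه', text)        # ta marbuta -> ha
    text = re.sub('ى', 'ي', text)        # alef maqsura -> ya
    return text

def preprocess(tweet: str) -> list[str]:
    tokens = normalize(tweet).split()    # whitespace tokenization
    # A stemming step (e.g. NLTK's ISRIStemmer) could be applied here.
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess('أنا سعيد جدا بهذا المنتج http://example.com'))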
“…These features were used to train several machine learning algorithms for classification, mainly SVM, Multinomial Naïve Bayes (MNB), Conditional Random Fields (CRF), Decision Trees, and k-Nearest Neighbors (k-NN). Overall, in some cases SVM achieved better results [21,48,57,58,66,90,91,100,101,110,144,145,147,148,175,194,195,205,206,218,237,307,351,363,365], and in other cases NB performed better [51,115,117,191,257,286], especially on unbalanced datasets such as those in References [306,308,309]. Mostafa [304] claimed that the best classifier is dataset dependent.…”
Section: Feature Engineering "Supervised" Approaches
mentioning confidence: 99%
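As a rough illustration of the SVM-versus-MNB comparison summarized above, the sketch below trains both classifiers on a handful of labelled Arabic tweets with TF-IDF features via scikit-learn; the data, labels, and feature setup are hypothetical stand-ins, not the configurations of the cited studies.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labelled tweets; real experiments use thousands of annotated posts.
texts = ["الخدمة ممتازة", "المنتج سيء جدا", "تجربة رائعة", "خدمة سيئة"]
labels = ["pos", "neg", "pos", "neg"]

for name, clf in [("SVM", LinearSVC()), ("MNB", MultinomialNB())]:
    model = make_pipeline(TfidfVectorizer(), clf)  # word TF-IDF features
    model.fit(texts, labels)
    print(name, model.predict(["خدمة ممتازة"]))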
“…The proposed tweet-specific sentiment lexicon [32] is built from a gold set of seed words, manually annotated and extracted from the dataset, and automatically expanded from 500,000 tweets using co-occurrence and coordination computation methods. We mainly employed two features inspired by this lexicon:…”
Section: Tweet Specific Lexicon
mentioning confidence: 99%
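The seed-based expansion described in this statement can be sketched as a simple co-occurrence count against positive and negative seed sets. The seed words, toy tweets, and scoring rule below are hypothetical; the cited work's co-occurrence and coordination computation over its gold seed set is more elaborate.

from collections import Counter

# Hypothetical seed sets; the cited lexicon starts from a manually
# annotated gold set of seed words extracted from the dataset.
POS_SEEDS = {"رائع", "ممتاز"}
NEG_SEEDS = {"سيء", "رديء"}

def expand_lexicon(tweets: list[list[str]]) -> dict[str, float]:
    """Score non-seed words by how often they co-occur with each seed set."""
    pos, neg = Counter(), Counter()
    for tokens in tweets:
        has_pos = bool(POS_SEEDS & set(tokens))
        has_neg = bool(NEG_SEEDS & set(tokens))
        for tok in set(tokens) - POS_SEEDS - NEG_SEEDS:
            if has_pos:
                pos[tok] += 1
            if has_neg:
                neg[tok] += 1
    # Polarity score in [-1, 1]; a placeholder for the paper's scoring.
    return {w: (pos[w] - neg[w]) / (pos[w] + neg[w])
            for w in set(pos) | set(neg)}

tweets = [["فيلم", "رائع", "جدا"], ["فيلم", "سيء", "جدا"], ["قصة", "ممتاز"]]
print(expand_lexicon(tweets))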