2018
DOI: 10.1016/j.procs.2018.01.150
Random Forest and Support Vector Machine based Hybrid Approach to Sentiment Analysis

Cited by 213 publications (111 citation statements)
References 9 publications
“…It calculates a maximum margin hyperplane that divides the data points into two classes. In text classification, the Support Vector Machine is considered the best classification algorithm [31], [32]. The SVM algorithm generally divides the training dataset into a minimum of two classes.…”
Section: B Phase II Using Machine Learning Methods
confidence: 99%
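The maximum-margin idea in the statement above can be illustrated with a minimal sketch: a separating hyperplane w·x + b = 0 assigns each point to one of two classes by the sign of its score. The weights and points below are hand-picked for illustration, not learned by an SVM.

```python
# Sketch: how a separating hyperplane w.x + b = 0 splits points into two
# classes, as an SVM's maximum-margin hyperplane does. Values are illustrative.

def hyperplane_side(w, b, x):
    """Return +1 or -1 depending on which side of the hyperplane x falls."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

w, b = [1.0, -1.0], 0.0                  # hyperplane: x1 - x2 = 0
points = [(2.0, 1.0), (1.0, 3.0)]        # one point on each side
labels = [hyperplane_side(w, b, p) for p in points]
print(labels)  # [1, -1]
```

In a trained SVM, `w` and `b` are chosen to maximize the margin, i.e. the distance from the hyperplane to the nearest training points of either class.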
“…(3) both algorithms were applied in the past to perform NLP [30]; and (4) decision-trees and SVM were included among the 25 best classifiers identified in a deep analysis performed in [31].…”
Section: Statistical Models
confidence: 99%
“…Yassine et al. [11] used an Amazon dataset of product comments for sentiment analysis. They applied supervised learning algorithms such as Support Vector Machine, Random Forest, and the hybrid Random Forest Support Vector Machine (RFSVM) algorithm for generating classification rules.…”
Section: Related Work
confidence: 99%
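One simple way to combine a Random Forest and an SVM, as the hybrid RFSVM idea above suggests, is to let each trained model vote on the label. The sketch below uses stand-in predictor functions in place of trained models; the paper's actual RFSVM combination scheme may differ.

```python
# Sketch of a hybrid classifier: combine the predictions of two base models
# (e.g. a Random Forest and an SVM) by voting. Both predictors here are
# illustrative stand-ins, not trained models.

def rf_predict(x):
    """Stand-in for a trained Random Forest classifier."""
    return 1 if x[0] > 0.5 else 0

def svm_predict(x):
    """Stand-in for a trained SVM classifier."""
    return 1 if x[0] + x[1] > 1.0 else 0

def hybrid_predict(x):
    """Predict positive if at least half the base models vote positive."""
    votes = [rf_predict(x), svm_predict(x)]
    return 1 if sum(votes) >= len(votes) / 2 else 0

print(hybrid_predict((0.7, 0.1)))  # 1: RF votes positive, SVM negative
print(hybrid_predict((0.2, 0.3)))  # 0: both vote negative
```

With only two base models, ties are broken toward the positive class here; a real ensemble would typically use more models, weighted votes, or averaged class probabilities.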
“…Ensemble Bagging produces an accuracy of 83%, and boosting algorithms such as AdaBoost, Gradient Boosting and XGBoost produce accuracies of 84%, 86% and 86% respectively. The fundamental difference between the various boosting algorithms is their method of weighting training data points and hypotheses [11] [12]. The stacking method delivers an accuracy of 84%.…”
Section: Fig 2: Accuracy Of Various Classification Techniques
confidence: 99%