2016
DOI: 10.17485/ijst/2016/v9i3/86387

Feature Selection using Random Forest Method for Sentiment Analysis

Cited by 15 publications (7 citation statements, published 2018-2024)
References 16 publications
“…An experiment was performed using the WEKA Explorer classification tool [29]. Some of the well-known classifiers used for our comparative study included Decision tree, Sequential minimal optimization (SMO), Naive Bayes, Random Forests, K-Nearest neighbor (KNN) [30][31], etc. The output feature matrix was converted into an Attribute-Relation File Format (ARFF) file, which describes a list of instances sharing a set of attributes.…”
Section: Results (mentioning)
confidence: 99%
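As an illustrative aside (not code from the cited study), the WEKA workflow described in this statement can be sketched in a few lines of Java, assuming a hypothetical features.arff export of the feature matrix with the class label as the last attribute:

```java
import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.SMO;
import weka.classifiers.lazy.IBk;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ClassifierComparison {
    public static void main(String[] args) throws Exception {
        // Load the ARFF feature matrix; "features.arff" is a placeholder path.
        Instances data = new DataSource("features.arff").getDataSet();
        // By convention, the class label is the last attribute.
        data.setClassIndex(data.numAttributes() - 1);

        // The classifiers named in the quoted study, with WEKA defaults.
        Classifier[] models = {
            new J48(),          // decision tree
            new SMO(),          // sequential minimal optimization
            new NaiveBayes(),
            new RandomForest(),
            new IBk()           // k-nearest neighbor
        };

        for (Classifier model : models) {
            // 10-fold cross-validation, then report accuracy.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(model, data, 10, new Random(1));
            System.out.printf("%s: %.2f%% correct%n",
                model.getClass().getSimpleName(), eval.pctCorrect());
        }
    }
}
```

Each run performs 10-fold cross-validation and prints the percentage of correctly classified instances, the kind of comparative figure such studies report.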
“…The pre-process tab in WEKA [29] enables loading and processing the feature matrix. All the classifiers, such as Decision tree, SMO, Naïve Bayes, Random Forests, and K-Nearest Neighbor [30][31], were run with default parameter settings. Naive Bayes [30][31] is the most commonly used classifier for its simple probabilistic classification. It is based on Bayes' theorem with strong independence assumptions.…”
Section: Results (mentioning)
confidence: 99%
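For context, the "Bayes' theorem with strong independence assumptions" mentioned in this statement yields the standard Naive Bayes decision rule for features x_1, ..., x_n and class y:

```latex
\hat{y} = \arg\max_{y} P(y \mid x_1, \dots, x_n)
        = \arg\max_{y} P(y) \prod_{i=1}^{n} P(x_i \mid y)
```

The evidence term P(x_1, ..., x_n) is dropped because it is constant across classes, leaving only the class prior times the per-feature likelihoods.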
“…This is the fundamental reason for the increased use of random forest (RF) techniques in variable selection. However, methods like multiple imputation and complete case analysis could be employed when the data contains missing values. In their study, (Jotheeswaran & Koteeswaran, 2016) compared RF with Principal Component Analysis (PCA) and Decision Trees (DT) in variable selection and concluded that the precision of classifiers improved more with RF than with the others. In his study to downscale land surface temperatures, (Hutengs & Vohland, 2016) used RF to select the critical variables, suggesting that the number of variables included influenced the importance score, while a change in the importance score could also be attributed to predictors being changed or replaced. (Aldrich & Auret, 2010), while investigating fault conditions, employed RF to identify the process variables with a high contribution to faultiness.…”
Section: Introduction (mentioning)
confidence: 99%
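Continuing the illustrative Java/WEKA sketch from above, the impurity-based attribute importance score discussed in this passage can be requested from WEKA's RandomForest. The setComputeAttributeImportance option is assumed here from WEKA 3.8, and features.arff is again a placeholder path:

```java
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RfImportance {
    public static void main(String[] args) throws Exception {
        // Placeholder path; class label assumed to be the last attribute.
        Instances data = new DataSource("features.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        RandomForest rf = new RandomForest();
        // Assumed WEKA 3.8 option: report mean impurity decrease per attribute.
        rf.setComputeAttributeImportance(true);
        rf.buildClassifier(data);

        // toString() then includes an attribute-importance ranking that can
        // drive variable selection (e.g., keep only the top-k attributes).
        System.out.println(rf);
    }
}
```

Ranking attributes by this score and retaining the top-k is a simple form of the RF-based variable selection that the cited study compares against PCA and DT.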