2017
DOI: 10.1063/1.5005351
Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis

Cited by 9 publications (4 citation statements)
References 20 publications
“…In this article, we propose a multi-view classification algorithm, called Multi-View K-Nearest Neighbors (MVKNN). We chose the KNN algorithm to apply multi-view learning because of its advantages, such as simplicity, easy implementation, and effectiveness [39]. The advantages of KNN also include ease of understanding and interpretation of the results, and its usefulness for nonlinear data.…”
Section: The Proposed Approach: Multi-View K-Nearest Neighbors
confidence: 99%
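The KNN advantages cited above (simplicity, easy implementation, interpretable votes) are easy to see in code. A minimal sketch of plain k-nearest-neighbors classification, using only the standard library; the toy data and the `knn_predict` helper are illustrative assumptions, not code from the cited papers:

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train, labels)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data: two small clusters with sentiment-style labels.
train = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
labels = ["neg", "neg", "pos", "pos"]
print(knn_predict(train, labels, (0.95, 1.0), k=3))  # -> pos
```

Because the prediction is just a vote over concrete neighbors, the result can be explained by pointing at those neighbors, which is the interpretability advantage the excerpt mentions.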
“…Rohaidah et al. [9] introduced a new sentiment analysis approach using the k-NN classifier. The dataset was collected from customer review datasets, with a maximum precision of 0.892.…”
Section: Related Work II
confidence: 99%
“…Ahmad et al. [12] proposed another hybrid of Ant Colony Optimization (ACO) and K-Nearest Neighbour (KNN) as a feature selection algorithm, used to choose the relevant features from customer review datasets. Information Gain (IG), the Genetic Algorithm (GA), and Rough Set Attribute Reduction (RSAR) were employed as baselines in the performance evaluation and comparison.…”
Section: Related Work
confidence: 99%
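The hybrid described above is a wrapper method: ants propose feature subsets, and KNN accuracy on those subsets is the fitness that guides pheromone updates. A deliberately simplified sketch of that idea, assuming a toy dataset, leave-one-out KNN fitness, and a basic evaporate-and-reinforce pheromone rule; none of the parameter values or helper names are taken from the cited paper:

```python
import math
import random
from collections import Counter

def knn_accuracy(X, y, feats, k=3):
    """Leave-one-out KNN accuracy using only the features in `feats`."""
    correct = 0
    for i in range(len(X)):
        dists = sorted(
            (math.dist([X[j][f] for f in feats], [X[i][f] for f in feats]), y[j])
            for j in range(len(X)) if j != i
        )
        vote = Counter(lbl for _, lbl in dists[:k]).most_common(1)[0][0]
        correct += vote == y[i]
    return correct / len(X)

def aco_knn_select(X, y, n_ants=5, n_iters=10, rho=0.1, seed=0):
    """Simplified ACO wrapper: each ant includes a feature with probability
    driven by its pheromone; KNN accuracy is the fitness; pheromone
    evaporates each iteration and the best subset's features are reinforced."""
    rng = random.Random(seed)
    n = len(X[0])
    pher = [1.0] * n
    best_feats, best_fit = list(range(n)), 0.0
    for _ in range(n_iters):
        for _ in range(n_ants):
            feats = [f for f in range(n) if rng.random() < pher[f] / (1 + pher[f])]
            if not feats:
                continue
            fit = knn_accuracy(X, y, feats)
            if fit > best_fit or (fit == best_fit and len(feats) < len(best_feats)):
                best_feats, best_fit = feats, fit
        pher = [(1 - rho) * p for p in pher]        # evaporation
        for f in best_feats:
            pher[f] += rho * best_fit               # reinforce the best subset
    return best_feats, best_fit

# Toy data: features 0 and 1 separate the classes, feature 2 is pure noise.
data_rng = random.Random(1)
X = [[float(i % 2), (i % 2) * 0.9 + 0.05, data_rng.random()] for i in range(20)]
y = ["pos" if i % 2 else "neg" for i in range(20)]
feats, fit = aco_knn_select(X, y)
print(feats, fit)
```

On this toy data the search should settle on a subset drawn from the two informative features and discard the noise feature, which is the point of wrapper-style feature selection.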
“…The distance from the separating hyperplane to a point on H1 is 1/||w||, and the distance from the separating hyperplane to a point on H2 is 1/||w||, so the maximum margin is 2/||w||. The MMH can be rewritten as a decision boundary according to its Lagrangian formulation, as in (12)…”
Section: E. Support Vector Machine (SVM) Classifier
confidence: 99%
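The margin argument in that excerpt can be written out explicitly. A sketch of the standard maximum-margin formulation, using conventional SVM notation (w, b, alpha_i) assumed from common textbook treatments rather than taken verbatim from the cited paper:

```latex
% H:  w^T x + b = 0 (separating hyperplane)
% H1: w^T x + b = +1,  H2: w^T x + b = -1 (margin hyperplanes)
\begin{align}
  d(H, H_1) = \frac{1}{\lVert w \rVert}, \qquad
  d(H, H_2) &= \frac{1}{\lVert w \rVert}, \qquad
  \text{margin} = \frac{2}{\lVert w \rVert}, \\
  \max_{w,\,b} \frac{2}{\lVert w \rVert}
  \;\Longleftrightarrow\;
  \min_{w,\,b} \tfrac{1}{2}\lVert w \rVert^{2}
  &\quad \text{s.t.}\quad y_i \bigl(w^{\top} x_i + b\bigr) \ge 1, \\
  L(w, b, \alpha) = \tfrac{1}{2}\lVert w \rVert^{2}
  - \sum_i \alpha_i \bigl[\, y_i \bigl(w^{\top} x_i + b\bigr) - 1 \,\bigr],
  &\quad \alpha_i \ge 0.
\end{align}
```

The Lagrangian in the last line is the "formulation as in (12)" the excerpt refers to: maximizing the margin 2/||w|| is equivalent to minimizing ||w||^2/2 under the unit-margin constraints, with one multiplier alpha_i per training point.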