2011
DOI: 10.4028/www.scientific.net/amr.271-273.149

A Comparison of Machine Learning Classifiers

Abstract: A number of different classifiers have been used to improve precision and accuracy and to give better classification results. Machine learning classifiers have proven to be among the most successful techniques in the majority of fields. This paper presents a comparison of three of the most successful machine learning classification techniques, SVM, boosting, and Local SVM, applied to a cancer dataset. The comparison is made on the basis of precision and accuracy, along with an analysis of training time. Finally, the efficac…
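A minimal sketch of the kind of comparison the abstract describes is given below. It is not the paper's code: the scikit-learn breast cancer dataset stands in for the paper's cancer dataset, only SVM and AdaBoost are shown (the Local SVM variant is omitted), and the metrics are accuracy, precision, and training time.

```python
# Sketch only: compare two classifiers on a cancer dataset by
# accuracy, precision, and training time (assumed stand-in data).
import time

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")), ("Boosting", AdaBoostClassifier())]:
    start = time.time()
    clf.fit(X_train, y_train)              # training time measured around fit()
    elapsed = time.time() - start
    pred = clf.predict(X_test)
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.3f}, "
          f"precision={precision_score(y_test, pred):.3f}, "
          f"train_time={elapsed:.3f}s")
```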

Cited by 1 publication (1 citation statement)
References 8 publications
“…In our experiments, the SafeDroid v2.0 framework employs the functionality of the following classification algorithms: K-Nearest Neighbor, Support Vector Machine, Random Forest, Decision Tree, Multilayer Perceptron, and Adaptive Boosting [25]. In particular, the framework uses the cross-validation technique to produce accuracy results of all classifiers and then compares the cross-validation scores of the different classifiers with the greedy look-up procedure.…”
Section: Model Generation
Mentioning, confidence: 99%
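The procedure described in the citation statement, scoring several classifiers with cross-validation and comparing their scores, can be sketched as follows. This is not SafeDroid v2.0's code; the dataset and classifier settings are placeholders, and the greedy look-up step is reduced to picking the highest mean score.

```python
# Illustrative sketch only: 5-fold cross-validation scores for several
# classifiers, then select the best mean accuracy (placeholder data).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in features, not SafeDroid's

classifiers = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(),
}

scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
best = max(scores, key=scores.get)
print(scores)
print("best classifier by mean CV accuracy:", best)
```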