2015
DOI: 10.5120/ijca2015907374

Survey on Classification Techniques for Data Mining

Abstract: This paper surveys techniques for classifying initially uncategorized observations. Our objective is to compare the different classification methods and classifiers that can be used for this purpose. We examine the accuracy and usefulness of each classifier and the circumstances in which it should be applied.

Cited by 11 publications (7 citation statements) · References 10 publications
“…REP tree is a fast decision-tree learner that builds a decision/regression tree using information obtained from the splitting criterion, and prunes it using the reduced error method [36]. Nearest neighbor classifiers are based on learning by analogy: when given an unknown sample, a k-nearest neighbor (KNN) classifier searches the pattern space for the k training samples that are closest to the unknown sample [37]. The NB classifier assumes that each variable is unrelated to the presence of any other variable; because the independent variables are assumed, only the variances of the variables for each class need to be determined [37].…”
Section: Machine Learning Algorithms (mentioning)
confidence: 99%
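The k-nearest-neighbor search described in this citation statement — finding the k training samples closest to an unknown sample in the pattern space — can be sketched in a few lines. This is a minimal illustration, not code from the surveyed paper; the toy data, Euclidean distance, and k=3 are assumptions:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k closest training samples.

    `train` is a list of (feature_vector, label) pairs; distance is Euclidean.
    """
    # Sort training samples by distance to the unknown sample.
    by_distance = sorted(train, key=lambda pair: math.dist(pair[0], query))
    # Majority vote among the k nearest labels.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy pattern space: two well-separated clusters.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.8), "B"), ((4.9, 5.1), "B")]
print(knn_classify(train, (1.1, 1.0)))  # -> A
```

Note that KNN defers all work to query time: there is no training step beyond storing the samples, which is why it is described as "learning by analogy".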
“…Nearest neighbor classifiers are based on learning by analogy: when given an unknown sample, a k-nearest neighbor (KNN) classifier searches the pattern space for the k training samples that are closest to the unknown sample [37]. The NB classifier assumes that each variable is unrelated to the presence of any other variable; because the independent variables are assumed, only the variances of the variables for each class need to be determined [37]. SVM produces a model that predicts the target data value using a testing set that contains only attributes; it can classify both linear and nonlinear data [37].…”
Section: Machine Learning Algorithms (mentioning)
confidence: 99%
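The NB property quoted above — because features are assumed independent, only the per-class mean and variance of each feature (plus class priors) need to be estimated — can be sketched as a Gaussian Naive Bayes in plain Python. A toy sketch under assumed one-dimensional data, not from the cited works:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(samples):
    """Estimate class priors and per-class, per-feature mean and variance.

    The naive independence assumption means no covariances are needed:
    each feature is summarized by a single mean/variance pair per class.
    """
    by_class = defaultdict(list)
    for features, label in samples:
        by_class[label].append(features)
    model, total = {}, len(samples)
    for label, rows in by_class.items():
        cols = list(zip(*rows))
        means = [sum(c) / len(c) for c in cols]
        variances = [sum((x - m) ** 2 for x in c) / len(c)
                     for c, m in zip(cols, means)]
        model[label] = (len(rows) / total, means, variances)
    return model

def predict_gaussian(model, features):
    """Return the class with the highest log-posterior under independence."""
    best_label, best_score = None, float("-inf")
    for label, (prior, means, variances) in model.items():
        score = math.log(prior)
        for x, m, v in zip(features, means, variances):
            # Log of the univariate Gaussian density for this feature.
            score += -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

samples = [((1.0,), "A"), ((1.2,), "A"), ((0.8,), "A"),
           ((5.0,), "B"), ((5.3,), "B"), ((4.7,), "B")]
model = fit_gaussian_nb(samples)
print(predict_gaussian(model, (1.1,)))  # -> A
```

Working in log space avoids underflow when many feature likelihoods are multiplied together.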
“…Naive Bayes is a simple probabilistic classifier that computes a set of probabilities by counting the frequencies and combinations of values in a given dataset [8]. The Naive Bayes classifier assumes that the presence or absence of a particular feature of a class is unrelated to any other feature [9]. Another account describes Naive Bayes as a classifier based on probability and statistics that predicts future likelihoods from past experience [10].…”
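The frequency-counting view of Naive Bayes in this citation — summing frequencies and value combinations from the dataset — corresponds to the categorical variant, sketched below. The toy weather data and the add-one smoothing denominator are illustrative assumptions, not taken from the cited works:

```python
from collections import Counter, defaultdict

def fit_categorical_nb(samples):
    """Count class frequencies and per-class feature-value frequencies."""
    class_counts = Counter(label for _, label in samples)
    # feature_counts[label][feature_index][value] -> count
    feature_counts = defaultdict(lambda: defaultdict(Counter))
    for features, label in samples:
        for i, value in enumerate(features):
            feature_counts[label][i][value] += 1
    return class_counts, feature_counts, len(samples)

def predict_categorical(class_counts, feature_counts, total, features):
    """Score each class as prior times per-feature likelihoods (smoothed)."""
    best, best_p = None, -1.0
    for label, count in class_counts.items():
        p = count / total
        for i, value in enumerate(features):
            # Add-one smoothing so unseen values do not zero out the product.
            p *= (feature_counts[label][i][value] + 1) / (count + 2)
        if p > best_p:
            best, best_p = label, p
    return best

# Toy weather data: (outlook, windy) -> play?
samples = [(("sunny", "no"), "yes"), (("sunny", "yes"), "no"),
           (("rain", "yes"), "no"), (("overcast", "no"), "yes"),
           (("overcast", "no"), "yes"), (("rain", "no"), "yes")]
nb_model = fit_categorical_nb(samples)
print(predict_categorical(*nb_model, ("overcast", "no")))  # -> yes
```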
Section: Metode Penelitian [Research Methods] (unclassified)
“…Some scholars have compared the classification effects of data mining classification methods in the medical field. For example, Agarwal et al [17] compared the Bayesian, SVM, and decision tree classification results using medical data. The results show that the SVM has the highest classification accuracy.…”
Section: Introduction (mentioning)
confidence: 99%