2017
DOI: 10.3390/e19060247

Improving the Naive Bayes Classifier via a Quick Variable Selection Method Using Maximum of Entropy

Abstract: Variable selection methods play an important role in the field of attribute mining. The Naive Bayes (NB) classifier is a simple and popular classification method that yields good results in a short processing time, which makes it well suited to very large datasets. The method depends strongly on the relationships between the variables. The Info-Gain (IG) measure, which is based on general entropy, can be used as a quick variable selection method. This measure ranks the impor…
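The IG criterion described in the abstract ranks each variable by how much knowing it reduces the entropy of the class label. A minimal sketch of that ranking is shown below; the toy dataset, function names, and two-feature setup are invented for illustration and are not taken from the paper:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """IG(Y; X) = H(Y) - H(Y|X) for one discrete feature."""
    n = len(labels)
    conditional = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        conditional += (len(subset) / n) * entropy(subset)
    return entropy(labels) - conditional

# Toy dataset: two discrete features, binary class.
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "yes"]

# Score each feature column, then rank from most to least informative.
scores = {i: info_gain([row[i] for row in X], y) for i in range(2)}
ranked = sorted(scores, key=scores.get, reverse=True)
```

In this toy example, feature 0 perfectly separates the classes (IG = 1 bit) while feature 1 carries no information (IG = 0), so a quick selection step would keep only feature 0 before fitting NB.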

Cited by 34 publications (19 citation statements) · References 26 publications
“…MCCV is based on multiple executions of the train-and-test method, with a defined random percentage split into training and test parts. In search of an effective, simple classification method, we implemented the Naive Bayes classifier [14], the weighted classification method [14], and the local kNN using the Manhattan, Euclidean, and Canberra metrics [14]. Naive Bayes uses a sum instead of a multiplication to avoid zeroing of the parameters and the descriptors' indiscernibility ratio.…”
Section: Searching for the Dedicated Method - Selected Results (mentioning)
confidence: 99%
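The "sum instead of multiplication" remark refers to scoring in log-space: the product of per-feature likelihoods becomes a sum of logarithms, so a run of small probabilities can never underflow to zero. A minimal categorical NB sketch under that reading follows; the function names, toy data, and Laplace-smoothing details are illustrative assumptions, not the implementation from the cited work:

```python
import math
from collections import Counter

def train_nb(X, y, alpha=1.0):
    """Fit a categorical NB model with Laplace smoothing (alpha)."""
    n_feat = len(X[0])
    classes = Counter(y)
    # Number of distinct values per feature (smoothing denominator).
    vocab = [len({row[i] for row in X}) for i in range(n_feat)]
    # counts[c][i][v] = occurrences of value v for feature i in class c.
    counts = {c: [Counter() for _ in range(n_feat)] for c in classes}
    for row, c in zip(X, y):
        for i, v in enumerate(row):
            counts[c][i][v] += 1
    return classes, counts, vocab, len(y), alpha

def predict_nb(model, row):
    """Return the class with the highest log-space NB score."""
    classes, counts, vocab, n, alpha = model
    best, best_score = None, -math.inf
    for c, nc in classes.items():
        # Log prior plus a SUM of log likelihoods: the sum replaces
        # the product, so tiny probabilities never collapse to zero.
        score = math.log(nc / n)
        for i, v in enumerate(row):
            score += math.log((counts[c][i][v] + alpha) / (nc + alpha * vocab[i]))
        if score > best_score:
            best, best_score = c, score
    return best

X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "yes"]
model = train_nb(X, y)
prediction = predict_nb(model, ("rain", "hot"))
```

Laplace smoothing handles the other half of the "zeroing" problem: an unseen feature value contributes a finite log term instead of log(0).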
“…There are many methods of analysis in the field of using the electronic nose in beekeeping, including linear discriminant analysis (LDA), principal component analysis (PCA), and cluster analysis (CA) with the furthest-neighbor method (kNN). Good results have also been obtained using artificial neural network (ANN) machine learning techniques, which use a multilayer-perceptron model trained with a backpropagation algorithm [8][9][10][11][12][13][14][15].…”
Section: Achievements to Date in the Use of Gas Sensors for This Type (mentioning)
confidence: 99%
“…A classifier method can also be applied with the Naive Bayes algorithm, which is able to estimate the probability of each condition [7]. In document classification, the Naive Bayes algorithm classifies by examining the probability under each model [8]. In a sentiment analysis of a product on Twitter using Naive Bayes with Information Gain, the highest accuracy achieved was 85%, and the lowest test result was an accuracy of 50% [9].…”
Section: Introduction (unclassified)
“…The experiment is tested on UCI datasets, and the results are compared with SBC, NB, C4.5, and ABC. In [17], a new variable selection method, IIG, is proposed for use with NB to enhance prediction performance. The IIG method aims to eliminate the drawbacks of IG (non-negativity and the threshold value).…”
Section: Related Work (mentioning)
confidence: 99%