Encyclopedia of Machine Learning and Data Mining 2016
DOI: 10.1007/978-1-4899-7502-7_581-1

Naïve Bayes

Cited by 23 publications (14 citation statements); references 0 publications.
“…The state-of-the-art result is achieved using SVM, with an accuracy of 98.97%. Chandra et al 37 used a majority-voting ensemble of five classifiers (SVM 28, KNN 22, DT 36, Artificial Neural Network (ANN) 38, and Naive Bayes (NB) 39) on a database consisting of three publicly available CXR image datasets: the covid-chestxray dataset 23, the Montgomery dataset 40, and the NIH ChestX-ray14 dataset 41. Of the 8196 features extracted from all the pre-processed images, 8 are First Order Statistical Features (FOSF) 42, 88 are Grey Level Co-occurrence Matrix (GLCM) 43 based features, and the remaining 8100 are Histogram of Oriented Gradients (HOG) 44 features.…”
Section: Introduction
confidence: 99%
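As a rough illustration of the majority-voting scheme described above, the Python sketch below builds a hard-voting ensemble of the same five classifier families with scikit-learn. It is not the implementation of Chandra et al: the FOSF/GLCM/HOG feature extraction is replaced by a synthetic feature matrix, and all hyperparameters shown are assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the 8196-dimensional FOSF/GLCM/HOG feature matrix
# (synthetic data here; the real pipeline would extract features
# from pre-processed CXR images).
X, y = make_classification(n_samples=300, n_features=50, random_state=0)

# Hard voting: the predicted class is the label chosen by the most
# base classifiers; scikit-learn breaks ties by ascending label order.
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC()),
        ("knn", KNeighborsClassifier()),
        ("dt", DecisionTreeClassifier()),
        ("ann", MLPClassifier(max_iter=500)),
        ("nb", GaussianNB()),
    ],
    voting="hard",
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))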
“…It has the advantage that it natively provides a probabilistic result; however, it is known to be relatively computationally intensive when applied to high-dimensional data sets. The second contrasting nonlinear method tested is a Gaussian Naive Bayes model, based on applying Bayes' theorem while assuming conditional independence between the features, given the assigned class (Pérez et al, 2006; Webb et al, 2011). In this formulation, the likelihood of each feature is assumed to be Gaussian.…”
Section: Methods, Models and Metrics
confidence: 99%
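To make this formulation concrete, here is a minimal from-scratch sketch (all names illustrative, not the cited authors' code): under the conditional-independence assumption the class-conditional likelihood factorizes across features, so the joint log-likelihood is a sum of per-feature Gaussian log-densities, added to the log class prior.

import numpy as np

class GaussianNaiveBayes:
    """Illustrative Gaussian Naive Bayes: argmax_c log p(c) + sum_j log N(x_j; mu_cj, var_cj)."""

    def fit(self, X, y):
        # Per-class feature means and variances define the Gaussian
        # likelihoods; priors are estimated from label frequencies.
        self.classes_ = np.unique(y)
        self.theta_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.log_prior_ = np.log([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # Conditional independence: sum per-feature Gaussian log-densities.
        diff = X[:, None, :] - self.theta_[None, :, :]              # (n, k, d)
        log_lik = -0.5 * np.sum(
            np.log(2.0 * np.pi * self.var_) + diff**2 / self.var_, axis=2
        )                                                           # (n, k)
        return self.classes_[np.argmax(self.log_prior_ + log_lik, axis=1)]

On the same data this should closely match sklearn.naive_bayes.GaussianNB, which implements the same per-class mean/variance estimation with a slightly different variance-smoothing term.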
“…For salmon classification, AutoML generated thirty-six pipelines, as shown in Figure 8. The RF, LDA, AdaBoost [67], libsvm_svc, Extra Tree, PA, KNN, MLP, Decision Tree (DT) [68], Bernoulli naïve Bayes (bernoulli_nb) [69], and GB classifiers were tested. Pipelines #3, #4, #20, #22, #30, #31, #32, #33, #34, #35, and #36 were left out of the ensemble.…”
Section: Results
confidence: 99%
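For reference, a minimal sketch of the Bernoulli naïve Bayes family named above, using scikit-learn's BernoulliNB on synthetic data. This illustrates only the base classifier, not the AutoML-generated pipelines; the binarize threshold and dataset are assumptions.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# binarize=0.0 thresholds each feature at 0, giving the binary inputs
# that the Bernoulli event model expects.
clf = BernoulliNB(binarize=0.0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))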