2002
DOI: 10.1007/3-540-47887-6_10

SNNB: A Selective Neighborhood Based Naïve Bayes for Lazy Learning

Abstract: Naive Bayes is a probability-based classification method built on the assumption that attributes are conditionally independent of one another given the class label. Much research has focused on improving the accuracy of Naive Bayes via eager learning. In this paper, we propose a novel lazy learning algorithm, Selective Neighbourhood based Naïve Bayes (SNNB). SNNB computes different distance neighborhoods of the input new object, lazily learns multiple Naïve Bayes classifiers, and uses the cla…
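The abstract is truncated above, but the stated recipe — several distance neighborhoods, one lazily trained Naïve Bayes per neighborhood, and a selection step — is enough to sketch the shape of the idea. The sketch below is a hedged reconstruction, not the published algorithm: the helper names (`nb_train`, `snnb_classify`, …), the Laplace smoothing, the leave-one-out accuracy estimate, and the radius schedule are all assumptions.

```python
# Hedged reconstruction of the SNNB idea from the abstract. Everything here is
# illustrative: smoothing, accuracy estimate, and radius schedule are assumptions.
import math
from collections import Counter, defaultdict

def nb_train(examples):
    """Fit class priors and Laplace-smoothed per-class attribute-value counts."""
    priors = Counter(y for _, y in examples)
    cond = defaultdict(Counter)          # (attribute index, class) -> value counts
    for x, y in examples:
        for i, v in enumerate(x):
            cond[(i, y)][v] += 1
    return priors, cond

def nb_predict(model, x):
    """Return argmax_c of log P(c) + sum_i log P(x_i | c)."""
    priors, cond = model
    n = sum(priors.values())
    def log_posterior(y):
        s = math.log(priors[y] / n)
        for i, v in enumerate(x):
            counts = cond[(i, y)]
            s += math.log((counts[v] + 1) / (sum(counts.values()) + len(counts) + 1))
        return s
    return max(priors, key=log_posterior)

def loo_accuracy(examples):
    """Leave-one-out accuracy of Naive Bayes on one candidate neighborhood."""
    if len(examples) < 2:
        return 0.0
    hits = sum(nb_predict(nb_train(examples[:i] + examples[i + 1:]), x) == y
               for i, (x, y) in enumerate(examples))
    return hits / len(examples)

def snnb_classify(train, query, radii, dist):
    """Lazily train one NB per distance neighborhood of `query`; classify with
    the local model whose estimated accuracy is highest (global NB as fallback)."""
    best_model, best_acc = nb_train(train), loo_accuracy(train)
    for r in radii:
        hood = [(x, y) for x, y in train if dist(x, query) <= r]
        acc = loo_accuracy(hood)
        if acc > best_acc:
            best_model, best_acc = nb_train(hood), acc
    return nb_predict(best_model, query)
```

A caller supplies a distance (for symbolic attributes, the number of mismatched attribute values is a common choice) and a sequence of radii; the published algorithm's exact neighborhood construction and selection criterion may well differ from this sketch.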


Cited by 64 publications (37 citation statements)
References 9 publications
“…Previously, Xie et al (2002) proposed an improved algorithm called Selective Neighborhood Naïve Bayes (SNNB). Lazkano and Sierra (2003) proposed the combination in a different way that combines the nearest neighbor with Bayesian network.…”
Section: The Proposed Algorithm
confidence: 99%
“…It is very efficient with reasonable prediction accuracy [8], [9], [10], [11], [12], [13], [14], [15]. In recent years, there has also been considerable interest in developing variants of NB that weaken the attribute independence assumption in order to further improve the prediction accuracy [6], [7], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30]. For instance, one-dependence estimators (ODEs) [23] such as the tree-augmented naive Bayes (TAN) [16] provide a powerful alternative to NB.…”
Section: SPODE
confidence: 99%
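To make the "attribute independence assumption" being weakened here concrete, the factorizations can be written out; this is standard textbook notation, not anything quoted from the citing paper. Naive Bayes assumes

$$P(c \mid x_1,\ldots,x_n) \;\propto\; P(c)\prod_{i=1}^{n} P(x_i \mid c),$$

while a superparent one-dependence estimator (SPODE) lets every attribute additionally depend on one shared parent attribute $x_p$:

$$P(c, x_1,\ldots,x_n) \;=\; P(c)\,P(x_p \mid c)\prod_{i \neq p} P(x_i \mid c, x_p).$$

TAN generalizes this by giving each attribute its own tree-structured parent rather than one shared superparent.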
“…SNNB [18] and LWNB [10] select the k-nearest-neighbor examples of the tested instance, according to its attribute-value vector, from the original training data set to construct a sub data set, on which a local naive Bayesian classifier is trained. LBR [20] picks out the attributes that remarkably improve the classifier accuracy, and constructs a local naive Bayesian classifier using the remaining attributes.…”
Section: Introduction
confidence: 99%
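The k-nearest-neighbour construction this snippet describes reduces to a few lines. The sketch below is a hedged illustration, reusing the hypothetical `nb_train`/`nb_predict` helpers from the SNNB sketch above, with Hamming distance as an assumed stand-in for the papers' distance measures; LWNB additionally weights the neighbours by distance, which this sketch omits.

```python
import heapq

# Assumes the nb_train / nb_predict helpers defined in the SNNB sketch above.

def hamming(a, b):
    """Number of differing attribute values: a common distance for symbolic data."""
    return sum(u != v for u, v in zip(a, b))

def local_nb_classify(train, query, k=30):
    """SNNB/LWNB-style local learner: train Naive Bayes only on the k training
    examples nearest to the test instance, then classify with that local model."""
    neighbours = heapq.nsmallest(k, train, key=lambda ex: hamming(ex[0], query))
    return nb_predict(nb_train(neighbours), query)
```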