2001
DOI: 10.2307/1403452
Idiot's Bayes: Not So Stupid after All?

Abstract: Not available from the archive record; the digitized copy carries only JSTOR service boilerplate. The article appeared in International Statistical Review / Revue Internationale de Statistique, digitized and preserved by JSTOR in collaboration with the International Statistical Institute (ISI).


Year Published: 2001–2024

Publication Types

Select...
7
1

Relationship

0
8

Authors

Journals

Cited by 279 publications (304 citation statements)
References 20 publications
“…This last minimization is exactly the minimum spanning tree problem, and the argument that minimizes it is the same as the argument that maximizes (2). Because this algorithm has to initialize the Θ(n²) edges between every pair of features and then to solve the minimum spanning tree (e.g.…”
Section: Classification and Learning TANs
confidence: 99%
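The step the quote describes, initializing Θ(n²) candidate edges between every pair of features and then solving a minimum spanning tree (as in Chow–Liu style tree-augmented naive Bayes structure learning), can be sketched with Kruskal's algorithm. The edge weights below are invented placeholders; in practice they would be, for example, negated conditional mutual information between feature pairs.

```python
import itertools

def kruskal_mst(n_features, weight):
    """Kruskal's algorithm on the complete graph over n_features nodes.
    weight(i, j) -> float; lower is better (use negated MI for Chow-Liu)."""
    parent = list(range(n_features))

    def find(x):
        # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # the Theta(n^2) candidate edges, sorted once by weight
    edges = sorted(itertools.combinations(range(n_features), 2),
                   key=lambda e: weight(*e))
    tree = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                      # adding (i, j) creates no cycle
            parent[ri] = rj
            tree.append((i, j))
            if len(tree) == n_features - 1:
                break
    return tree

# toy run with arbitrary pairwise weights (hypothetical values)
w = {(0, 1): 0.9, (0, 2): 0.1, (0, 3): 0.4,
     (1, 2): 0.8, (1, 3): 0.2, (2, 3): 0.7}
print(kruskal_mst(4, lambda i, j: w[(i, j)]))
```

Sorting dominates, so the sketch runs in O(n² log n) over the n² edges the quote refers to.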
“…Yet, at least under the zero-one accuracy, the naive Bayes classifier performs surprisingly well [1,2]. Reasons for this phenomenon have been provided, among others, by Friedman [3], who proposed an approach to decompose the misclassification error into bias error and variance error; the bias error represents how closely the classifier approximates the target function, while the variance error reflects the sensitivity of the parameters of the classifier to the training sample.…”
Section: Introduction
confidence: 99%
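The decomposition mentioned in the quote can be illustrated numerically. In this sketch (synthetic data, a deliberately simple threshold classifier, all names invented), "bias" records whether the majority-vote prediction over many retrainings misses the target at a fixed test point, and "variance" records how often an individual trained classifier disagrees with that majority vote, in the spirit of Friedman's bias/variance split for 0-1 loss.

```python
import random

random.seed(0)

def train_stump(sample):
    """Tiny classifier: threshold at the midpoint of the two class means."""
    n0 = max(1, sum(1 for _, y in sample if y == 0))
    n1 = max(1, sum(1 for _, y in sample if y == 1))
    m0 = sum(x for x, y in sample if y == 0) / n0
    m1 = sum(x for x, y in sample if y == 1) / n1
    t = (m0 + m1) / 2
    return lambda x: int(x > t)

def draw_sample(n=30):
    """Synthetic training set: class 0 centered at 0, class 1 at 2."""
    out = []
    for _ in range(n):
        y = random.randint(0, 1)
        out.append((random.gauss(2.0 * y, 1.0), y))
    return out

x_test, y_test = 0.3, 0               # fixed test point near the class-0 mean
preds = [train_stump(draw_sample())(x_test) for _ in range(500)]
main_pred = int(sum(preds) > len(preds) / 2)     # majority-vote prediction
bias = int(main_pred != y_test)                  # systematic (bias) error
variance = sum(p != main_pred for p in preds) / len(preds)
print(bias, round(variance, 3))
```

Here the typical prediction is correct (bias 0) and the retrained stumps rarely disagree with it, which matches the intuition from the quote: a very simple classifier pays for its rigidity in bias, not in variance.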
“…Despite the fact that the independence assumption in the naïve Bayes classifier is almost always incorrect in real applications, many studies have shown a very good performance of the model, even in comparison with much more sophisticated classifiers (for example [4,6,11,21,22]). These results are of particular interest especially considering the many advantages of the naïve Bayes classifier in practical applications: it is very simple, very efficient and very easy to interpret and implement.…”
Section: Bayesian Network for Classification
confidence: 99%
“…These results are of particular interest especially considering the many advantages of the naïve Bayes classifier in practical applications: it is very simple, very efficient and very easy to interpret and implement. Hand and Yu [11] give some mathematical justifications of why such a simple and unrealistic model might perform so well on future observations: simple models, like this, have a lower variance than more complex models. Hence, despite having a larger bias, they might perform better on observations outside the training data.…”
Section: Bayesian Network for Classification
confidence: 99%
“…The naïve Bayes classifier [31,32] is a probabilistic method for classification. It performs an approximate calculation of the probability that an example belongs to a class given the values of predictor variables.…”
Section: Naïve Bayes
confidence: 99%
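The approximate calculation the quote refers to scores each class by P(class) · Π P(xᵢ | class), treating the predictors as conditionally independent given the class. A minimal sketch (the weather data and function names below are invented for illustration, and the +1 smoothing is a crude Laplace-style guard against zero counts):

```python
from collections import Counter, defaultdict

def fit(rows, labels):
    """Count class frequencies and per-feature value frequencies per class."""
    classes = Counter(labels)
    cond = defaultdict(Counter)        # (class, feature index) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(y, i)][v] += 1
    return classes, cond

def predict(classes, cond, row):
    """Pick the class maximizing P(class) * prod_i P(x_i | class)."""
    n = sum(classes.values())
    best, best_score = None, float("-inf")
    for y, cy in classes.items():
        score = cy / n                 # prior P(class)
        for i, v in enumerate(row):
            counts = cond[(y, i)]
            # smoothed estimate of P(x_i | class); +1 pads unseen values
            score *= (counts[v] + 1) / (cy + len(counts) + 1)
        if score > best_score:
            best, best_score = y, score
    return best

rows = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "cool")]
labels = ["no", "no", "yes", "yes"]
model = fit(rows, labels)
print(predict(*model, ("rainy", "mild")))    # prints: yes
```

The per-class score is only proportional to the posterior (the normalizing constant is shared across classes), which is exactly why the independence assumption can be badly wrong for the probabilities yet still leave the zero-one classification decision correct.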