k-nearest-neighbor Bayes-risk estimation (1975)
DOI: 10.1109/tit.1975.1055373

Cited by 134 publications (68 citation statements)
References 11 publications
“…Binary pattern classification is the quintessential pattern recognition problem studied at great length in the machine learning literature, with various algorithms such as the nearest neighbor rule (Cover & Hart, 1967;Fukunaga & Hostetler, 1975), support vector machines (Cortes & Vapnik, 1995;Vapnik, 1998), artificial neural networks (Haykin, 2008), and discriminant functions (McLachlan, 2004). These methods can be trained to construct decision rules to separate the pattern classes of interest based on a training set of data containing samples representing the two classes.…”
Section: The Supervised Learning Methods Used For Automatic Segmentation
mentioning
confidence: 99%
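As a concrete companion to the excerpt above, here is a minimal sketch of the nearest neighbor rule (Cover & Hart, 1967) for binary pattern classification. The function name and toy data are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def nn_classify(x, X_train, y_train):
    """Assign x the label of its nearest training sample (the 1-NN rule)."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to each training point
    return y_train[np.argmin(dists)]

# Toy two-class training set (hypothetical data, for illustration only)
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(nn_classify(np.array([0.8, 0.9]), X_train, y_train))  # -> 1
```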
“…Later, he showed some of the formal properties of this procedure, for example that, in the limit of an infinite number of samples and with k equal to 1, the classification error rate is bounded by twice the Bayes error (Cover & Hart, 1967). Having developed the formal properties of this classifier, he established a line of research that continues to this day, notably the work of Hellman (Hellman, 1970), which introduced a new approach to rejection; Fukunaga and Hostetler (Fukunaga & Hostetler, 1975), which set out refinements with respect to the Bayes error rate; and the new approaches to weighted distances developed by Dudani (Dudani, 1976) and Bailey and Jain (Bailey & Jain, 1978). Other interesting work on the subject relates to soft computing (Bermejo & Cabestany, 2000) and fuzzy methods (Jozwik, 1983; Keller et al., 1985).…”
Section: K Nearest Neighbors (KNN)
mentioning
confidence: 99%
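To make the weighted-distance refinements mentioned in this excerpt concrete, the sketch below implements distance-weighted k-NN voting in the spirit of Dudani (1976), where nearer neighbors receive linearly larger votes. The interface and toy data are assumptions for illustration, not the original authors' code.

```python
import numpy as np

def weighted_knn(x, X_train, y_train, k=3):
    """Classify x by Dudani-style distance-weighted k-NN voting."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]              # indices of the k nearest samples
    d_1, d_k = d[idx[0]], d[idx[-1]]     # nearest and farthest of the k
    if d_k == d_1:                       # all k neighbors equidistant: equal votes
        w = np.ones(k)
    else:                                # linear weight: 1 for the nearest, 0 for the k-th
        w = (d_k - d[idx]) / (d_k - d_1)
    votes = {}
    for label, weight in zip(y_train[idx], w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

# Toy usage (hypothetical data)
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
y_train = np.array([0, 0, 1, 1])
print(weighted_knn(np.array([0.9, 0.6]), X_train, y_train, k=3))  # -> 1
```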
“…where $p(x \mid x \in C_0)$ and $p(x \mid x \in C_1)$ represent the class-conditional probability densities for the classes $C_0$ and $C_1$, respectively [4,10]. The basis of this observation is as follows: Let $N(x) \subset X$ be a small spherical neighborhood of $x$ of size $V(N(x))$, and $R$ be a random reference set containing $n$ points from each of $C_0$ and $C_1$.…”
Section: Likelihood Ratio Estimation Via the Nearest Neighbor Rule
mentioning
confidence: 99%
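The counting argument in this excerpt can be sketched in a few lines: with $n$ reference points drawn from each class, the expected numbers of class-0 and class-1 points falling in the small neighborhood $N(x)$ are roughly $n\,V(N(x))\,p(x \mid x \in C_0)$ and $n\,V(N(x))\,p(x \mid x \in C_1)$, so the common factor $n\,V(N(x))$ cancels and the ratio of counts estimates the likelihood ratio. The radius, data, and function name below are illustrative assumptions.

```python
import numpy as np

def knn_likelihood_ratio(x, X0, X1, r=0.5):
    """Estimate p(x|C0)/p(x|C1) from class counts inside a ball of radius r around x."""
    k0 = np.sum(np.linalg.norm(X0 - x, axis=1) <= r)  # class-0 reference points in the ball
    k1 = np.sum(np.linalg.norm(X1 - x, axis=1) <= r)  # class-1 reference points in the ball
    return k0 / max(k1, 1)  # guard against an empty class-1 count

# Toy usage: n points from each class (hypothetical Gaussians)
rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
X1 = rng.normal(loc=2.0, scale=1.0, size=(1000, 2))
print(knn_likelihood_ratio(np.zeros(2), X0, X1))  # much greater than 1 near C0's mode
```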
“…The general strategy relies on expert-curated ground truth datasets providing the categorical associations of all available data samples. Ground truth datasets are then used as the basis for statistical learning, specifically to construct a classification rule using one of a host of methods such as support vector machines [1,2], nearest neighbor classifiers [3,4], neural networks [5], discriminant functions [6], and so on.…”
Section: Introduction
mentioning
confidence: 99%