1994
DOI: 10.1214/aos/1176325633
On the Strong Universal Consistency of Nearest Neighbor Regression Function Estimates

Cited by 266 publications (155 citation statements) · References 27 publications
“…This flavor of result is "nonasymptotic" in that it can be phrased in a way that gives the probability of misclassification for any training data set size; we do not need an asymptotic assumption that the amount of training data goes to infinity. Chaudhuri and Dasgupta's result subsumes or matches classical results by Fix and Hodges (1951), Devroye et al (1994), Cérou and Guyader (2006), and Audibert and Tsybakov (2007), while providing a perhaps more intuitive explanation for when nearest neighbor classification works, accounting for the metric used and the distribution from which the data are sampled. Moreover, we show that their analysis can be translated to the regression setting, yielding theoretical guarantees that nearly match the best of existing regression results.…”
Section: Nearest Neighbor Methods In Theory
Citation type: supporting; confidence: 66%
“…As the name suggests, this rule classifies X by assigning it to the class that appears most frequently among the k nearest neighbors. Indeed, as shown in (Stone, 1977; L. Devroye and Lugosi, 1994), the k-nearest neighbor rule is universally consistent provided that the speed of k approaching n is properly controlled, i.e., k → ∞ and k/n → 0 as n → ∞.…”
Section: Introduction
Citation type: mentioning; confidence: 78%
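The rule quoted above is easy to make concrete. Below is a minimal sketch in Python with NumPy (the function name knn_classify, the Euclidean metric, the synthetic data, and the choice k = ⌊√n⌋ are illustrative assumptions, not taken from the cited papers): a query point is classified by majority vote among its k nearest training points, with k chosen so that k → ∞ and k/n → 0.

```python
import numpy as np

def knn_classify(X_train, y_train, x, k):
    """Majority-vote k-nearest-neighbor classification of a single query point x."""
    dists = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances from x to all training points
    nearest = np.argsort(dists)[:k]                   # indices of the k closest training points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # most frequent label among the neighbors

# Illustrative usage on synthetic data: k_n = floor(sqrt(n)) is one choice
# satisfying k -> infinity and k/n -> 0 as n -> infinity.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
k = max(1, int(np.sqrt(len(X))))
print(knn_classify(X, y, np.array([0.3, -0.1]), k))
```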
“…While being intuitive and simple to implement, k-nearest neighbor regression is well-understood from the point of view of theory as well, see e.g. [2], [3], [13], and the references therein for an overview of the most important theoretical results. These theoretical results are also justified by empirical studies: for example, in their recent paper, Stensbo-Smidt et al found that nearest neighbor regression outperforms model-based prediction of star formation rates [30], while Hu et al showed that a model based on k-nearest neighbor regression is able to estimate the capacity of lithium-ion batteries [19].…”
Section: Ecknn: K-nearest Neighbor Regression With Error Correction
Citation type: mentioning; confidence: 99%
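For the regression setting discussed in this excerpt, a minimal sketch is shown below (again in Python with NumPy; the helper name knn_regress and the synthetic data are assumptions for illustration, and this is the plain k-NN regression estimate, not the error-corrected method of the citing paper): the estimate at x is the average response of the k nearest training points.

```python
import numpy as np

def knn_regress(X_train, y_train, x, k):
    """k-nearest-neighbor regression estimate at x: the mean response of the k closest points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distances from x to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k nearest neighbors
    return y_train[nearest].mean()                # local average, the plain k-NN estimate

# Illustrative usage on synthetic data; k grows with n while k/n -> 0.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(1000, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=1000)
k = max(1, int(np.sqrt(len(X))))
print(knn_regress(X, y, np.array([0.25]), k))
```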