1998
DOI: 10.1006/jmva.1997.1725
The Hilbert Kernel Regression Estimate

Abstract: Let (X, Y) be an R^d × R-valued regression pair, where X has a density and Y is bounded. If n i.i.d. samples are drawn from this distribution, the Nadaraya-Watson kernel regression estimate in R^d with Hilbert kernel K(x) = 1/‖x‖^d is shown to converge weakly for all such regression pairs. We also show that strong convergence cannot be obtained. This is particularly interesting as this regression estimate does not have a smoothing parameter. Academic Press. AMS 1991 subject classifications: Primary 62G05.
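The estimator the abstract describes is a plain Nadaraya-Watson weighted average whose weights come from the Hilbert kernel K(x) = 1/‖x‖^d, so no bandwidth is needed. A minimal numpy sketch (not from the paper; the function name and the tie-handling at sample points are illustrative assumptions):

```python
import numpy as np

def hilbert_kernel_estimate(x, X, Y):
    """Nadaraya-Watson regression estimate with the Hilbert kernel
    K(u) = 1 / ||u||^d (a sketch; no smoothing parameter is involved).

    x: query point, shape (d,)
    X: sample points, shape (n, d)
    Y: bounded responses, shape (n,)
    """
    d = X.shape[1]
    dist = np.linalg.norm(X - x, axis=1)
    # The kernel is singular at 0, so at a sample point the estimate
    # interpolates the data; return the observed response(s) there.
    hit = dist == 0.0
    if hit.any():
        return Y[hit].mean()
    w = dist ** (-d)  # Hilbert kernel weights K(X_i - x) = ||X_i - x||^{-d}
    return np.dot(w, Y) / w.sum()
```

Because the weights are a convex combination, the estimate always lies between min(Y) and max(Y), and the singularity at 0 is what makes the estimate interpolate the training data.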

Cited by 27 publications (69 citation statements). References 17 publications.
“…However, as noted by Devroye, Györfi and Krzyżak (1998), a kernel that is singular at 0 does interpolate the data. While the Hilbert kernel K(u) = ‖u‖_2^{-d}, suggested in Devroye et al (1998), does not enjoy non-asymptotic rates of convergence, its truncated version…”
Section: Local Methods: Nadaraya-Watson
confidence: 99%
“…Traditionally, consistency and rates of convergence have been a central object of statistical investigation. The first result in this direction is Devroye et al (1998), which showed statistical consistency of a certain kernel regression scheme, closely related to Shepard's inverse distance interpolation (Shepard 1968).…”
Section: Optimality of k-NN with Singular Weighting Schemes
confidence: 97%
“…It is perhaps ironic that an outlier feature of the 1-NN rule, shared with no other common methods in the classical statistics literature (except for the relatively unknown work by Devroye, Györfi and Krzyżak 1998), may be one of the cues to understanding modern deep learning.…”
Section: The Peculiar Case of 1-NN
confidence: 99%
“…Rather than to prove the full-blown universal theorem, we restrict ourselves to the uniform density on the real line and recall the following result from Devroye and Krzyżak (1998), which is applicable as for d=1, the Hilbert product kernel estimate coincides with the standard Hilbert kernel estimate. …”
Section: Lack of Strong Convergence
confidence: 97%