2002
DOI: 10.1007/b97848

A Distribution-Free Theory of Nonparametric Regression

Abstract: We study the problem of lossless feature selection for a d-dimensional feature vector X = (X^(1), …, X^(d)) and label Y for binary classification as well as nonparametric regression. For an index set S ⊂ {1, …, d}, consider the selected |S|-dimensional feature subvector X_S = (X^(i), i ∈ S). If L* and L*(S) stand for the minimum risk based on X and X_S, respectively, then X_S is called lossless if L* = L*(S). For classification, the minimum risk is the Bayes error probability, while in reg…
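The abstract's notion of losslessness can be checked exactly in a small discrete model: when X is uniform on {0,1}^3 and η(x) = P(Y = 1 | X = x) depends only on the first two coordinates, the subset S = {0, 1} (0-indexed here, unlike the abstract's S ⊂ {1, …, d}) attains the full Bayes error, while S = {0} does not. The model, the η values, and the `bayes_error` helper below are our own illustrative assumptions, not the paper's construction:

```python
import itertools

# Toy model: X uniform on {0,1}^3, with eta(x) = P(Y=1 | X=x) depending only
# on the first two coordinates (all numbers illustrative, not from the paper).
eta = {x: 0.9 if x[0] == x[1] else 0.2
       for x in itertools.product([0, 1], repeat=3)}

def bayes_error(S):
    """L*(S): Bayes error when only the coordinates in S are observed.
    eta_S(x_S) = E[eta(X) | X_S = x_S]; L*(S) = E[min(eta_S, 1 - eta_S)]."""
    groups = {}
    for x, p in eta.items():
        groups.setdefault(tuple(x[i] for i in sorted(S)), []).append(p)
    # X uniform => groups are equally likely; conditional means are plain averages.
    return sum(min(m, 1 - m)
               for m in (sum(v) / len(v) for v in groups.values())) / len(groups)

L_full = bayes_error({0, 1, 2})
print(abs(bayes_error({0, 1}) - L_full) < 1e-9)  # True: S = {0, 1} is lossless
print(bayes_error({0}) > L_full)                 # True: dropping a relevant feature is lossy
```

Here both checks follow directly from the definition in the abstract: the selected subvector is lossless exactly when conditioning on it leaves the regression function, and hence the minimum risk, unchanged.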

Cited by 1,384 publications (1,894 citation statements)
References 11 publications
“…This will become evident in the proof. Putting (27) together with (24) gives a bound (Corollary 4.2) that improves known estimates (like [20, Theorem 11.3] for instance). The improvement comes from the statistical error, which is now essentially Kσ²/n as soon as the mean number nν(H_k) of simulations in each H_k is large enough:…”
Section: Regression on Piecewise Constant Basis Functions and Nonpara… (mentioning)
confidence: 54%
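For context, a minimal sketch of the piecewise-constant (partitioning) estimate this excerpt refers to: on each cell H_k of an equal partition of [0, 1), the estimate is the sample mean of the responses whose covariates fall in that cell, and once the mean number of points per cell is large, the squared error is roughly the approximation error plus a statistical term of order Kσ²/n. The partition, test function, and constants below are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

def regressogram(x_train, y_train, K):
    """Piecewise-constant regression estimate on K equal cells of [0, 1)."""
    # Cell index of each training point; clip x == 1.0 into the last cell.
    cells = np.minimum((x_train * K).astype(int), K - 1)
    means = np.zeros(K)
    for k in range(K):
        in_k = cells == k
        if in_k.any():
            means[k] = y_train[in_k].mean()  # sample mean of Y on cell H_k
    return lambda x: means[np.minimum((x * K).astype(int), K - 1)]

n, K, sigma = 20000, 10, 0.5
m = np.sin                              # true regression function (illustrative)
x = rng.random(n)
y = m(x) + sigma * rng.standard_normal(n)
m_n = regressogram(x, y, K)

x_test = rng.random(5000)
mse = float(np.mean((m_n(x_test) - m(x_test)) ** 2))
print(mse < 0.01)  # True: bias ~ (1/K)^2 / 12 plus statistical error ~ K * sigma^2 / n
```

With n/K = 2000 points per cell on average, the statistical term Kσ²/n = 1.25e-4 is dominated by the approximation error of the width-0.1 cells, so the overall error stays well under 0.01.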
“…Thanks to the orthogonal structure of the class K, the functions m_n and m̄_n are available in closed form [20, Ch. 4]: on each set H_k, k ∈ {1, .…”
Section: Regression on Piecewise Constant Basis Functions and Nonpara… (mentioning)
confidence: 99%
“…Extensive work in nonparametrics has extended this result to consider the consistency of Stone-type rules under various sampling processes; see, for example, [6, 10] and references therein. These models focus on various dependency structures within the training data and assume that a single processor has access to the entire data stream.…”
Section: The Classical Learning Model and Our Departure (mentioning)
confidence: 97%
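A Stone-type rule is a local average of training labels, and k-nearest-neighbour classification is the canonical example. The sketch below uses our own toy i.i.d. model (not one of the sampling processes discussed in the excerpt's references) to show such a rule recovering the Bayes decision:

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_classify(x_train, y_train, x, k=5):
    # Stone-type local averaging: majority vote among the k nearest
    # neighbours of x (1-D covariates, so distance is just |x_i - x|).
    idx = np.argsort(np.abs(x_train - x))[:k]
    return int(y_train[idx].mean() >= 0.5)

n = 2000
x_train = rng.random(n)
# P(Y = 1 | X = x) = 0.9 for x > 0.5, else 0.1, so the Bayes rule
# predicts 1 exactly when x > 0.5 (Bayes error 0.1 in this model).
y_train = (rng.random(n) < np.where(x_train > 0.5, 0.9, 0.1)).astype(int)

x_test = rng.random(500)
agree = float(np.mean([knn_classify(x_train, y_train, xt) == (xt > 0.5)
                       for xt in x_test]))
print(agree > 0.9)  # True: the rule agrees with the Bayes decision away from the boundary
```

Consistency results of the kind cited above say that, as n grows (with k growing suitably), the risk of such local-averaging rules converges to the Bayes risk; the excerpt's point is that the classical versions assume one processor sees the whole training stream.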