1993
DOI: 10.1080/10485259308832575
Kernel estimators for multivariate regression

Cited by 18 publications (11 citation statements)
References 17 publications
“…In fact, they extend the minimum variance kernel selection principle used to select the optimal kernel in the interior region, as in Epanechnikov (1969) and Granovsky and Müller (1991). In the nonparametric regression context, the problem of boundary bias is addressed by Gasser, Müller, and Mammitzsch (1985) and Zhang, Karunamuni, and Jones (1999) for the univariate case, and by Fan and Gijbels (1992), Staniswalis, Messer, and Finston (1993), and Staniswalis and Messer (1997) for multivariate data. This paper proposes a nonparametric product kernel estimator for density functions of multivariate bounded data.…”
Section: Introduction
Mentioning confidence: 99%
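The product kernel estimator mentioned in the excerpt above combines one univariate kernel per coordinate. A minimal sketch of that construction, using the Epanechnikov kernel referenced in the quote (function name and bandwidth choices here are illustrative, not taken from the cited paper):

```python
import numpy as np

def product_kernel_density(x, data, bandwidths):
    """Estimate a d-variate density at point x as a product of
    univariate Epanechnikov kernels, one per coordinate."""
    def epanechnikov(u):
        # K(u) = 3/4 * (1 - u^2) on [-1, 1], zero outside.
        return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

    u = (x - data) / bandwidths                     # shape (n, d)
    weights = np.prod(epanechnikov(u), axis=1)      # product over coordinates
    return weights.sum() / (len(data) * np.prod(bandwidths))

# Usage: bivariate standard-normal sample, estimate evaluated at the origin.
rng = np.random.default_rng(0)
sample = rng.normal(size=(500, 2))
f_hat = product_kernel_density(np.zeros(2), sample, np.array([0.5, 0.5]))
```

The product form keeps each coordinate's kernel univariate, which is what makes per-coordinate boundary corrections (the subject of the papers cited above) tractable.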
“…Boundary problems are particularly severe in higher dimensions: if the support of the function to be estimated is a unit cube in ℝ^d and the mean-squared optimal bandwidth b ∼ cn^(−1/(4+d)) is used for the situation of a twice continuously differentiable function to be estimated with a non-negative kernel, then the volume of the boundary region is 2db ∝ b ∼ 2dn^(−1/(4+d)), which is increasing in d. Thus the boundary region takes up an increasingly larger fraction of the support of the function to be estimated as the dimension d increases. A correction for boundary effects arising for multivariate kernel estimates has been discussed previously by Staniswalis et al. (1993), Section 2.2, and Staniswalis and Messer (1996). This approach extends a method proposed by Rice (1984) for univariate boundary corrections to the multivariate case by linearly combining estimates with several different bandwidths.…”
Section: Introduction
Mentioning confidence: 87%
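The growth of the boundary region described in the excerpt above can be checked numerically. A short sketch, assuming the bandwidth rate b = n^(−1/(4+d)) with constant c = 1 (the constant is a simplifying assumption, not from the source):

```python
def boundary_fraction(d, n):
    """Fraction of the unit cube [0,1]^d lying within one bandwidth b
    of the boundary, with b = n**(-1/(4+d)) (the MSE-optimal rate for
    a twice continuously differentiable target; constant taken as 1).
    """
    b = n ** (-1.0 / (4 + d))
    # Interior = cube shrunk by b on every side; the boundary region is
    # its complement, with volume 1 - (1 - 2b)^d, approximately 2db for small b.
    return 1.0 - max(0.0, 1.0 - 2.0 * b) ** d

# Usage: the boundary fraction grows with dimension at fixed sample size.
for d in (1, 2, 5, 10):
    print(d, boundary_fraction(d, n=10_000))
```

Even at n = 10,000 the boundary region dominates the support once d is moderately large, which is the point the cited passage makes about the severity of boundary effects in higher dimensions.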
“…The former approach was explored, for example, by Cleveland and Devlin (1988), using locally weighted regression, and by Nussbaum (1986), Georgiev (1987), Müller (1988), and Staniswalis, Messer, and Finston (1990), using kernel estimates. More restrictive additive models were considered by Hastie and Tibshirani (1986, 1990) and others.…”
Section: Uj)es(n)
Mentioning confidence: 99%