1994
DOI: 10.1214/aos/1176325632

Multivariate Locally Weighted Least Squares Regression

Cited by 885 publications (600 citation statements)
References 19 publications
“…From this proof we can see that if m is a linear functional then the local linear estimator is unbiased. This fact was already observed by Fan (1992) and Ruppert and Wand (1994) in the case that X has finite dimension.…”
Section: Asymptotic Behaviour (supporting, confidence: 67%)
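The unbiasedness for linear m follows because a local linear smoother reproduces linear functions exactly. Below is a minimal numerical check of that property; it is a sketch, not code from any of the cited papers, and the kernel, bandwidth, and names are illustrative choices.

```python
import numpy as np

def local_linear_fit(x0, x, y, h):
    """Local linear estimate of m(x0) with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local design: 1, (x - x0)
    sw = np.sqrt(w)
    # Weighted least squares; the intercept estimates m(x0).
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta[0]

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
m = lambda t: 1.0 + 2.0 * t      # a linear mean function
y = m(x)                         # noiseless, so any deviation is bias

x0 = 0.37
# The estimate matches m(x0) to machine precision: zero bias for linear m.
print(local_linear_fit(x0, x, y, h=0.1), m(x0))
```

Since a linear y lies in the column space of the local design matrix, the weighted fit recovers it exactly, which is why the bias vanishes regardless of the kernel or bandwidth.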
“…Our algorithm is based on local linear regression [13,5]. Sensor readings from static nodes (a set of buoys) are sent to the mobile robot (a boat) and used to estimate the Hessian Matrix of the scalar field (the surface temperature of a lake), which is directly related to the estimation error.…”
Section: Contributions (mentioning, confidence: 99%)
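The citing paper's estimator is not reproduced here; one standard route from scattered sensor readings to a Hessian estimate is a locally weighted quadratic fit, whose second-order coefficients estimate the Hessian. A sketch under that assumption (the field, bandwidth, and all names are made up for illustration):

```python
import numpy as np

def local_quadratic_hessian(p0, pts, vals, h):
    """Estimate the Hessian of a 2-D scalar field at p0 from scattered
    readings via a locally weighted quadratic least-squares fit."""
    d = pts - p0                                    # centered coordinates
    w = np.exp(-0.5 * np.sum(d**2, axis=1) / h**2)  # Gaussian weights
    # Second-order Taylor design: 1, dx, dy, dx^2/2, dx*dy, dy^2/2, so the
    # last three coefficients estimate H_xx, H_xy, H_yy.
    X = np.column_stack([np.ones(len(pts)), d[:, 0], d[:, 1],
                         0.5 * d[:, 0] ** 2, d[:, 0] * d[:, 1],
                         0.5 * d[:, 1] ** 2])
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(X * sw[:, None], vals * sw, rcond=None)[0]
    return np.array([[beta[3], beta[4]], [beta[4], beta[5]]])

# Synthetic "lake temperature" field with known Hessian [[2, 1], [1, 4]]:
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, (400, 2))
vals = pts[:, 0] ** 2 + pts[:, 0] * pts[:, 1] + 2.0 * pts[:, 1] ** 2
print(local_quadratic_hessian(np.zeros(2), pts, vals, h=0.5))
```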
“…Again the monotonized estimate is first order equivalent to the unconstrained local linear estimate [see e.g. Ruppert and Wand (1994)]. Because of its better performance at the boundary, the local linear estimate was in fact used in our numerical examples, which will be described in the following section.…”
Section: Asymptotic Theory (mentioning, confidence: 99%)
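The boundary advantage referred to here is easy to see numerically: a local constant (Nadaraya-Watson) smoother has O(h) bias at the boundary, while a local linear fit keeps its interior O(h^2) rate there. A small illustration, not taken from the cited paper, with arbitrary test function and bandwidth:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def local_linear(x0, x, y, h):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    sw = np.sqrt(w)
    return np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0][0]

x = np.linspace(0.0, 1.0, 500)
y = np.sin(3.0 * x)                  # noiseless, to isolate the bias
for x0 in (0.0, 0.5):                # boundary point vs. interior point
    print(f"x0={x0}: NW bias={abs(nadaraya_watson(x0, x, y, 0.1) - np.sin(3*x0)):.4f}, "
          f"LL bias={abs(local_linear(x0, x, y, 0.1) - np.sin(3*x0)):.4f}")
```

At the boundary point the one-sided kernel window inflates the local constant estimate, while the local linear fit absorbs the slope and stays close to the truth.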
“…In order to avoid boundary effects, we implemented the new procedure using a two dimensional local linear estimator [see Ruppert and Wand (1994)] and the bandwidth $h_{r,1} = h_{r,2} = (\hat{\sigma}^2/n)^{1/6}$ (4.1), where $\hat{\sigma}^2 = \int \sigma^2(u,v)\,du\,dv$ denotes the integrated variance and $n$ is the sample size. For the kernel $K_r$, we used a product kernel based on two Epanechnikov kernels $k(x) = \frac{3}{4}(1-x^2)\,I_{[-1,1]}(x)$ and the same kernel was used for $K_d$ in steps 1 and 3 of the procedure.…”
Section: A Small Simulation Study (mentioning, confidence: 99%)
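A minimal sketch of the estimator this quote describes: a two-dimensional local linear fit with a product of Epanechnikov kernels and a common bandwidth in both directions. The synthetic data, the plug-in value for the integrated variance, and the reading of the garbled bandwidth rule (4.1) as $(\hat{\sigma}^2/n)^{1/6}$ are illustrative assumptions, not the cited paper's code.

```python
import numpy as np

def epanechnikov(t):
    """k(t) = 3/4 (1 - t^2) on [-1, 1], zero outside."""
    return np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t ** 2), 0.0)

def local_linear_2d(u0, v0, u, v, y, h):
    """Two-dimensional local linear estimate at (u0, v0) using a
    product Epanechnikov kernel with common bandwidth h."""
    w = epanechnikov((u - u0) / h) * epanechnikov((v - v0) / h)
    X = np.column_stack([np.ones_like(u), u - u0, v - v0])
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta[0]                      # intercept = regression estimate

rng = np.random.default_rng(2)
n = 400
u, v = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y = np.sin(2.0 * u) * np.cos(2.0 * v) + rng.normal(0.0, 0.1, n)

sigma2_hat = 0.01                       # integrated variance (known here)
h = (sigma2_hat / n) ** (1.0 / 6.0)     # bandwidth rule as read from (4.1)
print(local_linear_2d(0.5, 0.5, u, v, y, h), np.sin(1.0) * np.cos(1.0))
```

The $n^{-1/6}$ rate matches the optimal bandwidth order for local linear regression with two covariates, which is consistent with the quote's two-dimensional setting.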