2020
DOI: 10.1111/stan.12224
k‐Nearest neighbors local linear regression for functional and missing data at random

Abstract: We combine the k‐Nearest Neighbors (kNN) method with the local linear estimation (LLE) approach to construct a new estimator (LLE‐kNN) of the regression operator when the regressor is of functional type and the response variable is a scalar but observed with some missing at random (MAR) observations. The resulting estimator inherits many of the advantages of both approaches (kNN and LLE methods). This is confirmed by the established asymptotic results, in terms of the pointwise and uniform almost complete consis…


Cited by 15 publications (7 citation statements)
References 36 publications
“…If the value of k is relatively small, the approximation error of KNN will be small, but the prediction is easily affected by the nearest neighbors, which leads to overfitting. Conversely, when the value of k is large, the prediction averages over distant neighbors, so it can deviate substantially from the actual result [11]. In this case, the model is prone to underfitting.…”
Section: Selection Of Training Parameters
confidence: 99%
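The bias–variance trade-off described in this excerpt can be illustrated with a minimal synthetic experiment (a pure-Python sketch; the sine-curve data, the noise level, and the choices k = 1, 10, 90 are illustrative assumptions, not taken from the paper):

```python
import math
import random

def knn_regress(x0, sample, k):
    """Predict at x0 by averaging the responses of the k nearest covariates."""
    neighbors = sorted(sample, key=lambda p: abs(p[0] - x0))[:k]
    return sum(y for _, y in neighbors) / k

random.seed(0)
# Noisy observations of the smooth curve y = sin(x) on [0, 10).
sample = [(0.1 * i, math.sin(0.1 * i) + random.gauss(0.0, 0.3))
          for i in range(100)]

grid = [0.25 * i for i in range(4, 36)]  # interior evaluation points

def mse(k):
    """Mean squared error of the k-NN fit against the true curve."""
    return sum((knn_regress(x, sample, k) - math.sin(x)) ** 2
               for x in grid) / len(grid)

# Small k: low approximation error but noise-driven (overfitting).
# Large k: averages over distant neighbors, heavily biased (underfitting).
errors = {k: mse(k) for k in (1, 10, 90)}
```

On a sample like this, an intermediate k typically attains the lowest error, matching the excerpt's point that both extremes of k degrade the fit.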
“…While [25] considered the estimation of the regression function and established these asymptotic properties under stationarity and ergodicity with MAR responses. On the other hand, [26] used the k nearest neighbors (k-NN) method combined with the local linear method to estimate the regression function with responses missing at random (MAR). For semi-parametric partially linear multivariate models with randomly missing responses, we can cite the work of [27,28], while [29] was the first to study the SFPLR model for i.i.d. data with a MAR response; their results generalize those obtained in [27].…”
Section: Of 21
confidence: 99%
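The MAR setting these works share can be sketched in a few lines (the mechanism `p_observe` and the linear toy model are illustrative assumptions, not taken from any cited paper): under MAR, the probability that a response is missing depends only on the observed covariate, so restricting to the complete cases keeps regression estimation consistent.

```python
import random

random.seed(1)

def p_observe(x):
    """Hypothetical MAR mechanism: missingness depends only on observed x."""
    return 0.9 if x < 5 else 0.6

# Toy regression sample y = 2x with some responses masked at random.
full = [(float(x), 2.0 * float(x)) for x in range(10)]
sample = [(x, y if random.random() < p_observe(x) else None) for x, y in full]

# Complete-case analysis: because missingness depends only on x (MAR),
# dropping the pairs with a missing response introduces no bias in y given x.
observed = [(x, y) for x, y in sample if y is not None]
```

Any of the estimators discussed above can then be applied to `observed` in place of the full sample.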
“…In addition, the effectiveness of this method is verified by experiments. Rachdi et al. proposed a kNN local linear regression algorithm [45]. It combines kNN and local linear estimation and inherits many advantages of these two methods.…”
Section: Neighbor Search
confidence: 99%
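The combination this excerpt describes can be sketched for a scalar covariate (the paper treats a functional regressor; this one-dimensional version, with the hypothetical name `lle_knn`, only illustrates the core idea of replacing a fixed bandwidth with the distance to the k-th nearest neighbour):

```python
def lle_knn(x0, sample, k):
    """Local linear regression with a k-nearest-neighbour bandwidth:
    fit y ~ a + b*(x - x0) by least squares over the k nearest covariates;
    the intercept a estimates the regression function at x0."""
    nb = sorted(sample, key=lambda p: abs(p[0] - x0))[:k]
    s0 = float(len(nb))
    s1 = sum(x - x0 for x, _ in nb)
    s2 = sum((x - x0) ** 2 for x, _ in nb)
    t0 = sum(y for _, y in nb)
    t1 = sum((x - x0) * y for x, y in nb)
    det = s0 * s2 - s1 * s1
    if det == 0.0:  # degenerate neighbourhood: fall back to the plain average
        return t0 / s0
    return (s2 * t0 - s1 * t1) / det
```

Because the local model is linear rather than constant, this fit reproduces straight-line trends exactly and reduces boundary bias relative to the plain kNN average, which is the advantage the excerpt alludes to.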