2001
DOI: 10.1007/bf02595701

Asymptotic properties in partial linear models under dependence

Keywords: Bandwidth selection, kernel smoothing, mixing, partial linear models; MSC: 62G05, 62G20, 62M10

Cited by 10 publications (9 citation statements)
References 21 publications
“…We need to have the exact form of Φ_n and an estimate of this matrix. As [1] pointed out, Φ_n has ((n(n + 1)/2) − 1) different unknown parameters, but it is usual to assume that the elements in Φ_n are functions of a k × 1 vector φ (k < n and remains constant as n increases). Then [13] the estimation of Φ_n(φ) reduces to the estimation of φ.…”
Section: Consistency (mentioning)
confidence: 99%
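To make the dimension-reduction step in the excerpt above concrete, here is a minimal sketch assuming, purely for illustration, an AR(1) parameterization (Φ_n)_{ij} = φ^{|i−j|}, so that a single scalar φ determines the whole matrix instead of ((n(n + 1)/2) − 1) free parameters. The moment estimator of φ and the simulated data are assumptions, not the cited authors' procedure.

```python
import numpy as np

def ar1_covariance(n, phi):
    """Build the n x n error covariance matrix Phi_n(phi) under an AR(1)
    parameterization: (Phi_n)_{ij} = phi**|i - j|.  This is one illustrative
    choice of the k x 1 vector phi (here k = 1)."""
    idx = np.arange(n)
    return phi ** np.abs(idx[:, None] - idx[None, :])

def estimate_phi(residuals):
    """Estimate the single AR(1) parameter phi from regression residuals via
    the lag-1 sample autocorrelation (a simple moment estimator)."""
    r = residuals - residuals.mean()
    return np.sum(r[1:] * r[:-1]) / np.sum(r ** 2)

# Example: only phi is estimated, then plugged into Phi_n(phi).
rng = np.random.default_rng(0)
n = 200
eps = np.empty(n)
eps[0] = rng.normal()
for i in range(1, n):
    eps[i] = 0.6 * eps[i - 1] + rng.normal()   # simulate AR(1) errors
phi_hat = estimate_phi(eps)
Phi_n_hat = ar1_covariance(n, phi_hat)         # plug-in estimate of Phi_n(phi)
```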
“…In this context, Z is called a functional random variable. In a non-functional context, Aneiros and Quintela [1] considered the regression model y_i = x_i^⊤ β + g(t_i) + ε_i, where t_i ∈ [0, 1] and the ε_i are unobserved dependent errors. They established the root-n consistency of an estimator of β.…”
Section: Introduction (mentioning)
confidence: 99%
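For readers unfamiliar with the model, a minimal sketch of one standard route to a root-n-consistent estimator of β in a partial linear model with dependent errors: a Speckman-type kernel-partialling step. The Gaussian kernel, the bandwidth, and the simulated AR(1) errors are illustrative assumptions, not the estimator studied in the cited paper.

```python
import numpy as np

def kernel_smooth_matrix(t, h):
    """Nadaraya-Watson smoother matrix S for design points t and bandwidth h
    (Gaussian kernel).  Row i holds the weights used to estimate g(t_i)."""
    d = (t[:, None] - t[None, :]) / h
    K = np.exp(-0.5 * d ** 2)
    return K / K.sum(axis=1, keepdims=True)

def partial_linear_beta(y, X, t, h):
    """Speckman-type estimator of beta: partial the nonparametric component
    out of y and X with the smoother S, then least-squares on the residuals."""
    S = kernel_smooth_matrix(t, h)
    y_tilde = y - S @ y
    X_tilde = X - S @ X
    beta_hat, *_ = np.linalg.lstsq(X_tilde, y_tilde, rcond=None)
    return beta_hat

# Illustrative data: y_i = x_i' beta + g(t_i) + eps_i with AR(1) errors.
rng = np.random.default_rng(1)
n = 300
t = np.sort(rng.uniform(0, 1, n))
X = rng.normal(size=(n, 2))
g = np.sin(2 * np.pi * t)
eps = np.empty(n)
eps[0] = rng.normal()
for i in range(1, n):
    eps[i] = 0.5 * eps[i - 1] + rng.normal()
y = X @ np.array([1.0, -2.0]) + g + eps
print(partial_linear_beta(y, X, t, h=0.05))    # should be close to (1, -2)
```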
“…Assume that {x_i, t_i, y_i; i = 1, …, n} satisfy model (1). Now we shall proceed to define an SLSE for the parametric component β, with the nonparametric component g(·) approximated by a B-spline series.…”
Section: SLSE and Some Assumptions (mentioning)
confidence: 99%
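A minimal sketch of the series least-squares idea in the excerpt above: approximate g(·) by a B-spline basis (here the simplest degree-1 hat functions, an illustrative assumption) and estimate β and the spline coefficients jointly by least squares on W = (X, B_kn).

```python
import numpy as np

def linear_bspline_basis(t, knots):
    """Degree-1 B-spline (hat-function) basis evaluated at the points t.
    Column j is the piecewise-linear bump that peaks at knots[j]."""
    B = np.empty((len(t), len(knots)))
    for j, c in enumerate(knots):
        left = knots[j - 1] if j > 0 else c - 1.0               # pad the left boundary
        right = knots[j + 1] if j + 1 < len(knots) else c + 1.0  # pad the right boundary
        B[:, j] = np.interp(t, [left, c, right], [0.0, 1.0, 0.0])
    return B

def spline_lse(y, X, t, num_knots=10):
    """Series least-squares estimator: regress y on W = (X, B_kn) jointly and
    return the estimate of the parametric component beta (the first block)."""
    knots = np.linspace(t.min(), t.max(), num_knots)
    W = np.hstack([X, linear_bspline_basis(t, knots)])
    gamma_hat, *_ = np.linalg.lstsq(W, y, rcond=None)
    return gamma_hat[: X.shape[1]]
```

Used on the simulated data from the previous sketch, `spline_lse(y, X, t)` recovers β up to the usual estimation error; the number and placement of knots are tuning choices, just as the bandwidth is for the kernel version.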
“…|x_{is}| = O(1) and max_{1≤i≤n} g*(t_i) = O(n^{−1}k_n) + O(k_n^{−τ}), it holds that J_2 = O(n^{−1}m_n^2) · [O(n^{−1}k_n) + O(k_n^{−τ})]^2 = o(1). By the definitions of β̂ and α̂, Cov(γ̂ − γ) = (W′W)^{−1} W′ΩW (W′W)^{−1}, where W = (X, B_{k_n}).…”
(mentioning)
confidence: 98%
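The closing identity in that excerpt is the usual sandwich covariance of a least-squares estimator when the errors are correlated. A minimal sketch of evaluating it numerically; the names W and Ω follow the excerpt, and everything else is assumed for illustration.

```python
import numpy as np

def sandwich_covariance(W, Omega):
    """Covariance of the least-squares estimator gamma_hat = (W'W)^{-1} W'y
    when the errors have covariance Omega:
        Cov(gamma_hat - gamma) = (W'W)^{-1} W' Omega W (W'W)^{-1}.
    Here W would be the joint design matrix (X, B_kn)."""
    WtW_inv = np.linalg.inv(W.T @ W)
    return WtW_inv @ (W.T @ Omega @ W) @ WtW_inv
```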
“…Some limit theories can be found in the monograph of Lin and Lu [39]. Recently, the mixing-dependent error structure has also been used to study nonparametric and semiparametric regression models, for instance, by Roussas [40], Truong [41], Fraiman and Iribarren [42], Roussas and Tran [43], Masry and Fan [44], Aneiros and Quintela [45], and Fan and Yao [46].…”
Section: Introduction (mentioning)
confidence: 99%