2017
DOI: 10.1080/03610926.2016.1148735

Wavelet estimators for the derivatives of the density function from data contaminated with heteroscedastic measurement errors

Cited by 8 publications (5 citation statements) | References 22 publications
“…When observations can be made directly on $X$, the kernel density estimation procedure is often called on for this purpose. Starting with the classical kernel estimator, Wang et al. (2009) followed the classical SIMEX algorithm and constructed an average of kernel estimators
$$\hat{f}_{B,n}(x) = B^{-1}\sum_{b=1}^{B}\left[n^{-1}\sum_{i=1}^{n} K_h\!\left(x - Z_i - \sqrt{\lambda}\, V_{i,b}\right)\right]$$
over $B$ pseudo-datasets $\{Z_i + \sqrt{\lambda}\, V_{i,b}\}_{i=1}^{n}$, $b = 1, 2, \dots, B$, where $K_h(\cdot) = h^{-1} K(\cdot/h)$, $K$ is a kernel density function, and $h$ is a sequence of positive numbers often called bandwidths. By the law of large numbers, $\hat{f}_{B,n}(x)$…”
Section: Motivating Examples
confidence: 99%
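
To make the quoted construction concrete, here is a minimal sketch of the averaged estimator $\hat{f}_{B,n}$, assuming a Gaussian kernel $K$ and normal remeasurement errors $V_{i,b} \sim N(0, \sigma_u^2)$; the name `simex_kde` and its signature are illustrative, not from the cited papers.

```python
# Sketch of the averaged SIMEX kernel estimator f_hat_{B,n}(x) from the
# quote above. Assumptions: Gaussian kernel K, V_{i,b} ~ N(0, sigma_u^2).
import numpy as np

def simex_kde(x, Z, sigma_u, lam=1.0, h=0.5, B=100, rng=None):
    """Average of kernel estimators over B pseudo-datasets Z_i + sqrt(lam)*V_{i,b}.

    x       : evaluation points, shape (m,)
    Z       : contaminated observations, shape (n,)
    sigma_u : measurement-error standard deviation
    lam     : SIMEX variance-inflation parameter lambda
    h       : bandwidth
    B       : number of simulated pseudo-datasets
    """
    rng = np.random.default_rng(rng)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    Z = np.asarray(Z, dtype=float)
    est = np.zeros_like(x)
    for _ in range(B):
        V = rng.normal(0.0, sigma_u, size=len(Z))   # remeasurement errors V_{i,b}
        Zb = Z + np.sqrt(lam) * V                   # one pseudo-dataset
        u = (x[:, None] - Zb[None, :]) / h
        Kh = np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * h)  # K_h(x - Zb_i)
        est += Kh.mean(axis=1)                      # n^{-1} sum over i
    return est / B                                  # B^{-1} sum over b
```

For example, `simex_kde(np.linspace(-3, 3, 61), Z, sigma_u=0.4)` evaluates the estimator on a grid; increasing `B` reduces the Monte Carlo noise of the average, consistent with the law-of-large-numbers limit discussed next.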
“…By the law of large numbers, $\hat{f}_{B,n}(x) \to n^{-1}\sum_{i=1}^{n}\int K_h\!\left(x - Z_i - \sqrt{\lambda}\,\sigma_u u\right)\phi(u)\,du = \tilde{f}_n(x)$ in probability as $B \to \infty$. After some algebra, Wang et al. (2009) proposed to estimate $f_X(x)$ using $\hat{f}_n(x) = n^{-1}\sum_{i=1}^{n}\left(\sqrt{\lambda}\,\sigma_u\right)^{-1}\phi\!\left((x - Z_i)/(\sqrt{\lambda}\,\sigma_u)\right)$, which approximates the limit $\tilde{f}_n(x)$ for sufficiently large $n$. In fact, before initiating the simulation step, Cook and Stefanski (1994) suggested one should try to calculate the conditional expectation $E[\hat{f}_{B,n}(x)\ldots$…”
Section: Motivating Examples
confidence: 99%
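
A minimal sketch of this closed-form limit, assuming $\phi$ is the standard normal density as in the quote; the name `simex_kde_limit` is illustrative, not from Wang et al. (2009).

```python
# Sketch of the closed-form estimator f_hat_n(x): the B -> infinity limit
# of the SIMEX average, with phi the standard normal density.
import numpy as np

def simex_kde_limit(x, Z, sigma_u, lam=1.0):
    """n^{-1} sum_i (sqrt(lam)*sigma_u)^{-1} phi((x - Z_i)/(sqrt(lam)*sigma_u))."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    s = np.sqrt(lam) * sigma_u                      # effective bandwidth
    u = (x[:, None] - np.asarray(Z, dtype=float)[None, :]) / s
    return (np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * s)).mean(axis=1)
```

Note that with a Gaussian kernel $K$, the limit $\tilde{f}_n$ is itself a normal mixture with scale $\sqrt{h^2 + \lambda\sigma_u^2}$, so $\hat{f}_n$ can be read as its small-$h$ form: no simulation over $b$ is needed at all.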
“…Derivative estimation has been studied in many statistical models; see [16][17][18][19][20][21]. For example, [17] develops an adaptive estimator of the $d$-th derivative of an unknown function in the standard deconvolution model ($M = 1$) and proves that it achieves near-optimal rates of convergence under the mean integrated squared error (MISE) over a wide range of smoothness classes.…”
Section: Introduction
confidence: 99%
“…Chesneau & Fadili [9] constructed a wavelet estimator of the density and investigated its MISE ( 2 risk) performance over Besov balls. The risk (1 ≤ < ∞) of wavelet deconvolution estimator was extended by Wang, Zhang & Kou [10]. However, we do not know whether the density function is smooth or not in some practical applications.…”
mentioning
confidence: 99%