2019
DOI: 10.3150/18-bej1040a
Sparse covariance matrix estimation in high-dimensional deconvolution

Abstract: We study the estimation of the covariance matrix Σ of a p-dimensional normal random vector based on n independent observations corrupted by additive noise. Only a general nonparametric assumption is imposed on the distribution of the noise without any sparsity constraint on its covariance matrix. In this high-dimensional semiparametric deconvolution problem, we propose spectral thresholding estimators that are adaptive to the sparsity of Σ. We establish an oracle inequality for these estimators under model mis…


Cited by 8 publications (9 citation statements)
References 54 publications (113 reference statements)
“…Until this point, we have assumed the availability of the adjacency matrix. However, sparse stochastic processes have become increasingly influential in handling high-dimensional problems (Belomestny et al 2019; Gaïffas and Matulewicz 2019; Ma et al 2021). Regularisation through a penalty on the model parameters is an important component of this literature, and we show in the next section that general Lévy-driven OU processes can be consistently transformed into GrOU processes in this context.…”
Section: Asymptotics for ã-GrOU
confidence: 89%
“…Stationary noise distributions are usually too simplistic to explain the intrinsic variability of the data. Volatility modulation adds a stochastic scaling factor (Belomestny et al 2019; Cai et al 2016) which follows its own dynamics to better represent exogenous sources of uncertainty (Pigorsch and Stelzer 2009b; Yang et al 2020), whilst a jump component helps to model unforeseen perturbations or rare calendar events (Barndorff-Nielsen and Veraart 2012).…”
Section: An Extension to a Volatility-Modulated GrOU Process
confidence: 99%
“…Estimators of the variance or covariance matrix based on the empirical characteristic function have been studied in several papers [4,5,3,6]. The setting in [4,5,3] is different from ours, as those papers deal with the model where the non-zero components of θ are random with a smooth distribution density. The estimators in [4,5] are also quite different.
Section: 1
confidence: 99%
“…The estimators in [4,5] are also quite different. On the other hand, [3,6] consider estimators close to ṽ2. In particular, [6] uses a similar pilot estimator for testing in the sparse vector model, where it is assumed that σ ∈ [σ−, σ+], 0 < σ− < σ+ < ∞, and the estimator depends on σ+.…”
Section: 1
confidence: 99%