2019
DOI: 10.1002/sta4.229

Sparse spectral estimation with missing and corrupted measurements

Abstract: Supervised learning methods with missing data have been studied extensively, not only because of techniques related to low-rank matrix completion. In unsupervised learning, too, one often relies on imputation methods. Indeed, missing values induce a bias in various estimators, such as the sample covariance matrix. In the present paper, a convex method for sparse subspace estimation is extended to the case of missing and corrupted measurements. This is done by correcting the bias instead of imputing…
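
To make the bias-correction idea concrete, here is a minimal numerical sketch under a missing-completely-at-random model with a known observation probability p. The function name and the exact rescaling are illustrative assumptions rather than the paper's estimator, which also handles corrupted measurements.

```python
# Illustrative sketch only: debiasing the sample covariance when entries are
# missing completely at random (MCAR) with known observation probability p.
# Zero-filling missing entries biases the covariance: off-diagonal entries
# are observed with probability p**2, diagonal entries with probability p,
# so rescaling each part removes that bias instead of imputing values.
import numpy as np

def debiased_covariance(X_zero_filled, p):
    """X_zero_filled: (n, d) centred data with missing entries set to 0.
    p: assumed probability that any single entry is observed (MCAR)."""
    n, _ = X_zero_filled.shape
    S = X_zero_filled.T @ X_zero_filled / n   # biased covariance of zero-filled data
    Sigma = S / p**2                          # off-diagonal entries lose a factor p**2
    np.fill_diagonal(Sigma, np.diag(S) / p)   # diagonal entries lose only a factor p
    return Sigma
```

The corrected matrix can then be used in place of the ordinary sample covariance as input to a sparse PCA or subspace-estimation routine.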

Citations: cited by 10 publications (7 citation statements)
References: 33 publications
“…We remark that this is clearly not a new algorithmic idea. In fact, proper handling of the diagonal entries (e.g., diagonal deletion, diagonal reweighting) has already been recommended in several different applications, including bipartite stochastic block models [47], covariance estimation [43, 77-79], tensor completion [84], to name just a few.…”
Section: Results
mentioning
confidence: 99%
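
The excerpt above names two generic strategies for handling a biased diagonal. As a rough sketch (the function names and the shrinkage rule are illustrative assumptions, not taken from the cited works):

```python
import numpy as np

def diagonal_deletion(G):
    """Zero out the diagonal of a symmetric matrix whose diagonal is biased."""
    G = G.copy()
    np.fill_diagonal(G, 0.0)
    return G

def diagonal_reweighting(G, weight):
    """Shrink the diagonal by a chosen factor instead of removing it entirely."""
    G = G.copy()
    np.fill_diagonal(G, weight * np.diag(G))
    return G
```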
“…Since we concentrate primarily on estimating the column space of A, it is natural to expect a reduced sample complexity as well as a weaker requirement on the signal-to-noise ratio, in comparison to the conditions required for reliable reconstruction of the whole matrix, particularly for those highly unbalanced problems with drastically different dimensions d1 and d2. Focusing on a spectral method applied to the Gram matrix AA⊤ with diagonal deletion (whose variants have been studied in multiple contexts [43,47,77,79,84]), we establish new statistical guarantees in terms of the sample complexity and the estimation accuracy, both of which strengthen prior theory. Our results deliver optimal ℓ2,∞ estimation risk bounds with respect to the noise level, which were previously unavailable.…”
mentioning
confidence: 88%
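
Below is a minimal sketch of a diagonal-deleted spectral method of the kind described in this excerpt; it is a generic variant under simple assumptions, not the exact procedure analysed in the citing paper.

```python
import numpy as np

def column_space_estimate(Y, r):
    """Estimate the rank-r column space of a signal matrix from a noisy
    observation Y (d1 x d2) via the Gram matrix with diagonal deletion."""
    G = Y @ Y.T                           # Gram matrix; its diagonal absorbs the noise variance
    np.fill_diagonal(G, 0.0)              # diagonal deletion removes that bias
    eigvals, eigvecs = np.linalg.eigh(G)  # eigenvalues returned in ascending order
    return eigvecs[:, -r:]                # top-r eigenvectors span the column-space estimate
```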
“…When this assumption holds, the analysis becomes much easier, because we can regard our observed data as a representative sample from the wider population. For instance, theoretical guarantees have recently been established in the MCAR setting for a variety of modern statistical problems, including high-dimensional regression (Loh and Wainwright, 2012), high-dimensional or sparse principal component analysis (Zhu, Wang and Samworth, 2019; Elsener and van de Geer, 2019), classification (Cai and Zhang, 2019), and precision matrix and changepoint estimation (Loh and Tan, 2018; Follain, Wang and Samworth, 2022). The failure of this assumption, on the other hand, may introduce significant bias and necessitate further investigation of the nature of the dependence between the data and the missingness (Davison, 2003; Little and Rubin, 2019).…”
Section: Introduction
mentioning
confidence: 99%
“…Other techniques are based on the expectation-maximisation (EM) algorithm (Dempster et al., 1977). Missing data have also been studied in a range of high-dimensional settings, including regression (Loh and Wainwright, 2012), classification (Cai and Zhang, 2018), and (sparse) principal component analysis (Elsener and van de Geer, 2018; Zhu et al., 2019).…”
Section: Introduction
mentioning
confidence: 99%