2018
DOI: 10.1080/01621459.2018.1429275

Joint Mean and Covariance Estimation with Unreplicated Matrix-Variate Data

Abstract: It has been proposed that complex populations, such as those that arise in genomics studies, may exhibit dependencies among observations as well as among variables. This gives rise to the challenging problem of analyzing unreplicated high-dimensional data with unknown mean and dependence structures. Matrix-variate approaches that impose various forms of (inverse) covariance sparsity allow flexible dependence structures to be estimated, but cannot directly be applied when the mean and covariance matrices are est…
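The abstract describes a single data matrix that carries dependence both across rows (observations) and across columns (variables). The following is a minimal simulation sketch of that setting only, not the paper's estimator, assuming hypothetical AR(1) row and column covariances and a Kronecker-product covariance for the noise; all sizes and parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 20                      # n observations (rows), m variables (columns)

# Hypothetical AR(1)-type row covariance A (n x n) and column covariance B (m x m)
A = 0.6 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 0.3 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))

# Unknown mean matrix; a simple rank-one choice purely for illustration
M = np.outer(np.ones(n), rng.normal(size=m))

# One unreplicated observation X = M + E with Cov(vec(E)) = B ⊗ A,
# generated as E = A^{1/2} Z B^{1/2} for Z with i.i.d. standard normal entries
La, Lb = np.linalg.cholesky(A), np.linalg.cholesky(B)
X = M + La @ rng.normal(size=(n, m)) @ Lb.T
print(X.shape)                     # a single 30 x 20 data matrix, no replicates

The estimation problem treated in the paper runs in the reverse direction: recovering the unknown mean and the two dependence structures from the single observed matrix X.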

Citations: cited by 10 publications (7 citation statements)
References: 37 publications

Citation statements (ordered by relevance):
“…Similar properties hold for the relative Frobenius norm error in view of Lemma 3.4. Note that by (17) and Lemma 3.4, under the settings of Theorem 3.2,…”
Section: The Main Theorem (mentioning)
Confidence: 96%
“…See [44], where such regression model is introduced for subgaussian matrix variate data. See also [16,17] and references therein for more recent applications of matrix variate models.…”
Section: Introduction (mentioning)
Confidence: 99%
“…In particular, the special case q = 0 corresponds to Theorem 3.3 of Zhou (2014) and these two results are consistent due to the dual properties between Lasso and the Dantzig selector. Moreover, our result can be generalized to the sub-Gaussian condition of the matrix data (Hornstein et al., 2019) and we omit the details.…”
Section: Matrix Data Estimation (mentioning)
Confidence: 99%
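The statement above appeals to the duality between the Lasso and the Dantzig selector. As a toy numerical illustration of that connection (my own sketch under assumed data sizes and an arbitrary tuning level, not taken from the cited papers), both estimators can be computed on a small synthetic regression problem: the Dantzig selector as a linear program and the Lasso via scikit-learn, with the penalty levels matched so that both enforce the same bound on the correlation between the residual and the design.

import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, s = 100, 40, 3               # arbitrary sizes for this toy example
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:s] = 2.0
y = X @ beta_true + 0.5 * rng.normal(size=n)

lam = 0.1 * n                      # arbitrary tuning level, shared by both methods

# Dantzig selector: minimize ||b||_1 subject to ||X^T (y - X b)||_inf <= lam,
# written as a linear program in b = u - v with u, v >= 0
G, c = X.T @ X, X.T @ y
A_ub = np.block([[G, -G], [-G, G]])
b_ub = np.concatenate([c + lam, lam - c])
res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
beta_dantzig = res.x[:p] - res.x[p:]

# Lasso with a matching penalty (scikit-learn scales the squared loss by 1/(2n))
beta_lasso = Lasso(alpha=lam / n, fit_intercept=False).fit(X, y).coef_

print(np.round(beta_dantzig[:5], 2))
print(np.round(beta_lasso[:5], 2))  # the two solutions are close on this problem

On a well-conditioned problem like this one the two estimates are close and share the same support, which is the sense in which results for the two estimators line up.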
“…For matrix-variate data with two-way dependencies, prior work depended on a large number of replicated data to obtain certain convergence guarantees, even when the data is observed in full and free of measurement error; see for example Dutilleul (1999), Werner et al. (2008), Leng and Tang (2012), and Tsiligkaridis et al. (2013). A recent line of work on matrix variate models (Kalaitzis et al., 2013; Zhou, 2014; Rudelson and Zhou, 2017) have focused on the design of estimators and efficient algorithms while establishing theoretical properties by using the Kronecker sum and product covariance models when a single or a small number of replicates are available from such matrix-variate distributions; see also Efron (2009), Allen and Tibshirani (2010), and Hornstein et al. (2016) for related models and applications. Among these models, the Kronecker sum provides a covariance or precision matrix which is sparser than the Kronecker product (inverse) covariance model.…”
Section: Related Work (mentioning)
Confidence: 99%
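The last sentence of the quote contrasts the sparsity of the Kronecker-sum and Kronecker-product models. A small numerical check (my own illustration with assumed tridiagonal factors, not drawn from the cited work) makes the remark concrete: for sparse factors A and B, the Kronecker sum A ⊗ I + I ⊗ B has far fewer nonzero entries than the Kronecker product A ⊗ B.

import numpy as np

def tridiag(n, off=0.4):
    # Tridiagonal factor: 1 on the diagonal, `off` on the first off-diagonals
    return np.eye(n) + off * (np.eye(n, k=1) + np.eye(n, k=-1))

A, B = tridiag(20), tridiag(30)    # hypothetical sparse row and column factors

kron_product = np.kron(A, B)                                   # A ⊗ B
kron_sum = np.kron(A, np.eye(30)) + np.kron(np.eye(20), B)     # A ⊕ B

print("nonzeros in the Kronecker product:", np.count_nonzero(kron_product))  # 5104
print("nonzeros in the Kronecker sum:    ", np.count_nonzero(kron_sum))      # 2900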