2014
DOI: 10.1080/10618600.2013.859619

Sufficient Dimension Folding for Regression Mean Function

Abstract: In this article, we consider sufficient dimension folding for the regression mean function when predictors are matrix- or array-valued. We propose a new concept named the central mean dimension folding subspace and two local estimation methods for it: folded outer product of gradients estimation (folded-OPG) and folded minimum average variance estimation (folded-MAVE). We establish the asymptotic properties of folded-MAVE. A modified BIC criterion is used to determine the dimensions of the central mean dimension folding subspace…
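For orientation, a minimal sketch of the object the abstract names, in our own notation (the symbols α, β, d_1, d_2 and the bilinear form below are illustrative assumptions, not quoted from the paper): for a matrix-valued predictor and a scalar response, dimension folding for the mean function asks that the following identity hold.

% Mean dimension folding (illustrative notation, an assumption about the
% paper's setup): the regression mean of Y depends on the matrix predictor
% X only through a bilinear "folded" projection that reduces rows and
% columns simultaneously.
\[
  E(Y \mid X) \;=\; E\!\bigl(Y \mid \alpha^{\top} X \beta\bigr),
  \qquad X \in \mathbb{R}^{p \times q},\;
  \alpha \in \mathbb{R}^{p \times d_1},\;
  \beta \in \mathbb{R}^{q \times d_2}.
\]
% The central mean dimension folding subspace is then the smallest subspace
% of the form span(beta) \otimes span(alpha) for which this identity holds;
% folded-OPG and folded-MAVE estimate it locally, and the modified BIC
% criterion selects (d_1, d_2).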

Cited by 22 publications (17 citation statements)
References 33 publications
“…[4] discussed matrix versions of principal component analysis and principal fitted components (PFC). [53] introduced central mean dimension folding subspace and proposed several methods to estimate it. [6] further developed tensor-valued sliced inverse regression.…”
Section: Review of Methods for General Tensor-valued Data
Citation type: mentioning
confidence: 99%
“…Research dealing with high-dimensional matrix-valued variables (also known as matrix-variates) has attracted considerable attention in the past 10 years. There is a vast literature on dimension reduction techniques for matrix-variates (e.g., Li et al., 2010; Ding and Cook, 2014; Xue and Yin, 2014; 2015; Virta et al., 2017). In regression settings, existing methods for modeling matrix-variates as covariates focus on incorporating various regularization schemes that also preserve the inherent matrix nature of the covariate, for example, by imposing a low-rank bilinear structure on the associated regression coefficients for the matrix-valued covariate (Hung and Wang, 2013; Zhou et al., 2013; Hoff, 2015; Jiang et al., 2017), applying a particular structured lasso penalty (Zhao and Leng, 2014), applying a nuclear norm penalty (Zhou and Li, 2014), and utilizing the envelope concept originally proposed in Cook et al. (2010) to achieve supervised sufficient dimension reduction for tensor-variates (e.g., Li and Zhang, 2017; Zhang and Li, 2017; Ding and Cook, 2018).…”
Section: Introduction
Citation type: mentioning
confidence: 99%
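The "low rank bilinear structure" mentioned in the snippet above is easy to make concrete. Below is a minimal sketch, assuming a toy model y_i = ⟨U Vᵀ, X_i⟩ + noise fit by alternating least squares; the function name fit_bilinear, the rank argument r, and all defaults are our own hypothetical choices, illustrating the generic low-rank-coefficient idea rather than any cited paper's exact algorithm.

# Toy sketch: rank-r bilinear regression with coefficient matrix C = U @ V.T,
# fit by alternating least squares.  Illustrative assumption, not a method
# from the cited papers.
import numpy as np

def fit_bilinear(X, y, r=1, n_iter=50):
    """X: (n, p, q) array of matrix covariates; y: (n,) responses.
    Returns U (p, r), V (q, r) so that y_i ~ <U @ V.T, X_i>."""
    n, p, q = X.shape
    rng = np.random.default_rng(0)
    V = rng.standard_normal((q, r))
    for _ in range(n_iter):
        # With V fixed, y_i = <U, X_i @ V> is linear in vec(U).
        A = np.stack([(Xi @ V).ravel() for Xi in X])    # (n, p*r)
        U = np.linalg.lstsq(A, y, rcond=None)[0].reshape(p, r)
        # With U fixed, y_i = <V, X_i.T @ U> is linear in vec(V).
        B = np.stack([(Xi.T @ U).ravel() for Xi in X])  # (n, q*r)
        V = np.linalg.lstsq(B, y, rcond=None)[0].reshape(q, r)
    return U, V

# Quick check on synthetic data with a true rank-1 coefficient matrix.
rng = np.random.default_rng(1)
n, p, q = 500, 8, 6
U0, V0 = rng.standard_normal((p, 1)), rng.standard_normal((q, 1))
X = rng.standard_normal((n, p, q))
y = np.einsum('ipq,pq->i', X, U0 @ V0.T) + 0.1 * rng.standard_normal(n)
U, V = fit_bilinear(X, y, r=1)
# Relative error of the recovered coefficient matrix; should be small.
print(np.linalg.norm(U @ V.T - U0 @ V0.T) / np.linalg.norm(U0 @ V0.T))

Each half-step is an ordinary least squares problem because the model is linear in U once V is fixed (and vice versa), which is what makes the low-rank bilinear structure computationally convenient.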
“…For examples of particular dimension reduction methods incorporating matrix or tensor predictors, see e.g. [10], [11], [1] for independent component analysis, [12], [13], [14], [15] for sufficient dimension reduction and [16], [17] for principal components analysis-based techniques. More references are also given in [12], [1].…”
Section: Introduction
Citation type: mentioning
confidence: 99%