“…Research dealing with high-dimensional matrix-valued variables (also known as matrix-variates) has attracted considerable attention in the past 10 years. There is a vast literature on dimension reduction techniques for matrix-variates (e.g., Li et al., 2010; Ding and Cook, 2014; Xue and Yin, 2014, 2015; Virta et al., 2017). In regression settings, existing methods for modeling matrix-variates as covariates focus on incorporating various regularization schemes that also preserve the inherent matrix structure of the covariate: for example, by imposing a low-rank bilinear structure on the regression coefficients associated with the matrix-valued covariate (Hung and Wang, 2013; Zhou et al., 2013; Hoff, 2015; Jiang et al., 2017), applying a structured lasso penalty (Zhao and Leng, 2014), applying a nuclear-norm penalty (Zhou and Li, 2014), or utilizing the envelope concept originally proposed in Cook et al. (2010) to achieve supervised sufficient dimension reduction for tensor-variates (e.g., Li and Zhang, 2017; Zhang and Li, 2017; Ding and Cook, 2018).…”