We provide a remedy for two concerns that have dogged the use of principal
components in regression: (i) principal components are computed from the
predictors alone and do not make apparent use of the response, and (ii)
principal components are not invariant or equivariant under full rank linear
transformation of the predictors. The development begins with principal fitted
components [Cook, R. D. (2007). Fisher lecture: Dimension reduction in
regression (with discussion). Statist. Sci. 22 1--26] and uses normal models
for the inverse regression of the predictors on the response to gain reductive
information for the forward regression of interest. This approach includes
methodology for testing hypotheses about the number of components and about
conditional independencies among the predictors.

Comment: Published at http://dx.doi.org/10.1214/08-STS275 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
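The abstract's core idea, fitting the inverse regression of the predictors on functions of the response and extracting reductive directions, can be sketched in a few lines. The following is a minimal illustration of the isotropic-error variant of principal fitted components, not the paper's full methodology; the function name `pfc_directions`, the polynomial basis for f(y), and the toy data are all illustrative assumptions.

```python
import numpy as np

def pfc_directions(X, y, d=1, degree=2):
    """Sketch of principal fitted components (isotropic-error case).

    Regress the centered predictors on a polynomial basis f(y), then take
    the leading eigenvectors of the covariance of the fitted values as
    estimated sufficient-reduction directions.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Centered polynomial basis f(y); the basis choice is user-specified
    F = np.column_stack([y**k for k in range(1, degree + 1)])
    F = F - F.mean(axis=0)
    # Multivariate OLS fit of Xc on F gives the fitted-value matrix
    B, *_ = np.linalg.lstsq(F, Xc, rcond=None)
    fitted = F @ B
    # Leading eigenvectors of the fitted-value covariance
    cov_fit = fitted.T @ fitted / n
    vals, vecs = np.linalg.eigh(cov_fit)
    return vecs[:, ::-1][:, :d]  # columns: estimated directions

# Toy example (hypothetical): y depends on X only through its first coordinate
rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.1 * rng.normal(size=n)
G = pfc_directions(X, y, d=1)
```

On this toy data the leading direction aligns closely with the first coordinate axis, the direction through which the response actually enters.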
We obtain the maximum likelihood estimator of the central subspace under conditional normality of the predictors given the response. Analytically and in simulations we found that our new estimator can perform much better than sliced inverse regression, sliced average variance estimation and directional regression, and that it seems quite robust to deviations from normality.
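Sliced inverse regression, the first of the comparison methods named above, is simple enough to sketch. This is a minimal, assumption-laden version of Li's (1991) estimator, not the abstract's maximum likelihood estimator: the function name `sir_directions`, the slicing scheme, and the toy data are illustrative choices.

```python
import numpy as np

def sir_directions(X, y, d=1, n_slices=10):
    """Sketch of sliced inverse regression as a baseline estimator.

    Whiten X, partition observations into slices by the ordered response,
    average the whitened predictors within each slice, and eigen-decompose
    the weighted covariance of the slice means.
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # Whitening transform via the symmetric inverse square root
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals**-0.5) @ evecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice indices by the ordered response
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)
    # Map the leading directions back to the original predictor scale
    B = inv_sqrt @ vecs[:, ::-1][:, :d]
    return B / np.linalg.norm(B, axis=0)

# Toy example (hypothetical): single-index model
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = X[:, 0] + 0.1 * rng.normal(size=400)
B = sir_directions(X, y, d=1)
```

Under the model's linearity condition the slice means of the inverse regression fall in the central subspace, which is why the eigen-decomposition recovers the reduction directions.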