The Hilbert-Schmidt Independence Criterion (HSIC) has recently been introduced to the field of single-index models to estimate the directions. Compared with other well-established methods, the HSIC-based method requires relatively weak conditions. However, its performance has not yet been studied in the prevalent high-dimensional scenarios, where the number of covariates can be much larger than the sample size. In this article, based on HSIC, we propose to estimate the possibly sparse directions in high-dimensional single-index models through a parameter reformulation. Our approach estimates the subspace of the direction directly and performs variable selection simultaneously. Due to the non-convexity of the objective function and the complexity of the constraints, a majorize-minimize algorithm together with the linearized alternating direction method of multipliers is developed to solve the optimization problem. Since it does not involve the inverse of the covariance matrix, the algorithm can naturally handle large $p$ small $n$ scenarios. Through extensive simulation studies and a real data analysis, we show that our proposal is efficient and effective in high-dimensional settings. The $\verb|Matlab|$ code for this method is available online.
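For readers unfamiliar with the criterion, the following is a minimal sketch of the standard biased empirical HSIC estimator with Gaussian kernels, $\widehat{\mathrm{HSIC}} = n^{-2}\,\mathrm{tr}(KHLH)$ with centering matrix $H = I - \mathbf{1}\mathbf{1}^\top/n$. This illustrates only the dependence measure itself, not the paper's sparse direction estimator; the function names and the fixed bandwidth are illustrative choices.

```python
import numpy as np

def rbf_kernel(a, sigma=1.0):
    # Gaussian kernel matrix: K_ij = exp(-||a_i - a_j||^2 / (2 sigma^2)).
    sq = np.sum(a ** 2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * a @ a.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def empirical_hsic(x, y, sigma=1.0):
    # Biased empirical HSIC: trace(K H L H) / n^2,
    # where H = I - 11^T / n centers the kernel matrices.
    n = x.shape[0]
    K = rbf_kernel(x, sigma)
    L = rbf_kernel(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2
```

Larger values indicate stronger dependence between the two samples; under independence the statistic concentrates near zero, which is what makes HSIC usable as an objective for recovering index directions.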