2021
DOI: 10.5705/ss.202019.0401

Efficient kernel-based variable selection with sparsistency

Abstract: Sparse learning is central to high-dimensional data analysis, and various methods have been developed. Ideally, a sparse learning method should be methodologically flexible, computationally efficient, and theoretically guaranteed, yet most existing methods must compromise some of these properties to attain the others. In this article, a three-step sparse learning method is developed, involving kernel-based estimation of the regression function and its gradient functions, as well as a hard-thresholding step. …
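As a rough illustration of the three-step idea described in the abstract (kernel-based regression estimation, gradient estimation, hard thresholding), the following Python sketch selects variables by thresholding the empirical norms of the estimated partial derivatives. It is a minimal sketch, not the authors' exact algorithm; the Gaussian kernel and the parameters `sigma`, `lam`, and `tau` are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # K[i, j] = exp(-||A_i - B_j||^2 / (2 * sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def select_variables(X, y, sigma=1.0, lam=0.1, tau=0.1):
    """Sketch of the three steps: kernel ridge regression -> gradient functions -> hard threshold."""
    n, p = X.shape
    K = gaussian_kernel(X, X, sigma)
    # Step 1: kernel ridge regression, f_hat(x) = sum_i alpha_i * K(x, x_i).
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    # Step 2: gradient of f_hat at each sample point; for the Gaussian kernel,
    # d/dx_j K(x, x_i) = -((x_j - x_ij) / sigma^2) * K(x, x_i).
    grads = np.empty((n, p))
    for a in range(n):
        diff = X[a] - X                          # (n, p) array of x - x_i
        grads[a] = -(alpha * K[a]) @ diff / sigma ** 2
    score = np.sqrt((grads ** 2).mean(axis=0))   # empirical norm of each partial derivative
    # Step 3: hard thresholding keeps variables whose gradient norm exceeds tau.
    return np.flatnonzero(score > tau), score
```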

Cited by 6 publications (32 citation statements)
References 40 publications (77 reference statements)
“…More recently, several works [12][13][14][15] have sought to relax the restriction on the hypothesis function space, requiring only that the regression function belong to a reproducing kernel Hilbert space (RKHS). In contrast to traditional structural assumptions on the regression function, these methods identify the important variables via the gradient of a kernel-based estimator.…”
Section: Introduction (mentioning)
confidence: 99%
“…Magda et al. [15] introduce a nonparametric structured-sparsity approach with two regularizers based on partial derivatives, and optimize it with the alternating direction method of multipliers (ADMM) [18]. Moreover, to further improve computational feasibility, a three-step variable selection algorithm is developed in [12] from three building blocks: kernel ridge regression, the functional gradient in an RKHS, and hard thresholding. The effectiveness of the algorithm in [12] is supported by theoretical guarantees on variable-selection consistency and by empirical verification on simulated data.…”
Section: Introduction (mentioning)
confidence: 99%
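As a hypothetical usage example of the sketch given after the abstract (not a reproduction of the simulations in [12]), one can generate data in which only the first two of ten covariates matter and check that their gradient-norm scores stand out:

```python
import numpy as np

# Hypothetical simulated-data check, reusing select_variables() from the sketch above.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

selected, score = select_variables(X, y, sigma=2.0, lam=0.01, tau=0.1)
print("gradient-norm scores:", np.round(score, 3))
print("selected variables:", selected)  # the truly relevant variables are 0 and 1
```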