2015
DOI: 10.1007/s11425-015-5062-9
A selective overview of feature screening for ultrahigh-dimensional data

Abstract: High-dimensional data have frequently been collected in many scientific areas including genome-wide association studies, biomedical imaging, tomography, tumor classification, and finance. Analysis of high-dimensional data poses many challenges for statisticians. Feature selection and variable selection are fundamental for high-dimensional data analysis. The sparsity principle, which assumes that only a small number of predictors contribute to the response, is frequently adopted and deemed useful in the analysis …

Cited by 83 publications (31 citation statements). References 55 publications (115 reference statements).
“…After Tibshirani (1996) introduced L1-penalized regression, the so-called least absolute shrinkage and selection operator (LASSO), statistical modelling with sparsity became an active research topic in the fields of statistics and machine learning (see Bühlmann & van de Geer, 2011; Fan & Lv, 2010; Wellner & Zhang, 2012, for reviews). By adding a sparsity-inducing penalty (e.g., the L1 penalty) to the estimation criterion (e.g., a likelihood function), the resulting penalized (or regularized) estimate can have elements that are exactly zero.…”
Section: Introduction
confidence: 99%
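The mechanism behind those exact zeros can be sketched directly. Under an orthonormal design, the lasso solution reduces to soft-thresholding the OLS coefficients, so any coordinate whose OLS estimate falls below the penalty level is set exactly to zero. A minimal NumPy sketch (the design, signal values, and penalty level are illustrative assumptions, not from the cited works):

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding: the coordinatewise closed-form lasso solution."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng = np.random.default_rng(0)
n, p = 100, 10
# Orthonormal design: columns of Q satisfy Q.T @ Q = I, so the lasso
# solution is exactly soft_threshold(OLS estimate, penalty level).
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]      # sparse truth: 3 of 10 predictors matter
y = Q @ beta_true + 0.1 * rng.standard_normal(n)

beta_ols = Q.T @ y                    # OLS under orthonormality
beta_lasso = soft_threshold(beta_ols, lam=0.5)

print(np.count_nonzero(beta_ols))     # every OLS coordinate is nonzero
print(np.count_nonzero(beta_lasso))   # the null coordinates become exactly zero
```

The point of the sketch is the qualitative contrast: OLS never produces exact zeros, while the L1 penalty zeroes out every coordinate whose signal does not exceed the threshold, which is what makes the penalized estimate a variable selector.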
“…He et al (2013) suggest a ranking procedure relying on the marginal quantile utility; Shao and Zhang (2014) introduce a ranking based on the martingale difference correlation. An extensive overview of these and other measures that can be used for variable screening can be found in Liu et al (2015). In this work we also consider variable rankings based on measures that were not originally developed for this purpose, e.g.…”
Section: Introduction
confidence: 99%
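All of these screening procedures share one shape: compute a marginal utility for each predictor, rank the predictors by it, and keep only the top few. A sketch with the simplest such utility, the absolute marginal Pearson correlation; this is illustrative only, not one of the specific utilities cited above, and the data sizes and the threshold d = n/log(n) are assumed for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 1000                       # ultrahigh-dimensional setting: p >> n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[0, 1, 2]] = [3.0, -2.0, 2.5]     # sparse truth: 3 active predictors
y = X @ beta + rng.standard_normal(n)

# Marginal utility: absolute sample correlation of each column with y.
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (y - y.mean()) / y.std()
marginal_corr = np.abs(Xc.T @ yc) / n

# Keep the d highest-ranked predictors; d = n / log(n) is a common choice.
d = int(n / np.log(n))
keep = np.argsort(marginal_corr)[::-1][:d]
print(np.isin([0, 1, 2], keep).all())  # the active predictors survive screening
```

Swapping in a different marginal utility (a quantile utility, a martingale difference correlation, etc.) changes only the `marginal_corr` line; the rank-and-truncate step is the same across the procedures the passage surveys.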
“…[30] and [17]). [21] is an excellent review paper on feature screening procedures. The adaptive Lasso and the group Lasso are important variants of the Lasso.…”
Section: Introduction
confidence: 99%