2011
DOI: 10.1214/10-aos870

Sparse linear discriminant analysis by thresholding for high dimensional data

Abstract: In many social, economic, biological and medical studies, one objective is to classify a subject into one of several classes based on a set of variables observed from the subject. Because the probability distribution of the variables is usually unknown, the rule of classification is constructed using a training sample. The well-known linear discriminant analysis (LDA) works well for the situation where the number of variables used for classification is much smaller than the training sample size. Because of t…
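To make the setting of the abstract concrete, the following is a minimal sketch of the classical two-class LDA rule with equal priors and a common covariance matrix; the simulated data, dimensions and names are illustrative and not taken from the paper.

```python
# Minimal sketch of the classical two-class LDA rule (equal priors, common
# covariance) that the paper starts from; simulated data and dimensions are
# illustrative only. When p approaches or exceeds the sample size, the pooled
# sample covariance S becomes ill-conditioned or singular, which is the failure
# mode the sparse, thresholded variant is designed to avoid.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10                          # classical regime: n much larger than p
mu1, mu2 = np.zeros(p), np.full(p, 0.5)
X1 = rng.multivariate_normal(mu1, np.eye(p), size=n)
X2 = rng.multivariate_normal(mu2, np.eye(p), size=n)

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
S = ((X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)) / (2 * n - 2)

def lda_classify(x):
    """Assign class 1 iff (x - (m1 + m2)/2)' S^{-1} (m1 - m2) > 0."""
    w = np.linalg.solve(S, m1 - m2)
    return 1 if (x - (m1 + m2) / 2) @ w > 0 else 2

print(lda_classify(mu1), lda_classify(mu2))   # expected output: 1 2
```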

Cited by 208 publications (312 citation statements). References 12 publications.
“…This can again be explained in terms of the behavior of the eigenvalues of the pooled sample covariance matrix. Motivated by this, regularized versions of LDA in the high-dimensional context, for which the regularization is imposed either through sparse penalization or thresholding of the discriminant function, often in combination with sparse estimation of the precision matrix, have been proposed and analyzed by Cai and Liu (2011b), Clemmensen et al. (2011) and Shao et al. (2011). A key requirement for good performance of these procedures is the sparsity of the vector Σ⁻¹(μ₁ − μ₂), where Σ denotes the common population covariance matrix.…”
Section: Sparse Discriminant Analysis (mentioning)
confidence: 99%
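As a concrete illustration of the thresholding idea described in this excerpt, the sketch below hard-thresholds the sample mean difference and the off-diagonal entries of the pooled sample covariance before forming the discriminant direction. It is a hedged sketch of the general approach, not the exact estimator or tuning of Shao et al. (2011) or Cai and Liu (2011b); the threshold levels and the ridge term are illustrative choices.

```python
# Hedged sketch of a thresholded LDA direction: sparsify the mean difference and
# the off-diagonal covariance entries, then solve for Sigma^{-1}(mu1 - mu2).
# Thresholds and the small ridge term are illustrative, not the published tuning.
import numpy as np

def thresholded_lda_direction(X1, X2, t_mean=0.1, t_cov=0.1):
    n1, n2 = len(X1), len(X2)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

    d = m1 - m2
    d = np.where(np.abs(d) >= t_mean, d, 0.0)        # sparsify the mean difference

    S = ((X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)) / (n1 + n2 - 2)
    S_thr = np.where(np.abs(S) >= t_cov, S, 0.0)     # drop small off-diagonal entries
    np.fill_diagonal(S_thr, np.diag(S))              # keep the diagonal intact

    ridge = 1e-6 * np.eye(S.shape[0])                # numerical safeguard for large p
    return np.linalg.solve(S_thr + ridge, d)
```

In practice the thresholds would be tuned to the data; the point of the sketch is only that sparsifying the estimated mean difference and covariance yields a direction that depends on few variables, which is what the sparsity condition on Σ⁻¹(μ₁ − μ₂) quoted above makes possible.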
“…In recent years, many high-dimensional generalizations of linear discriminant analysis have been proposed (Tibshirani et al., 2002; Trendafilov and Jolliffe, 2007; Clemmensen et al., 2011; Donoho and Jin, 2008; Fan and Fan, 2008; Wu et al., 2008; Shao et al., 2011; Cai and Liu, 2011; Witten and Tibshirani, 2011; Mai et al., 2012; Fan et al., 2012). In the binary case, the discriminant direction is β = Σ⁻¹(μ₂ − μ₁).…”
Section: Introduction (mentioning)
confidence: 99%
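Expressed as code, the binary rule built from this direction is a single sign check against the midpoint of the two mean vectors; a hedged sketch assuming equal priors. With a sparse estimate of β, the rule depends only on the variables with nonzero coefficients, which is the variable-selection reading shared by the methods cited in this excerpt.

```python
# Sketch of the binary rule based on beta = Sigma^{-1}(mu2 - mu1) (equal priors):
# assign class 2 when x projects past the midpoint of the two means along beta.
import numpy as np

def classify(x, beta, mu1, mu2):
    return 2 if (x - (mu1 + mu2) / 2) @ beta > 0 else 1
```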
“…These two methods basically follow the diagonal LDA paradigm with an added variable selection component, where correlations among variables are completely ignored. Recently, more sophisticated sparse LDA methods have been proposed; see Trendafilov and Jolliffe [29], Wu et al. [34], Clemmensen et al. [6], Witten and Tibshirani [33], Mai et al. [20], Shao et al. [25], Cai and Liu [3] and Fan et al. [9]. In these papers, extensive empirical and theoretical results have been provided to demonstrate the competitive performance of sparse LDA for high-dimensional classification.…”
Section: Introduction (mentioning)
confidence: 99%
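For contrast with the correlation-aware proposals listed here, the following is a hedged sketch of the diagonal-LDA-with-variable-selection paradigm the excerpt describes (in the spirit of nearest-shrunken-centroids-type rules, not a reimplementation of any cited method): variables are ranked by two-sample t-statistics, the top k are kept, and classification uses only marginal variances, ignoring correlations. The value of k and the scoring rule are illustrative.

```python
# Hedged sketch of diagonal LDA with a variable-selection step; correlations
# between variables are ignored by construction. k and the t-statistic ranking
# are illustrative choices, not those of any specific cited method.
import numpy as np

def diagonal_lda_rule(X1, X2, k=50):
    n1, n2 = len(X1), len(X2)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    s2 = ((n1 - 1) * X1.var(axis=0, ddof=1) + (n2 - 1) * X2.var(axis=0, ddof=1)) / (n1 + n2 - 2)
    t = (m1 - m2) / np.sqrt(s2 * (1 / n1 + 1 / n2))  # marginal two-sample t-statistics
    keep = np.argsort(-np.abs(t))[:k]                # variable-selection step

    def classify(x):
        w = (m1[keep] - m2[keep]) / s2[keep]         # independence (diagonal) rule
        return 1 if (x[keep] - (m1[keep] + m2[keep]) / 2) @ w > 0 else 2

    return classify
```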