2012
DOI: 10.1111/j.1467-9868.2012.01029.x
A Road to Classification in High Dimensional Space: The Regularized Optimal Affine Discriminant

Abstract: For high dimensional classification, it is well known that naively performing the Fisher discriminant rule leads to poor results due to diverging spectra and accumulation of noise. Therefore, researchers proposed independence rules to circumvent the diverging spectra, and sparse independence rules to mitigate the issue of accumulation of noise. However, in biological applications, often a group of correlated genes are responsible for clinical outcomes, and the use of the covariance information can si…
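The "diverging spectra" problem mentioned in the abstract can be made concrete with a small sketch (illustrative only, not taken from the paper): when the dimension p exceeds the sample size n, the sample covariance matrix is singular, so the naive Fisher rule, which requires its inverse, is ill-defined.

```python
import numpy as np

# Sketch: with n samples in p > n dimensions, the sample covariance
# has rank at most n - 1 and cannot be inverted by the Fisher rule.
rng = np.random.default_rng(0)
p, n = 100, 20
X = rng.standard_normal((n, p))      # n observations, p features
S = np.cov(X, rowvar=False)          # p x p sample covariance
rank = np.linalg.matrix_rank(S)
print(rank, p)                       # rank is at most n - 1, far below p
assert rank < p                      # S is singular
```

This is why independence rules (which keep only the diagonal of the covariance) and regularized rules such as ROAD were proposed.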

Cited by 151 publications (168 citation statements). References 39 publications.
“…Targeting cancer classification and other modern applications, a great number of high dimensional classification techniques have been invented and studied recently; see Hastie et al. (2009) for an extensive introduction, and Witten & Tibshirani (2011); Cai & Liu (2011); Fan et al. (2012); Mai et al. (2012) for some recent developments. Although these contemporary classification tools can be applied to high dimensional data, most of them rely on some strong assumptions.…”
Section: Introduction (mentioning)
confidence: 99%
“…Fan et al [13] argued that ignoring the covariances (the off-diagonal entries in W) as suggested by Bickel and Levina [2] may not be a good idea. In order to avoid redefining Fisher's LDA for singular W, Fan et al [13] proposed working with (minimizing) the classification error instead of the Fisher's LDA ratio (1).…”
Section: Sparse LDA Based on Minimization of the Classification Error (mentioning)
confidence: 99%
“…In order to avoid redefining Fisher's LDA for singular W, Fan et al [13] proposed working with (minimizing) the classification error instead of the Fisher's LDA ratio (1). The method is called for short ROAD (from Regularized Optimal Affine Discriminant) and is developed for two groups.…”
Section: Sparse LDA Based on Minimization of the Classification Error (mentioning)
confidence: 99%
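The statements above describe ROAD as minimizing a classification-error criterion of the form w'Σw subject to a linear constraint on the mean difference, rather than the Fisher ratio. As a hedged sketch (not the authors' algorithm, which additionally imposes an l1 constraint for sparsity), the ridge-regularized version of that constrained quadratic problem has a closed form:

```python
import numpy as np

def regularized_discriminant(Sigma, delta, ridge=1e-2):
    """Sketch only: minimize w' Sigma w subject to w' delta = 1,
    with Sigma + ridge*I standing in for the possibly singular Sigma.
    By Lagrange multipliers, w is proportional to (Sigma + ridge*I)^{-1} delta,
    rescaled so the constraint w' delta = 1 holds. The l1 part of ROAD,
    which this sketch omits, would need an iterative solver."""
    p = Sigma.shape[0]
    M = Sigma + ridge * np.eye(p)
    w = np.linalg.solve(M, delta)
    return w / (delta @ w)

# Toy two-class data (hypothetical, for illustration)
rng = np.random.default_rng(1)
p, n = 50, 30
X1 = rng.standard_normal((n, p))
X2 = rng.standard_normal((n, p)) + 0.5
Sigma = np.cov(np.vstack([X1 - X1.mean(0), X2 - X2.mean(0)]), rowvar=False)
delta = X2.mean(0) - X1.mean(0)
w = regularized_discriminant(Sigma, delta)
print(abs(w @ delta - 1.0) < 1e-8)   # constraint holds
```

The ridge term plays the role of the regularization that makes the discriminant well-defined when the pooled covariance is singular; ROAD itself achieves sparsity through the l1 constraint instead.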
“…In high dimensional data sets, the number of variables is very large whereas, due to excessive costs, the sample size is typically small (1). Several studies have referred to challenges occurring in high dimensional settings as the "curse of dimensionality" (2).…”
Section: Introduction (mentioning)
confidence: 99%