2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2012.6288279

Feature selection for composite hypothesis testing with small samples: Fundamental limits and algorithms

Abstract: This paper considers the problem of feature selection for composite hypothesis testing: The goal is to select, from m candidate features, r relevant ones for distinguishing the null hypothesis from the composite alternative hypothesis; the training data are given as L sequences of observations, of which each is an n-sample sequence coming from one distribution in the alternative hypothesis. What is the fundamental limit for successful feature selection? Are there any algorithms that achieve this limit? We inve…

Cited by 3 publications (1 citation statement)

References 7 publications (8 reference statements)
“…Relevance is calculated using measures such as correlation [8], [9], single classification power [10], hypothesis testing [11], [12], and several information theory measures [13], [14]. Although assessing each feature independently assumes decreased computational burden, filtering methods ignore the effects of the different combinations of features.…”

Section: Introduction

confidence: 99%