2011 IEEE International Conference on Bioinformatics and Biomedicine
DOI: 10.1109/bibm.2011.84
Stability Analysis of Feature Ranking Techniques on Biological Datasets

Cited by 18 publications (10 citation statements)
References 10 publications
“…We feel this consideration is important, as feature selection stability can affect the performance of the overall inductive model-building process. According to previous research [9], we see that IG has average to below-average stability; S2N is above average in terms of stability; and ROC is one of the most stable feature selection techniques. When using feature selection, a final feature subset size must be chosen.…”
Section: B. Feature Selection Techniques
confidence: 88%
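The stability this statement refers to is typically measured by comparing the feature subsets a ranker selects across perturbed versions of the same data. The excerpt does not spell out which metric the cited study [9] uses, so the sketch below uses the Kuncheva consistency index between two top-k subsets purely as an illustrative assumption; the class name, method name, and toy indices are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: Kuncheva consistency index between two top-k feature
// subsets, one common way to quantify the "stability" discussed above.
// This is not necessarily the exact metric used in the cited study.
public class RankingStability {

    /**
     * @param subsetA indices of the top-k features from one resample
     * @param subsetB indices of the top-k features from another resample
     * @param n       total number of features in the dataset (assumes 0 < k < n)
     * @return consistency in [-1, 1]; higher means a more stable ranking
     */
    public static double kunchevaIndex(int[] subsetA, int[] subsetB, int n) {
        int k = subsetA.length;               // both subsets are assumed to have size k
        Set<Integer> a = new HashSet<>();
        for (int f : subsetA) a.add(f);
        int r = 0;                            // size of the intersection
        for (int f : subsetB) if (a.contains(f)) r++;
        // (r*n - k*k) corrects for the overlap expected by chance alone
        return (r * (double) n - (double) k * k) / (k * (double) (n - k));
    }

    public static void main(String[] args) {
        int[] runA = {0, 3, 5, 7, 9};
        int[] runB = {0, 2, 5, 7, 8};
        System.out.println(kunchevaIndex(runA, runB, 100)); // ≈ 0.58 for this toy example
    }
}
```

Averaging this index over all pairs of resampling runs gives one stability score per ranker, which is one common way comparisons such as "ROC is one of the most stable" are made.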
“…We believe it is important to observe how the choice between RUSBoost combined with external feature selection and SelectRUSBoost affects techniques with varying degrees of stability. According to previous research [7], we see that IG has average to below-average stability; ROC is one of the most stable feature selection techniques; and S2N is above average in terms of stability.…”
Section: B. Feature Selection Techniques
confidence: 86%
“…All of these learners are available in the Weka machine learning toolkit [9]. Due to space considerations, we cannot elaborate further on each dataset; refer to the work of Dittman et al. [10] for more information on the datasets in Table I.…”
Section: Methods
confidence: 99%
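Since the citing work notes that its learners come from the Weka toolkit [9], a minimal Weka sketch may help make the workflow concrete. It loads an ARFF file and ranks features by Information Gain (IG), one of the rankers discussed above; the file name genes.arff and the top-50 cutoff are placeholders, not values taken from the paper.

```java
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.InfoGainAttributeEval;
import weka.attributeSelection.Ranker;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Minimal Weka sketch: load a dataset and rank its features by Information Gain.
public class WekaIGRanking {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("genes.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);   // class label is the last attribute

        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new InfoGainAttributeEval());
        Ranker ranker = new Ranker();
        ranker.setNumToSelect(50);                      // keep the 50 top-ranked features (placeholder cutoff)
        selector.setSearch(ranker);
        selector.SelectAttributes(data);

        // Print the names of the selected attributes (Weka appends the class index last)
        for (int idx : selector.selectedAttributes()) {
            System.out.println(data.attribute(idx).name());
        }
    }
}
```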
“…Based on the training data, a logistic regression model is created, which is then used to decide the class membership of future instances [10].…”
Section: Classifiers
confidence: 99%
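To make the quoted description concrete, the following hedged sketch builds a logistic regression model on training data and uses it to assign class labels to unseen instances, via Weka's Logistic learner (the same toolkit mentioned above). The ARFF file names are placeholders and not taken from the paper.

```java
import weka.classifiers.functions.Logistic;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Sketch of the logistic regression workflow described above, using Weka's Logistic learner.
public class LogisticExample {
    public static void main(String[] args) throws Exception {
        Instances train = new DataSource("train.arff").getDataSet();
        Instances test  = new DataSource("test.arff").getDataSet();
        train.setClassIndex(train.numAttributes() - 1);
        test.setClassIndex(test.numAttributes() - 1);

        Logistic model = new Logistic();
        model.buildClassifier(train);                 // fit the model on the training data

        // Decide the class membership of each "future" (test) instance
        for (int i = 0; i < test.numInstances(); i++) {
            double predicted = model.classifyInstance(test.instance(i));
            System.out.println(test.classAttribute().value((int) predicted));
        }
    }
}
```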