2017 International Conference on Trends in Electronics and Informatics (ICEI)
DOI: 10.1109/icoei.2017.8300901

A comparative analysis of feature selection stability measures

Cited by 7 publications (11 citation statements). References 5 publications.
“…Depending on the output of the feature selection algorithm, stability measures for feature selection are categorized into three groups: stability by rank, stability by weight, and stability by similarity [12,13]. In the stability-by-rank approach, the stability of feature selection algorithms whose output is a ranked list of features is evaluated by the correlation between two ranked feature lists.…”
Section: Stability Measures for Feature Selection Algorithms
confidence: 99%
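The rank-based approach described above can be illustrated with a minimal sketch, assuming Spearman's rank correlation as the correlation between two ranked feature lists; the feature names and rankings below are hypothetical and not taken from the cited papers.

from scipy.stats import spearmanr

def rank_stability(ranking_a, ranking_b):
    # Spearman correlation between two rankings of the same feature set
    features = sorted(ranking_a)                 # common feature universe
    ranks_a = [ranking_a[f] for f in features]
    ranks_b = [ranking_b[f] for f in features]
    rho, _ = spearmanr(ranks_a, ranks_b)
    return rho

# Two hypothetical runs ranking the same five features (1 = most important)
run_1 = {"f1": 1, "f2": 2, "f3": 3, "f4": 4, "f5": 5}
run_2 = {"f1": 2, "f2": 1, "f3": 3, "f4": 5, "f5": 4}
print(rank_stability(run_1, run_2))  # 0.8 here; values near 1 indicate stable rankings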
“…The second evaluation approach concerns the stability of the ensemble feature selection itself. There are three categories of stability measures: stability by index/subset, stability by rank, and stability by weight [38,39]. A major drawback of stability by rank and by weight is that they do not allow stability to be calculated for two feature subsets that contain different numbers of features.…”
Section: Accuracy (Acc)
confidence: 99%
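The drawback noted above is what makes index/subset-based stability attractive: a set-overlap measure such as the Jaccard similarity can compare selected subsets even when their sizes differ. The sketch below is a minimal illustration under that assumption; the subsets are hypothetical, not results from the cited papers.

from itertools import combinations

def jaccard(a, b):
    # |A ∩ B| / |A ∪ B|; defined as 1 when both subsets are empty
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def subset_stability(subsets):
    # Average pairwise Jaccard similarity over all runs of the selector
    pairs = list(combinations(subsets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical selected subsets from three runs; note the differing sizes
runs = [{"f1", "f2", "f3"}, {"f1", "f2", "f4"}, {"f1", "f3"}]
print(subset_stability(runs))  # about 0.47 -> moderate subset stability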
“…Next, Pearson's correlation coefficient is generally used to measure the stability of feature weights in feature selection algorithms [9], but it captures only the general trend of importance and does not quantify how much an individual feature's weight may vary. We therefore define the stability-by-weight measure on the basis of the statistical notion of relative variance, and calculate the stability of feature weights (φ(W)) for a single process instance in an event log as follows:…”
Section: Explanation Stability
confidence: 99%
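The passage contrasts the commonly used Pearson-based measure with the authors' relative-variance measure φ(W), whose formula is not quoted here and is therefore not reproduced. The sketch below only illustrates the conventional Pearson-style weight stability it refers to, using hypothetical weight vectors; it is not the cited φ(W).

import numpy as np

def weight_stability(weights_a, weights_b):
    # Pearson correlation between two feature-weight vectors (trend of importance)
    return float(np.corrcoef(weights_a, weights_b)[0, 1])

# Hypothetical weight vectors from two runs over the same five features
w1 = np.array([0.40, 0.25, 0.20, 0.10, 0.05])
w2 = np.array([0.38, 0.27, 0.18, 0.12, 0.05])
print(weight_stability(w1, w2))  # near 1.0, even though individual weights shifted slightly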