The 2010 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2010.5596692
Feature selection using ROC curves on classification problems

Cited by 20 publications (15 citation statements); references 13 publications.
Citation statements: 0 supporting, 15 mentioning, 0 contrasting. Citing publications span 2011 to 2017.
“…To obtain the most relevant features for the respective classification, we have adopted iterative sequential maximisation of task performance (also called 'wrapper feature selection' [41]), in which the data is initially divided into k folds (in our case k = 5). The first feature selected is then the Gamma-normalised derivative feature with the maximum mean classification performance across the folds.…”
Section: Feature Extraction and Selection
Citation type: mentioning (confidence: 99%)
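The greedy wrapper procedure this statement describes can be sketched in a few lines. The following is a minimal illustration, not the cited authors' code: the classifier (logistic regression), the scikit-learn API, and all names are assumptions made for the example.

```python
# Sketch of greedy wrapper feature selection with k-fold
# cross-validation (k = 5), per the statement above. The classifier
# and data layout are illustrative assumptions, not the cited code.
# X: NumPy array of shape (n_samples, n_features); y: class labels.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def wrapper_forward_selection(X, y, n_features, k=5):
    selected = []                        # indices of chosen features
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < n_features:
        # Score each candidate by its mean classification performance
        # across the k folds when added to the current subset.
        def mean_cv_score(f):
            clf = LogisticRegression(max_iter=1000)
            return cross_val_score(clf, X[:, selected + [f]], y, cv=k).mean()
        best = max(remaining, key=mean_cv_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

On the first iteration, with `selected` empty, this reduces to exactly the quoted rule: the feature with the highest mean cross-fold performance is chosen first.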
“…From the available set of features, the feature with the highest Area Under the Curve (AUC) [24] is selected. The next feature is chosen in such a way that when it is used along with the first selected feature, it will give the highest AUC compared to other non-selected features.…”
Section: Feature Selection
Citation type: mentioning (confidence: 99%)
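A hedged sketch of the AUC-driven sequential forward selection described here; it differs from the wrapper sketch above only in scoring candidate subsets by cross-validated AUC rather than accuracy. Binary labels are assumed (AUC is defined for two classes), and the classifier is again a placeholder.

```python
# Sketch of sequential forward selection scored by AUC, per the
# statement above. Binary labels and the classifier are assumptions.
# X: NumPy array of shape (n_samples, n_features); y: binary labels.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def auc_forward_selection(X, y, n_features):
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_features:
        def cv_auc(f):
            clf = LogisticRegression(max_iter=1000)
            return cross_val_score(clf, X[:, selected + [f]], y,
                                   cv=5, scoring="roc_auc").mean()
        # Keep the candidate giving the highest AUC when combined
        # with the features already selected.
        best = max(remaining, key=cv_auc)
        selected.append(best)
        remaining.remove(best)
    return selected
```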
“…In the filter approach, the features are ranked with respect to their effectiveness in classification, and the higher-ranked features are retained by thresholding. In order to determine the most relevant features, an independent evaluation criterion for binary classification is used [18], with AUC selected as the evaluation measure [24]. The features with higher AUC are ranked higher, and the features with AUC greater than 0.9 are selected.…”
Section: Feature Selection
Citation type: mentioning (confidence: 99%)
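The filter variant scores each feature independently and keeps those whose AUC exceeds 0.9. A minimal sketch, assuming the raw feature values can serve directly as decision scores, so no classifier needs to be fitted:

```python
# Sketch of the AUC filter described above: score each feature by its
# individual AUC and keep those above the 0.9 threshold in the quote.
# X: NumPy array of shape (n_samples, n_features); y: binary labels.
from sklearn.metrics import roc_auc_score

def auc_filter(X, y, threshold=0.9):
    kept = []
    for f in range(X.shape[1]):
        auc = roc_auc_score(y, X[:, f])
        # A feature that ranks the classes in reverse order is equally
        # discriminative, so fold the AUC around 0.5.
        auc = max(auc, 1.0 - auc)
        if auc > threshold:
            kept.append((f, auc))
    # Higher-AUC features rank higher, matching the quoted ranking.
    return sorted(kept, key=lambda t: t[1], reverse=True)
```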
“…The features are calculated for the red and green channels (the blue channel is zero in SLO images), and the classification power of each was calculated using the Area Under the Curve (AUC) [11]. The AUC is estimated using 5-fold cross-validation on the training set.…”
Section: Feature Generation and Selection
Citation type: mentioning (confidence: 99%)