2009
DOI: 10.1007/978-3-642-03547-0_51

Enhancing the Performance of LibSVM Classifier by Kernel F-Score Feature Selection

Cited by 15 publications (6 citation statements)
References 22 publications
“…To tackle the first problem, we accelerate the search process by employing the F-score criterion [25,26] to rank the features and using the backward elimination search strategy. For the latter, all the features are first ranked in descending order of the importance of the features.…”
Section: SVM With Feature Selection
confidence: 99%
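The F-score criterion cited above (refs. [25,26] in the quoting paper) has a standard closed form: for each feature, the squared deviations of the positive- and negative-class means from the overall mean, divided by the sum of the within-class variances. A minimal NumPy sketch of scoring and ranking features this way — the function names and the toy binary setup are illustrative, not taken from the cited work:

```python
import numpy as np

def f_score(X, y):
    """Per-feature F-score for a binary problem.
    X: (n_samples, n_features); y: labels in {0, 1}.
    A higher score means the feature separates the classes better."""
    Xp, Xn = X[y == 1], X[y == 0]
    mean_all = X.mean(axis=0)
    mean_p, mean_n = Xp.mean(axis=0), Xn.mean(axis=0)
    # Between-class spread of the class means around the global mean
    num = (mean_p - mean_all) ** 2 + (mean_n - mean_all) ** 2
    # Within-class spread (sample variance, ddof=1, per class)
    den = Xp.var(axis=0, ddof=1) + Xn.var(axis=0, ddof=1)
    return num / den

def rank_features(X, y):
    """Feature indices in descending order of F-score importance,
    as used before backward elimination in the quoted statement."""
    return np.argsort(f_score(X, y))[::-1]
```

Backward elimination would then drop features from the tail of this ranking while monitoring validation accuracy.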
“…After cross validation and grid search to find the optimal parameters, the prediction was tested using a test feature vector of size 570 × 24. The average accuracy, computed by taking into account both false negatives and false positives as proposed in [46], was 88.2%. Table 3 summarizes the classification results for each individual volunteer.…”
Section: Results
confidence: 99%
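An "average accuracy that takes both false negatives and false positives into account" is commonly realized as balanced accuracy, the mean of sensitivity and specificity; whether reference [46] in the quoting paper uses exactly this definition is an assumption. A small sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity for binary labels in {0, 1}.
    Penalizes false negatives and false positives symmetrically,
    independent of class imbalance."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # fraction of positives caught
    specificity = tn / (tn + fp)  # fraction of negatives caught
    return (sensitivity + specificity) / 2
```

On a balanced test set this coincides with plain accuracy; on a skewed one it stops a majority-class classifier from scoring well.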
“…To identify the most important descriptors of the molecules for making these predictions, an independent feature selection analysis was required since the importance of each feature cannot be computed explicitly from a nonlinear SVM. 36 Several feature selection methods have previously been applied to problems to improve classifier performance, including the F-score statistical test 37 which identifies important features, and recursive feature elimination 36 which removes redundant features. We compared the accuracy of many classifiers built using only a single molecular descriptor 38 in order to find the most important feature.…”
Section: Machine Learning
confidence: 99%
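The "classifiers built using only a single molecular descriptor" comparison can be sketched by training a trivial classifier on each feature in isolation and recording its accuracy; the midpoint-threshold rule below is an illustrative stand-in for whatever classifier the citing authors actually used (their ref. 38):

```python
import numpy as np

def single_feature_accuracies(X, y):
    """Training accuracy of a one-feature threshold classifier per column.
    X: (n_samples, n_features); y: labels in {0, 1}.
    Each feature is thresholded at the midpoint of its class means;
    the feature with the highest accuracy is the 'most important' one."""
    accs = []
    for j in range(X.shape[1]):
        xj = X[:, j]
        mean_p, mean_n = xj[y == 1].mean(), xj[y == 0].mean()
        thr = (mean_p + mean_n) / 2
        # Orient the decision rule toward the larger class mean
        sign = 1.0 if mean_p >= mean_n else -1.0
        pred = (sign * (xj - thr) > 0).astype(int)
        accs.append((pred == y).mean())
    return np.array(accs)
```

Ranking features by these accuracies gives a wrapper-style importance ordering, complementary to the filter-style F-score ranking mentioned earlier in the same statement.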