2005
DOI: 10.2495/data050031

On extending F-measure and G-mean metrics to multi-class problems

Abstract: The evaluation of classifiers is not an easy task. There are various ways of testing them and measures to estimate their performance. The great majority of these measures were defined for two-class problems, and there is no consensus about how to generalize them to multi-class problems. This paper proposes extending the F-measure and G-mean in the same fashion as carried out with the AUC. Some datasets with diverse characteristics are used to generate fuzzy classifiers and C4.5 trees. The most common e…
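The abstract does not spell out the exact averaging scheme, but a common way to extend two-class metrics in the spirit of the multi-class AUC is one-vs-rest averaging over the classes. The sketch below is a minimal illustration under that assumption; the function names and the choice of macro averaging are illustrative, not the authors' exact formulation.

```python
import numpy as np

def per_class_counts(y_true, y_pred, cls):
    """One-vs-rest TP/FP/FN/TN counts for a single class label."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    tn = np.sum((y_pred != cls) & (y_true != cls))
    return tp, fp, fn, tn

def macro_f_measure(y_true, y_pred):
    """Mean of per-class F-measures (one-vs-rest); an assumed multi-class extension."""
    scores = []
    for cls in np.unique(y_true):
        tp, fp, fn, _ = per_class_counts(y_true, y_pred, cls)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        scores.append(f1)
    return float(np.mean(scores))

def multiclass_g_mean(y_true, y_pred):
    """Geometric mean of per-class recalls; an assumed multi-class extension of the G-mean."""
    recalls = []
    for cls in np.unique(y_true):
        tp, _, fn, _ = per_class_counts(y_true, y_pred, cls)
        recalls.append(tp / (tp + fn) if (tp + fn) else 0.0)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Toy 3-class example
y_true = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2])
print(macro_f_measure(y_true, y_pred), multiclass_g_mean(y_true, y_pred))
```

Because the G-mean is a geometric mean of per-class recalls, it drops to zero whenever any single class is never predicted correctly, which is why it is often preferred for imbalanced multi-class data.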

Cited by 52 publications (11 citation statements)
References 7 publications
“…With just 30 features, the classification accuracy of the SVM classifier was more than 94% of the accuracy obtained with the full feature set. The achieved accuracy outperforms that of the original study, which used the same dataset [60] and reported 91.7%.…”
Section: Results (mentioning)
confidence: 65%
“…The first experiment proves the superiority of ROA against three other recent optimization algorithms (AOA, GWO, and WOA) by applying data reduction to 12 different datasets obtained from the KEEL repository [26]. In the second experiment, recently published papers [27], [28] were compared with our proposed algorithm. The last experiment was conducted on three student performance prediction datasets as a real-life application.…”
Section: Methods (mentioning)
confidence: 89%
“…In predictive analytics, a confusion matrix is a two-dimensional matrix with two rows and two columns that reports the numbers of true positives (TP), false negatives (FN), false positives (FP), and true negatives (TN) [46,47,48,49,50]. In Table 1, the columns represent the actual classes while the rows represent the predicted classes [51,52,53].…”
Section: Results and Evaluation (mentioning)
confidence: 99%
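As a small, hypothetical illustration of the layout described in that excerpt (columns as actual classes, rows as predicted classes), the sketch below fills a 2×2 matrix from toy labels and reads off TP, FN, FP, and TN. The label coding (1 = positive, 0 = negative) and the example data are assumptions for illustration only.

```python
import numpy as np

# Toy binary labels; 1 = positive, 0 = negative (assumed coding).
actual    = np.array([1, 1, 1, 0, 0, 0, 1, 0])
predicted = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# Rows index the predicted class, columns the actual class,
# matching the convention described in the quoted passage.
cm = np.zeros((2, 2), dtype=int)
for a, p in zip(actual, predicted):
    cm[p, a] += 1

tp = cm[1, 1]  # predicted positive, actually positive
fn = cm[0, 1]  # predicted negative, actually positive
fp = cm[1, 0]  # predicted positive, actually negative
tn = cm[0, 0]  # predicted negative, actually negative
print(cm)
print(tp, fn, fp, tn)
```

Note that some libraries use the transposed convention (rows as actual classes), so it is worth checking which axis holds the ground truth before reading off TP/FN/FP/TN.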