2009
DOI: 10.1016/j.ipm.2009.03.002
A systematic analysis of performance measures for classification tasks


Cited by 4,273 publications (2,371 citation statements)
References 17 publications
“…However, these values hide disparities between classes. The F-score, e.g., a synthetic accuracy metric, is used here to compare the classification performance at the class level [59].…”
Section: Land-use Classification Using Random Forest (mentioning, confidence: 99%)
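As the excerpt above notes, aggregate scores can hide disparities between classes, which is why a per-class F-score is useful. The sketch below is not taken from the citing study; the class names and labels are invented purely to show how a 90% overall accuracy can coexist with a much lower F-score for a minority class.

```python
# Hedged sketch: per-class F-scores for a toy land-use-style labelling task.
# Class names and label vectors are made up for illustration only.

def per_class_f1(y_true, y_pred, classes):
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

y_true = ["forest"] * 8 + ["urban"] * 2
y_pred = ["forest"] * 9 + ["urban"] * 1
overall_acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(overall_acc)                                        # 0.9 overall
print(per_class_f1(y_true, y_pred, ["forest", "urban"]))  # urban F1 ≈ 0.67
```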
“…Basically, the set of traces is classified by the model (to determine whether it accepts them as positive or rejects them as negative), and from this we end up with four numbers: the number of True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). From these numbers it is possible to calculate various 'Binary Classification Measures' [41].…”
Section: Measuring Accuracy (mentioning, confidence: 99%)
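To make the excerpt concrete, here is a hedged sketch that tallies the four counts (TP, TN, FP, FN) for a binary classifier and derives a few of the binary classification measures catalogued by Sokolova & Lapalme (2009). The helper name binary_measures and the toy label vectors are illustrative, not taken from the citing paper.

```python
# Hedged sketch: tally TP/TN/FP/FN for a binary classifier (labels 1 = positive,
# 0 = negative) and derive common binary classification measures from them.

def binary_measures(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return dict(tp=tp, tn=tn, fp=fp, fn=fn, accuracy=accuracy,
                precision=precision, recall=recall,
                specificity=specificity, f1=f1)

print(binary_measures([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```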
“…Classification accuracy (AC) (Sokolova & Lapalme, 2009): percentage of correctly classified samples (or sample sequences for the HMM classifier).…”
Section: Model Validation (mentioning, confidence: 99%)
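A minimal sketch of the accuracy definition quoted above (correctly classified samples expressed as a percentage); the example labels are placeholders rather than data from the citing study.

```python
# Hedged sketch: classification accuracy as the percentage of correctly
# classified samples. Label values below are placeholders.

def accuracy(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)

print(accuracy([0, 1, 1, 2, 2], [0, 1, 2, 2, 2]))  # 80.0
```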
“…2. F1-score (FS) (Sokolova & Lapalme, 2009): a measure of how well the classifier was able to distinguish between classes given an unbalanced dataset.…”
Section: Model Validation (mentioning, confidence: 99%)
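The excerpt ties the F1-score to unbalanced data. The hedged example below, using scikit-learn's accuracy_score and f1_score on an invented dataset with 10% positives, shows accuracy staying high while the minority-class F1 drops sharply.

```python
# Hedged illustration: on an imbalanced toy dataset (90% negatives), a model
# that finds only 2 of 10 positives still reaches 92% accuracy, but its F1 for
# the positive class is about 0.33. The labels are invented for illustration.
from sklearn.metrics import accuracy_score, f1_score

y_true = [1] * 10 + [0] * 90   # 10 positives, 90 negatives
y_pred = [1] * 2 + [0] * 98    # model recovers only 2 positives

print(accuracy_score(y_true, y_pred))         # 0.92
print(f1_score(y_true, y_pred, pos_label=1))  # ≈ 0.33
```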