2023
DOI: 10.1007/978-981-19-6755-9_24
Metrics for Evaluating Classification Algorithms

Cited by 7 publications (2 citation statements)
References 17 publications
“…Accuracy indicates overall correctness, sensitivity measures the true positive rate, precision assesses positive prediction accuracy, and the F1-score balances precision and sensitivity through their harmonic mean [38]. These metrics are prevalent in classification performance evaluation [39][40][41][42], where higher values of AUC denote a model's superior discrimination ability between positive and negative classes. A two-tailed P-value of less than 0.05 indicated statistical significance.…”
Section: Other Statistical Methods
confidence: 99%
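The statement above lists the standard confusion-matrix metrics. As an illustration only (not code from the cited paper), a minimal sketch computing accuracy, sensitivity, precision, and F1-score from the four confusion-matrix counts; AUC is omitted here because it requires ranked prediction scores rather than counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute basic classification metrics from confusion-matrix counts.

    tp/fp/tn/fn: true positives, false positives, true negatives, false negatives.
    """
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total          # overall correctness
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    precision = tp / (tp + fp)            # positive prediction accuracy
    # F1 is the harmonic mean of precision and sensitivity
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, f1


# Example: 8 true positives, 2 false positives, 85 true negatives, 5 false negatives
acc, sens, prec, f1 = classification_metrics(8, 2, 85, 5)
```

Note that F1 simplifies to 2*TP / (2*TP + FP + FN), which is why it ignores true negatives entirely, unlike accuracy.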
“…The latter can also lead to overfitting (Figure 6), where a real-world test of an arguably perfect model may fail. Thus, the practical applications, potential biases, ethical implications, and general applicability of the model are more tangible criteria in performance evaluation (Duffull and Isbister, 2022; Muntean and Militaru, 2023).…”
Section: Machine Learning Model Representation
confidence: 99%