2019
DOI: 10.1371/journal.pone.0210264

Enhancing Confusion Entropy (CEN) for binary and multiclass classification

Abstract: Different performance measures are used to assess the behaviour, and to carry out the comparison, of classifiers in Machine Learning. Many measures have been defined in the literature, among them a measure inspired by Shannon’s entropy named the Confusion Entropy (CEN). In this work we introduce a new measure, MCEN, obtained by modifying CEN to avoid its unwanted behaviour in the binary case, which disqualifies it as a suitable performance measure in classification. We compare MCEN with CEN and other performance measur…
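As an illustration of the measure under discussion, below is a minimal Python sketch of the confusion-entropy computation, assuming the commonly cited definition (per-class misclassification probabilities normalised by the combined row and column mass of each class, with logarithms in base 2(N-1)); the exact normalisation, and the MCEN modification itself, should be checked against the paper.

```python
import numpy as np

def confusion_entropy(C):
    """Confusion Entropy (CEN) of an N x N confusion matrix C,
    where C[i, j] counts samples of true class i predicted as class j.
    Sketch of the commonly cited definition: per-class misclassification
    probabilities normalised by the combined row + column mass of the
    class, with logarithms taken in base 2(N-1)."""
    C = np.asarray(C, dtype=float)
    N = C.shape[0]
    base = 2 * (N - 1)                 # log base used by CEN
    total = 2 * C.sum()                # each cell counts towards two classes

    def plog(p):
        # Convention: 0 * log(0) = 0
        return 0.0 if p == 0 else p * np.log(p) / np.log(base)

    cen = 0.0
    for j in range(N):
        mass_j = C[j, :].sum() + C[:, j].sum()   # row + column of class j
        if mass_j == 0:
            continue
        weight_j = mass_j / total                # contribution of class j
        cen_j = 0.0
        for k in range(N):
            if k == j:
                continue
            cen_j -= plog(C[j, k] / mass_j)      # class j confused as k
            cen_j -= plog(C[k, j] / mass_j)      # class k confused as j
        cen += weight_j * cen_j
    return cen

# Binary example: confusion matrix [[TP, FN], [FP, TN]]
print(confusion_entropy([[40, 10], [5, 45]]))
```

For a binary matrix (N = 2) the logarithm base reduces to 2, which is the regime where the abstract reports CEN's unwanted behaviour and where MCEN is proposed as the remedy.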

Cited by 20 publications (8 citation statements)
References 14 publications (28 reference statements)
“…In the 2010s, several alternative novel measures have been proposed, either to tackle a particular issue such as imbalance [34, 77], or with a broader purpose. Among them, we mention the confusion entropy [78, 79], a statistical score comparable with MCC [80], and the K measure [81], a theoretically grounded measure that relies on a strong axiomatic base.…”
Section: Introduction
Mentioning confidence: 99%
“…The true positive rate (TPR), positive predictive value (PPV), true negative rate (TNR), and modified confusion entropy (MCEN) were also measured for these three FCNNs using 10 000 test images [39]. Figure 4b,c shows that the values of the TPR and PPV of BP and DGC-S were different from those of DGC-O for some labels due to convergence at different weight values. However, as shown in Figure 4d, all three methods exhibit similar TNRs.…”
Section: Trainability Validation
Mentioning confidence: 95%
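For reference, the per-class rates named in the statement above can be read off a confusion matrix with a one-vs-rest reduction; the following is a minimal Python sketch using standard definitions (the matrix and function name are illustrative, and MCEN itself is not reproduced here since its definition comes from the paper under discussion).

```python
import numpy as np

def per_class_rates(C):
    """Per-class TPR (sensitivity/recall), PPV (precision) and TNR
    (specificity) from an N x N confusion matrix C, with C[i, j]
    counting samples of true class i predicted as class j
    (one-vs-rest reduction for each class)."""
    C = np.asarray(C, dtype=float)
    total = C.sum()
    tp = np.diag(C)
    fn = C.sum(axis=1) - tp    # true class i, predicted as something else
    fp = C.sum(axis=0) - tp    # predicted as i, true class is something else
    tn = total - tp - fn - fp
    return {
        "TPR": tp / (tp + fn),
        "PPV": tp / (tp + fp),
        "TNR": tn / (tn + fp),
    }

# Example: a 3-class confusion matrix
print(per_class_rates([[50, 3, 2], [4, 45, 6], [1, 5, 49]]))
```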
“…There are papers focusing on properties of a particular measure, for instance, Cohen's Kappa [8,22], Confusion Entropy [7], or Balanced Accuracy [2]. Some papers go beyond the threshold measures considered in our paper.…”
Section: A Related Work
Mentioning confidence: 99%