2018
DOI: 10.1177/1758573218791813
The kappa paradox

Cited by 5 publications (4 citation statements); References 7 publications
“…On the other hand, several authors defend Kappa as a useful measure of agreement when its limitations are taken into account. For example, the authors of [32] defend the use of Kappa in a previous study and caution that it is useful only if marginal distributions are considered. A similar conclusion is reached in [33], which argues that although Kappa is unsuitable in certain circumstances, it is still better than the raw proportion of agreement.…”
Section: Introduction
confidence: 99%
“…To address some of these issues, we have seen the rise of Cohen's Kappa and the Matthews Correlation Coefficient (MCC), which try to account for a wider range of factors when gauging classifier performance. Although the use of Kappa has been debated both inside ML domains (Delgado & Tibau, 2019) and outside them (Bexkens et al, 2018), MCC has received a much more positive reception (Chicco & Jurman, 2020).…”
Section: The Limitations Of ML Metrics In Financial Crime Analytics
confidence: 79%
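For context on the MCC mentioned above, here is a minimal sketch of its standard formula for a binary confusion matrix. The example counts are hypothetical and not taken from any of the cited papers.

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews Correlation Coefficient for a binary confusion matrix.

    Ranges from -1 (total disagreement) through 0 (chance-level)
    to +1 (perfect prediction).
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # If any marginal is zero the denominator vanishes; 0 is the
    # conventional return value in that degenerate case.
    return num / den if den else 0.0

# A perfect classifier scores 1.0; a chance-level one scores 0.0.
print(mcc(50, 0, 0, 50))    # 1.0
print(mcc(25, 25, 25, 25))  # 0.0
```

Unlike raw accuracy, MCC stays near zero for a classifier that merely exploits class imbalance, which is one reason it is received more favorably in the literature cited above.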
“…While these observations seem counter-intuitive, the phenomenon has been explained as the kappa paradox, which results from imbalanced marginal distributions of the data [25,26]. However, as explained by Bexkens et al [27], this is not a limitation of the kappa coefficient but rather a logical consequence of its purpose: to interpret agreement correctly after adjusting for agreement expected by chance alone. We therefore opted to report both concordances and kappa statistics, while also showing all underlying numbers to allow detailed insight into the data.…”
Section: Discussion
confidence: 99%
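The paradox described above can be reproduced numerically. The sketch below (illustrative counts, not data from the cited studies) computes Cohen's kappa for two hypothetical 2x2 rater-agreement tables with identical 85% raw agreement: one with balanced marginals, one where a single category dominates.

```python
def cohen_kappa(table):
    """Cohen's kappa for a square inter-rater agreement table (list of lists)."""
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(len(table))) / n  # observed agreement
    row_m = [sum(row) / n for row in table]                # rater A marginals
    col_m = [sum(col) / n for col in zip(*table)]          # rater B marginals
    p_e = sum(r * c for r, c in zip(row_m, col_m))         # chance agreement
    return (p_o - p_e) / (1 - p_e)

balanced   = [[45, 5], [10, 40]]  # both categories common
imbalanced = [[80, 10], [5, 5]]   # one category dominates

# Both tables show 85/100 = 85% raw agreement, yet kappa differs sharply:
print(round(cohen_kappa(balanced), 3))    # 0.7
print(round(cohen_kappa(imbalanced), 3))  # 0.318
```

The drop from 0.70 to 0.32 at identical raw agreement is the kappa paradox: with imbalanced marginals, the expected chance agreement p_e is much higher, so the same observed agreement earns less credit once chance is factored out, exactly the behavior the quoted passage calls a logical consequence of kappa's purpose.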