1990
DOI: 10.1016/0895-4356(90)90158-l

High agreement but low Kappa: I. the problems of two paradoxes

Cited by 2,396 publications (1,618 citation statements)
References 9 publications
“…and "How often after hospice became involved did you feel you were on your own to advocate for the patient? "-all low kappas (≤0.40) could be explained by the skewed marginal effects, as described by Feinstein and Cicchetti (16,17), and explained above, in which a low kappa results despite high percent agreement in the data. We report the p-positive and p-negative values for items with skewed marginal effects in Table 2.…”
Section: Results (mentioning)
confidence: 74%
“…As recommended by Landis and Koch (15), we considered a kappa of less than 0.40 to indicate poor agreement. Feinstein and Cicchetti noted an important paradox of high agreement but low kappa as a result of imbalances in marginal totals (16,17). This paradox is explained by the fact that kappa statistics are dependent on the prevalence of the finding under observation.…”
Section: Analytic Approach (mentioning)
confidence: 99%
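The arithmetic behind this paradox is easy to reproduce. The sketch below (Python, with illustrative counts not taken from any of the cited studies) builds a 2x2 agreement table whose marginal totals are heavily skewed toward 'yes': raw agreement is 85%, yet Cohen's kappa falls below the 0.40 Landis-Koch cutoff because the agreement expected by chance is already high.

```python
def kappa_2x2(a, b, c, d):
    """Cohen's kappa for a 2x2 table: a = both raters say 'yes',
    d = both say 'no', b and c = the two kinds of disagreement."""
    n = a + b + c + d
    p_o = (a + d) / n                       # observed agreement
    p_e = ((a + b) / n) * ((a + c) / n) \
        + ((c + d) / n) * ((b + d) / n)     # agreement expected by chance
    return p_o, (p_o - p_e) / (1 - p_e)

# Skewed marginals: 90 vs. 85 'yes' calls out of 100 subjects.
p_o, kappa = kappa_2x2(a=80, b=10, c=5, d=5)
print(f"agreement = {p_o:.0%}, kappa = {kappa:.2f}")  # agreement = 85%, kappa = 0.32
```

With near-balanced marginals the same 85% agreement yields a much larger kappa (e.g. a=45, b=10, c=5, d=40 gives kappa = 0.70), which is exactly the prevalence dependence the excerpt above describes.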
“…It is known that when the table's marginal totals are substantially unbalanced, kappa is low in spite of high agreement between raters (24). The recommendation has been made to either report proportionate agreement with respect to a positive ('Yes') decision and with respect to a negative ('No') decision (25), or kappa and percentage agreement (26). As is clear from Table 4, except for the decision whether the paper deals with human subjects, the per cent agreement was also low; therefore, kappa was used in Table 5.…”
Section: Discussion (mentioning)
confidence: 99%
“…The proportion of observed agreement (p_O) and Cohen's kappa (κ) were calculated as measures of concurrent validity. Since p_O and κ give no insight into the agreement on the positive and the negative answers, and because the κ statistic is considered unstable, as it is strongly influenced by the observed proportions of individuals who fall in each category of classification (Speklé et al. 2009; Perreault et al. 2008; Juul-Kristensen et al. 2006; Salerno et al. 2000; Feinstein and Cicchetti 1990), p_positive (p_pos) and p_negative (p_neg) were also calculated as extra means of assessing the agreement (Speklé et al. 2009; Feinstein and Cicchetti 1990; Cicchetti and Feinstein 1990). According to Cicchetti and Feinstein (1990), the observed proportion of positive agreement (p_pos) can be calculated as the ratio of the actual number of subjects that the questionnaire and the occupational physician agree on having symptoms over the average number of subjects with symptoms that were identified by the questionnaire and the occupational physician ((cases questionnaire …”
Section: Statistical Analyses (mentioning)
confidence: 99%
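Written out, the ratio this excerpt describes is p_pos = 2a / ((a + b) + (a + c)): the agreed positives divided by the average number of positives each rater identified, and symmetrically p_neg = 2d / ((c + d) + (b + d)). A minimal sketch (reusing the illustrative table from above, not data from the cited study) shows how these two indices expose the asymmetry that a single kappa hides:

```python
def specific_agreement(a, b, c, d):
    """p_pos and p_neg (Cicchetti & Feinstein 1990): agreement on
    'yes' and on 'no', each relative to the average number of
    'yes' (respectively 'no') calls made by the two raters."""
    p_pos = 2 * a / (2 * a + b + c)
    p_neg = 2 * d / (2 * d + b + c)
    return p_pos, p_neg

p_pos, p_neg = specific_agreement(a=80, b=10, c=5, d=5)
print(f"p_pos = {p_pos:.2f}, p_neg = {p_neg:.2f}")  # p_pos = 0.91, p_neg = 0.40
```

Here the raters agree well on positives but poorly on negatives, which is the kind of marginal imbalance the low kappa was signaling.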