1990
DOI: 10.1016/0895-4356(90)90159-m
High agreement but low kappa: II. Resolving the paradoxes

Cited by 1,511 publications (986 citation statements)
References 12 publications
“…and "How often after hospice became involved did you feel you were on your own to advocate for the patient?" - all low kappas (≤0.40) could be explained by the skewed marginal effects, as described by Feinstein and Cicchetti (16,17), and explained above, in which a low kappa results despite high percent agreement in the data. We report the p-positive and p-negative values for items with skewed marginal effects in Table 2.…”
Section: Results
confidence: 73%
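The p-positive and p-negative values mentioned in this excerpt are the remedy Cicchetti and Feinstein proposed: report the proportions of positive and negative agreement alongside kappa, so that skewed marginals are visible rather than hidden in a single coefficient. A minimal sketch of those quantities for a 2x2 agreement table follows; the counts are illustrative and not taken from any cited study.

```python
# Sketch of the Cicchetti-Feinstein remedy: report p-positive and
# p-negative alongside kappa for a 2x2 agreement table.
# Table layout: a = both raters "yes", b/c = disagreements, d = both "no".
# The counts below are hypothetical, chosen to show skewed marginals.

def agreement_stats(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                    # observed (percent) agreement
    p1 = (a + b) / n                    # rater 1 "yes" marginal
    p2 = (a + c) / n                    # rater 2 "yes" marginal
    pe = p1 * p2 + (1 - p1) * (1 - p2)  # chance-expected agreement
    kappa = (po - pe) / (1 - pe)
    ppos = 2 * a / (2 * a + b + c)      # proportion of positive agreement
    pneg = 2 * d / (2 * d + b + c)      # proportion of negative agreement
    return po, kappa, ppos, pneg

# Skewed marginals: 92% raw agreement, yet kappa falls below 0.40,
# and the low p-negative (0.33) shows where the disagreement lives.
po, kappa, ppos, pneg = agreement_stats(90, 4, 4, 2)
print(f"agreement={po:.2f} kappa={kappa:.2f} ppos={ppos:.2f} pneg={pneg:.2f}")
# agreement=0.92 kappa=0.29 ppos=0.96 pneg=0.33
```

Reporting ppos and pneg separately makes clear that the raters agree well on the common category and poorly on the rare one, which a single kappa conceals.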
“…As recommended by Landis and Koch (15), we considered a kappa of less than 0.40 to indicate poor agreement. Feinstein and Cicchetti noted an important paradox of high agreement but low kappa as a result of imbalances in marginal totals (16,17). This paradox is explained by the fact that kappa statistics are dependent on the prevalence of the finding under observation.…”
Section: Analytic Approach
confidence: 99%
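The prevalence dependence this excerpt describes can be shown numerically: two hypothetical 2x2 tables with identical 90% observed agreement yield very different kappas once the marginals become imbalanced. The counts below are illustrative, not data from any cited study.

```python
# Demonstration of the Feinstein-Cicchetti paradox: kappa depends on
# the prevalence of the trait even when observed agreement is fixed.
# Table layout: a = both raters "yes", b/c = disagreements, d = both "no".

def cohen_kappa(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                    # observed agreement
    p1, p2 = (a + b) / n, (a + c) / n   # each rater's "yes" marginal
    pe = p1 * p2 + (1 - p1) * (1 - p2)  # chance-expected agreement
    return (po - pe) / (1 - pe)

# Both tables have 90/100 agreements; only prevalence differs.
balanced = cohen_kappa(45, 5, 5, 45)  # ~50% prevalence
skewed = cohen_kappa(85, 5, 5, 5)     # ~90% prevalence
print(round(balanced, 2), round(skewed, 2))
# 0.8 0.44
```

With balanced marginals kappa is 0.80 (excellent by the Landis-Koch benchmarks), while the skewed table drops to 0.44 at the same 90% raw agreement, which is why kappas from studies with different prevalence are not directly comparable.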
“…Since k is influenced by prevalence of the characteristic being measured, agreement was measured by category-specific k and percent agreements to accommodate uncommon or low prevalent features. 5 In general, k statistics less than 0.4 are associated with relatively poor agreement, values of 0.4-0.6 moderate agreement, values of 0.6-0.8 substantial (good) agreement and values greater than 0.8 are associated with excellent (almost perfect) agreement. 6 In the second approach, the reproducibility (or accuracy) of the study pathologists' diagnoses was assessed relative to the reference standard.…”
Section: Methods
confidence: 94%
“…Through this relationship, kappa coefficients depend on the prevalence of the trait under study, which limits the possibility to compare them among studies with different prevalence. Several authors (Thompson and Walter, 1988; Feinstein and Cicchetti, 1990; Cicchetti and Feinstein, 1990; Byrt et al ., 1993; de Vet et al ., 2006) proposed the use of absolute agreement measures (e.g. the proportion of items classified in the same category by the two observers) to avoid that dependency.…”
Section: Introduction
confidence: 99%