1993
DOI: 10.2307/2532564
Interval Estimation under Two Study Designs for Kappa with Binary Classifications

Abstract: Cornfield's test-based method of setting a confidence interval on a parameter associated with a two-by-two contingency table is adapted for use with the measure of agreement kappa. One-sided confidence intervals derived in this way are compared to other intervals proposed for kappa under two study designs. Both designs involve two ratings per subject on a dichotomous scale. In one design the same two raters make all evaluations; in the other, possibly different pairs of raters evaluate different subjects, or t…
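The abstract concerns interval estimation for kappa with two binary ratings per subject. As a rough illustration only, the sketch below computes Cohen's kappa for the same-two-raters design with a simple large-sample Wald interval; this is not the Cornfield test-based interval the paper adapts, and the function name and standard-error approximation are our own choices.

```python
import math

def cohen_kappa_ci(a, b, z=1.96):
    """Cohen's kappa for two binary raters on the same subjects,
    with an approximate large-sample confidence interval.

    a, b: equal-length sequences of 0/1 ratings by rater A and rater B.
    Uses se = sqrt(po*(1-po)/n) / (1-pe), a simple approximation,
    not the test-based interval of Hale and Fleiss (1993).
    """
    n = len(a)
    # observed proportion of agreement
    po = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement under independence of the two raters' margins
    pa1 = sum(a) / n
    pb1 = sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    kappa = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / n) / (1 - pe)
    return kappa, (kappa - z * se, kappa + z * se)
```

With perfect agreement the interval collapses to the point kappa = 1; with agreement no better than chance, kappa is 0 and the interval is wide for small n.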

Cited by 74 publications (53 citation statements)
References 15 publications
“…Fifty-five percent of case respondents were spouses, as were 47% of control respondents. There was no difference between spouse and other respondents with respect to distributions of the quality of the [Hale and Fleiss, 1993]. b 95% CI by the logit method with 1/2 correction [Gart and Zweifel, 1962; Cox, 1970].…”
Section: Results
mentioning confidence: 99%
“…The Mantel and Haenszel [1959] procedure was used to adjust odds ratios for type of next-of-kin (spouse and other) interviewed and for quality of interview (highly reliable, generally reliable, and questionable/unreliable). To measure agreement among exposure assessment methods, separately for cases and controls, we computed pair-wise kappa statistics [Cohen, 1960; Fleiss, 1981] and corresponding 95% confidence intervals [Hale and Fleiss, 1993], and a three-way kappa measure of assessment [Shrout and Fleiss, 1979]. To assess agreement, we also computed marginal odds ratios among the exposure measurements as well as pair-wise conditional odds ratios, holding the third exposure constant.…”
Section: Methods
mentioning confidence: 99%
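The Mantel–Haenszel procedure quoted above pools the odds ratio across strata (here, next-of-kin type and interview quality). A minimal sketch of the pooled estimator, assuming each stratum is given as a 2x2 table of counts (a, b, c, d); the function name is ours:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio.

    tables: iterable of (a, b, c, d) cell counts, one 2x2 table per
    stratum, laid out as [[a, b], [c, d]].
    OR_MH = sum_k(a_k*d_k/n_k) / sum_k(b_k*c_k/n_k)
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

With a single stratum this reduces to the ordinary odds ratio ad/bc; with several strata it downweights small strata through the 1/n_k factor.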
“…Kappa values were assessed as: κ ≤ 0.20, minor agreement; κ = 0.21–0.40, fair agreement; κ = 0.41–0.60, moderate agreement; κ = 0.61–0.80, high agreement; and κ = 0.81–1.00, almost perfect agreement [9,10]. Discrepancies regarding these evaluations between the two reviewers were resolved by consensus.…”
Section: Discussion
mentioning confidence: 99%
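The banded interpretation quoted above can be written as a small lookup; the band labels follow the quotation, while the function name is our own:

```python
def kappa_band(kappa):
    """Map a kappa value to the agreement label in the quoted scale."""
    if kappa <= 0.20:
        return "minor"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "high"
    return "almost perfect"
```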
“…The reliability of ISH and Consensus PCR was determined using Fleiss' intra-class correlation coefficient (ICCC) [47,48]. The kappa coefficients were divided into categories as described by Landis and Koch [49].…”
Section: Reliability
mentioning confidence: 99%