1989
DOI: 10.2307/2531695

On Assessing Interrater Agreement for Multiple Attribute Responses

Abstract: New methods are developed for assessing the extent of interrater agreement when each unit to be rated is characterized by a (possibly empty) subset of a specified set of distinct nominal attributes. For such multiple attribute response data, a two-rater concordance statistic is derived, and associated statistical inference-making procedures are provided. This concordance statistic is corrected for chance agreement by using an underlying hypergeometric model. Numerical examples are given to illustrate the propo…
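The chance correction described in the abstract can be illustrated with a short sketch. For each rated unit, if one rater selects a attributes and the other selects b attributes from a pool of c possibilities, the overlap under independent random selection has hypergeometric expectation a*b/c; a kappa-style index then compares the pooled observed overlap with this expectation. The Python sketch below follows that construction with an illustrative function name and toy data; the exact pooling and normalization in Kupper and Hafner's published statistic may differ, so this is an approximation of the idea rather than the published formula.

def chance_corrected_agreement(rater1, rater2, n_attributes):
    # rater1, rater2: lists of sets, one set of attribute labels per rated unit
    # n_attributes:   size of the full attribute pool the raters choose from
    observed = expected = maximum = 0.0
    for a, b in zip(rater1, rater2):
        observed += len(a & b)                       # attributes selected by both raters
        expected += len(a) * len(b) / n_attributes   # hypergeometric mean overlap under chance
        maximum  += min(len(a), len(b))              # largest overlap possible for this unit
    # kappa-style correction: (observed - expected) / (maximum - expected)
    return (observed - expected) / (maximum - expected)

# Toy example: 3 units, 5 possible attributes labelled 0..4 (illustrative data)
r1 = [{0, 1}, {2}, {0, 3, 4}]
r2 = [{0, 2}, {2, 3}, {1, 4}]
print(chance_corrected_agreement(r1, r2, n_attributes=5))  # about 0.23

Units with empty attribute subsets simply contribute zero to all three running totals, which is consistent with the abstract's allowance for possibly empty responses.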

Cited by 37 publications (23 citation statements). References 23 publications.
“…Inter-rater reliability (IRR) was computed using the Kupper-Hafner statistic [32]. For the Choice question, we computed IRR for each pair of raters.…”
Section: Discussion (mentioning)
confidence: 99%
“…The inter-rater agreement was calculated over 3500 codes (of two independent raters) for 1512 segments from networks A and B, periods 1 and 2. The inter-rater correlation was calculated using the Kupper measure (Kupper & Hafner, 1989). The correlation was very high being 0.96 for all codes together.…”
Section: Discussion (mentioning)
confidence: 99%
“…Thus, it involved non-mutual exclusivity of codes; it also lacked a finite set of instances for potential agreement between raters. These characteristics ruled out assessment of inter-rater agreement using tools that rely on these discrete counts, such as the kappa measure of agreement [25][26][27].…”
Section: Content Analysis (mentioning)
confidence: 99%