A review of reliability procedures for measuring observer agreement
Towstopiat (1984)
DOI: 10.1016/0361-476x(84)90039-0

Cited by 8 publications (5 citation statements). References 33 publications.
“…The attractiveness of the present approach lies in the interpretability of the terms of expected and observed disagreement in terms of average multivariate distances between observations, and the simplicity with which the definition of agreement as a function of these terms enables the extension of kappa to the multivariate case. The objection to multivariate extensions of kappa, that they lack an interpretation (e.g., Towstopiat, 1984), no longer seems valid. Our approach, unlike most previous, addresses the case of multivariate nominal data.…”
Section: Discussion (mentioning)
Confidence: 99%
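For readers unfamiliar with the disagreement-based formulation this statement alludes to, one generic way to write that family of coefficients is sketched below; this is a schematic form, not the exact multivariate statistic proposed in the citing paper:

\[
\kappa \;=\; 1 - \frac{\bar{D}_{\mathrm{obs}}}{\bar{D}_{\mathrm{exp}}}
\]

where \(\bar{D}_{\mathrm{obs}}\) is the average (possibly multivariate) distance between the observers' codings of the same unit and \(\bar{D}_{\mathrm{exp}}\) is the average distance expected when codings are paired at random across units. Perfect agreement gives \(\kappa = 1\) and chance-level agreement gives \(\kappa = 0\).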
“…Just how high kappa needs to be is an open question, although Suen and Lee (1985) suggested 0.60 as a lenient criterion and 0.75 as more stringent. Towstopiat (1984), however, noting that the use of kappa is limited to situations when only two observers are involved, suggested the use of multivariate agreement models.…”
Section: Measuring Agreement Between Observers (mentioning)
Confidence: 97%
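For context, a minimal sketch of Cohen's kappa follows, showing why the statistic applies to exactly two observers: it corrects a single observed agreement rate by the chance agreement implied by one pair of marginal distributions. The cohens_kappa helper and the ratings are hypothetical, not taken from Towstopiat (1984) or Suen and Lee (1985).

```python
# Minimal sketch of Cohen's kappa for two observers rating the same items
# on a nominal scale; the data below are illustrative only.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between exactly two observers."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independence of the two observers' marginals.
    marg_a = Counter(ratings_a)
    marg_b = Counter(ratings_b)
    p_e = sum(marg_a[c] * marg_b[c] for c in marg_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

obs_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes"]
obs_b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]
print(round(cohens_kappa(obs_a, obs_b), 2))  # ~0.57, just below the lenient 0.60 criterion
```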
“…Cronbach, Gleser, Nanda, and Rajaratnam (1972) extended the components of variance approach to complex experimental designs in which variability among different raters is one factor influencing “generalizability.” However, none of those authors discuss the problem of separating differing individual rater reliability as such. More recent authors have compared other approaches to estimating overall consistency among several raters as distinct from the specific reliabilities of individual raters (James, Demaree, & Wolf, 1984; Jones, Johnson, Butler, & Main, 1983; Towstopiat, 1984). Coefficient alpha (Cronbach, 1951) is another frequently used estimate of the reliability of a composite test or the average reliability of subtests entering into the composite (Conn & Ramanaiah, 1990; Dolan, Lacey, & Evans, 1990).…”
Section: Estimating Individual Rater Reliabilities (mentioning)
Confidence: 99%
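As a rough illustration of coefficient alpha as the reliability of a composite, the sketch below computes alpha from a subjects-by-raters score matrix; the cronbach_alpha helper and the numbers are invented for illustration and are not taken from any of the cited studies.

```python
# Rough sketch of coefficient alpha (Cronbach, 1951) for k raters/items,
# treated here as columns of a subjects-by-raters score matrix.

def cronbach_alpha(scores):
    """scores: list of rows (subjects), each a list of k rater/item scores."""
    k = len(scores[0])
    n = len(scores)

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each rater's column and of the subjects' total (composite) scores.
    item_vars = [variance([row[j] for row in scores]) for j in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

ratings = [
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
    [1, 2, 2],
]
print(round(cronbach_alpha(ratings), 2))  # ~0.94 for these made-up ratings
```

Note that alpha summarizes the composite's consistency across all raters at once; it does not by itself separate out the reliability of any individual rater, which is the gap the quoted passage is pointing to.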