1981
DOI: 10.1007/bf01321350
Measures of interobserver agreement: Calculation formulas and distribution effects

Abstract: Seventeen measures of association for observer reliability (interobserver agreement) are reviewed and computational formulas are given in a common notational system. An empirical comparison of 10 of these measures is made over a range of potential reliability check results. The effects on percentage and correlational measures of occurrence frequency, error frequency, and error distribution are examined. The question of which is the "best" measure of interobserver agreement is discussed in terms of c…

Cited by 151 publications (37 citation statements)
References 46 publications
“…According to the results of the expert validation, two items were eliminated due to their low consent rate since 70% agreement is considered necessary in interpreting percentage agreement [69] whereas two extra items were added based on the experts' supplements (Table 5). It was found that the experts agreed on most of the barriers and provided further elaborations which engender a more holistic perspective.…”
Section: S1 (mentioning)
confidence: 99%
“…Interobserver agreement was obtained on an average of 23% of the sessions across all phases of the experiment. For disruptive behavior, interobserver agreement was calculated on a point-by-point basis (House, House, & Campbell, 1981). Counts of disruptive behaviors within 10-s intervals were compared for two independent observers, and the number of agreements was divided by the total number of agreements plus disagreements and multiplied by 100%.…”
Section: Dependent Variables and Data Collection (mentioning)
confidence: 99%
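The point-by-point calculation quoted above can be sketched in a few lines. This is a minimal illustration, not the authors' original procedure: the interval records and variable names are hypothetical, and the formula is simply agreements divided by agreements plus disagreements, multiplied by 100%.

```python
def percent_agreement(record_a, record_b):
    """Point-by-point interobserver agreement.

    Compares two observers' interval-by-interval records and returns
    the number of agreements divided by the total number of intervals
    (agreements + disagreements), multiplied by 100.
    """
    if len(record_a) != len(record_b):
        raise ValueError("records must cover the same intervals")
    agreements = sum(a == b for a, b in zip(record_a, record_b))
    return 100.0 * agreements / len(record_a)

# Hypothetical records of disruptive behavior in 10-s intervals
# (1 = behavior observed, 0 = not observed)
observer_1 = [1, 0, 1, 1, 0, 0, 1, 0]
observer_2 = [1, 0, 0, 1, 0, 0, 1, 1]
print(percent_agreement(observer_1, observer_2))  # 75.0
```

Under a percent-of-intervals criterion like the 90% threshold cited below, this example session (75% agreement) would not meet the standard.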
“…Independent observer agreement (IOA) was calculated by a percent-of-intervals method (House, House, & Campbell, 1981). Data were required to meet a minimum of 90 % agreement to be considered clinically valid.…”
Section: Interobserver Agreement (mentioning)
confidence: 99%