2020
DOI: 10.3138/jehr-2019-0001

A Comparison of Gwet’s AC1 and Kappa When Calculating Inter-Rater Reliability Coefficients in a Teacher Evaluation Context

Abstract: With increased emphasis on teacher quality in the Race to the Top federal grants program, rater agreement is an important topic in teacher evaluation. Variations of kappa have often been used to assess inter-rater reliability (IRR). Research has shown that kappa suffers from a paradox where high exact agreement can produce low kappa values. Two chance-corrected methods of IRR were examined to determine if Gwet's AC1 statistic is a more stable estimate than kappa. Findings suggest that Gwet's AC1 statistic outp…
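The paradox the abstract describes is easy to reproduce numerically. Below is a minimal sketch, not code from the paper, that computes both coefficients for two raters; the 2×2 contingency table is illustrative:

```python
# Minimal sketch: Cohen's kappa vs. Gwet's AC1 for two raters.
# The 2x2 contingency table is illustrative, not data from the paper.

def agreement_terms(table):
    n = sum(sum(row) for row in table)
    q = len(table)
    p_o = sum(table[k][k] for k in range(q)) / n            # observed agreement
    row = [sum(table[k]) / n for k in range(q)]             # rater A marginals
    col = [sum(r[k] for r in table) / n for k in range(q)]  # rater B marginals
    return p_o, row, col, q

def cohens_kappa(table):
    p_o, row, col, q = agreement_terms(table)
    p_e = sum(row[k] * col[k] for k in range(q))            # marginal-product chance term
    return (p_o - p_e) / (1 - p_e)

def gwets_ac1(table):
    p_o, row, col, q = agreement_terms(table)
    pi = [(row[k] + col[k]) / 2 for k in range(q)]          # mean marginal per category
    p_e = sum(p * (1 - p) for p in pi) / (q - 1)            # Gwet's chance term
    return (p_o - p_e) / (1 - p_e)

# Skewed marginals: 90% exact agreement, yet kappa is negative.
table = [[90, 5],
         [5, 0]]
print(f"kappa = {cohens_kappa(table):+.3f}")  # -> kappa = -0.053
print(f"AC1   = {gwets_ac1(table):+.3f}")     # -> AC1   = +0.890
```

With 90% exact agreement, kappa turns negative because the marginal-product chance term (p_e ≈ 0.905) nearly exhausts the observed agreement, while Gwet's chance term stays small (p_e ≈ 0.095).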

Cited by 7 publications (11 citation statements) · References 8 publications

Citation statements:
“…Gwet's AC tends to lessen kappa's limitations [22,23]; hence we consider Gwet's AC to provide more stable inter-rater reliability coefficients in our study, following a few recent studies that have also preferred Gwet's AC over Cohen/Conger's kappa as a more stable inter-rater reliability coefficient [24,25,26]…”
Section: Discussion (mentioning; confidence: 99%)
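For context, the two coefficients named above share the same chance-corrected form and differ only in how chance agreement is estimated; the notation below is ours, following Cohen (1960) and Gwet (2008):

$$\kappa = \frac{p_o - p_e^{(\kappa)}}{1 - p_e^{(\kappa)}}, \qquad p_e^{(\kappa)} = \sum_{k=1}^{Q} p_{k+}\, p_{+k}$$

$$\mathrm{AC}_1 = \frac{p_o - p_e^{(\gamma)}}{1 - p_e^{(\gamma)}}, \qquad p_e^{(\gamma)} = \frac{1}{Q-1}\sum_{k=1}^{Q} \pi_k\,(1 - \pi_k), \qquad \pi_k = \frac{p_{k+} + p_{+k}}{2}$$

Here $p_o$ is the observed proportion of agreement, $p_{k+}$ and $p_{+k}$ are the two raters' marginal proportions for category $k$, and $Q$ is the number of categories. With skewed marginals, $p_e^{(\kappa)}$ approaches $p_o$ and deflates kappa, while each $\pi_k(1-\pi_k)$ shrinks, which is why AC1 remains stable.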
“…The second stage involved testing the reliability of the assessment method. The reliability of the proposed method was analyzed with Gwet's AC1 [15,18,32]. In evaluating reliability among experts, Gwet's AC1 performs better than the kappa statistic…”
Section: Methods (mentioning; confidence: 99%)
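Gwet's AC1 also generalizes beyond two raters, which is how it is typically applied when several experts rate the same items. A minimal sketch of the multiple-rater form under Gwet's 2008 formulation; the three-expert ratings below are hypothetical:

```python
# Minimal sketch: Gwet's AC1 for r >= 2 raters (multiple-rater form).
# `ratings` is a subjects-by-raters matrix of category labels.

def gwets_ac1_multi(ratings):
    cats = sorted({c for row in ratings for c in row})
    n, r, q = len(ratings), len(ratings[0]), len(cats)
    # r_ik: number of raters placing subject i in category k
    counts = [[row.count(k) for k in cats] for row in ratings]
    # observed agreement: pairwise agreement averaged over subjects
    p_a = sum(sum(c * (c - 1) for c in row) / (r * (r - 1))
              for row in counts) / n
    # mean classification probability per category, then Gwet's chance term
    pi = [sum(row[j] for row in counts) / (n * r) for j in range(q)]
    p_e = sum(p * (1 - p) for p in pi) / (q - 1)
    return (p_a - p_e) / (1 - p_e)

# Three hypothetical experts scoring five items on a pass/fail rubric.
ratings = [
    ["pass", "pass", "pass"],
    ["pass", "pass", "fail"],
    ["pass", "pass", "pass"],
    ["fail", "fail", "pass"],
    ["pass", "pass", "pass"],
]
print(f"AC1 = {gwets_ac1_multi(ratings):.3f}")  # -> AC1 = 0.608
```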
“…This can involve using the kappa statistic [37,38,39] or Gwet's AC1 [40,41,42] to measure a method's results and Cronbach's alpha [43,44] to ensure the data's reliability. However, Gwet's AC1 could be better than the kappa statistic for assessment cases [45,46,47,48]. Software reuse papers tested their research using precision and recall, with none utilizing similarity measurements…”
Section: What Are the Parameters (Measuring Instruments) Used To Meas… (mentioning; confidence: 99%)
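Of the instruments this statement lists, Cronbach's alpha measures the internal consistency of a multi-item instrument rather than rater agreement. A minimal sketch, with hypothetical respondent-by-item scores:

```python
# Minimal sketch: Cronbach's alpha for internal-consistency reliability.
# `scores` is respondents x items; population variance is used throughout.

def cronbachs_alpha(scores):
    k = len(scores[0])  # number of items

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Five hypothetical respondents answering a four-item Likert instrument.
scores = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(f"alpha = {cronbachs_alpha(scores):.3f}")  # -> alpha = 0.933
```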