2013
DOI: 10.1080/23808985.2013.11679142
Assumptions behind Intercoder Reliability Indices

Cited by 94 publications (161 citation statements)
References 1 publication
“…Intersystem agreement between the CNN and manual FACS coding (i.e., ground truth) was quantified using AUC, F1 (positive agreement), NA (negative agreement), and free-marginal kappa [3, 4], which estimates chance agreement by assuming that each category is equally likely to be chosen at random [32]. (Table 1).…”
Section: Methods
confidence: 99%
“…The second coder then recoded the initial group and coded one other randomly selected group. At the end of this process, Cohen's kappa, which has been the most often used index of reliability in social science (Zhao, Liu, & Deng, 2013), was calculated to determine the extent of agreement between the author and the second coder. The final Cohen's kappa is .83, indicating very acceptable agreement (Neuendorf, 2002).…”
Section: Reliability
confidence: 99%
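The excerpt above reports Cohen's kappa between two coders. As a minimal sketch, the standard computation is the observed agreement corrected by chance agreement derived from each coder's own category marginals; the coder labels and data below are hypothetical, not taken from the cited study.

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_e is the
    chance agreement implied by each coder's marginal proportions."""
    assert len(coder1) == len(coder2) and len(coder1) > 0
    n = len(coder1)
    # Observed agreement: share of units both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement: product of the two coders' marginal proportions,
    # summed over all categories either coder used.
    m1, m2 = Counter(coder1), Counter(coder2)
    p_e = sum((m1[c] / n) * (m2[c] / n) for c in set(coder1) | set(coder2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings: 6 of 8 units agree, both marginals are 50/50,
# so p_o = 0.75, p_e = 0.5, and kappa = 0.5.
print(cohens_kappa([1, 1, 1, 1, 0, 0, 0, 0],
                   [1, 1, 1, 0, 0, 0, 0, 1]))
```

Because p_e comes from the coders' actual marginals, kappa penalizes agreement that skewed category distributions would produce by chance, which is exactly the property the free-marginal indices discussed elsewhere on this page trade away.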
“…It estimates chance agreement by assuming that each category is equally likely to be chosen at random [39]. When applied to two raters assigning objects to dichotomous categories, the S score is calculated using (1), where n 00 is the number of objects that both raters assigned to the negative (i.e., absent) category and n 11 is the number of objects that both raters assigned to the positive (i.e., present) category.…”
Section: Baseline Methods
confidence: 99%
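The excerpt defines the S score via the joint-agreement counts n00 and n11 but does not reproduce its equation (1). A sketch under the standard Bennett et al. form (which the description matches): with two categories assumed equally likely by chance, expected agreement is fixed at 1/2, so S = 2·p_o − 1.

```python
def s_score(n00, n11, n_total):
    """Bennett's S (free-marginal kappa) for two raters and two
    categories. n00 = units both raters marked absent, n11 = units
    both marked present, n_total = all rated units."""
    p_o = (n00 + n11) / n_total  # observed agreement
    p_e = 0.5                    # chance agreement: each of 2 categories
                                 # assumed equally likely to be chosen
    return (p_o - p_e) / (1 - p_e)

# 40 joint-absent + 40 joint-present agreements out of 100 units:
# p_o = 0.8, so S = (0.8 - 0.5) / 0.5 = 0.6.
print(s_score(40, 40, 100))
```

Unlike Cohen's kappa, S ignores the raters' marginal distributions entirely, which is why it is described above as assuming each category is equally likely to be chosen at random.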