2021
DOI: 10.1177/16094069211002418

Visual and Statistical Methods to Calculate Intercoder Reliability for Time-Resolved Observational Research

Abstract: While calculating intercoder reliability (ICR) is straightforward for text-based data, such as interview transcript excerpts, determining ICR for naturalistic observational video data is much more complex. To date, few methods proposed in the literature are robust enough to handle complexities such as the occurrence of simultaneous events and partial agreement between raters. This is especially important with the emergence of high-resolution video data, which collects nearly conti…
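To ground the contrast the abstract draws, here is a minimal Python sketch (the codes, bins, and ratings are hypothetical, not taken from the paper) of conventional Cohen's kappa computed on fixed per-second time bins. It shows the simple case that time-resolved video data breaks: each bin must carry exactly one code per rater, so simultaneous events and partial overlaps cannot be represented.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one nominal code per unit."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of units coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-second codes for a 10-second clip; one code per bin.
rater_a = ["talk", "talk", "gesture", "talk", "idle",
           "idle", "gesture", "talk", "talk", "idle"]
rater_b = ["talk", "talk", "gesture", "idle", "idle",
           "idle", "gesture", "talk", "gesture", "idle"]

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # about 0.70
# This binning cannot represent simultaneous events (two codes active in
# one bin) or an event that partially overlaps a bin boundary -- exactly
# the complications the paper's visual and statistical methods address.
```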


Cited by 4 publications (1 citation statement) | References 49 publications
“…Unfortunately, calculating inter-rater and intra-rater reliability (or, more properly termed for this type of nominal data, intercoder/intracoder reliability) is highly challenging for complex naturalistic observational video data featuring overlapping open codes and an evolving, living codebook. Attempts to quantify such comparisons present problematic implications and oversimplification due both to the nature of grounded theory research and the statistical issues arising from a large, complex codebook with multiple rare codes (e.g., prevalence discrepancies). However, visual analysis of the coding (see SI) illustrated an apparently high degree of agreement between the participants and author in assigning MA codes, especially after accounting for restrictions that the author operated under in assigning codes which the participants did not (Box ). The participants sometimes provided more MA codes during their verbal open coding because they could recall cognitive decisions they were making or report on what their eyes were doing behind the camera.…”
Section: Methods | Citation type: mentioning | Confidence: 99%
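The prevalence discrepancy the citing authors flag (multiple rare codes in a large codebook) can be seen in a small sketch; the data below are hypothetical and come from neither paper. With one dominant code, two raters can agree on 96% of units yet obtain a kappa near zero, the well-known kappa paradox:

```python
from collections import Counter

def kappa(a, b):
    """Cohen's kappa (same formula as the sketch above, condensed)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[c] * cb[c] for c in ca) / n**2
    return (p_o - p_e) / (1 - p_e)

# 100 one-second bins dominated by one code; the rare code is assigned
# 3 times by one rater and once by the other, never in the same bin.
rater_a = ["rare"] * 3 + ["common"] * 97
rater_b = ["common"] * 3 + ["rare"] + ["common"] * 96

p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / len(rater_a)
print(f"raw agreement = {p_o:.2f}")                       # 0.96
print(f"kappa         = {kappa(rater_a, rater_b):.2f}")   # about -0.02
# Chance agreement on the dominant code is already 0.96, so kappa is
# driven almost entirely by the rare code's prevalence discrepancy,
# even though the raters disagree on only 4 of 100 bins.
```

This is one reason the citing authors argue that quantifying such comparisons oversimplifies, and why they fall back on visual analysis of the coding instead.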