2019
DOI: 10.1145/3359174

Reliability and Inter-rater Reliability in Qualitative Research

Abstract: What does reliability mean for building a grounded theory? What about when writing an auto-ethnography? When is it appropriate to use measures like inter-rater reliability (IRR)? Reliability is a familiar concept in traditional scientific practice, but how, and even whether, to establish reliability in qualitative research is an oft-debated question. For researchers in highly interdisciplinary fields like computer-supported cooperative work (CSCW) and human-computer interaction (HCI), the question is particular…
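The abstract asks when measures like IRR are appropriate but does not define one. As a purely illustrative aside (not taken from the paper), the sketch below computes Cohen's kappa, one common IRR statistic for two coders; the code labels and coder data are invented for the example.

```python
# Illustrative sketch of one common IRR measure, Cohen's kappa, for two coders.
# The coder labels below are invented for illustration; they are not data from
# the paper or from any of the citing studies.

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items where the two coders assign the same code.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected (chance) agreement from each coder's marginal label frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Example: two coders applying three codes to ten interview excerpts.
coder_a = ["barrier", "barrier", "workaround", "trust", "trust",
           "barrier", "workaround", "trust", "barrier", "workaround"]
coder_b = ["barrier", "workaround", "workaround", "trust", "trust",
           "barrier", "workaround", "barrier", "barrier", "workaround"]

print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # ~0.69 for this toy data
```

Kappa discounts the agreement two coders would reach by chance, which is why studies that do report IRR typically prefer it (or similar statistics) over raw percent agreement.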

Cited by 562 publications (252 citation statements)
References: 66 publications
“…All interviews were double coded by two researchers who met to discuss the themes and codes after each set of two to three interviews. Because the interviewers reviewed every independently-coded transcript together, we do not present inter-rater reliability [44,45].…”
Section: Results (citation type: mentioning)
Confidence: 99%
“…We sought first-person, subjective, narrative accounts of their experiences in the interviews and identified recurring themes. Our analytical procedures focused on eventually yielding concepts and themes (recurrent topics or meanings that represent a phenomena) rather than agreement -because even if coders agreed on codes, they may interpret the meaning of those codes differently [37]. Therefore, we did not seek inter-rater reliability in our analysis but endeavored to identify recurring themes of interest, detect relationships among them, and organize them into clusters of more complex and broader themes.…”
Section: Results (citation type: mentioning)
Confidence: 99%
“…4) Defining themes: The same authors reviewed the final coding to identify similarities that allowed thematic grouping. We collated codes into themes and therefore did not calculate survey inter-rater reliability because codes were not the final outcome of our analysis [43].…”
Section: Results (citation type: mentioning)
Confidence: 99%