2020
DOI: 10.1177/0886260520918588
Identifying Invalid Responders in a Campus Climate Survey: Types, Impact on Data, and Best Indicators

Abstract: Self-report surveys that are online, lengthy, and contain sensitive material greatly increase the probability of invalid responding (IR) on the instrument. Most research informing the identification of invalid responders has not been able to test its methodologies where all of these conditions are present. This study systematically adopted 10 IR indicators based on direct, archival, and statistical strategies to identify invalid responders on a lengthy survey collecting campus climate/violence information that…

Cited by 10 publications (9 citation statements)
References 30 publications
“…Using indirect careless responding indices that fall into different, previously proposed types of indices (i.e., categorized by consistency, response pattern, and RT), the present research demonstrated that an LCA approach that is adapted to the nested data structure (ML-LCA) can be used to model careless responding in AA. Our results on the latent typology of momentary careless responding were largely in line with previous research (Goldammer et al., 2020; Kam & Meyer, 2015; Li et al., 2022; Maniaci & Rogge, 2014; Meade & Craig, 2012), thereby demonstrating that different types of careless responding (long string vs. inconsistent careless responding) can also be identified in AA data. Similar to recommendations for studying careless responding in cross-sectional surveys (e.g., Curran, 2016), we recommend that researchers who want to investigate patterns of careless responding in their AA data use indices from these different types to capture different types of careless responding.…”
Section: Discussion (supporting)
confidence: 90%
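For context on the three index families named in this statement, the following minimal Python sketch computes one illustrative index of each type: a long-string index (response pattern), within-person inconsistency on a reverse-keyed item pair (consistency), and a total response-time flag (RT). All column names, the assumed antonym pair, and the speed threshold are illustrative assumptions, not taken from the cited studies.

import numpy as np
import pandas as pd

def long_string(responses: pd.DataFrame) -> pd.Series:
    # Response-pattern index: longest run of identical consecutive
    # answers per respondent; long runs suggest straight-lining.
    def longest_run(row: pd.Series) -> int:
        vals = row.to_numpy()
        best = current = 1
        for prev, cur in zip(vals[:-1], vals[1:]):
            current = current + 1 if cur == prev else 1
            best = max(best, current)
        return best
    return responses.apply(longest_run, axis=1)

# Hypothetical data: 20 Likert items (1-5) plus total time in seconds.
items = [f"item_{i}" for i in range(1, 21)]
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(100, 20)), columns=items)
df["total_seconds"] = rng.uniform(60, 900, size=100)

df["long_string"] = long_string(df[items])

# Consistency index: absolute disagreement on a reverse-keyed pair
# (item_2 is assumed to be the antonym of item_1 on a 1-5 scale).
df["inconsistency"] = (df["item_1"] - (6 - df["item_2"])).abs()

# Response-time index: flag anyone averaging under ~2 s per item.
df["too_fast"] = df["total_seconds"] < 2 * len(items)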
“…To study careful versus careless responding using LCA, researcher-defined careless responding indices (e.g., the ones summarized above) serve as observed variables in the LCA model. The LCA approach has been used successfully to identify careless responding in cross-sectional surveys (Goldammer et al., 2020; Kam & Meyer, 2015; Li et al., 2022; Maniaci & Rogge, 2014; McKibben & Silvia, 2017; Meade & Craig, 2012; Paas et al., 2018; Schneider et al., 2018). Previous research has repeatedly shown that participants can be assigned to one of three classes, with one class representing careful responders and two classes representing careless responders (Goldammer et al., 2020; Kam & Meyer, 2015; Li et al., 2022; Maniaci & Rogge, 2014; Meade & Craig, 2012).…”
Section: Careless Responding Indices in Cross-Sectional Research (mentioning)
confidence: 99%
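The quoted approach treats such indices as indicators in a latent class model. True LCA on categorical indicators is usually fit with dedicated software (e.g., poLCA in R, or Mplus); as a rough stand-in on continuous indices, the sketch below uses scikit-learn's GaussianMixture and compares class counts by BIC, echoing the three-class (one careful, two careless) solutions reported in prior work. The simulated index matrix is an assumption for illustration.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical index matrix, one row per respondent:
# long-string length, inconsistency score, total response time.
X = np.column_stack([
    rng.poisson(3, 500),
    rng.gamma(2.0, 1.0, 500),
    rng.uniform(60, 900, 500),
]).astype(float)
X = StandardScaler().fit_transform(X)

# Fit 1-4 class solutions and pick the best by BIC; prior studies
# repeatedly recovered three classes from indices like these.
fits = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
        for k in range(1, 5)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
classes = fits[best_k].predict(X)
print("classes chosen by BIC:", best_k)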
“…Response rate was 14.2% in the first sample and 11.2% in the second. For the current study, only students who completed the sexual violence modules were included in analyses, and 19 participants were removed for indicating that they either did not tell the truth on the survey or did not pay attention (Li et al., 2022), for a final analytic sample of 4,020 (Sample 1, n = 2,937; Sample 2, n = 1,083). The samples were analyzed separately to see whether reasons for participation were similar across different survey administrations and student populations.…”
Section: Methods (mentioning)
confidence: 99%
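The exclusion rule quoted here, dropping anyone who reported not telling the truth or not paying attention, is a direct screening filter that reduces to a row selection. A minimal pandas sketch, with hypothetical variable names and coding:

import pandas as pd

# Hypothetical data: two validity items coded 1 = yes, 0 = no,
# plus a completion flag for the sexual violence modules.
df = pd.DataFrame({
    "completed_sv_module": [True, True, True, False],
    "told_truth":          [1, 1, 0, 1],
    "paid_attention":      [1, 0, 1, 1],
})

# Keep completers who endorsed both validity items.
analytic = df[df["completed_sv_module"]
              & (df["told_truth"] == 1)
              & (df["paid_attention"] == 1)]
print(len(analytic))  # 1 of 4 rows survives in this toy example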
“…One of the major threats to validity in survey research comes from participants who are inattentive, respond randomly to survey questions (Kim et al., 2018; King, Kim, and McCabe, 2018; see Chandler, Paolacci, and Hauser, 2020, for a review), or are ‘mischievous’, providing responses that are intentionally false or misleading (Fan et al., 2006; Robinson-Cimpian, 2014; Kramer, Rubin, and Coster, 2014; Fish and Russell, 2018; Kaltiala-Heino and Lindberg, 2019; Li, Follingstad, Campe, and Chahal, 2020; see Cimpian et al., 2020, for a review). Inattentive and mischievous respondents (collectively referred to in this report as ‘problematic respondents’) can bias the results of surveys by dramatically inflating point estimates and creating illusory associations.…”
Section: Introduction (mentioning)
confidence: 99%
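The inflation effect described in that statement is easy to demonstrate with a toy simulation (the rates below are invented for illustration, not drawn from the cited papers): when 3% of respondents endorse everything, the observed prevalence of a 2% outcome more than doubles.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_prevalence = 0.02   # rare outcome
mischief_rate = 0.03     # respondents who endorse everything

honest_answer = rng.random(n) < true_prevalence
mischievous = rng.random(n) < mischief_rate
observed = np.where(mischievous, True, honest_answer)

# Expected observed rate: 0.97 * 0.02 + 0.03 = 0.0494
print(f"true {true_prevalence:.3f} vs observed {observed.mean():.3f}")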