Careless responding in questionnaire measures: Detection, impact, and remedies (2020)
DOI: 10.1016/j.leaqua.2020.101384

Cited by 117 publications (125 citation statements)
References 44 publications
“…A convenience sample of four hundred nineteen U.S.-based adult employees was recruited using Amazon’s MTurk to participate in an ongoing research project entitled “Longitudinal study of work/life experiences during the COVID-19 pandemic.” Employees successively completed three waves of anonymous online surveys in May 2020 (Time 1), June 2020 (Time 2), and August 2020 (Time 3). Participation requirements included (a) being employed outside of MTurk at the time of data collection, (b) having a 90% approval rate or higher for the past 100 crowd-sourced tasks [ 14 ], and (c) not having been flagged as a careless respondent in previous waves (i.e., taking on average less than 2 s to answer the survey items, failing two or more attention checks out of four, and being a multivariate outlier; [ 15 ]). A breakdown of our sample’s demographics is available in Table 1 .…”
Section: Methods
Mentioning (confidence: 99%)
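The three screening rules quoted above are straightforward to operationalize. A minimal sketch in Python/pandas, assuming hypothetical columns resp_seconds (total completion time in seconds), n_items, ac_failed (attention checks failed, out of four), and is_mv_outlier (a precomputed multivariate-outlier flag); none of these names come from the cited study:

```python
# Minimal sketch of the three screening rules quoted above.
# All column names are hypothetical, not taken from the cited study.
import pandas as pd

def flag_careless(df: pd.DataFrame) -> pd.Series:
    """True for respondents meeting any of the three exclusion criteria."""
    avg_time = df["resp_seconds"] / df["n_items"]  # mean seconds per item
    too_fast = avg_time < 2.0                      # < 2 s per item on average
    inattentive = df["ac_failed"] >= 2             # failed 2+ of 4 attention checks
    outlier = df["is_mv_outlier"].astype(bool)     # precomputed multivariate-outlier flag
    return too_fast | inattentive | outlier
```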
“…Of the 377 respondents, we first excluded 58 who were not eligible and had missing values. An additional 32 RNs were excluded for careless responses that exhibited unmotivated and inattentive responding patterns (Goldammer et al., 2020). In this study, careless responders were identified by Mahalanobis distance values (DeSimone et al., 2015; Goldammer et al., 2020) larger than the critical chi-square value of 21.03 at an alpha level of 0.05.…”
Section: Sample Characteristics of the Validation Study
Mentioning (confidence: 99%)
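The Mahalanobis-distance rule maps directly onto the chi-square distribution: with p items, squared distances are compared against the upper-alpha quantile of chi-square with p degrees of freedom. The quoted cutoff of 21.03 at alpha = .05 corresponds to 12 degrees of freedom, i.e. a 12-item instrument, so the cutoff should be recomputed for other item counts. A hedged sketch, not the authors' own code:

```python
# Sketch of Mahalanobis-distance screening as described in the quote above.
import numpy as np
from scipy.stats import chi2

def mahalanobis_flags(X: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Flag rows whose squared Mahalanobis distance exceeds the chi-square cutoff."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances per row
    cutoff = chi2.ppf(1 - alpha, df=X.shape[1])         # e.g. 21.03 for 12 items
    return d2 > cutoff
```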
“…As a result, 287 respondents were included in the analysis. Response type can affect the psychometric properties of constructs (Goldammer et al., 2020) and lead to rejection of the unidimensional factor model in a CFA test (Huang et al., 2012; Woods, 2006). Table 1 summarizes…”
Section: Sample Characteristics of the Validation Study
Mentioning (confidence: 99%)
“…To address this problem, we flagged careless respondents based on three criteria: average response time per item, item consistency on a semantic antonym pair, and Mahalanobis distance (Curran 2016; Meade and Craig 2012). We adopted the cut scores set for 95% specificity from Goldammer et al. (2020) for average response time per item and the Mahalanobis distance. We considered responses to be inconsistent if the absolute difference between the two reverse-scored antonym items was equal to or larger than 2.…”
Section: Participants and Data Collection
Mentioning (confidence: 99%)
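The semantic-antonym consistency rule takes only a few lines. A sketch assuming a hypothetical antonym pair item_a/item_b on a 1-5 Likert scale, with item_b reverse-scored before the comparison; the quoted passage does not specify the scale range, so scale_max is an assumption:

```python
# Sketch of the semantic-antonym consistency rule quoted above.
# item_a / item_b and the 1-5 scale are assumptions, not from the study.
import pandas as pd

def antonym_inconsistent(df: pd.DataFrame, scale_max: int = 5) -> pd.Series:
    """Flag respondents whose antonym pair disagrees by 2 or more points."""
    item_b_reversed = (scale_max + 1) - df["item_b"]    # reverse-score the antonym
    return (df["item_a"] - item_b_reversed).abs() >= 2  # inconsistent if gap >= 2
```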