2023
DOI: 10.1027/1015-5759/a000719
How a Few Inconsistent Respondents Can Confound the Structure of Personality Survey Data

Abstract. In survey data, inconsistent responses due to careless/insufficient effort (C/IE) can lead to problems of replicability and validity. However, data cleaning prior to the main analyses is not yet a standard practice. We investigated the effect of C/IE responses on the structure of personality survey data. For this purpose, we analyzed the structure of the Core Self-Evaluations Scale (CSE-S), including the detection of aberrant responses in the study design. While the original theoretical model of the …

Cited by 7 publications (12 citation statements).
References 31 publications (44 reference statements).
“…Because the content facets of positive and negative CSE perfectly align with the wording of the items, it has been suggested that the CSES does not represent substantively different traits. Rather, its dimensionality is often distorted by artifacts stemming from the use of positively and negatively worded items (e.g., Arias et al., 2022; Gu et al., 2015; Henderson & Gardiner, 2019; Schmalbach et al., 2020). A frequently observed phenomenon in self-report instruments is systematic variance captured by negatively worded items that can present itself as an additional factor beyond the focal construct (e.g., DiStefano & Motl, 2006; Koutsogiorgi et al., 2021).…”
Section: Methods Artifacts and Wording Effects
confidence: 99%
“…Although developed to represent a single latent trait, factor analytic investigations of the CSES often favored multi- as compared to unidimensional measurement models (e.g., Arias et al., 2022; Henderson & Gardiner, 2019; Mäkikangas et al., 2018; Schmalbach et al., 2020; Sun et al., 2017; Zenger et al., 2015). These results have been interpreted either from a substantive point of view as reflecting different content facets of CSE (Mäkikangas et al., 2018; Zenger et al., 2015) or as the result of methodological artifacts stemming from the item wording (Arias et al., 2022; Schmalbach et al., 2020). The present study contributes to this debate by presenting meta-analytic evidence on the psychometric properties of the CSES.…”
Section: A Meta-analytic Investigation of Wording Effects
confidence: 99%
“…However, several studies have shown differences in the scores of direct items and recoded reverse items (also known as wording effects). This effect has been described as a tendency to respond differentially to both subsets, producing ubiquitous alterations in data quality that can induce (a) spurious relationships between variables that would not otherwise be correlated (Arias et al., 2022; Huang et al., 2015), (b) an increase/decrease in internal consistency (Roszkowski & Soven, 2010; Wood et al., 2017), (c) misfit in unidimensional models or the emergence of method factors associated with direct or reverse items (Greenberger et al., 2003; Woods, 2006), (d) alterations in discriminative capacity (Rodebaugh et al., 2004), or (e) a systematic bias that undermines the effectiveness of experimental manipulations (Maniaci & Rogge, 2014), to list a few examples. In this respect, the interpretation and statistical control of these systematic elements present conceptual and analytical challenges for the quality of the evaluation when using self-report measures.…”
Section: Wording Effects and Their Substantive Versus Ephemeral Nature
confidence: 99%
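The mechanism the citing papers describe, where a small share of careless respondents distorts factor structure, can be illustrated with a minimal simulation. This sketch is not from the article or any cited study; the sample sizes, item counts, and noise levels are arbitrary illustrative assumptions. It generates one latent trait measured by three positively and three negatively worded Likert items, mixes in a handful of straight-lining (careless) respondents, reverse-codes the negative items, and then compares mean correlations within and across the two item sets. The straight-liners inflate within-set correlations and depress cross-set correlations, mimicking an artifactual "wording factor".

```python
import numpy as np

rng = np.random.default_rng(0)
n_consistent, n_careless = 500, 25  # illustrative sample sizes

def likert(x):
    """Map a continuous score onto a 1-5 Likert response."""
    return np.clip(np.round(3 + x), 1, 5)

# One latent trait; 3 positively and 3 negatively worded items
theta = rng.normal(size=n_consistent)
pos = np.column_stack(
    [likert(theta + rng.normal(scale=0.8, size=n_consistent)) for _ in range(3)]
)
neg = np.column_stack(
    [likert(-theta + rng.normal(scale=0.8, size=n_consistent)) for _ in range(3)]
)

# Careless respondents straight-line: one fixed answer to all six items
c = rng.integers(1, 6, size=n_careless).astype(float)
careless = np.repeat(c[:, None], 6, axis=1)

data = np.vstack([np.hstack([pos, neg]), careless])

# Reverse-key the negatively worded items (columns 3-5)
data[:, 3:] = 6 - data[:, 3:]

R = np.corrcoef(data, rowvar=False)
within_pos = R[:3, :3][np.triu_indices(3, 1)].mean()
within_neg = R[3:, 3:][np.triu_indices(3, 1)].mean()
between = R[:3, 3:].mean()

print(f"mean r within positively worded items: {within_pos:.2f}")
print(f"mean r within negatively worded items: {within_neg:.2f}")
print(f"mean r between the two item sets:      {between:.2f}")
```

Because all items are driven by the same trait, a clean sample would show roughly equal correlations everywhere; the gap between the within-set and between-set averages is produced entirely by the 25 straight-liners, which is the kind of artifactual multidimensionality the abstract attributes to C/IE responding.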