2021
DOI: 10.3389/fpsyg.2021.636693

Using Constrained Factor Mixture Analysis to Validate Mixed-Worded Psychological Scales: The Case of the Rosenberg Self-Esteem Scale in the Dominican Republic

Abstract: A common method to collect information in the behavioral and health sciences is the self-report. However, the validity of self-reports is frequently threatened by response biases, particularly those associated with inconsistent responses to positively and negatively worded items of the same dimension, known as wording effects. Modeling strategies based on confirmatory factor analysis have traditionally been used to account for this response bias, but they have recently come under scrutiny due to their incorr…

Cited by 7 publications (8 citation statements) | References 99 publications

“…Thus, with data contaminated by wording effects the dimensionality estimates no longer reflect the underlying substantive dimensionality, but rather, they suggest a combination of the number of substantive and method factors. This finding is consistent with the wording effects empirical factor-analytic literature (García-Batista et al., 2021; Yang et al., 2018; Zhang & Savalei, 2016). However, novel findings of this study show that the impact of the wording effects on the dimensionality estimates is complex, and notably different for the PA and EGA retention methods.…”
Section: Traditional Dimensionality Estimation With Wording Effects (supporting)
Confidence: 91%
“…As the magnitude of the wording effects increases, factor retention methods are likely to detect this systematic variance and suggest a latent dimensionality greater than the number of substantive factors (García-Batista et al., 2021; Yang et al., 2018; Zhang & Savalei, 2016).…”
Section: Dimensionality Assessment In the Presence Of Wording Effects (mentioning)
Confidence: 99%
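
To make this pattern concrete, below is a minimal Python sketch, not code from any of the cited studies, that simulates an RSES-like mixed-worded scale with one substantive factor for all items plus a wording factor on the negatively worded items, and then applies a permutation-based parallel analysis. The sample size, loadings, and number of permutations are illustrative assumptions.

```python
# Minimal sketch (assumed parameters, not the cited studies' code): one substantive
# factor on all items plus a wording/method factor on the negatively worded items,
# followed by permutation-based parallel analysis (Buja & Eyuboglu variant of Horn's PA).
import numpy as np

rng = np.random.default_rng(0)
n, n_pos, n_neg = 2000, 5, 5        # RSES-like: 5 positively + 5 negatively worded items
lam = 0.6                           # assumed substantive loading for every item

def simulate(wording_loading):
    """Reverse-keyed item scores; negative items also load on a wording factor."""
    theta = rng.normal(size=n)      # substantive trait (e.g., self-esteem)
    meth = rng.normal(size=n)       # wording/method factor, independent of the trait
    pos = lam * theta[:, None] + rng.normal(scale=np.sqrt(1 - lam**2), size=(n, n_pos))
    res_sd = np.sqrt(max(1 - lam**2 - wording_loading**2, 0.05))
    neg = (lam * theta[:, None] + wording_loading * meth[:, None]
           + rng.normal(scale=res_sd, size=(n, n_neg)))
    return np.hstack([pos, neg])

def parallel_analysis(data, n_perm=100, q=95):
    """Count eigenvalues of the observed correlation matrix above the permutation threshold."""
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    null = np.empty((n_perm, data.shape[1]))
    for s in range(n_perm):
        perm = np.column_stack([rng.permutation(col) for col in data.T])
        null[s] = np.linalg.eigvalsh(np.corrcoef(perm, rowvar=False))[::-1]
    return int(np.sum(obs > np.percentile(null, q, axis=0)))

for w in (0.0, 0.35, 0.7):          # increasing wording-effect magnitude
    print(f"wording loading {w:.2f} -> factors suggested: {parallel_analysis(simulate(w))}")
```

With no wording factor, the single substantive dimension is usually recovered; weak wording effects may still go undetected, but as the assumed wording loading grows, the extra method variance tends to push the suggested dimensionality above one, mirroring the pattern described in the excerpt above.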
“…Specifying less (underfactoring) or more (overfactoring) dimensions than those present in the population can have detrimental effects on the quality of the factor solutions, including substantial error in the factor loading and factor score estimates, factor splitting, inadmissible solutions, and the emergence of uninterpretable factors (Auerswald & Moshagen, 2019). The inherent difficulties of dimensionality assessment are further exacerbated by the response biases related to wording effects, which tend to increase the latent dimensionality of observed scores (García-Batista et al., 2021; Kam, 2018; Schmalbach et al., 2020; Yang et al., 2018; Zhang & Savalei, 2016). This, in turn, can lead to long-standing controversies regarding the substantive or artifactual nature of dimensions underlying mixed-worded scales (Gnambs et al., 2018).…”
Section: Dimensionality Assessment In the Presence Of Wording Effects (mentioning)
Confidence: 99%
“…The detrimental effects of nCB data have been well documented: increased risk of type I error in decision-making between competing models, replication problems between studies with different proportions of nCB responses, spurious relationships between truly unrelated variables, artificial deflation or inflation of the internal consistency of data, appearance of factors other than those theoretically expected, obscured effects of experimental manipulation, and severe perturbations in the factorial structure of data (Arias et al., 2020a; Curran, 2012; García-Batista et al., 2021; Goldammer et al., 2020; Huang et al., 2015; Johnson, 2005; Kam & Meyer, 2015; Maniaci & Rogge, 2014; Steinmann et al., 2022; Wood et al., 2017; Woods, 2006).…”
Section: Introduction (mentioning)
Confidence: 99%
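
One of the documented effects listed above, the distortion of internal consistency, can be illustrated with a toy simulation in which all values are assumed: as the share of respondents who ignore item direction on the reverse-keyed half of a mixed-worded scale grows, coefficient alpha for the full scale is progressively deflated.

```python
# Toy illustration (assumed values, not from the cited studies): inconsistent responses to
# the reverse-keyed items of a mixed-worded scale deflate coefficient alpha.
import numpy as np

rng = np.random.default_rng(2)
n, lam = 2000, 0.7                  # assumed sample size and item loading
n_items = 10                        # 5 positively worded + 5 reverse-keyed items

def cronbach_alpha(x):
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def simulate(p_inconsistent):
    theta = rng.normal(size=n)
    data = lam * theta[:, None] + rng.normal(scale=np.sqrt(1 - lam**2), size=(n, n_items))
    careless = rng.random(n) < p_inconsistent
    # inconsistent respondents: reverse-keyed items end up pointing against the trait
    data[careless, 5:] = (-lam * theta[careless][:, None]
                          + rng.normal(scale=np.sqrt(1 - lam**2), size=(careless.sum(), 5)))
    return data

for p in (0.0, 0.1, 0.2):
    print(f"{p:.0%} inconsistent responders -> alpha = {cronbach_alpha(simulate(p)):.2f}")
```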
“…Various approaches have been proposed to model this inconsistency, usually by specifying additional factors (DiStefano & Motl, 2006; Eid, 2000; Gnambs et al., 2018; Horan et al., 2003; Marsh et al., 2010; Michaelides et al., 2016; Savalei & Falk, 2014; Tomás & Oliver, 1999; Weijters et al., 2013). However, recent studies using mixture models suggest that the phenomenon represented by the wording/method factor is not generalizable to the whole sample: on the contrary, a large proportion of spurious variance is due to a limited proportion of response vectors (Arias et al., 2020a; García-Batista et al., 2021; Ponce et al., 2021; Reise et al., 2016; Steinmann et al., 2021, 2022; Yang et al., 2018). Therefore, although modeling the wording variance helps to reveal the true structure of data, the estimates of the trait in the contaminated vectors remain biased, which may affect important properties of the data, such as the accuracy of the estimators, validity coefficients, or measurement invariance (Arias et al., 2020a; Nieto et al., 2021; Tomás et al., 2015).…”
Section: Introduction (mentioning)
Confidence: 99%
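
The mixture-model argument can be illustrated with a small simulation, sketched below under assumed values (a 15% inconsistent class, loadings of .70); it is only an illustration of the logic behind class-based approaches such as the constrained factor mixture analysis of the target article, not a reproduction of it. Pooled across classes, the eigenvalue structure of the scale suggests a second dimension, whereas within each response class the items are essentially unidimensional.

```python
# Illustrative sketch of the mixture-model logic (assumed values, not the article's CFMA):
# a minority of respondents ignore item direction on the negatively worded items, so the
# pooled data look two-dimensional while each response class is unidimensional.
import numpy as np

rng = np.random.default_rng(1)
n, lam, p_inconsistent = 2000, 0.7, 0.15     # assumed sample size, loading, class size
n_pos = n_neg = 5

def responses(theta, neg_sign):
    """Ten reverse-keyed item scores; neg_sign flips the trait's effect on negative items."""
    load = np.r_[np.full(n_pos, lam), np.full(n_neg, neg_sign * lam)]
    eps = rng.normal(scale=np.sqrt(1 - lam**2), size=(theta.size, n_pos + n_neg))
    return theta[:, None] * load + eps

theta = rng.normal(size=n)
inconsistent = rng.random(n) < p_inconsistent
data = np.where(inconsistent[:, None],
                responses(theta, neg_sign=-1.0),   # wording ignored: reverse-keyed items misalign
                responses(theta, neg_sign=+1.0))   # consistent responding

def top_eigenvalues(x, k=2):
    return np.round(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1][:k], 2)

print("pooled sample       :", top_eigenvalues(data))                 # salient second eigenvalue
print("consistent class    :", top_eigenvalues(data[~inconsistent]))  # essentially one dimension
print("inconsistent class  :", top_eigenvalues(data[inconsistent]))   # one dimension, reversed keying
```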