2016
DOI: 10.1037/pas0000205

“False feigners”: Examining the impact of non-content-based invalid responding on the Minnesota Multiphasic Personality Inventory-2 Restructured Form content-based invalid responding indicators.

Abstract: Misinterpretation of non-content-based invalid (e.g., random, fixed) responding as overreporting or underreporting is likely to adversely impact test interpretation and could bias inferences about examinee intentions. We examined the impact of non-content-based invalid responding on the following Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) content-based invalid responding indicators: Infrequent Responses (F-r), Infrequent Psychopathology Responses (FP-r), Infrequent Somatic Resp…

Cited by 23 publications (15 citation statements)
References 27 publications
“…Despite their frequency of use, self-reports are rarely cross-checked for accuracy and are susceptible to invalid responses. In the psychoeducational literature, invalid responses are often characterized as resulting from insincere respondents who purposefully distort (e.g., lie about) their responses in order to provide more (or less) favorable ratings of their circumstances (Burchett et al., 2015), rebellious responders who purposefully provide a particular response pattern because they find it amusing (Fan et al., 2006), and careless or rapid responders who are inattentive to the survey items (Meade & Craig, 2012). Failure to identify and remove these respondents from analytic samples prior to analysis has been found to contaminate substantive conclusions regarding the prevalence rates of risk behaviors in younger samples (Cornell et al., 2012; Cornell, Lovegrove, et al., 2014; Fan et al., 2002, 2006).…”
Section: Discussion
Mentioning; confidence: 99%
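As a rough illustration of why such screening matters, the minimal sketch below uses assumed numbers (not values from the cited studies) to show how a modest fraction of careless responders can inflate an estimated prevalence rate for a dichotomous risk-behavior item.

```python
# Toy illustration (assumed numbers, not data from the cited studies): how
# leaving careless/random responders in a sample inflates a prevalence estimate
# for a yes/no risk-behavior item.
true_prevalence = 0.05        # assumed rate among attentive, honest responders
careless_fraction = 0.10      # assumed share of careless responders in the sample
careless_endorse_rate = 0.50  # a random responder endorses a yes/no item ~50% of the time

observed = ((1 - careless_fraction) * true_prevalence
            + careless_fraction * careless_endorse_rate)
print(f"true prevalence: {true_prevalence:.1%}")
print(f"observed prevalence with careless responders retained: {observed:.1%}")
# -> roughly 9.5% instead of 5.0%, nearly doubling the apparent rate.
```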
“…When an SVT includes items describing rare symptoms or unlikely behaviors and attitudes, a bona fide responder may inadvertently endorse these items, as pathological individuals are more likely than non-pathological individuals to endorse rare complaints on these types of tests (Greiffenstein & Baker, 2008; Rogers & Bender, 2018; Slick, Sherman, Grant, & Iverson, 1999). Consequently, if the response pattern appears random-like, test scales that address overreporting of symptoms and problems may be artificially inflated (Burchett et al., 2016).…”
Section: Random Responding
Mentioning; confidence: 99%
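The mechanism described in this statement, random responding mimicking overreporting, can be shown with a small simulation. The sketch below uses hypothetical endorsement probabilities and item counts; it is not the actual MMPI-2-RF item pool, norms, or scoring.

```python
# Illustrative sketch only: hypothetical endorsement probabilities and item
# counts, not the actual MMPI-2-RF items, norms, or scoring. It shows why a
# random response pattern inflates an infrequency-style (rare-symptom) score.
import random

random.seed(0)

N_RARE_ITEMS = 20         # assumed number of rare-symptom (infrequency) items
P_GENUINE_ENDORSE = 0.10  # assumed endorsement rate for a bona fide responder
P_RANDOM_ENDORSE = 0.50   # true/false chosen at random -> ~50% endorsement

def infrequency_score(p_endorse: float, n_items: int = N_RARE_ITEMS) -> int:
    """Count of rare-symptom items endorsed under a given response process."""
    return sum(random.random() < p_endorse for _ in range(n_items))

genuine = [infrequency_score(P_GENUINE_ENDORSE) for _ in range(1000)]
random_like = [infrequency_score(P_RANDOM_ENDORSE) for _ in range(1000)]

print(f"mean rare-item endorsements, genuine responder: {sum(genuine) / 1000:.1f}")
print(f"mean rare-item endorsements, random responder:  {sum(random_like) / 1000:.1f}")
# The random responder's elevated count mimics overreporting even though no item
# content was read -- a "false feigner" in the sense of the target article.
```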
“…Examination of scores on these indices of non-content-based invalid responding is always the first step in examining protocol validity, as a high CNS score can artificially suppress scores on content-based Validity Scales and substantive scales, whereas high scores on VRIN-r and TRIN-r may spuriously elevate or suppress scores. Each response style detected by CNS, VRIN-r, or TRIN-r is a threat to protocol validity, as each likely compromises the predictive integrity of the various diagnostic efficiency statistics and the meaningfulness of recommended cut scores for the remaining scales (e.g., Ben-Porath, 2012; Burchett et al., 2016; Dragon et al., 2012; Handel et al., 2010; Ingram & Ternes, 2016). In Fig.…”
Section: MMPI-2-RF Scales for Detecting Non-Content-Based Invalid Responding
Mentioning; confidence: 99%
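For readers unfamiliar with these indicators, the following is a conceptual, assumption-laden sketch of a CNS-style omission count and a VRIN-r-style inconsistency count over hypothetical item pairs; the actual MMPI-2-RF item composition, keyed directions, and cut scores are proprietary and are not reproduced here.

```python
# Conceptual sketch with assumed item pairs and keys -- not the actual
# MMPI-2-RF CNS, VRIN-r, or TRIN-r composition or cut scores.
# CNS-style index: count of omitted items; VRIN-r-style index: count of item
# pairs answered inconsistently.
from typing import Optional, Sequence, Tuple

Response = Optional[bool]  # True/False, or None if the item was left blank

def cannot_say(responses: Sequence[Response]) -> int:
    """Number of omitted items (analogous in spirit to CNS)."""
    return sum(r is None for r in responses)

def inconsistency(responses: Sequence[Response],
                  pairs: Sequence[Tuple[int, int]]) -> int:
    """Number of pairs answered in opposite directions (VRIN-r-like).

    Each pair lists indices of items assumed to be keyed in the same
    direction, so disagreement within a pair suggests inconsistent responding.
    """
    count = 0
    for i, j in pairs:
        a, b = responses[i], responses[j]
        if a is not None and b is not None and a != b:
            count += 1
    return count

# Hypothetical 8-item protocol with assumed consistency pairs.
pairs = [(0, 4), (1, 5), (2, 6), (3, 7)]
protocol = [True, None, False, True, False, None, False, True]
print("unanswered items (CNS-like):", cannot_say(protocol))                 # -> 2
print("inconsistent pairs (VRIN-r-like):", inconsistency(protocol, pairs))  # -> 1
```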