The authors examined the ability of the Trauma Symptom Inventory (TSI) to discriminate 88 college-student post-traumatic stress disorder (PTSD) simulators, screened to rule out genuine PTSD, from 48 outpatients with clinically diagnosed PTSD. Results demonstrated between-group differences on several TSI clinical scales and on the Atypical Response (ATR) validity scale. A discriminant function analysis using ATR correctly classified 75% of patients but only 48% of simulators, for an overall correct classification rate of 59% (positive predictive power [PPP] = .71; negative predictive power [NPP] = .51). Individual ATR cutoff scores did not yield impressive classification results: the optimal cutoff (T score = 61) correctly classified only 61% of simulators and patients (PPP = .66, NPP = .54). Although the ATR was developed as a general validity screen rather than as a specific screen for malingered PTSD, caution is recommended in its current clinical use for detecting malingered PTSD.
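The PPP and NPP figures above follow the standard predictive-power (predictive-value) definitions from a 2 × 2 classification table. A minimal Python sketch of those definitions follows; the cell counts and the function name `predictive_power` are hypothetical illustrations, since the abstract reports only the resulting rates, not the underlying table:

```python
# Predictive power from a 2x2 classification table.
# Treating "simulator" as the positive class:
#   PPP = TP / (TP + FP): proportion of flagged cases that truly are simulators.
#   NPP = TN / (TN + FN): proportion of non-flagged cases that truly are patients.

def predictive_power(tp, fp, tn, fn):
    """Return (PPP, NPP) for the given true/false positive/negative counts."""
    ppp = tp / (tp + fp)
    npp = tn / (tn + fn)
    return ppp, npp

# Hypothetical counts (NOT the study's actual cells): 40 simulators flagged,
# 15 patients wrongly flagged, 33 patients correctly cleared, 48 simulators missed.
ppp, npp = predictive_power(tp=40, fp=15, tn=33, fn=48)
print(round(ppp, 2), round(npp, 2))  # -> 0.73 0.41
```

Note that, unlike sensitivity and specificity, PPP and NPP depend on the base rates in the sample, which is one reason cutoff performance can look different across simulator-heavy analogue samples and clinical settings.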
In this article, we combine two analogue experiments that empirically examined three methodological issues in malingering research, using individuals trained and instructed to simulate posttraumatic stress disorder (PTSD) on the Trauma Symptom Inventory (TSI; Briere, 1995). In Experiment 1, we examined TSI scale effects in a 2 × 2 design with 330 college students, manipulating (a) the inclusion or exclusion of cautionary instructions regarding the believability of participants' simulation and (b) the level of financial incentive. In Experiment 2, we examined comorbid psychiatric diagnostic training with 180 college students who were trained to simulate either PTSD with comorbid major depressive disorder or PTSD alone. Caution main effects were significant for all but two TSI clinical scales; incentive main effects and caution × incentive interactions were each significant for only one clinical scale; and the comorbidity manipulation yielded no scale differences. We discuss the implications for malingering research design regarding the use of cautionary instructions, financial incentive levels, and comorbid diagnostic training.