The Personality Inventory for ICD-11 (PiCD) was recently developed to assess the ICD-11 model of personality disorders. The purpose of this study was to examine the construct validity of the PiCD using the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) and the Computerized Adaptive Test of Personality Disorders Static Form (CAT-PD-SF). We administered these tests to 328 college students (150 males, 178 females). We found that the PiCD had adequate internal consistency reliability. Correlations between scores from the PiCD scales and the criterion measures generally indicated adequate discriminant validity. Along the same lines, convergent validity was adequate for the PiCD Negative Affective, Disinhibition, and Dissocial scales. However, the evidence was more mixed for the PiCD Detachment and Anankastic domains, which may be due to limitations in the content coverage of these scales. Consistent with other research and theoretical expectations, a conjoint exploratory factor analysis of the PiCD and MMPI-2-RF PSY-5 scales also indicated that the Anankastic and Disinhibition domains may be more appropriately conceptualized as opposite poles of a single construct. Implications of these findings for the PiCD and the ICD-11 model are discussed.
The current study evaluated the comparability of Minnesota Multiphasic Personality Inventory-3 (MMPI-3) scale scores derived from the 335-item MMPI-3 to MMPI-3 scale scores derived from the 433-item MMPI-2 Restructured Form-Expanded Version (MMPI-2-RF-EX), an enhanced version of the MMPI-2-RF that was used to develop and validate the MMPI-3. To that end, we examined data from 192 college undergraduates who completed both the MMPI-3 and MMPI-2-RF-EX 1 week apart using a counterbalanced design. Across versions, mean T-scores and standard deviations, estimates of internal consistency, and standard error of measurement values were highly similar, indicating no clinically meaningful differences across versions. We also compared between-version test-retest comparability values with within-version values calculated using a sample of undergraduates (N = 318) who completed the MMPI-2-RF-EX twice over the same time interval, finding only marginal differences across the two samples. Finally, we computed column-vector correlations between MMPI-3 scores from both versions and several criterion measures, where results reflected no effect of test version on external validity. Overall, we determined that scale scores derived from either booklet are psychometrically interchangeable, indicating that MMPI-3 scale scores obtained from an administration of the MMPI-2-RF-EX can be applied when using the 335-item MMPI-3.
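The column-vector correlation technique mentioned above compares the pattern of external validity coefficients across test versions: each version yields a vector of scale-criterion correlations, and the Pearson correlation between those two vectors indexes how similar the validity patterns are. A minimal sketch, using made-up validity coefficients (the values below are illustrative only, not taken from the study):

```python
import numpy as np

# Hypothetical validity coefficients (scale-criterion correlations) for the
# same set of scales, one vector per test version. Illustrative values only.
validity_version_a = np.array([0.42, 0.35, 0.51, 0.28, 0.44, 0.33])
validity_version_b = np.array([0.40, 0.37, 0.49, 0.30, 0.43, 0.35])

# The column-vector correlation is the Pearson correlation between the two
# columns of coefficients; a value near 1.0 indicates the two versions show
# essentially the same pattern of external validity.
r = np.corrcoef(validity_version_a, validity_version_b)[0, 1]
print(round(r, 3))
```

Note that a high column-vector correlation speaks to the *pattern* of validity, not its absolute magnitude, which is why the study also compared means, standard deviations, and reliability estimates directly.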
In this study, we explore the effects of in-person versus remote administration and in-person versus remote proctoring on scores on the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) in the context of police candidate preemployment evaluations. To this end, we compare data gathered from candidates who completed the test under standard, in-person conditions with data from candidates who completed the test remotely with the Q-global Remote On-Screen Assessment (ROSA) system, using either in-person or remote proctoring. We find that the standard group (n = 3,311), remote administration/in-person proctoring group (ROSA-IPP; n = 108), and remote administration/remote proctoring group (ROSA-RP; n = 90) all produce very similar distributions of scores, with group differences in means and standard deviations no greater than two T-score points per scale. Examination of the correlations between MMPI-2-RF externalizing scale scores and a set of relevant extra-test criteria for the ROSA-IPP and ROSA-RP groups reveals little difference between groups and suggests patterns of convergent and discriminant validity similar to those observed in studies of the MMPI-2-RF under standard administration conditions. Taken together, these findings provide evidence that the MMPI-2-RF's psychometric properties in police candidate preemployment evaluations are equivalent regardless of whether the test is administered in-person or remotely and whether proctoring is conducted in-person or remotely.

Public Significance Statement: This study indicates that when the MMPI-2-RF is used to examine the psychological functioning of police candidates, it produces similar results regardless of whether it is administered remotely (i.e., over the internet) or in-person.
The present study investigated the comparability of laptop computer- and tablet-based administration modes for the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). Employing a counterbalanced within-subjects design, the MMPI-2-RF was administered via both modes to a sample of college undergraduates (N = 133). Administration modes were compared in terms of mean scale scores, internal consistency, test-retest consistency, external validity, and administration time. Mean scores were generally similar, and scores produced via both methods appeared approximately equal in terms of internal consistency and test-retest consistency. Scores from the two modalities also evidenced highly similar patterns of associations with external criteria. Notably, tablet administration of the MMPI-2-RF took substantially longer than laptop administration in the present study (mean difference = 7.2 minutes, Cohen's d = .95). Overall, results suggest that varying administration mode between laptop and tablet has a negligible influence on MMPI-2-RF scores, providing evidence that these modes of administration can be considered psychometrically equivalent.
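The reported effect size for the administration-time difference can be reproduced from group means and a pooled standard deviation using one common form of Cohen's d. A minimal sketch, where the means and standard deviations are invented to be consistent with the reported 7.2-minute difference and d of .95 (they are not the study's actual values, and the paired-samples design would support other variants of d):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d computed with a pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Illustrative numbers: a 7.2-minute mean difference with a pooled SD of
# about 7.6 minutes yields d of roughly 0.95, matching the reported effect.
d = cohens_d(mean1=35.0, mean2=27.8, sd1=7.6, sd2=7.6, n1=133, n2=133)
print(round(d, 2))  # 0.95
```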
In the present study, the author employed tools and principles from the domain of machine learning to investigate four questions related to the generalizability of statistical prediction in psychological assessment. First, to what extent do predictive methods common to psychology research and machine learning actually tend to predict new data points in new settings? Second, of what practical value is parsimony in applied prediction? Third, what is the most effective way to select model predictors when attempting to maximize generalizability? Fourth, how well do the methods considered compare with one another with respect to prediction generalizability? To address these questions, the author developed various types of predictive models on the basis of Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) scales, using multiple prediction criteria, in a calibration inpatient sample, then externally validated those models by applying them to one or two clinical samples from other settings. Model generalizability was then evaluated based on prediction accuracy in the external validation samples. Noteworthy findings from the present study include (a) statistical models generally demonstrated observable performance shrinkage across settings regardless of modeling approach, though they nevertheless tended to retain non-negligible predictive power in new settings; (b) of the modeling approaches considered, regularized (penalized) regression methods appeared to produce the most consistently robust predictions across settings; (c) parsimony appeared more likely to reduce than to enhance model generalizability; and (d) multivariate models whose predictors were selected automatically tended to perform relatively well, often producing substantially more generalizable predictions than models whose predictors were selected based on theory.
Public Significance Statement: This study evaluated how well prediction models developed using a variety of approaches accurately generate predictions in new clinical samples, a question rarely considered in psychological assessment research. Among other findings, results suggest that researchers may be able to improve their predictive accuracy across samples by using a more data-driven approach to model construction.
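The calibrate-then-externally-validate workflow described above, with regularized regression as the modeling approach, can be sketched as follows. Everything here is simulated and illustrative (the sample sizes, predictor counts, and noise levels are assumptions, not the study's data); the point is the structure: fit a penalized model in one sample, then score its accuracy in a sample from a different setting and observe the shrinkage.

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge (L2-penalized) regression coefficients, no intercept."""
    n_features = X.shape[1]
    return solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

def r_squared(y, y_hat):
    """Proportion of variance in y explained by the predictions."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Simulated "calibration" sample: 300 cases, 10 predictor scales.
X_cal = rng.normal(size=(300, 10))
true_beta = rng.normal(size=10)
y_cal = X_cal @ true_beta + rng.normal(scale=1.0, size=300)

# Simulated "external validation" sample from a different setting,
# with a shifted predictor distribution and noisier criterion to
# mimic the cross-setting shrinkage the study observed.
X_ext = rng.normal(loc=0.3, size=(150, 10))
y_ext = X_ext @ true_beta + rng.normal(scale=1.5, size=150)

beta = ridge_fit(X_cal, y_cal, alpha=1.0)
r2_cal = r_squared(y_cal, X_cal @ beta)
r2_ext = r_squared(y_ext, X_ext @ beta)
# Expect r2_ext below r2_cal (performance shrinkage), but still above zero.
```

In practice the penalty strength would be tuned within the calibration sample (e.g., by cross-validation) before touching the external samples, so that the external accuracy remains an honest estimate of generalizability.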