This study was designed to introduce and validate a forced choice recognition trial to the Rey Complex Figure Test (FCR RCFT). Healthy undergraduate students at a midsized Canadian university were randomly assigned to the control (n = 80) or experimental malingering (n = 60) conditions. All participants were administered a brief battery of neuropsychological tests. The FCR RCFT had good overall classification accuracy (area under the curve: .79-.88) against various criterion variables. The conservative cutoff (≤16) was highly specific (.93-.96) but not very sensitive (.38-.51). Conversely, the liberal cutoff (≤18) was sensitive (.57-.72) but less specific (.88-.90). The FCR RCFT provided unique information about performance validity above and beyond the existing yes/no recognition trial. Combining multiple RCFT validity indices improved classification accuracy. The utility of previously published validity indicators embedded in the RCFT was also replicated. The FCR RCFT extends the growing trend of enhancing the clinical utility of widely used standard memory tests by developing a built-in validity check. Multivariate models were superior to univariate cutoffs. Although the FCR RCFT performed well in the current sample, replication in clinical/forensic patients is needed to establish its utility in differentiating genuine memory deficits from noncredible responding.
This study was designed to examine the clinical utility of critical items within the Recognition Memory Test (RMT) and the Word Choice Test (WCT). Archival data were collected from a mixed clinical sample of 202 patients clinically referred for neuropsychological testing (54.5% male; mean age = 45.3 years; mean level of education = 13.9 years). The credibility of a given response set was psychometrically defined using three separate composite measures, each of which was based on multiple independent performance validity indicators. Critical items improved the classification accuracy of both tests. They increased sensitivity by correctly identifying an additional 2-17% of the invalid response sets that passed the traditional cutoffs based on total score. They also increased specificity by providing additional evidence of noncredible performance in response sets that failed the total score cutoff. The combination of failing the traditional cutoff but passing critical items was associated with increased risk of misclassifying the response set as invalid. Critical item analysis enhances the diagnostic power of both the RMT and WCT. Given that critical items require no additional test material or administration time, but help reduce both false positive and false negative errors, they represent a versatile, valuable, and time- and cost-effective supplement to performance validity assessment.
One error on TOMM Trial 2 constitutes sufficient evidence to question the credibility of a response set. However, the confidence in classifying a score as invalid continues to increase with each additional error. Even at the most liberal conceivable cutoff (≤49), the TOMM detected only about half of the patients who failed other criterion measures. Therefore, it should never be used in isolation to determine performance validity.
Past studies have examined the ability of the Wisconsin Card Sorting Test (WCST) to discriminate valid from invalid performance in adults using both individual embedded validity indicators (EVIs) and multivariate approaches. This study was designed to investigate whether the two most stable of these indicators, failures to maintain set (FMS) and the logistic regression equation S-B, can be extended to pediatric populations. The classification accuracy of FMS and S-B was examined in a mixed clinical sample of 226 children aged 7 to 17 years (64.6% male; mean age = 13.6 years) against a combination of established performance validity tests (PVTs). The results show that at adult cutoffs, FMS and S-B produce an unacceptably high failure rate (33.2% and 45.6%) and low specificity (.55-.72), but an upward adjustment in cutoffs significantly improves classification accuracy. Defining Pass as <2 and Fail as ≥4 on FMS results in consistently good specificity (.89-.92) but low and variable sensitivity (.00-.33). Similarly, cutting the S-B distribution at 3.68 produces good specificity (.90-.92) but variable sensitivity (.06-.38). Passing or failing FMS or S-B is unrelated to age, gender, and IQ. The data from this study suggest that in a pediatric sample, adjusted cutoffs on the FMS and S-B ensure good specificity, but with low or variable sensitivity. Thus, they should not be used in isolation to determine the credibility of a response set. At the same time, they can make valuable contributions to pediatric neuropsychology by providing empirically supported, expedient, and cost-effective indicators to enhance performance validity assessment.