This study was designed to introduce and validate a forced choice recognition trial to the Rey Complex Figure Test (FCR RCFT). Healthy undergraduate students at a midsized Canadian university were randomly assigned to the control (n = 80) or experimental malingering (n = 60) conditions. All participants were administered a brief battery of neuropsychological tests. The FCR RCFT had good overall classification accuracy (area under the curve: .79–.88) against various criterion variables. The conservative cutoff (≤16) was highly specific (.93–.96) but not very sensitive (.38–.51). Conversely, the liberal cutoff (≤18) was sensitive (.57–.72) but less specific (.88–.90). The FCR RCFT provided unique information about performance validity above and beyond the existing yes/no recognition trial. Combining multiple RCFT validity indices improved classification accuracy. The utility of previously published validity indicators embedded in the RCFT was also replicated. The FCR RCFT extends the growing trend of enhancing the clinical utility of widely used standard memory tests by developing a built-in validity check. Multivariate models were superior to univariate cutoffs. Although the FCR RCFT performed well in the current sample, replication in clinical/forensic patients is needed to establish its utility in differentiating genuine memory deficits from noncredible responding.
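The sensitivity/specificity tradeoff between the conservative (≤16) and liberal (≤18) cutoffs reported above can be illustrated with a minimal sketch. The scores below are hypothetical, invented for demonstration only; the fail-if-score-at-or-below-cutoff rule is the standard convention for cutoffs expressed as ≤k.

```python
def classification_stats(control_scores, malingering_scores, cutoff):
    """Return (sensitivity, specificity) for a fail-if-score<=cutoff rule."""
    # Sensitivity: proportion of the simulated-malingering group correctly flagged.
    sensitivity = sum(s <= cutoff for s in malingering_scores) / len(malingering_scores)
    # Specificity: proportion of the honest control group correctly passed.
    specificity = sum(s > cutoff for s in control_scores) / len(control_scores)
    return sensitivity, specificity

# Hypothetical recognition scores, for demonstration only.
controls = [21, 19, 20, 17, 22, 18, 21, 20]
simulators = [14, 16, 18, 12, 19, 15, 17, 13]

sens_cons, spec_cons = classification_stats(controls, simulators, cutoff=16)
sens_lib, spec_lib = classification_stats(controls, simulators, cutoff=18)
# Raising the cutoff catches more simulators (higher sensitivity)
# at the cost of failing more honest controls (lower specificity).
```

Even in this toy data, the pattern the abstract describes emerges: the liberal cutoff gains sensitivity but loses specificity relative to the conservative one.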
Although PVT failure rates varied as a function of PVTs and cutoffs, between a fifth and a third of the sample failed ≥1 PVT, consistent with high initial estimates of invalid performance in this population. Embedded PVTs had notably higher failure rates than free-standing PVTs. Assuming optimal effort in research using students as participants without a formal assessment of performance validity introduces a potentially significant confound in the study design.
Results support the clinical utility of existing cutoffs. Given the relatively high base rate of failure even in the control group (5–15%), and the perfect specificity of CIM ≤9 and BNT-15 ≤11 to noncredible responding, relabeling this range of performance as "Abnormal" instead of "Impaired" would better capture the uncertainty in its clinical interpretation.
Poor effort by examinees during neuropsychological testing has a profound effect on test performance. Although neuropsychological experiments often utilize healthy undergraduate students, the test-taking effort of this population has not been investigated previously. The purpose of the present study was to determine whether undergraduate students exercise variable effort in neuropsychological testing. During two testing sessions, participants (N = 36) were administered three Symptom Validity Tests (SVTs), the Test of Memory Malingering, the Dot Counting Test, and the Victoria Symptom Validity Test (VSVT), along with various neuropsychological tests. Analyses revealed 55.6% of participants in Session 1 and 30.8% of participants in Session 2 exerted poor effort on at least one SVT. Poor effort on the SVTs was significantly correlated with poor performance on various neuropsychological tests, and there was support for the temporal stability of effort. These preliminary results suggest that the base rate of suboptimal effort in a healthy undergraduate population is quite high. Accordingly, effort may serve as a source of variance in neuropsychological research when using undergraduate students.
Objective
The objective of the present study was to examine the neurocognitive profiles associated with limited English proficiency (LEP).
Method
A brief neuropsychological battery including measures with high (HVM) and low verbal mediation (LVM) was administered to 80 university students: 40 native speakers of English (NSEs) and 40 with LEP.
Results
Consistent with previous research, individuals with LEP performed more poorly on HVM measures and equivalently to NSEs on LVM measures, with some notable exceptions.
Conclusions
Low scores on HVM tests should not be interpreted as evidence of acquired cognitive impairment in individuals with LEP, because these measures may systematically underestimate cognitive ability in this population. These findings have important clinical and educational implications.