Abstract:Time-to-completion cutoffs are valuable additions to both tests. They can function as independent validity indicators or enhance the sensitivity of accuracy scores without requiring additional measures or extending standard administration time.
“…The current findings provide a more nuanced perspective than earlier recommendations for choosing the optimal cutoff on the RMT and WCT (Davis, 2014; Erdodi, Tyson, Shahein, et al., 2017; M. S. Kim et al., 2010).…”
Section: Discussion (citation type: mentioning; confidence: 59%)
“…Also, the WCT appears to be generally more robust to timing artifacts, with a cutoff of ≤47 maintaining specificity standards at both times. In terms of time to completion, the original ≥171-s cutoff (Erdodi, Tyson, Shahein, et al., 2017) performed well at both times. However, a more liberal cutoff (≥150 s) cleared the specificity threshold at Time 2.…”
This study was designed to investigate the effects of timing on the likelihood of failing the Recognition Memory Test-Words (RMT) and Word Choice Test (WCT). The RMT and WCT were administered in counterbalanced order either at the beginning (Time 1) or at the end (Time 2) of a test battery to a mixed clinical sample of 196 patients (M = 44.5 years, 55.1% female) medically referred for neuropsychological evaluation. The risk of failing the accuracy score was higher at Time 1 on both the RMT (relative risk [RR]: 1.44-1.64) and the WCT (RR: 1.21-1.50) across a range of cutoffs. Likewise, the risk of failing the time-to-completion score was higher at Time 1 on both the RMT (RR: 1.30-1.94) and the WCT (RR: 1.58-3.75). Established cutoffs failed to reach specificity standards at Time 1; more liberal cutoffs cleared specificity thresholds at Time 2. According to our findings, the RMT and WCT may be prone to false-positive errors at Time 1. Conversely, when administered at Time 2, existing cutoffs may have lower sensitivity, but they are highly specific to invalid performance. Timing should be considered during both test selection and the interpretation of RMT and WCT scores. Using conservative cutoffs for morning administrations and liberal cutoffs for afternoon administrations may be necessary to neutralize timing artifacts. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
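The relative risk (RR) values reported above are the ratio of the failure rate at Time 1 to the failure rate at Time 2. A minimal sketch of that computation, using hypothetical failure counts (not the study's actual data):

```python
def relative_risk(fail_t1, n_t1, fail_t2, n_t2):
    """Ratio of the failure rate at Time 1 to the failure rate at Time 2.

    RR > 1 means failure was more likely at Time 1 (early administration).
    """
    risk_t1 = fail_t1 / n_t1  # proportion failing at Time 1
    risk_t2 = fail_t2 / n_t2  # proportion failing at Time 2
    return risk_t1 / risk_t2


# Hypothetical counts: 30 of 98 patients fail at Time 1, 20 of 98 at Time 2
rr = relative_risk(30, 98, 20, 98)
print(round(rr, 2))  # 1.5
```

Values in the study's reported ranges (e.g., RR 1.21–3.75) would arise the same way, from the observed failure counts at each cutoff.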
“…As part of this study, the discriminative capacity of each CPT measure was calculated and compared with that found in Study 1. We expected stability in the discriminative capacity of the measures between the studies, enabling the establishment of cutoffs with adequate specificity (>90%) and sensitivity (>50%), as achieved with other CPTs (e.g., Erdodi, Tyson, et al., 2017).…”
Objective: The objective of this study was to assess the MOXO-d-CPT utility in detecting feigned ADHD and establish cutoffs with adequate specificity and sensitivity. Method: The study had two phases. First, using a prospective design, healthy adults who simulated ADHD were compared with healthy controls and ADHD patients who performed the tasks to the best of their ability ( n = 47 per group). Participants performed the MOXO-d-CPT and an established performance validity test (PVT). Second, the MOXO-d-CPT classification accuracy, employed in Phase 1, was retrospectively compared with archival data of 47 ADHD patients and age-matched healthy controls. Results: Simulators performed significantly worse on all MOXO-d-CPT indices than healthy controls and ADHD patients. Three MOXO-d-CPT indices (attention, hyperactivity, impulsivity) and a scale combining these indices showed adequate discriminative capacity. Conclusion: The MOXO-d-CPT showed promise for the detection of feigned ADHD and, pending replication, can be employed for this aim in clinical practice and ADHD research.
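The specificity (>90%) and sensitivity (>50%) benchmarks above reduce to counting classification outcomes at a candidate cutoff: sensitivity is the proportion of simulators flagged, specificity the proportion of credible examinees spared. A minimal sketch with invented score distributions (not MOXO-d-CPT data), assuming lower scores indicate failure:

```python
def cutoff_stats(invalid_scores, valid_scores, cutoff):
    """Sensitivity and specificity for a 'fail if score <= cutoff' rule.

    invalid_scores: scores from known non-credible performers (simulators)
    valid_scores:   scores from credible performers (patients/controls)
    """
    sens = sum(s <= cutoff for s in invalid_scores) / len(invalid_scores)
    spec = sum(s > cutoff for s in valid_scores) / len(valid_scores)
    return sens, spec


# Hypothetical score lists for illustration only
invalid = [40, 42, 45, 46, 48]
valid = [48, 49, 49, 50, 50]

sens, spec = cutoff_stats(invalid, valid, 47)
print(sens, spec)  # 0.8 1.0 — this cutoff would clear both benchmarks
```

Scanning candidate cutoffs with this kind of tally is how a study selects the most liberal cutoff that still keeps specificity above the 90% standard.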
“… Note . EVI: Embedded validity indicators; PVT: Performance Validity Test; RCFT: Rey Complex Figure Test; Y/N Rec: Yes/No recognition raw score; FCR: Forced Choice Recognition raw score; TOMM-1: Trial 1 on the Test of Memory Malingering ( Denning, 2012 ; Fazio et al., 2017 ; Greve et al., 2006 , 2009 ; Jones, 2013 ; Kulas et al., 2014 ; Martin et al., 2019; Powell et al., 2004 ; Rai & Erdodi, 2019 ; Webber et al., 2018 ); WCT: Word Choice Test [Fail defined as accuracy score ≤47 ( Barhon et al., 2015 ; Davis, 2014 ; Erdodi, Kirsch, et al., 2014 ; Pearson, 2009 ) or time-to-completion ≥156 seconds (Erdodi & Lichtenstein, 2020; Erdodi, Tyson, et al., 2017 ; Zuccato et al., 2018 )]; EI-5 MEM: Erdodi Index Five – Memory ( Fail defined as ≥4); EI-5 PSP: Erdodi Index Five – Processing Speed ( Fail defined as ≥4); BR Fail: Base rate of failure (% of the sample that failed a given cutoff); SENS: Sensitivity; SPEC: Specificity. …”
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFT FCR remained specific (.84–1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT FCR was more sensitive to examinees’ natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.