We conducted a meta-analytic review of neuropsychological studies of mild head trauma (MHT). Studies were included if they met these criteria: patients studied at least 3 months after MHT; patients selected because of a history of MHT rather than because they were symptomatic; and an attrition rate of less than 50% for longitudinal studies. Studies of children were not considered. We found a total of 8 published papers with 11 samples that met these criteria. Using the g statistic, the overall effect size of 0.07 was nonsignificant, but the d statistic yielded an effect size of 0.12, p < .03. Measures of attention showed the largest effect, g = 0.17, p < .02, and d = 0.20, p < .006. Severity of injury accounted for far more variance than did specific neuropsychological domain, however. The small effect size suggests that the maximum prevalence of persistent neuropsychological deficit is likely to be small and that neuropsychological assessment is likely to have a positive predictive value of less than 50%. Consequently, clinicians will more likely be correct when not diagnosing brain injury than when diagnosing a brain injury in cases with chronic disability after MHT.
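The g and d statistics reported above are the two standard effect size estimators for a two-group mean difference: Cohen's d scales the difference by the pooled standard deviation, and Hedges' g applies a small-sample bias correction to d. A minimal sketch (the group means, SDs, and sample sizes below are illustrative values, not data from the reviewed studies):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g: Cohen's d with the usual small-sample bias correction."""
    d = cohens_d(mean1, mean2, sd1, sd2, n1, n2)
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# Hypothetical control vs. MHT group on a single attention measure
d = cohens_d(mean1=50.0, mean2=48.0, sd1=10.0, sd2=10.0, n1=30, n2=30)
g = hedges_g(mean1=50.0, mean2=48.0, sd1=10.0, sd2=10.0, n1=30, n2=30)
print(round(d, 3), round(g, 3))
```

Because the correction factor is always less than 1, g is slightly smaller than d in small samples, which is one reason the two statistics can straddle a significance threshold as in the results above.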
Normative studies of variability in performance by healthy adults on neuropsychological batteries are reviewed. Regarding test score scatter, normative participants often have large discrepancies between their best and worst scores. When "abnormality" was defined as a score more than one standard deviation below the mean, in test batteries with at least 20 measures the great majority of normative participants had one or more abnormalities. Restricting samples to participants with above-average IQ or educational levels and using more conservative definitions of abnormality, such as two standard deviations below the mean, did not eliminate the presence of abnormal scores. We conclude that abnormal performance on some proportion of neuropsychological tests in a battery is psychometrically normal. Abnormalities do not necessarily signify the presence of acquired brain dysfunction, because low scores and large intraindividual variability often are characteristic of healthy adults. We recommend that test battery developers provide data on the amount of variability in normal samples and also provide base rate tables with false positive rates that can be used clinically when interpreting test performance.
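The finding that most healthy participants show at least one "abnormal" score in a long battery follows from basic probability. A sketch under the simplifying assumption of independent, normally distributed test scores (real batteries have correlated measures, which lowers the rate somewhat, so treat this as an upper-bound illustration rather than the reviewed studies' method):

```python
from statistics import NormalDist

def prob_at_least_one_abnormal(n_tests, cutoff_sd):
    """P(at least one score below cutoff) for n independent standard-normal scores."""
    p_low = NormalDist().cdf(-cutoff_sd)        # P(a single score falls below the cutoff)
    return 1 - (1 - p_low) ** n_tests           # complement of "no score is abnormal"

# 20 measures, "abnormal" = more than 1 SD below the mean
print(round(prob_at_least_one_abnormal(20, 1.0), 3))

# A stricter 2 SD cutoff still leaves a substantial rate across 20 measures
print(round(prob_at_least_one_abnormal(20, 2.0), 3))
```

Under these assumptions, roughly 97% of healthy examinees would produce at least one score below the 1 SD cutoff across 20 measures, and about 37% would still do so at the 2 SD cutoff, consistent with the abstract's conclusion that isolated low scores are psychometrically normal.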
The meta-analytic findings of Binder et al. (1997) and Frencham et al. (2005) showed that the neuropsychological effect of mild traumatic brain injury (mTBI) was negligible in adults by 3 months post injury. Pertab et al. (2009) reported that verbal paired associates, coding tasks, and digit span yielded significant differences between mTBI and control groups. We re-analyzed data from the 25 studies used in the prior meta-analyses, correcting statistical and methodological limitations of previous efforts, and analyzed the chronicity data by discrete epochs. Three months post injury, the effect size of -0.07 was not statistically different from zero and was similar to those found in several other meta-analyses (Belanger et al., 2005; Schretlen & Shapiro, 2003). The effect size 7 days post injury was -0.39. The effect of mTBI immediately post injury was largest on the Verbal and Visual Memory domains. However, by 3 months post injury all domains had improved to non-significant effect sizes. These findings indicate that mTBI has an initial small effect on neuropsychological functioning that dissipates quickly. The evidence of recovery in the present meta-analysis is consistent with the previous conclusions of both Binder et al. and Frencham et al. Our findings may not apply to people with a history of multiple concussions or complicated mTBIs.
This joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology sets forth our position on appropriate standards and conventions for computerized neuropsychological assessment devices (CNADs). In this paper, we first define CNADs and distinguish them from examiner-administered neuropsychological instruments. We then set forth position statements on eight key issues relevant to the development and use of CNADs in the healthcare setting. These statements address (a) device marketing and performance claims made by developers of CNADs; (b) identification of appropriate end-users for administration and interpretation of CNADs; (c) technical (hardware/software/firmware) issues; (d) privacy, data security, identity verification, and testing environment; (e) psychometric development issues, especially reliability and validity; (f) cultural, experiential, and disability factors affecting examinee interaction with CNADs; (g) use of computerized testing and reporting services; and (h) the need for checks on response validity and effort in the CNAD environment. This paper is intended to provide guidance for test developers and users of CNADs that will promote accurate and appropriate use of computerized tests in a way that maximizes clinical utility and minimizes risks of misuse. The positions taken in this paper are put forth with an eye toward balancing the need to make validated CNADs accessible to otherwise underserved patients with the need to ensure that such tests are developed and utilized competently, appropriately, and with due concern for patient welfare and quality of care.