Stroke survivors are at particular risk of cognitive decline. Three-month dementia prevalence is ≥30%, and even minor stroke events have cognitive sequelae.1,2 Poststroke cognitive impairment is associated with increased mortality, disability, and institutionalization.3 The importance of cognitive change is highlighted by stroke survivors themselves: in a national priority-setting exercise, cognitive impairment was voted the single most important topic for stroke research.4 A first step in the management of cognitive problems is recognition and diagnosis. Informal clinician assessment will miss important cognitive problems,5 and formal cognitive testing is recommended.6-8 The ideal would be expert, multidisciplinary assessment informed by comprehensive investigations, but this approach is not feasible at a population level. In practice, a 2-step system is adopted, with baseline cognitive testing used for screening or triage, and specialist assessment to define the cognitive problem offered depending on the results.

Although there is general agreement on the merits of poststroke cognitive assessment, there is no consensus on a preferred testing strategy.6-8 Various cognitive screening tools are available, with substantial variation in the test used.9,10 The clinical meaning of cognitive problems after stroke will vary according to test context. Cognitive impairment diagnosed in the first days poststroke may reflect a mix of delirium, stroke-specific impairments, and prestroke cognitive decline.2,11,12 In the longer term, assessments aim to make or refute a dementia diagnosis. Common to all test situations is a final diagnosis of presence/absence of clinically important impairments. A screening assessment should detect this clinical syndrome of all-cause, poststroke multidomain cognitive impairment.

Background and Purpose-Guidelines recommend screening stroke survivors for cognitive impairments.
We sought to collate published data on the test accuracy of cognitive screening tools. Methods-The index test was any direct cognitive screening assessment compared against a reference standard diagnosis of (undifferentiated) multidomain cognitive impairment/dementia. We used a sensitive search statement to search multiple cross-disciplinary databases from inception to January 2014. Titles, abstracts, and articles were screened by independent researchers. We described risk of bias using the Quality Assessment of Diagnostic Accuracy Studies tool and reporting quality using the Standards for Reporting of Diagnostic Accuracy guidance. Where data allowed, we pooled test accuracy using bivariate methods. Results-From 19 182 titles, we reviewed 241 articles, of which 35 were suitable for inclusion. There was substantial heterogeneity: 25 differing screening tests; differing stroke settings (acute stroke, n=11 articles); and differing reference standards (neuropsychological battery, n=21 articles). One article was graded low risk of bias; common issues were case-control methodology (n=7 articles) and missing data (n=22 articles).
Background and Purpose-Guidelines recommend cognitive screening in acute stroke. Various instruments are available, with no consensus on a preferred tool. We aimed to describe the test accuracy of brief screening tools for diagnosis of cognitive impairment and delirium in acute stroke. Methods-We collected data on sequential stroke unit admissions in a single center. Four assessors trained in cognitive testing independently performed screening and reference tests. Brief assessments comprised the following: 10- and 4-point Abbreviated Mental Test (AMT-10; AMT-4); 4-A Test (4AT); Clock Drawing Test (CDT); Cog-4; and Glasgow Coma Scale (GCS). We also recorded the multidisciplinary team's informal review using a single question (SQ). We compared against reference standards of the Montreal Cognitive Assessment (MoCA) and the Confusion Assessment Method for delirium, using usual diagnostic cutpoints. For MoCA, we described the effects of lowering the diagnostic threshold to MoCA <24 and MoCA <20. We described sensitivity, specificity, and positive and negative predictive values. Results-Over a 10-week period, 111 subjects had cognitive assessment data. Subjects were 50% male (n=55), and median age was 74 years (interquartile range, 64-85). AMT-4, AMT-10, and SQ all had excellent (1.00) specificity for detection of cognitive impairment, although sensitivity was poor (all <0.60). The 4AT had the greatest sensitivity for detecting delirium.
Background and Purpose-International guidelines recommend cognitive and mood assessments for stroke survivors; these assessments also have use in clinical trials. However, there is no consensus on the optimal assessment tool(s). We aimed to describe use of cognitive and mood measures in contemporary published stroke trials. Methods-Two independent, blinded assessors reviewed high-impact journals representing general medicine (n=4), gerontology/rehabilitation (n=3), neurology (n=4), psychiatry (n=4), psychology (n=4), and stroke (n=3), January 2000 to October 2011 inclusive. Journals were hand-searched for relevant, original research articles that described cognitive/mood assessments in human stroke survivors. Data were checked for relevance by an independent clinician and clinical psychologist. Results-Across 8826 stroke studies, 488 (6%) included a cognitive or mood measure. Of these 488 articles, the total number with a cognitive assessment was 408 (83%) and with a mood assessment tool 247 (51%). The total number of different assessments used was 367 (cognitive, 300; mood, 67). The most commonly used cognitive measure was Folstein's Mini-Mental State Examination (n=180 articles, 37% of all articles with cognitive/mood outcomes); the most commonly used mood assessment was the Hamilton Rating Scale of Depression (n=43 [9%]). Conclusions-Cognitive and mood assessments are infrequently used in stroke research. When used, there is substantial heterogeneity, and certain prevalent assessment tools may not be suited to stroke cohorts. Research and guidance on the optimal cognitive/mood assessment strategies for clinical practice and trials is required. (Stroke. 2012;43:1678-1680.)