Background and purpose: Accurate aphasia diagnosis is important in stroke care. A wide range of language tests is available, including informal assessments, tests developed by healthcare institutions, and commercially published tests sold in pre-packaged kits. The psychometrics of these tests are often reported online or within the purchased test manuals rather than in the peer-reviewed literature; consequently, the diagnostic capabilities of these measures have not been systematically evaluated. This review aimed to identify both commercial and non-commercial language tests, as well as tests used in stroke care, and to examine the capability of all identified measures to diagnose aphasia in stroke populations.

Methods: Language tests were identified through a systematic search of 161 publisher databases, professional and resource websites, and language tests reported to be used in stroke care. Two independent reviewers evaluated test manuals or associated resources for cohort or cross-sectional studies reporting the tests’ diagnostic capabilities (sensitivity, specificity, likelihood ratios or diagnostic odds ratios) in differentiating aphasic from non-aphasic stroke populations.

Results: Fifty-six tests met the study eligibility criteria. Six “non-specialist” brief screening tests reported sensitivity and specificity information; however, none of these measures was reported to meet the specific diagnostic needs of speech pathologists. The 50 remaining measures either did not report validity data (n = 7); did not compare patient test performance with a comparison group (n = 17); included non-stroke participants within their samples (n = 23); or did not compare stroke patient performance against a language reference standard (n = 3). Diagnostic sensitivity analysis was completed for six speech pathology measures (WAB, PICA, CADL-2, ASHA-FACS, Adult FAVRES and EFA-4); however, all of these studies compared aphasic performance with that of non-stroke healthy controls and were consequently excluded from the review.

Conclusions: No speech pathology test was found that reported diagnostic data for identifying aphasia in stroke populations. A diagnostically validated post-stroke aphasia test is needed.
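The diagnostic statistics the reviewers extracted (sensitivity, specificity, likelihood ratios and diagnostic odds ratios) all derive from a 2×2 table of index-test results against the reference standard. A minimal Python sketch, using hypothetical counts that are not drawn from any reviewed study:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy statistics from a 2x2 table
    (rows = index test result, columns = reference-standard diagnosis)."""
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    dor = lr_pos / lr_neg                      # diagnostic odds ratio = (tp*tn)/(fp*fn)
    return sensitivity, specificity, lr_pos, lr_neg, dor

# Hypothetical counts for illustration only
sens, spec, lr_pos, lr_neg, dor = diagnostic_metrics(tp=40, fp=5, fn=10, tn=45)
print(sens, spec, lr_pos, lr_neg, dor)  # 0.8, 0.9, LR+ 8.0, LR- 0.22, DOR 36.0
```

A test can have a high specificity but still be diagnostically weak if its sensitivity is low, which is why studies that report only one of the two (or omit a non-aphasic stroke comparison group) were excluded from the review.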
Improvement is needed in the methodological rigour of development and reporting within clinical guidelines, and in aphasia-specific recommendations within multidisciplinary stroke clinical guidelines.
Purpose: To describe the development and determine the diagnostic accuracy of the Brisbane Evidence-Based Language Test in detecting aphasia.

Methods: Consecutive acute stroke admissions (n = 100; mean age = 66.49 years) participated in a single (assessor) blinded cross-sectional study. The index assessment was the ≤45 min Brisbane Evidence-Based Language Test, which is further divided into four 15-25 min Short Tests: two Foundation Tests (severe impairment), a Standard Test (moderate) and a High Level Test (mild). The independent reference standard included the Language Screening Test, Aphasia Screening Test, Comprehensive Aphasia Test and/or Measure for Cognitive-Linguistic Abilities, treating team diagnosis and aphasia referral post ward discharge.

Results: A Brisbane Evidence-Based Language Test cut-off score of 157 demonstrated 80.8% sensitivity (LR+ = 10.9) and 92.6% specificity (LR− = 0.21). All Short Tests reported specificities of ≥92.6%. Foundation Tests I (cut-off 61) and II (cut-off 51) reported lower sensitivity (≥57.5%) given their focus on severe conditions. The Standard Test (cut-off 90) and High Level Test (cut-off 78) reported sensitivities of ≥72.6%.

Conclusion: The Brisbane Evidence-Based Language Test is a sensitive assessment of aphasia. Diagnostically, the High Level Test recorded the highest psychometric capabilities of the Short Tests, equivalent to the full Brisbane Evidence-Based Language Test. The test is available for download from brisbanetest.org.

IMPLICATIONS FOR REHABILITATION
• Aphasia is a debilitating condition and accurate identification of language disorders is important in healthcare.
• Language assessment is complex and the accuracy of assessment procedures is dependent upon a variety of factors.
• The Brisbane Evidence-Based Language Test is a new evidence-based language test specifically designed to adapt to varying patient need, clinical contexts and co-occurring conditions.
• In this cross-sectional validation study, the Brisbane Evidence-Based Language Test was found to be a sensitive measure for identifying aphasia in stroke.
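The likelihood ratios quoted for the full test follow directly from its reported sensitivity and specificity. A short Python check of the published figures (80.8% sensitivity, 92.6% specificity at the 157 cut-off):

```python
def positive_lr(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity): how much a positive
    result raises the odds of aphasia."""
    return sensitivity / (1 - specificity)

def negative_lr(sensitivity, specificity):
    """LR- = (1 - sensitivity) / specificity: how much a negative
    result lowers the odds of aphasia."""
    return (1 - sensitivity) / specificity

print(round(positive_lr(0.808, 0.926), 1))  # 10.9, matching the reported LR+
print(round(negative_lr(0.808, 0.926), 2))  # 0.21, matching the reported LR-
```

An LR+ above 10 and an LR− near 0.2 are conventionally read as a large shift in post-test probability for a positive result and a moderate shift for a negative one, consistent with the abstract's conclusion that the test is a sensitive measure of aphasia.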