Background: Computerized assessments are already used to derive accurate and reliable measures of cognitive function. Web-based cognitive assessment could improve the accessibility and flexibility of research and clinical assessment, widen participation, and promote research recruitment while simultaneously reducing costs. However, differences in context may influence task performance.

Objective: This study aims to determine the comparability of an unsupervised, web-based administration of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person, lab-based assessment, using a within-subjects counterbalanced design. The study aims to test (1) reliability, quantifying the relationship between measurements across settings using correlational approaches; (2) equivalence, the extent to which tests in different settings produce similar overall results; and (3) agreement, quantifying acceptable limits to bias and differences between measurement environments.

Methods: A total of 51 healthy adults (32 women and 19 men; mean age 36.8, SD 15.6 years) completed 2 testing sessions, held on average 1 week apart (SD 4.5 days). Assessments included equivalent tests of emotion recognition (emotion recognition task [ERT]), visual recognition (pattern recognition memory [PRM]), episodic memory (paired associate learning [PAL]), working memory and spatial planning (spatial working memory [SWM] and one touch stockings of Cambridge), and sustained attention (rapid visual information processing [RVP]). Participants were randomly allocated to one of two groups: either assessed in person in the laboratory first (n=33) or completing unsupervised web-based assessments on their personal computing systems first (n=18). Performance indices (errors, correct trials, and response sensitivity) and median reaction times were extracted. Intraclass and bivariate correlations examined intersetting reliability, linear mixed models and Bayesian paired sample t tests tested for equivalence, and Bland-Altman plots examined agreement.

Results: Intraclass correlation (ICC) coefficients ranged from ρ=0.23 to ρ=0.67, with high correlations in 3 performance indices (from the PAL, SWM, and RVP tasks; ρ≥0.60). High ICC values were also seen for reaction time measures from 2 tasks (the PRM and ERT tasks; ρ≥0.60). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance indices did not differ between assessment settings and generally showed satisfactory agreement.

Conclusions: Our findings support the comparability of CANTAB performance indices (errors, correct trials, and response sensitivity) in unsupervised, web-based assessments with in-person, lab-based tests. Reaction times are not as easily translatable from in-person to web-based testing, likely due to variations in computer hardware. The results underline the importance of examining more than one index to ascertain comparability, as high correlations can present in the context of systematic differences, which are a product of differences between measurement environments. Further work is now needed to examine web-based assessments in clinical populations and in larger samples to improve sensitivity for detecting subtler differences between test settings.
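The reliability and agreement analyses named above (ICC and Bland-Altman limits of agreement) can be sketched as follows. This is a minimal illustration only: the data are synthetic, and the function names and the choice of the ICC(2,1) absolute-agreement form are assumptions, not the study's actual code, data, or ICC model.

```python
import numpy as np

# Hypothetical paired scores, one column per assessment setting
# (e.g., lab vs. web). Synthetic data for illustration only.
rng = np.random.default_rng(0)
lab = rng.normal(50, 10, size=40)        # in-person scores
web = lab + rng.normal(0, 3, size=40)    # web scores with added noise

def icc_2_1(scores):
    """Two-way random-effects, absolute-agreement ICC(2,1)
    from a (subjects x settings) matrix, via mean squares."""
    n, k = scores.shape
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)               # between-subjects mean square
    msc = ss_cols / (k - 1)               # between-settings mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements."""
    diffs = b - a
    bias = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)
    return bias, bias - half_width, bias + half_width

icc = icc_2_1(np.column_stack([lab, web]))
bias, lo, hi = bland_altman(lab, web)
print(f"ICC(2,1) = {icc:.2f}, bias = {bias:.2f}, LoA = [{lo:.2f}, {hi:.2f}]")
```

Reporting bias and limits of agreement alongside the ICC matters here for exactly the reason the abstract notes: a measure can correlate highly across settings while still carrying a systematic offset (as with reaction times), which only the agreement analysis exposes.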
Background: Assessment of cognitive function is an important component of differential diagnosis in Alzheimer's disease and dementia and is critical to characterising the impact of the disease. Objective assessment of function provides real-world evidence for assessing the effectiveness of current standards of care and for evaluating interventions designed to improve outcomes. However, sensitive, reliable tools for administering standardised tests at scale have been lacking to date.

Method: Digital tools, providing near-patient assessments in the community or at home, are one way to meet the demands of patient characterisation across a range of resource settings. The Cambridge Neuropsychological Test Automated Battery (CANTAB) is a set of 25 computerised assessments designed to assess cognition across a broad range of domains relevant to neurological and psychiatric conditions. A number of tests from this battery have been integrated into medical device software to assess the impact of neurological and neurodegenerative disease. These tools have been deployed in community, primary, and secondary health care settings in relatively resource-rich healthcare systems (primarily the UK and US). We are exploring whether such an approach can be generalised to lower-middle-income countries in South Asia.

Result: We will present the performance of 4000 healthy participants recruited across nine regions of India on CANTAB assessments, including tests of episodic memory (Paired Associates Learning (PAL)), working memory (Spatial Working Memory (SWM)), and attention (Matching to Sample (MTS)). Evidence from this study supports the suitability of our platform for delivering cognitive assessments in health care systems outside the UK. We will contrast the performance of healthy participants on PAL and SWM with recent normative data collections across the UK and US, demonstrating similarity of performance and the effectiveness of the assessment platforms. The demographic diversity of these normative collections supports the need to partner these tools with appropriate recruitment and operational initiatives to ensure accessibility.

Conclusion: We will reflect on how this evidence guides our current and future plans for making scientifically robust cognitive assessment tools globally available, both for increasing access to treatment and for basic research and drug development.