Social cognition encompasses a range of cognitive processes that help individuals understand how others think and feel. There is emerging evidence that social cognitive deficits may represent a transdiagnostic issue, potentially serving as a marker of neurological abnormality. We performed an electronic database search to identify published, peer-reviewed meta-analyses comparing facial emotion recognition or theory of mind task performance between individuals meeting clinical criteria for a psychiatric, neurological or developmental condition and healthy controls. We identified 31 eligible meta-analyses examining performance across relevant tasks in 30 different clinical populations. The results suggest that social cognitive deficits are a core cognitive phenotype of many clinical conditions. Across the clinical groups, deficits in social cognitive domains were broadly similar in magnitude to those previously reported for more established aspects of cognition, such as memory and executive function. There is a need to clarify the 'real world' impact of these deficits and to develop effective transdiagnostic interventions for those who are adversely affected.
Background: Computerized assessments are already used to derive accurate and reliable measures of cognitive function. Web-based cognitive assessment could improve the accessibility and flexibility of research and clinical assessment, widen participation, and promote research recruitment while simultaneously reducing costs. However, differences in context may influence task performance.

Objective: This study aims to determine the comparability of an unsupervised, web-based administration of the Cambridge Neuropsychological Test Automated Battery (CANTAB) with a typical in-person, lab-based assessment, using a within-subjects counterbalanced design. The study aims to test (1) reliability, quantifying the relationship between measurements across settings using correlational approaches; (2) equivalence, the extent to which test results in different settings produce similar overall results; and (3) agreement, quantifying acceptable limits to bias and differences between measurement environments.

Methods: A total of 51 healthy adults (32 women and 19 men; mean age 36.8, SD 15.6 years) completed 2 testing sessions, on average 1 week apart (SD 4.5 days). Assessments included equivalent tests of emotion recognition (emotion recognition task [ERT]), visual recognition (pattern recognition memory [PRM]), episodic memory (paired associate learning [PAL]), working memory and spatial planning (spatial working memory [SWM] and one touch stockings of Cambridge), and sustained attention (rapid visual information processing [RVP]). Participants were randomly allocated to one of two groups, either assessed in person in the laboratory first (n=33) or with unsupervised, web-based assessments on their personal computing systems first (n=18). Performance indices (errors, correct trials, and response sensitivity) and median reaction times were extracted. Intraclass and bivariate correlations examined intersetting reliability, linear mixed models and Bayesian paired sample t tests tested for equivalence, and Bland-Altman plots examined agreement.

Results: Intraclass correlation (ICC) coefficients ranged from ρ=0.23 to ρ=0.67, with high correlations for 3 performance indices (from the PAL, SWM, and RVP tasks; ρ≥0.60). High ICC values were also seen for reaction time measures from 2 tasks (the PRM and ERT tasks; ρ≥0.60). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance indices did not differ between assessment settings and generally showed satisfactory agreement.

Conclusions: Our findings support the comparability of CANTAB performance indices (errors, correct trials, and response sensitivity) between unsupervised, web-based assessments and in-person, lab-based tests. Reaction times do not translate as readily from in-person to web-based testing, likely because of variations in computer hardware. The results underline the importance of examining more than one index to ascertain comparability, as high correlations can be present alongside systematic differences that are a product of the measurement environments. Further work is now needed to examine web-based assessments in clinical populations and in larger samples to improve sensitivity for detecting subtler differences between test settings.
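To make the reliability and agreement analyses described above concrete, the sketch below computes a two-way random-effects intraclass correlation (ICC(2,1)) and Bland-Altman bias with 95% limits of agreement for a single outcome measured in two settings. This is an illustrative example, not the study's analysis code: the data are simulated, the variable names (lab, web) and the ICC(2,1) variant are assumptions, and the study's linear mixed models and Bayesian t tests are not reproduced here.

# Illustrative sketch (not the authors' code): ICC(2,1) and Bland-Altman
# agreement statistics for one outcome measured in two settings.
import numpy as np

rng = np.random.default_rng(0)
n = 51                                        # subjects, matching the study's sample size
true_score = rng.normal(20, 5, n)             # latent ability on a hypothetical scale
lab = true_score + rng.normal(0, 2, n)        # in-person measurement
web = true_score + 1.0 + rng.normal(0, 2, n)  # web measurement with a systematic +1 bias

def icc_2_1(x, y):
    """ICC(2,1): two-way random effects, absolute agreement, single rating."""
    data = np.column_stack([x, y])            # n subjects x k=2 settings
    n, k = data.shape
    grand = data.mean()
    ms_r = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_c = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)  # settings
    ss_e = np.sum((data - data.mean(axis=1, keepdims=True)
                        - data.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def bland_altman(x, y):
    """Mean bias and 95% limits of agreement for paired measurements."""
    d = y - x
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

print(f"ICC(2,1) = {icc_2_1(lab, web):.2f}")
bias, (lo, hi) = bland_altman(lab, web)
print(f"Bland-Altman bias = {bias:.2f}, 95% limits of agreement = ({lo:.2f}, {hi:.2f})")

Note how the example reproduces the pattern highlighted in the results: a high ICC can coexist with a systematic shift between settings (here, the simulated web scores are offset by +1), which correlational reliability alone would not reveal; the Bland-Altman bias and limits of agreement expose it.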
Background: Assessment of cognitive function is an important component of differential diagnosis in Alzheimer’s disease and dementia and is critical to characterising the impact of the disease. Objective assessment of function provides real-world evidence for assessing the effectiveness of current standards of care and for evaluating interventions designed to improve outcomes. However, sensitive, reliable tools for administering standardised tests at scale have been lacking to date.

Method: Digital tools providing near-patient assessments in the community or at home are one way to meet the demands of patient characterisation across a range of resource settings. The Cambridge Neuropsychological Test Automated Battery (CANTAB) is a set of 25 computerised assessments designed to assess cognition across a broad range of domains relevant to neurological and psychiatric conditions. A number of tests from this battery have been integrated into medical device software to assess the impact of neurological and neurodegenerative disease. These tools have been deployed in community, primary and secondary health care settings in relatively resource-rich healthcare systems (primarily the UK and US). We are exploring whether such an approach can be generalised to lower-middle-income countries in South Asia.

Results: We will present the performance of 4000 healthy participants recruited across nine regions of India on CANTAB assessments, including episodic memory (Paired Associates Learning (PAL)), working memory (Spatial Working Memory (SWM)) and attention (Matching to Sample (MTS)). Evidence from this study supports the suitability of our platform for delivering cognitive assessments in health care systems outside the UK. We will contrast the performance of healthy participants on PAL and SWM with recent normative data collections across the UK and US, demonstrating the similarity of performance and the effectiveness of the assessment platforms. The demographic diversity of these normative collections supports the need to partner these tools with appropriate recruitment and operational initiatives to ensure accessibility.

Conclusion: We will reflect on how this evidence guides our current and future plans for making scientifically robust cognitive assessment tools globally available, both for increasing access to treatment and in the context of basic research and drug development.
BACKGROUND: Normative cognitive data can distinguish impairment from healthy cognitive function and pathological decline from normal ageing. Traditional methods for deriving normative data typically require extremely large samples of healthy participants, stratifying test variation by pre-specified age groups and key demographic features (age, sex, education). Linear regression approaches can provide normative data from more sparsely sampled datasets, but the non-normal distributions of many cognitive test results may violate model assumptions, limiting generalisability.

OBJECTIVE: The aims of the study are to describe a novel methodological approach for generating normative cognitive data and to examine the sensitivity of this approach in comparison with other methods for deriving normative data.

METHODS: The current study proposes a novel Bayesian framework for normative data generation. Participants (n=728; 368 male and 360 female; age 18-75 years) completed the Cambridge Neuropsychological Test Automated Battery via the research crowdsourcing website Prolific.ac. Participants completed tests of visuospatial working memory (Spatial Working Memory test), visual episodic memory (Paired Associate Learning test) and sustained attention (Rapid Visual Information Processing test). Test outcomes were modelled as a function of age using Bayesian generalised linear models, which draw from a wide family of distributions and yield full posterior distributions fitted to the authentic data.

RESULTS: Markov chain Monte Carlo algorithms generated a large synthetic dataset from the posterior distributions for each outcome measure, capturing normative distributions of cognition as a function of age, sex and education. Comparison with stratified and linear regression methods showed converging results, with the Bayesian approach producing similar age, sex and education trends in the data and similar categorisation of individual performance levels.

CONCLUSIONS: This study documents a novel, reproducible and robust method for describing normative cognitive performance with ageing using a large dataset.
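The sketch below illustrates the modelling approach described above in a minimal, self-contained form: a Bayesian generalised linear model with a non-normal (Poisson) likelihood for a count-valued test outcome, fitted with a simple random-walk Metropolis sampler, from which a synthetic normative dataset is then drawn. It is an assumption-laden toy, not the study's implementation: the data are simulated, only age is modelled (the study also captured sex and education trends), and any MCMC engine could replace the hand-rolled sampler.

# Illustrative sketch (not the authors' implementation): Bayesian Poisson GLM
# for error counts as a function of age, fitted by random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(1)
n = 728                                            # matches the study's sample size
age = rng.uniform(18, 75, n)
age_z = (age - age.mean()) / age.std()
errors = rng.poisson(np.exp(1.5 + 0.3 * age_z))    # simulated: errors rise with age

def log_post(beta):
    """Log posterior: Poisson GLM with log link and Normal(0, 5) priors."""
    mu = np.exp(beta[0] + beta[1] * age_z)
    log_lik = np.sum(errors * np.log(mu) - mu)     # Poisson log-likelihood (up to a constant)
    log_prior = -0.5 * np.sum((beta / 5.0) ** 2)
    return log_lik + log_prior

# Random-walk Metropolis over the two regression coefficients
beta = np.zeros(2)
lp = log_post(beta)
samples = []
for step in range(20000):
    prop = beta + rng.normal(0, 0.02, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        beta, lp = prop, lp_prop
    if step >= 5000:                               # discard burn-in
        samples.append(beta.copy())
samples = np.array(samples)

# Posterior predictive: synthetic normative scores for a 70-year-old,
# integrating over parameter uncertainty
z70 = (70 - age.mean()) / age.std()
draws = samples[rng.integers(0, len(samples), 10000)]
synthetic = rng.poisson(np.exp(draws[:, 0] + draws[:, 1] * z70))

# Percentile of an observed score of 12 errors against the synthetic norms
print(f"Percentile of 12 errors at age 70: {np.mean(synthetic < 12) * 100:.1f}")

Because the synthetic dataset is drawn from the posterior predictive distribution, it reflects both sampling noise and parameter uncertainty, which is what allows percentile-based categorisation of an individual's performance even in age ranges where the normative sample is sparse.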