There is growing evidence that fluctuations in brain activity may exhibit scale-free ("fractal") dynamics. Scale-free signals follow a spectral-power curve of the form P(f) ∝ f^(−β), where spectral power decreases in a power-law fashion with increasing frequency. In this study, we demonstrated that fractal scaling of the BOLD fMRI signal is consistently suppressed under different sources of cognitive effort. Decreases in the Hurst exponent (H), which quantifies scale-free structure in the signal, were related to three different sources of cognitive effort/task engagement: 1) task difficulty, 2) task novelty, and 3) aging effects. These results were consistently observed across multiple datasets and task paradigms. We also demonstrated that estimates of H are robust across a range of time-window sizes. H was also compared to alternative metrics of BOLD variability (SDBOLD) and global connectivity (Gconn), with effort-related decreases in H accompanied by similar decreases in SDBOLD and Gconn. These results point to a potentially global brain phenomenon that unites research from different fields and suggest that fractal scaling may be a highly sensitive metric for indexing cognitive effort/task engagement.
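As a rough illustration of the P(f) ∝ f^(−β) relationship, the spectral exponent β can be estimated by fitting a line to the log-log power spectrum. The sketch below converts β to a Hurst exponent under a fractional-Gaussian-noise assumption (H = (β + 1)/2); the function name, frequency band, and sampling rate are illustrative assumptions, not the estimator used in the study.

```python
# Minimal sketch: estimate the spectral exponent beta of a scale-free signal
# from the slope of its log-log power spectrum, then convert to a Hurst
# exponent assuming fractional Gaussian noise (H = (beta + 1) / 2).
# Illustrative only; not the estimation procedure used in the study.
import numpy as np
from scipy.signal import welch

def estimate_hurst_spectral(x, fs=0.5, fmin=0.01, fmax=0.1):
    """Estimate H from the power-law decay of the PSD: P(f) ~ f**(-beta)."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    band = (f >= fmin) & (f <= fmax)          # restrict to the scaling range
    slope, _ = np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)
    beta = -slope                             # P(f) is proportional to f**(-beta)
    return (beta + 1.0) / 2.0                 # fGn relation (assumed)

# Example: white noise has beta ~ 0, so the estimate should be near H = 0.5
rng = np.random.default_rng(0)
print(estimate_hurst_spectral(rng.standard_normal(1024)))
```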
BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework that evaluates and optimizes preprocessing choices using data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.
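To make the selection rule concrete, the schematic below scores candidate pipelines on task prediction (P) and split-half spatial reproducibility (R) and picks the pipeline closest to the ideal point (P, R) = (1, 1). The Euclidean-distance rule is a common convention in this resampling literature, and the pipeline names and scores are invented for illustration; this is a sketch of the selection logic, not the paper's implementation.

```python
# Schematic sketch of data-driven pipeline selection: each candidate pipeline
# is scored on task prediction accuracy (P) and split-half spatial
# reproducibility (R); the pipeline minimizing Euclidean distance from the
# ideal point (P, R) = (1, 1) is chosen. Scores below are hypothetical.
import numpy as np

def reproducibility(map_a, map_b):
    """Spatial reproducibility: correlation between activation maps
    estimated from two independent split-halves of the data."""
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]

def select_pipeline(scores):
    """scores: dict mapping pipeline name -> (prediction P, reproducibility R).
    Returns the pipeline closest to perfect prediction and reproducibility."""
    def dist(pr):
        p, r = pr
        return np.sqrt((1 - p) ** 2 + (1 - r) ** 2)
    return min(scores, key=lambda name: dist(scores[name]))

# Hypothetical (P, R) scores for three candidate preprocessing pipelines
scores = {"motion_only": (0.71, 0.55),
          "motion+physio": (0.78, 0.66),
          "motion+physio+lowpass": (0.74, 0.72)}
print(select_pipeline(scores))   # -> "motion+physio+lowpass"
```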
Background: A need exists for easily administered assessment tools to detect mild cognitive changes that are more comprehensive than screening tests but shorter than a neuropsychological battery, and that can be administered by physicians as well as any health care professional or trained assistant in any medical setting. The Toronto Cognitive Assessment (TorCA) was developed to achieve these goals.

Methods: We obtained normative data on the TorCA (n = 303), determined test reliability, developed an iPad version, and validated the TorCA against neuropsychological assessment for detecting amnestic mild cognitive impairment (aMCI) (n = 50/57, aMCI/normal cognition). For the normative study, healthy volunteers were recruited from the Rotman Research Institute registry. For the validation study, the sample comprised participants with aMCI or normal cognition based on neuropsychological assessment. Cognitively normal participants were recruited both from the healthy volunteers in the normative study sample and from the community.

Results: The TorCA provides a stable assessment of multiple cognitive domains. The total score correctly classified 79% of participants (sensitivity 80%; specificity 79%). In an exploratory logistic regression analysis, indices of Immediate Verbal Recall, Delayed Verbal and Visual Recall, Visuospatial Function, and Working Memory/Attention/Executive Control, a subset of the domains assessed by the TorCA, correctly classified 92% of participants (sensitivity 92%; specificity 91%). Paper and iPad version scores were equivalent.

Conclusions: The TorCA can improve resource utilization by identifying patients with aMCI who may not require more resource-intensive neuropsychological assessment. Future studies will focus on cross-validating the TorCA for aMCI and on validation for disorders other than aMCI.
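For concreteness, the kind of exploratory logistic-regression classification reported above can be sketched as follows. The domain-index features and participant data here are hypothetical placeholders matched only in sample size (50 aMCI, 57 normal); this is not the authors' analysis code.

```python
# Minimal sketch of a logistic-regression classification of aMCI vs. normal
# cognition from a subset of domain indices, reporting sensitivity and
# specificity. Features and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y = np.array([1] * 50 + [0] * 57)                # 1 = aMCI, 0 = normal (n = 107)
# Hypothetical domain indices (e.g., delayed recall, working memory, ...),
# with aMCI participants' scores shifted lower on average
X = rng.normal(size=(len(y), 4)) - 0.8 * y[:, None]

model = LogisticRegression().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
print("sensitivity:", tp / (tp + fn))            # true-positive rate for aMCI
print("specificity:", tn / (tn + fp))            # true-negative rate for controls
```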
Behavioral Partial Least Squares (PLS) is often used to analyze ill-posed functional Magnetic Resonance Imaging (fMRI) datasets, for which the number of variables is far larger than the number of observations. This procedure generates a latent variable (LV) brain map, showing brain regions that are most correlated with behavioral measures. The strength of the behavioral relationship is measured by the correlation between behavior and LV scores in the data. For standard behavioral PLS, bootstrap resampling is used to evaluate the reliability of the brain LV and its behavioral correlations. However, the bootstrap may provide biased measures of the generalizability of results across independent datasets. We used split-half resampling to obtain unbiased measures of brain-LV reproducibility and behavioral prediction of the PLS model for independent data. We show that bootstrapped PLS gives biased measures of behavioral correlations, whereas split-half resampling identifies highly stable activation peaks across individual resampling splits. The ill-posed PLS solution can also be improved by regularization; we consistently improve the prediction accuracy and spatial reproducibility of behavioral estimates by (1) projecting fMRI data onto an optimized PCA basis, and (2) optimizing data preprocessing on an individual subject basis. These results show that significant improvements in generalizability and brain-pattern stability are obtained with split-half versus bootstrapped resampling of PLS results, and that model performance can be further improved by regularizing the input data.
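A minimal sketch of the split-half reproducibility idea appears below, assuming a single behavioral measure so that the PLS singular value decomposition reduces to normalizing the behavior-brain correlation vector. The data dimensions and signal structure are toy placeholders, not the paper's datasets or full resampling scheme.

```python
# Minimal sketch of behavioral PLS with split-half reproducibility, assuming
# brain data X (subjects x voxels) and a single behavioral measure y. With
# one behavior, the PLS SVD reduces to the normalized behavior-brain
# correlation vector. Illustrative only.
import numpy as np

def zscore(a, axis=0):
    return (a - a.mean(axis=axis, keepdims=True)) / a.std(axis=axis, keepdims=True)

def pls_brain_salience(X, y):
    """First latent-variable brain map for a single behavioral measure."""
    r = zscore(y) @ zscore(X) / len(y)       # voxelwise brain-behavior correlation
    return r / np.linalg.norm(r)             # unit-norm brain salience

def split_half_reproducibility(X, y, rng):
    """Correlate brain saliences estimated from two independent half-splits."""
    idx = rng.permutation(len(y))
    h1, h2 = idx[: len(y) // 2], idx[len(y) // 2 :]
    v1 = pls_brain_salience(X[h1], y[h1])
    v2 = pls_brain_salience(X[h2], y[h2])
    return np.corrcoef(v1, v2)[0, 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))               # 40 subjects, 500 voxels (toy data)
y = X[:, :10].mean(axis=1) + rng.normal(scale=0.5, size=40)  # planted signal
print(split_half_reproducibility(X, y, rng))
```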