Cerebrovascular reactivity (CVR), defined here as the Blood Oxygenation Level Dependent (BOLD) response to a CO2 pressure change, is a useful metric of cerebrovascular function. Both the amplitude and the timing (hemodynamic lag) of the CVR response can bring insight into the nature of a cerebrovascular pathology and aid in understanding noise confounds when using functional Magnetic Resonance Imaging (fMRI) to study neural activity. This research assessed a practical modification to a typical resting-state fMRI protocol to improve the characterization of cerebrovascular function. In 9 healthy subjects, we modelled CVR and lag in three resting-state data segments, and in data segments which added a 2–3 minute breathing task to the start of a resting-state segment. Two different breathing tasks were used to induce fluctuations in arterial CO2 pressure: a breath-hold task to induce hypercapnia (CO2 increase) and a cued deep breathing task to induce hypocapnia (CO2 decrease). Our analysis produced voxel-wise estimates of the amplitude (CVR) and timing (lag) of the BOLD-fMRI response to CO2 by systematically shifting the CO2 regressor in time to optimize the model fit. This optimization inherently increases gray matter CVR values and fit statistics. The inclusion of a simple breathing task, compared to a resting-state scan only, increases the number of voxels in the brain that have a significant relationship between CO2 and BOLD-fMRI signals, and improves our confidence in the plausibility of voxel-wise CVR and hemodynamic lag estimates. We demonstrate the clinical utility and feasibility of this protocol in an incidental finding of Moyamoya disease, and explore the possibilities and challenges of using this protocol in younger populations. This hybrid protocol has direct applications for CVR mapping in both research and clinical settings and wider applications for fMRI denoising and interpretation.
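The lag-optimization approach described above (shifting the CO2 regressor in time and keeping the shift that best fits the BOLD signal) can be sketched for a single voxel as follows. This is a minimal illustration, not the authors' implementation: the function name, the simple circular shift, and the use of squared correlation as the fit criterion are all assumptions for demonstration.

```python
import numpy as np

def cvr_lag_fit(bold, co2, tr=1.0, max_lag_s=9.0):
    """For one voxel, shift the CO2 regressor over a range of lags and keep
    the shift that maximizes the fit (here: squared correlation).
    Returns (cvr_amplitude, lag_seconds, r_squared)."""
    max_shift = int(round(max_lag_s / tr))
    best = (0.0, 0.0, -np.inf)
    for shift in range(-max_shift, max_shift + 1):
        # circular shift used purely for illustration; real pipelines pad/trim
        co2_shifted = np.roll(co2, shift)
        # least-squares slope of BOLD on shifted CO2 (plus intercept)
        X = np.column_stack([co2_shifted, np.ones_like(co2_shifted)])
        beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
        r = np.corrcoef(co2_shifted, bold)[0, 1]
        if r ** 2 > best[2]:
            best = (beta[0], shift * tr, r ** 2)
    return best
```

Applied voxel-wise, the slope at the best shift plays the role of the CVR amplitude map and the best shift itself the hemodynamic lag map; as the abstract notes, this optimization inherently inflates fit statistics, so significance should be assessed accordingly.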
Neurofeedback training using real-time functional magnetic resonance imaging (rtfMRI-NF) allows subjects voluntary control of localised and distributed brain activity. It has sparked increased interest as a promising non-invasive treatment option in neuropsychiatric and neurocognitive disorders, although its efficacy and clinical significance are yet to be determined. In this work, we present the first extensive review of acquisition, processing and quality control methods available to improve the quality of the neurofeedback signal. Furthermore, we investigate the state of denoising and quality control practices in 128 recently published rtfMRI-NF studies. We found: (a) that less than a third of the studies reported implementing standard real-time fMRI denoising steps, (b) significant room for improvement with regards to methods reporting and (c) the need for methodological studies quantifying and comparing the contribution of denoising steps to the neurofeedback signal quality. Advances in rtfMRI-NF research depend on reproducibility of methods and results. Notably, a systematic effort is needed to build up evidence that disentangles the various mechanisms influencing neurofeedback effects. To this end, we recommend that future rtfMRI-NF studies: (a) report implementation of a set of standard real-time fMRI denoising steps according to a proposed COBIDAS-style checklist (https://osf.io/kjwhf/), (b) ensure the quality of the neurofeedback signal by calculating and reporting community-informed quality metrics and applying offline control checks and (c) strive to adopt transparent principles in the form of methods and data sharing and support of open-source rtfMRI-NF software. Code and data for reproducibility, as well as an interactive environment to explore the study data, can be accessed at https://github.com/jsheunis/quality-and-denoising-in-rtfmri-nf.
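One widely used quality metric of the kind the review recommends reporting is temporal signal-to-noise ratio (tSNR). The abstract does not specify which metrics the checklist prescribes, so the following is a generic sketch of tSNR for a single voxel time series, not a function from the authors' software.

```python
import numpy as np

def temporal_snr(timeseries):
    """Temporal SNR of a voxel time series: temporal mean divided by
    temporal standard deviation. Higher tSNR indicates a more stable
    signal on which to base a neurofeedback computation."""
    ts = np.asarray(timeseries, dtype=float)
    sd = ts.std()
    return ts.mean() / sd if sd > 0 else np.inf
```

In practice such a metric is computed per voxel or per region of interest and reported alongside offline control checks, so that readers can judge the reliability of the neurofeedback signal.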
The impact of in-scanner motion on functional magnetic resonance imaging (fMRI) data has a notorious reputation in the neuroimaging community. State-of-the-art guidelines advise to scrub out excessively corrupted frames as assessed by a composite framewise displacement (FD) score, to regress out models of nuisance variables, and to include average FD as a covariate in group-level analyses. Here, we studied individual motion time courses at time points typically retained in fMRI analyses. We observed that even in this set of putatively clean time points, motion exhibited a very clear spatiotemporal structure, so that we could distinguish subjects into four groups of movers with varying characteristics. Then, we showed that this spatiotemporal motion cartography tightly relates to a broad array of anthropometric, behavioral and clinical factors. Convergent results were obtained from two different analytical perspectives: univariate assessment of behavioral differences across mover subgroups unraveled defining markers, while subsequent multivariate analysis broadened the range of involved factors and clarified that multiple motion/behavior modes of covariance overlap in the data. Our results demonstrate that even the smaller episodes of motion typically retained in fMRI analyses carry structured, behaviorally relevant information. They call for further examinations of possible biases in current regression-based motion correction strategies.
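The scrubbing guideline summarized above rests on a composite framewise displacement score. A common formulation (the one this abstract appears to assume, though it does not spell it out) sums the absolute frame-to-frame differences of the six realignment parameters, converting rotations to millimeters on a sphere of roughly head radius. The following is a hedged sketch under that assumption; the function names and the 50 mm radius default are illustrative.

```python
import numpy as np

def framewise_displacement(motion_params, head_radius_mm=50.0):
    """Composite FD from a (T, 6) array of realignment parameters:
    three translations (mm) and three rotations (radians). Rotations are
    converted to arc length on a sphere of the given radius, then the
    six absolute backward differences are summed per frame."""
    p = np.asarray(motion_params, dtype=float).copy()
    p[:, 3:] *= head_radius_mm          # radians -> mm of displacement
    diffs = np.abs(np.diff(p, axis=0))  # backward differences per parameter
    fd = diffs.sum(axis=1)
    return np.concatenate([[0.0], fd])  # first frame has no predecessor

def frames_to_scrub(fd, threshold_mm=0.5):
    """Indices of frames whose FD exceeds the threshold
    (candidates for scrubbing)."""
    return np.flatnonzero(fd > threshold_mm)
```

The frames *retained* after such thresholding are exactly the "putatively clean time points" the study examines, which is what makes its finding of residual structured motion noteworthy.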
Speaking involves coordination of multiple neuromotor systems, including respiration, phonation and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that regions beyond the primary motor cortex (M1), both cortical and sub-cortical, help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips and velum (i.e., alveolars versus bilabials, and nasals versus orals), and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, cerebellum and basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region for phonation, we found that a dorsal M1 region, linked to respiratory control, showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underlining its importance in the coordination of speech. Our study confirms and extends current knowledge regarding the neural mechanisms underlying neuromotor speech control, which holds promise for the non-invasive study of neural dysfunctions involved in motor-speech disorders. Despite scientific interest in verbal communication, the neural mechanisms supporting speech production remain unclear. The goal of the current study is to capture the underlying representations that support the complex orchestration of articulators, respiration, and phonation needed to produce intelligible speech. Importantly, voiced speech can be defined as an orchestrated task, where concerted phonation-articulation is mediated by respiration 1.
In turn, a more detailed neural specification of these gestures in fluent speakers is necessary to develop biologically plausible models of speech production. The ability to image the speech production circuitry at work using non-invasive methods holds promise for future application in studies that aim to assess potential dysfunction. Upper motor neurons located within the primary motor cortex (M1) exhibit a somatotopic organization that projects onto the brainstem, innervating the musculature of speech 2-6. This functional organization of M1 has been replicated with functional magnetic resonance imaging (fMRI) for the lip, tongue and jaw control regions 7-11. However, the articulatory control of the velum, which has an active role in natural speech (oral and nasal sounds), remains largely underspecified. Furthermore, laryngeal muscle control, critical for phonation, has more recently been mapped onto two separate areas in M1 4,5,12: a ventral and a dorsal laryngeal motor area (vLMA and dLMA). Whereas the vLMA (ventral to the tongue motor area) is thought to operate the extrinsic laryngeal muscles, controlling the...