Learning to associate auditory information of speech sounds with visual information of letters is a first and critical step for becoming a skilled reader in alphabetic languages. Nevertheless, it remains largely unknown which brain areas subserve the learning and automation of such associations. Here, we employ functional magnetic resonance imaging to study letter-speech sound integration in children with and without developmental dyslexia. The results demonstrate that dyslexic children show reduced neural integration of letters and speech sounds in the planum temporale/Heschl sulcus and the superior temporal sulcus. While cortical responses to speech sounds in fluent readers were modulated by letter-speech sound congruency with strong suppression effects for incongruent letters, no such modulation was observed in the dyslexic readers. Whole-brain analyses of unisensory visual and auditory group differences additionally revealed reduced unisensory responses to letters in the fusiform gyrus in dyslexic children, as well as reduced activity for processing speech sounds in the anterior superior temporal gyrus, planum temporale/Heschl sulcus and superior temporal sulcus. Importantly, the neural integration of letters and speech sounds in the planum temporale/Heschl sulcus and the neural response to letters in the fusiform gyrus explained almost 40% of the variance in individual reading performance. These findings indicate that an interrelated network of visual, auditory and heteromodal brain areas contributes to the skilled use of letter-speech sound associations necessary for learning to read. By extending similar findings in adults, the data furthermore argue against the notion that reduced neural integration of letters and speech sounds in dyslexia reflects the consequence of a lifetime of reading struggle.
Instead, they support the view that letter-speech sound integration is an emergent property of learning to read that develops inadequately in dyslexic readers, presumably as a result of a deviant interactive specialization of neural systems for processing auditory and visual linguistic inputs.
Human communication depends entirely on the functional integrity of the neuromuscular system. This is devastatingly illustrated in clinical conditions such as the so-called locked-in syndrome (LIS), in which severely motor-disabled patients become incapable of communicating naturally while being fully conscious and awake. For the last 20 years, research on motor-independent communication has focused on developing brain-computer interfaces (BCIs) that use neuroelectric signals for communication (e.g., [2-7]), and BCIs based on electroencephalography (EEG) have already been applied successfully to affected patients. However, not all patients achieve proficiency in EEG-based BCI control. Thus, more recently, hemodynamic brain signals have also been explored for BCI purposes. Here, we introduce the first spelling device based on fMRI. By exploiting spatiotemporal characteristics of hemodynamic responses evoked by performing differently timed mental imagery tasks, our novel letter encoding technique allows translating any freely chosen answer, letter by letter, into reliable and differentiable single-trial fMRI signals. Most importantly, automated letter decoding in real time enables back-and-forth communication within a single scanning session. Because the suggested spelling device requires little effort and pretraining, it is immediately operational and holds high potential for clinical applications, both for diagnostics and for establishing short-term communication with nonresponsive and severely motor-impaired patients.
Abstract: The term 'locked-in' syndrome (LIS) describes a medical condition in which the persons affected are severely paralyzed and at the same time fully conscious and awake. The resulting anarthria makes it impossible for these patients to communicate naturally, which creates diagnostic as well as serious practical and ethical problems. Therefore, developing alternative, muscle-independent means of communication is of prime importance. Such communication can be realized via brain-computer interfaces (BCIs) that circumvent the muscular system by using brain signals associated with preserved cognitive, sensory, and emotional brain functions. To date, primarily BCIs based on electrophysiological measures have been developed and applied with remarkable success. Recently, blood flow-based neuroimaging methods, such as functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS), have also been explored in this context. After reviewing recent literature on the development of especially hemodynamically based BCIs, we introduce a highly reliable and easy-to-apply communication procedure that enables untrained participants to motor-independently and relatively effortlessly answer multiple-choice questions based on intentionally generated single-trial fMRI signals that can be decoded online. Our technique takes advantage of the participants' capability to voluntarily influence certain spatiotemporal aspects of the blood oxygenation level-dependent (BOLD) signal: source location (by using different mental tasks), signal onset and offset. We show that healthy participants are capable of hemodynamically encoding at least four distinct information units on a single-trial level without extensive pretraining and with little effort. Moreover, real-time data analysis based on simple multi-filter correlations allows for automated answer decoding with a high accuracy (94.9%), demonstrating the robustness of the presented method.
Following our 'proof of concept', the next step will involve clinical trials with LIS patients, undertaken in close collaboration with their relatives and caretakers, in order to develop individually tailored communication protocols. As our procedure can be easily transferred to MRI-equipped clinical sites, it may constitute a simple and effective possibility for online detection of residual consciousness and for LIS patients to communicate basic thoughts and needs in case no other alternative communication means are available (yet), especially in the acute phase of the LIS. Future research may focus on further increasing the efficiency and accuracy of fMRI-based BCIs by implementing sophisticated data analysis methods (e.g., multivariate and independent component analysis) and neurofeedback training techniques. Finally, the presented BCI approach could be transferred to portable fNIRS systems, as only this would enable hemodynamically based communication in daily life situations.
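The multi-filter correlation decoding described above can be illustrated with a minimal sketch: each answer option is associated with an expected BOLD time course ("filter") that differs in onset and offset, and a measured single-trial time course is assigned to the option whose filter it correlates with best. All concrete values here (TR, trial length, the four onset/duration codes, the single-gamma HRF shape) are hypothetical illustrations, not the parameters used in the study.

```python
import numpy as np

TR = 2.0       # volume repetition time in seconds (assumed)
N_VOLS = 30    # volumes per encoding trial (assumed)
t = np.arange(N_VOLS) * TR

def hrf(ts):
    """Simple single-gamma hemodynamic response approximation (peaks ~6 s)."""
    h = (ts ** 5) * np.exp(-ts)
    return h / h.max()

def expected_response(onset, duration):
    """Boxcar for a mental-imagery block, convolved with the HRF."""
    box = ((t >= onset) & (t < onset + duration)).astype(float)
    return np.convolve(box, hrf(np.arange(0.0, 32.0, TR)))[:len(t)]

# Four answer options, each encoded by a distinct onset/offset timing
# (hypothetical codes; the paper varies mental task, onset and offset).
filters = {
    "A": expected_response(onset=0, duration=10),
    "B": expected_response(onset=10, duration=10),
    "C": expected_response(onset=20, duration=10),
    "D": expected_response(onset=0, duration=20),
}

def decode(trial_signal):
    """Return the option whose expected filter correlates best with the trial."""
    corrs = {k: np.corrcoef(trial_signal, f)[0, 1] for k, f in filters.items()}
    return max(corrs, key=corrs.get)
```

In this toy setup a noisy realization of one filter is reliably decoded back to its option label, which is the essence of single-trial answer decoding; the actual study additionally exploits source location via different mental tasks.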
Here we report the first quantitative analysis of spiking activity in human early visual cortex. We recorded multi-unit activity from two electrodes in area V2/V3 of a human patient implanted with depth electrodes as part of her treatment for epilepsy. We observed well-localized multi-unit receptive fields with tunings for contrast, orientation, spatial frequency, and size, similar to those reported in the macaque. We also observed pronounced gamma oscillations in the local-field potential that could be used to estimate the underlying spiking response properties. Spiking responses were modulated by visual context and attention. We observed orientation-tuned surround suppression: responses were suppressed by image regions with a uniform orientation and enhanced by orientation contrast. Additionally, responses were enhanced on regions that perceptually segregated from the background, indicating that neurons in the human visual cortex are sensitive to figure-ground structure. Spiking responses were also modulated by object-based attention. When the patient mentally traced a curve through the neurons’ receptive fields, the accompanying shift of attention enhanced neuronal activity. These results demonstrate that the tuning properties of cells in the human early visual cortex are similar to those in the macaque and that responses can be modulated by both contextual factors and behavioral relevance. Our results, therefore, imply that the macaque visual system is an excellent model for the human visual cortex.
Despite growing interest, the causal mechanisms underlying human neural network dynamics remain elusive. Transcranial Magnetic Stimulation (TMS) allows noninvasive probing of neural excitability, while concurrent fMRI can track the induced activity propagation through connected network nodes. However, this approach ignores ongoing oscillatory fluctuations which strongly affect network excitability and concomitant behavior. Here, we show that concurrent TMS-EEG-fMRI enables precise and direct monitoring of causal dependencies between oscillatory states and signal propagation throughout cortico-subcortical networks. To demonstrate the utility of this multimodal triad, we assessed how pre-TMS EEG power fluctuations influenced motor network activations induced by subthreshold TMS to right dorsal premotor cortex. In participants with adequate motor network reactivity, strong pre-TMS alpha power reduced TMS-evoked hemodynamic activations throughout the bilateral cortico-subcortical motor system (including striatum and thalamus), suggesting shunted network connectivity. Concurrent TMS-EEG-fMRI opens an exciting noninvasive avenue of subject-tailored network research into dynamic cognitive circuits and their dysfunction.
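The core analysis step (relating pre-TMS alpha power to evoked responses) can be sketched as follows: estimate alpha-band (8-12 Hz) power in the EEG epoch immediately preceding each TMS pulse, then split trials into high- and low-alpha states. This is a generic illustration assuming a 1000 Hz sampling rate and a simple FFT power estimate, not the study's actual preprocessing pipeline.

```python
import numpy as np

FS = 1000.0  # EEG sampling rate in Hz (assumed)

def alpha_power(epoch, fs=FS, band=(8.0, 12.0)):
    """Mean spectral power in the alpha band for one pre-TMS epoch."""
    epoch = epoch - epoch.mean()                # remove DC offset
    spec = np.abs(np.fft.rfft(epoch)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].mean()

def median_split(epochs):
    """Label each trial as high (True) or low (False) pre-TMS alpha power."""
    powers = np.array([alpha_power(e) for e in epochs])
    return powers > np.median(powers)
```

The trial labels produced here would then serve as a regressor or grouping factor for the TMS-evoked fMRI activations.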
Neuroimaging studies have recently provided support for the existence of a human equivalent of the "mirror-neuron" system as first described in monkeys [1], involved both in the execution of movements and in the observation and imitation of actions performed by others (e.g., [2-6]). A widely held conception concerning this system is that the understanding of observed actions is mediated by a covert simulation process [7]. In the present fMRI experiment, this simulation process was probed by asking subjects to discriminate between visually presented trajectories that either did or did not match previously performed but unseen continuous movement sequences. A specific network of learning-related premotor and parietal areas was found to be reactivated when participants were confronted with their movements' visual counterpart. Moreover, the strength of these reactivations depended on the observers' experience with executing the corresponding movement sequence. These findings provide further support for the emerging view that embodied simulations during action observation engage widespread activations in cortical motor regions beyond the classically defined mirror-neuron system. Furthermore, the obtained results extend previous work by showing experience-dependent perceptual modulations at the neural systems level based on nonvisual motor learning.
Within vision research, retinotopic mapping and the more general receptive field estimation approach constitute not only an active field of research in their own right but also underlie a plethora of interesting applications. This necessitates not only accurate estimation of population receptive fields (pRFs) but also that these receptive fields be consistent across time rather than dynamically changing. It is therefore of interest to maximize the accuracy with which population receptive fields can be estimated in a functional magnetic resonance imaging (fMRI) setting. This, in turn, requires an adequate estimation framework providing the data for population receptive field mapping. More specifically, adequate decisions with regard to stimulus choice and mode of presentation need to be made. Additionally, it needs to be evaluated whether the stimulation protocol should entail mean luminance periods and whether it is advantageous to average the blood oxygenation level dependent (BOLD) signal across stimulus cycles. By systematically studying the effects of these decisions on pRF estimates in empirical as well as simulation settings, we come to the conclusion that a bar stimulus presented at random positions and interspersed with mean luminance periods is generally most favorable. Finally, using this optimal estimation framework, we tested the assumption of temporal consistency of population receptive fields. We show that the estimation of pRFs from two temporally separated sessions leads to highly similar pRF parameters.
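The pRF estimation framework referred to above commonly models each voxel's receptive field as an isotropic 2D Gaussian whose predicted response is its overlap with each stimulus frame, with parameters chosen to best match the measured time course. The sketch below illustrates that forward model with a coarse grid search; grid size, visual-field extent, and candidate parameters are arbitrary illustrative values, and HRF convolution is omitted for brevity (real fits convolve the prediction with a hemodynamic response function).

```python
import numpy as np

GRID = 21                          # visual-field sampling resolution (assumed)
xs = np.linspace(-10, 10, GRID)    # degrees of visual angle (assumed extent)
X, Y = np.meshgrid(xs, xs)

def prf(x0, y0, sigma):
    """Isotropic 2D Gaussian population receptive field."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def predict(stimulus, x0, y0, sigma):
    """Predicted response: overlap of each binary stimulus frame with the pRF."""
    g = prf(x0, y0, sigma)
    return np.array([(frame * g).sum() for frame in stimulus])

def grid_fit(stimulus, data, candidates):
    """Pick the (x0, y0, sigma) whose prediction best correlates with the data."""
    best, best_r = None, -np.inf
    for p in candidates:
        pred = predict(stimulus, *p)
        if pred.std() == 0:
            continue
        r = np.corrcoef(pred, data)[0, 1]
        if r > best_r:
            best, best_r = p, r
    return best
```

With a bar stimulus sweeping the visual field in two orthogonal directions, this recovers the generating parameters in a noise-free toy example; comparing parameters fitted from two separate sessions, as in the abstract, amounts to running such a fit twice and correlating the estimates.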