We propose Granger causality mapping (GCM) as an approach to explore directed influences between neuronal populations (effective connectivity) in fMRI data. The method does not rely on a priori specification of a model that contains pre-selected regions and connections between them. This distinguishes it from other fMRI effective connectivity approaches that aim at testing or contrasting specific hypotheses about neuronal interactions. Instead, GCM relies on the concept of Granger causality to define the existence and direction of influence from information in the data. Temporal precedence information is exploited to compute Granger causality maps that identify voxels that are sources or targets of directed influence for any selected region-of-interest. We investigated the method by simulations and by application to fMRI data of a complex visuomotor task. The presented exploratory approach of mapping influences between a region of interest and the rest of the brain can form a useful complement to existing models of effective connectivity. © 2004 Elsevier Inc. All rights reserved.
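The temporal-precedence idea behind Granger causality can be illustrated with a minimal sketch: a signal x "Granger-causes" y if including past values of x significantly reduces the error of an autoregressive prediction of y. The function and simulation below are illustrative assumptions, not the GCM implementation described in the abstract (which operates on fMRI voxel time courses and builds whole-brain maps); the F-test comparison of restricted versus full lagged regression models is, however, the standard Granger formulation.

```python
import numpy as np

def granger_f(x, y, lag=1):
    """F statistic testing whether past values of x improve the
    prediction of y beyond y's own past (Granger causality x -> y).

    Restricted model: y[t] ~ 1 + y[t-1..t-lag]
    Full model:       y[t] ~ 1 + y[t-1..t-lag] + x[t-1..t-lag]
    """
    n = len(y)
    Y = y[lag:]
    # Lagged predictor columns: column k holds the series shifted by k samples
    own = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])
    cross = np.column_stack([x[lag - k: n - k] for k in range(1, lag + 1)])
    ones = np.ones((n - lag, 1))
    Xr = np.hstack([ones, own])          # restricted: y's own past only
    Xf = np.hstack([ones, own, cross])   # full: adds x's past
    rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    rss_f = np.sum((Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]) ** 2)
    df_full = (n - lag) - Xf.shape[1]
    return ((rss_r - rss_f) / lag) / (rss_f / df_full)

# Simulated pair: x drives y with a one-sample delay, not vice versa
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.empty(500)
y[0] = 0.0
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

f_xy = granger_f(x, y)  # large: past x predicts y
f_yx = granger_f(y, x)  # small: past y does not predict x
```

The asymmetry between `f_xy` and `f_yx` is what defines the direction of influence; GCM computes such directed measures between a seed region and every other voxel to build influence maps.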
The Functional Image Analysis Contest (FIAC) 2005 dataset was analyzed using BrainVoyager QX. First, we performed a standard analysis of the functional and anatomical data that included preprocessing, spatial normalization into Talairach space, and hypothesis-driven statistics (one- and two-factorial, single-subject and group-level random effects, General Linear Model [GLM]) of the block- and event-related paradigms. Strong sentence and weak speaker group-level effects were detected in temporal and frontal regions. Following this standard analysis, we performed single-subject and group-level (Talairach-based) Independent Component Analysis (ICA), which highlighted the presence of functionally connected clusters in temporal and frontal regions for sentence processing, besides revealing other networks related to auditory stimulation or to the default state of the brain. Finally, we applied a high-resolution cortical alignment method to improve the spatial correspondence across brains and re-ran the random effects group GLM as well as the group-level ICA in this space. Using spatially and temporally unsmoothed data, this cortex-based analysis revealed comparable results, but with a set of spatially more confined group clusters and more differentiated group region-of-interest time courses.
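The core of the hypothesis-driven GLM step can be sketched in a few lines: a voxel time course is regressed on a design matrix of condition predictors, and a contrast of the fitted betas is converted to a t statistic. The toy boxcar design and numbers below are assumptions for illustration only; they do not reproduce the BrainVoyager QX analysis (which adds hemodynamic convolution, serial-correlation correction, and multi-subject modeling).

```python
import numpy as np

def glm_tstat(y, X, contrast):
    """Fit y = X b + e by ordinary least squares and return the
    t statistic for the contrast c'b."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    dof = len(y) - np.linalg.matrix_rank(X)
    sigma2 = resid @ resid / dof          # residual variance estimate
    c = np.asarray(contrast, dtype=float)
    var_cb = sigma2 * c @ np.linalg.pinv(X.T @ X) @ c
    return (c @ b) / np.sqrt(var_cb)

# Toy block design: alternating 10-volume rest/task boxcar plus an intercept
n = 100
boxcar = np.tile(np.r_[np.zeros(10), np.ones(10)], 5)
X = np.column_stack([boxcar, np.ones(n)])

# Simulated voxel: true task effect of 2 units on top of unit-variance noise
rng = np.random.default_rng(1)
y = 2.0 * boxcar + rng.standard_normal(n)

t = glm_tstat(y, X, [1, 0])  # tests the boxcar (task) effect
```

In a whole-brain analysis this fit is repeated independently for every voxel, and the resulting t map is thresholded to localize effects such as the sentence and speaker contrasts reported above.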
Can we decipher speech content ("what" is being said) and speaker identity ("who" is saying it) from observations of brain activity of a listener? Here, we combine functional magnetic resonance imaging with a data-mining algorithm and retrieve what and whom a person is listening to from the neural fingerprints that speech and voice signals elicit in the listener's auditory cortex. These cortical fingerprints are spatially distributed and insensitive to acoustic variations of the input so as to permit the brain-based recognition of learned speech from unknown speakers and of learned voices from previously unheard utterances. Our findings unravel the detailed cortical layout and computational properties of the neural populations at the basis of human speech recognition and speaker identification.
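The notion of a distributed cortical "fingerprint" that generalizes across acoustic variation can be made concrete with a simple pattern-classification sketch: average the multivoxel response patterns per class to form a template, then assign a new pattern to the class whose template it correlates with best. The nearest-centroid decoder and simulated patterns below are illustrative assumptions, not the data-mining algorithm used in the study.

```python
import numpy as np

def train_centroids(patterns, labels):
    """Average multivoxel patterns per class to obtain class templates."""
    classes = sorted(set(labels))
    return {c: np.mean([p for p, l in zip(patterns, labels) if l == c], axis=0)
            for c in classes}

def decode(pattern, centroids):
    """Assign a pattern to the class with the most correlated template."""
    return max(centroids, key=lambda c: np.corrcoef(pattern, centroids[c])[0, 1])

# Simulated 50-voxel response prototypes for two hypothetical speakers
rng = np.random.default_rng(2)
protos = {"speaker_A": rng.standard_normal(50),
          "speaker_B": rng.standard_normal(50)}

# Training set: noisy samples around each prototype (acoustic variation)
patterns, labels = [], []
for label, proto in protos.items():
    for _ in range(10):
        patterns.append(proto + 0.5 * rng.standard_normal(50))
        labels.append(label)

centroids = train_centroids(patterns, labels)

# A previously unseen noisy sample of speaker A is still decoded correctly
pred_a = decode(protos["speaker_A"] + 0.5 * rng.standard_normal(50), centroids)
```

Correct classification of held-out samples, despite the added "acoustic" noise, is the analogue of recognizing a learned voice from a previously unheard utterance.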
Most people acquire literacy skills with remarkable ease, even though the human brain is not evolutionarily adapted to this relatively new cultural phenomenon. Associations between letters and speech sounds form the basis of reading in alphabetic scripts. We investigated the functional neuroanatomy of the integration of letters and speech sounds using functional magnetic resonance imaging (fMRI). Letters and speech sounds were presented unimodally and bimodally in congruent or incongruent combinations. Analysis of single-subject data and group data aligned on the basis of individual cortical anatomy revealed that letters and speech sounds are integrated in heteromodal superior temporal cortex. Interestingly, responses to speech sounds in a modality-specific region of the early auditory cortex were modified by simultaneously presented letters. These results suggest that efficient processing of culturally defined associations between letters and speech sounds relies on neural mechanisms similar to those naturally evolved for integrating audiovisual speech.