We propose Granger causality mapping (GCM) as an approach to explore directed influences between neuronal populations (effective connectivity) in fMRI data. The method does not rely on a priori specification of a model that contains pre-selected regions and connections between them. This distinguishes it from other fMRI effective connectivity approaches that aim at testing or contrasting specific hypotheses about neuronal interactions. Instead, GCM relies on the concept of Granger causality to define the existence and direction of influence from information in the data. Temporal precedence information is exploited to compute Granger causality maps that identify voxels that are sources or targets of directed influence for any selected region-of-interest. We investigated the method by simulations and by application to fMRI data of a complex visuomotor task. The presented exploratory approach of mapping influences between a region of interest and the rest of the brain can form a useful complement to existing models of effective connectivity. © 2004 Elsevier Inc. All rights reserved.
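The temporal-precedence test underlying Granger causality can be sketched in a few lines. The following is an illustrative bivariate sketch on simulated series, not the paper's GCM implementation: it compares a restricted autoregressive model of a target series with a full model that also includes the source's past, and summarizes the improvement as an F statistic. All variable names and parameter choices here are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pair of time series in which x drives y with a one-step lag.
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def granger_f(source, target, lag=1):
    """F statistic for 'source Granger-causes target' with a single lag."""
    tgt = target[lag:]
    # Restricted model: predict the target from its own past only.
    Xr = np.column_stack([np.ones(len(tgt)), target[:-lag]])
    rss_r = np.sum((tgt - Xr @ np.linalg.lstsq(Xr, tgt, rcond=None)[0]) ** 2)
    # Full model: add the source's past as an extra predictor.
    Xf = np.column_stack([Xr, source[:-lag]])
    rss_f = np.sum((tgt - Xf @ np.linalg.lstsq(Xf, tgt, rcond=None)[0]) ** 2)
    dof = len(tgt) - Xf.shape[1]
    return (rss_r - rss_f) / (rss_f / dof)

f_xy = granger_f(x, y)  # influence from x to y: large
f_yx = granger_f(y, x)  # influence from y to x: near chance level
```

In a mapping setting, the same statistic would be computed between a reference region's time course and every other voxel, in both directions, to classify voxels as sources or targets of influence.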
The Functional Image Analysis Contest (FIAC) 2005 dataset was analyzed using BrainVoyager QX. First, we performed a standard analysis of the functional and anatomical data that included preprocessing, spatial normalization into Talairach space, and hypothesis-driven statistics (one- and two-factorial, single-subject and group-level random effects, General Linear Model [GLM]) of the block- and event-related paradigms. Strong sentence and weak speaker group-level effects were detected in temporal and frontal regions. Following this standard analysis, we performed single-subject and group-level (Talairach-based) Independent Component Analysis (ICA), which highlighted the presence of functionally connected clusters in temporal and frontal regions for sentence processing, besides revealing other networks related to auditory stimulation or to the default state of the brain. Finally, we applied a high-resolution cortical alignment method to improve the spatial correspondence across brains and re-ran the random effects group GLM as well as the group-level ICA in this space. Using spatially and temporally unsmoothed data, this cortex-based analysis revealed comparable results but with a set of spatially more confined group clusters and more differential group region-of-interest time courses.
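The GLM at the core of such hypothesis-driven fMRI statistics can be illustrated on toy data. This is a minimal single-voxel sketch, not the BrainVoyager pipeline: a block-design regressor is convolved with a simple gamma-shaped response function (illustrative, not the canonical HRF), fitted by least squares, and summarized as a t statistic for the task regressor. All shapes and amplitudes are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy block design: 20 volumes per cycle (10 on, 10 off), TR = 2 s.
n_vols, tr = 100, 2.0
boxcar = (np.arange(n_vols) % 20 < 10).astype(float)

# Simple gamma-shaped haemodynamic response function (illustrative only).
t = np.arange(0, 24, tr)
hrf = t * np.exp(-t / 4.0)
hrf /= hrf.sum()

predictor = np.convolve(boxcar, hrf)[:n_vols]
X = np.column_stack([predictor, np.ones(n_vols)])  # design: task + baseline

def voxel_t(y, X):
    """Least-squares GLM fit and t statistic for the task regressor."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0])

active = 3.0 * predictor + rng.standard_normal(n_vols)  # task-driven voxel
silent = rng.standard_normal(n_vols)                    # noise-only voxel
t_active, t_silent = voxel_t(active, X), voxel_t(silent, X)
```

A whole-brain analysis repeats this fit for every voxel (mass-univariate), and group-level random-effects statistics then treat the per-subject beta estimates as the observations.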
Can we decipher speech content ("what" is being said) and speaker identity ("who" is saying it) from observations of brain activity of a listener? Here, we combine functional magnetic resonance imaging with a data-mining algorithm and retrieve what and whom a person is listening to from the neural fingerprints that speech and voice signals elicit in the listener's auditory cortex. These cortical fingerprints are spatially distributed and insensitive to acoustic variations of the input so as to permit the brain-based recognition of learned speech from unknown speakers and of learned voices from previously unheard utterances. Our findings unravel the detailed cortical layout and computational properties of the neural populations at the basis of human speech recognition and speaker identification.
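The brain-based recognition scheme described above can be caricatured with a tiny simulation. This sketch is not the paper's algorithm: it stands in a nearest-centroid classifier on correlation similarity for the "data-mining" step, and simulates abstract content "fingerprints" corrupted by speaker-specific components, so that training on some speakers and testing on an unheard speaker probes invariance to acoustic variation. Every pattern and parameter here is a simulation assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_speakers = 50, 4

# Hypothetical content-specific "fingerprint" patterns (purely simulated).
proto = {c: rng.standard_normal(n_vox) for c in ("sentence_A", "sentence_B")}
# Each speaker contributes an idiosyncratic additive pattern component.
speaker_fx = [0.5 * rng.standard_normal(n_vox) for _ in range(n_speakers)]

def trial(content, speaker):
    """One simulated response pattern: content + speaker + noise."""
    return proto[content] + speaker_fx[speaker] + 0.3 * rng.standard_normal(n_vox)

# Train centroids on speakers 0-2; test generalization to unheard speaker 3.
centroids = {c: np.mean([trial(c, s) for s in range(3)], axis=0) for c in proto}

def classify(pattern):
    return max(centroids, key=lambda c: np.corrcoef(pattern, centroids[c])[0, 1])

correct = sum(classify(trial(c, 3)) == c for c in proto for _ in range(10))
accuracy = correct / 20
```

Because the content component is shared across speakers while the speaker component is not, the classifier generalizes to the held-out speaker, which is the logic behind recognizing learned speech from unknown voices.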
Most people acquire literacy skills with remarkable ease, even though the human brain is not evolutionarily adapted to this relatively new cultural phenomenon. Associations between letters and speech sounds form the basis of reading in alphabetic scripts. We investigated the functional neuroanatomy of the integration of letters and speech sounds using functional magnetic resonance imaging (fMRI). Letters and speech sounds were presented unimodally and bimodally in congruent or incongruent combinations. Analysis of single-subject data and group data aligned on the basis of individual cortical anatomy revealed that letters and speech sounds are integrated in heteromodal superior temporal cortex. Interestingly, responses to speech sounds in a modality-specific region of the early auditory cortex were modified by simultaneously presented letters. These results suggest that efficient processing of culturally defined associations between letters and speech sounds relies on neural mechanisms similar to those naturally evolved for integrating audiovisual speech.
Visual face identification requires distinguishing between thousands of faces we know. This computational feat involves a network of brain regions including the fusiform face area (FFA) and anterior inferotemporal cortex (aIT), whose roles in the process are not well understood. Here, we provide the first demonstration that it is possible to discriminate cortical response patterns elicited by individual face images with high-resolution functional magnetic resonance imaging (fMRI). Response patterns elicited by the face images were distinct in aIT but not in the FFA. Individual-level face information is likely to be present in both regions, but our data suggest that it is more pronounced in aIT. One interpretation is that the FFA detects faces and engages aIT for identification.

fMRI | information-based | population code

When we perceive a familiar face, we usually effortlessly recognize its identity. Identification requires distinguishing between thousands of faces we know. A puzzle to both brain and computer scientists, this computational feat involves a network of brain regions (1) including the fusiform face area (FFA) (2, 3) and anterior inferotemporal cortex (aIT) (4). There is a wealth of evidence for an involvement in face identification of both the FFA (1, 5-18) and aIT (4, 16, 19-26). The FFA responds vigorously whenever a face is perceived (2, 3, 27). This implies that the FFA distinguishes faces from objects of other categories and suggests the function of face detection (27, 28). An additional role for the FFA in face identification has been suggested by three lines of evidence: (i) Lesions in the region of the FFA are frequently associated with deficits at recognizing individual faces (prosopagnosia) (6, 9, 10). (ii) The FFA response level covaries with behavioral performance at identification (11). (iii) The FFA responds more strongly to a sequence of different individuals than to the same face presented repeatedly (8, 12-17). For aIT as well, human lesion and neuroimaging studies suggest a role in face identification. Neuroimaging studies (4, 22-24, 26) found anterior temporal activation during face recognition with the activity predictive of performance (22). Lesion studies (19, 20, 25) suggest that right anterior temporal cortex is involved in face identification. In monkey electrophysiology, in fact, face-identity effects appear stronger in anterior than in posterior inferotemporal cortex (29-31). These lines of evidence suggest an involvement of both the FFA and aIT in face identification. A region representing faces at the individual level should distinguish individual faces by its activity pattern. However, it has never been directly demonstrated that either the FFA or aIT responds with distinct activity patterns to different individual faces. We therefore investigated response patterns elicited by two face images by means of high-resolution functional magnetic resonance imaging (fMRI) at 3 Tesla (voxels: 2 × 2 × 2 mm³). We asked whether response pattern...
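The question of whether a region responds with distinct activity patterns to two face images can be illustrated with a split-data correlation test on simulated patterns. This is a simplified stand-in for the study's actual analysis: if the within-image pattern correlation across independent data halves exceeds the between-image correlation, the region carries image-discriminating information. All patterns and noise levels below are simulation assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox = 200

def discrimination(pat_a, pat_b, noise=1.0):
    """Within- minus between-image pattern correlation across two data halves."""
    a1 = pat_a + noise * rng.standard_normal(n_vox)
    a2 = pat_a + noise * rng.standard_normal(n_vox)
    b1 = pat_b + noise * rng.standard_normal(n_vox)
    b2 = pat_b + noise * rng.standard_normal(n_vox)
    corr = lambda u, v: np.corrcoef(u, v)[0, 1]
    within = (corr(a1, a2) + corr(b1, b2)) / 2
    between = (corr(a1, b2) + corr(b1, a2)) / 2
    return within - between

face_a = rng.standard_normal(n_vox)
face_b = rng.standard_normal(n_vox)

d_distinct = discrimination(face_a, face_b)  # region with identity information
d_none = discrimination(face_a, face_a)      # region responding identically
```

A positive index in aIT-like data but not in FFA-like data would mirror the paper's conclusion that identity information is more pronounced in aIT.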
Understanding the functional organization of the human primary auditory cortex (PAC) is an essential step in elucidating the neural mechanisms underlying the perception of sound, including speech and music. Based on invasive research in animals, it is believed that neurons in human PAC that respond selectively with respect to the spectral content of a sound form one or more maps in which neighboring patches on the cortical surface respond to similar frequencies (tonotopic maps). The number and the cortical layout of such tonotopic maps in the human brain, however, remain unknown. Here we use silent, event-related functional magnetic resonance imaging at 7 Tesla and a cortex-based analysis of functional data to delineate with high spatial resolution the detailed topography of two tonotopic maps in two adjacent subdivisions of PAC. These maps share a low-frequency border, are mirror symmetric, and clearly resemble those of presumably homologous fields in the macaque monkey.
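The core of tonotopic mapping is assigning each cortical location a best frequency, the stimulus frequency that evokes its strongest response, and reading the map layout off the resulting gradient. The following is a toy one-dimensional sketch, not the study's 7 Tesla analysis: it simulates two mirror-symmetric gradients that share a low-frequency border, with Gaussian tuning on a log-frequency axis. Frequencies, tuning width, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

freqs = np.array([0.3, 0.5, 1.0, 2.0, 4.0, 8.0])  # stimulus frequencies in kHz
n_vert = 60

# Two mirror-symmetric tonotopic gradients sharing a low-frequency border:
# preferred log-frequency runs high -> low over map 1, then low -> high over map 2.
pref = np.concatenate([np.linspace(np.log(8.0), np.log(0.3), n_vert // 2),
                       np.linspace(np.log(0.3), np.log(8.0), n_vert // 2)])

# Gaussian tuning on a log-frequency axis, plus measurement noise.
resp = np.exp(-(np.log(freqs)[None, :] - pref[:, None]) ** 2 / 0.5)
resp += 0.05 * rng.standard_normal(resp.shape)

best = freqs[np.argmax(resp, axis=1)]  # best-frequency map along the strip
```

Reading `best` along the strip recovers the high-low-high progression: the shared low-frequency border sits at the midpoint, which is the signature used to delineate adjacent mirror-symmetric fields.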