Is vision necessary for the development of the categorical organization of the Ventral Occipito-Temporal Cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people using a representational structure and connectivity partially similar to those found in vision. Sound categories were, however, more reliably encoded in the blind than in the sighted group, using a representational format closer to the one found in vision. Crucially, VOTC in blind people represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups.
Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magnetoencephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.

cross-modal plasticity | deafness | modularity | ventral stream | identity processing

The human brain is endowed with the fundamental ability to adapt its neural circuits in response to experience. Sensory deprivation has long been championed as a model to test how experience interacts with intrinsic constraints to shape functional brain organization. In particular, decades of neuroscientific research have gathered compelling evidence that blindness and deafness are associated with cross-modal recruitment of the sensory-deprived cortices (1).
For instance, in early deaf individuals, visual and tactile stimuli induce responses in regions of the cerebral cortex that are sensitive primarily to sounds in the typical hearing brain (2, 3). Animal models of congenital and early deafness suggest that specific visual functions are relocated to discrete regions of the reorganized cortex and that this functional preference in cross-modal recruitment supports superior visual performance. For instance, superior visual motion detection is selectively altered in deaf cats when a portion of the dorsal auditory cortex, specialized for auditory motion processing in the hearing cat, is transiently deactivated (4). These results suggest that cross-modal plasticity associated with early auditory deprivation follows organizational principles that maintain the functional specialization of the colonized brain regions. In humans, however, there is only limited evidence that specific nonauditory inputs are differentially localized to discrete portions of the auditory-deprived cortices. For example, Bola et al. have recently reported, in deaf individuals, cross-modal activations for visual rhythm discrimination in t...
Highlights
- Cross-modal decoding of visual and auditory motion directions in hMT+/V5
- Motion-direction representation is, however, not abstracted from the sensory input
- We reveal a multifaceted representation of multisensory motion signals in hMT+/V5
Sounds activate occipital regions in early blind individuals. However, how different sound categories map onto specific regions of the occipital cortex remains a matter of debate. We used fMRI to characterize brain responses of early blind and sighted individuals to familiar object sounds, human voices, and their respective low-level control sounds. In addition, sighted participants were tested while viewing pictures of faces, objects, and phase-scrambled control pictures. In both early blind and sighted participants, a double dissociation was evidenced in bilateral auditory cortices between responses to voices and object sounds: Voices elicited categorical responses in bilateral superior temporal sulci, whereas object sounds elicited categorical responses along the lateral fissure bilaterally, including the primary auditory cortex and planum temporale. Outside the auditory regions, object sounds also elicited categorical responses in the left lateral and in the ventral occipitotemporal regions in both groups. These regions also showed response preference for images of objects in the sighted group, thus suggesting a functional specialization that is independent of sensory input and visual experience. Between-group comparisons revealed that, only in the blind group, categorical responses to object sounds extended more posteriorly into the occipital cortex. Functional connectivity analyses evidenced a selective increase in the functional coupling between these reorganized regions and regions of the ventral occipitotemporal cortex in the blind group. In contrast, vocal sounds did not elicit preferential responses in the occipital cortex in either group. Nevertheless, enhanced voice-selective connectivity between the left temporal voice area and the right fusiform gyrus was found in the blind group.
Altogether, these findings suggest that, in the absence of developmental vision, separate auditory categories are not equipotent in driving selective auditory recruitment of occipitotemporal regions and highlight the presence of domain-selective constraints on the expression of cross-modal plasticity.
In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, while the Planum Temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. In this study, we investigated for the first time in humans the existence of direct white matter connections between visual and auditory motion-selective regions using a combined functional- and diffusion-MRI approach. We found reliable evidence supporting the existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles such as the Inferior Longitudinal Fasciculus (ILF) or the Inferior Frontal Occipital Fasciculus (IFOF). Moreover, we did not find evidence for the existence of reciprocal projections between the fusiform face area and hPT, supporting the functional specificity of hMT+/V5 - hPT connections. Finally, evidence supporting the existence of hMT+/V5 - hPT connections was corroborated in a large sample of participants (n=114) from the Human Connectome Project. Altogether, this study provides the first evidence supporting the existence of direct occipito-temporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions and which we propose to name the middle (or motion) occipito-temporal track (MOTT).

...trajectories of large white matter bundles such as the Inferior Longitudinal Fasciculus (ILF) or the Inferior Frontal Occipital Fasciculus (IFOF). To further assess the selectivity of hPT - hMT+/V5 connections, we conducted probabilistic tractography between hPT and the fusiform face area (FFA), another region of the visual cortex with a specific functional role not related to motion.
As an additional control analysis, we investigated the existence of hMT+/V5 - hPT connections in a larger independent dataset (Human Connectome Project) to test the consistency of our findings.

RESULTS

Location of individually defined and group-level hMT+/V5 and hPT

Group-level coordinates for hMT+/V5 were located at MNI coordinates (44, -70, -4) and (-50, -70, -2) for the right and left hemispheres, respectively, which is consistent with reported MNI coordinates for this region (34); both peaks lie within the V5 mask of the Juelich histological atlas available in FSL (35) (see Figure 1A). Subject-specific hMT+/V5 coordinates were on average 7 ± 3 mm (M ± SD) and 10 ± 4 mm away from the group maxima in the right and left hemispheres, respectively (32). Group-level hPT coordinates were located at (64, -34, 13) and (-44, -32, 12) for the right and left hemispheres, respectively, which is consistent with reported MNI coordinates for this region (7); both peaks lie within the hPT mask of the Harvard-Oxford atlas in FSL (36). Individually defined hPT coordinates were on average 9 ± 5 mm and 15 ± 4 mm away from the group-m...
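The distances between subject-specific peaks and the group maximum reported above are plain Euclidean distances in MNI space. A minimal sketch of that computation, using the group-level right-hemisphere hMT+/V5 peak from the text and entirely hypothetical subject coordinates:

```python
import numpy as np

# Group-level right-hemisphere hMT+/V5 peak (MNI, mm), from the text.
group_peak = np.array([44, -70, -4])

# Hypothetical subject-specific peaks, for illustration only (not real data).
subject_peaks = np.array([
    [48, -66, -2],
    [40, -72, -6],
    [46, -74, 0],
])

# Euclidean distance of each subject's peak from the group maximum.
distances = np.linalg.norm(subject_peaks - group_peak, axis=1)
print(f"mean ± SD distance: {distances.mean():.1f} ± {distances.std():.1f} mm")
```

The same per-subject distances, averaged across the group, yield the "M ± SD mm away from the group maxima" summary quoted in the results.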
The ability to precisely compute the location and direction of sounds in external space is a crucial perceptual process for efficiently interacting with dynamic environments. Little is known, however, about how the human brain implements spatial hearing. In our study, we used fMRI to characterize the brain activity of humans listening to sounds moving leftward, rightward, upward, and downward, as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human Planum Temporale (hPT). Importantly, multivariate pattern classification analysis showed that hPT contains information about both auditory motion directions and, to a lesser extent, sound source locations. More precisely, we observed that our classifier successfully decoded opposite axes of motion (vertical versus horizontal) but was less able to classify opposite within-axis directions (left versus right or up versus down), reminiscent of the axis-of-motion organization observed in the middle-temporal cortex for vision. Further multivariate analyses demonstrated that even though motion direction and location rely on partially shared pattern geometries in PT, the responses elicited by static and moving sounds were highly distinct. Altogether, our results demonstrate that human PT codes for auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.
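The multivariate pattern classification described above amounts to training a cross-validated linear classifier on voxel activity patterns and testing whether condition labels (e.g., motion axis) can be decoded above chance. A minimal sketch of such a pipeline on synthetic data (the trial counts, voxel counts, and injected signal are illustrative assumptions, not the study's actual parameters):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for hPT voxel patterns: 40 "trials" x 100 "voxels",
# labeled by motion axis (0 = horizontal, 1 = vertical). Real analyses
# would use per-condition beta estimates from the fMRI model.
n_trials, n_voxels = 40, 100
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :10] += 1.0  # inject a weak axis-specific signal

# Cross-validated linear classification, as in typical MVPA pipelines.
scores = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Accuracy reliably above the 0.50 chance level indicates that the patterns carry information about the labels; the finding that between-axis decoding outperforms within-axis decoding is what motivates the axis-of-motion interpretation.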