Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magnetoencephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.

Keywords: cross-modal plasticity | deafness | modularity | ventral stream | identity processing

The human brain is endowed with the fundamental ability to adapt its neural circuits in response to experience. Sensory deprivation has long been championed as a model to test how experience interacts with intrinsic constraints to shape functional brain organization. In particular, decades of neuroscientific research have gathered compelling evidence that blindness and deafness are associated with cross-modal recruitment of the sensory-deprived cortices (1).
For instance, in early deaf individuals, visual and tactile stimuli induce responses in regions of the cerebral cortex that are primarily sensitive to sounds in the typical hearing brain (2, 3). Animal models of congenital and early deafness suggest that specific visual functions are relocated to discrete regions of the reorganized cortex and that this functional preference in cross-modal recruitment supports superior visual performance. For instance, superior visual motion detection is selectively altered in deaf cats when a portion of the dorsal auditory cortex, specialized for auditory motion processing in the hearing cat, is transiently deactivated (4). These results suggest that cross-modal plasticity associated with early auditory deprivation follows organizational principles that maintain the functional specialization of the colonized brain regions. In humans, however, there is only limited evidence that specific nonauditory inputs are differentially localized to discrete portions of the auditory-deprived cortices. For example, Bola et al. have recently reported, in deaf individuals, cross-modal activations for visual rhythm discrimination in t...
Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training procedure and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (pointing to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (an auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
Neuroplasticity following sensory deprivation has long inspired neuroscience research in the quest to understand how sensory experience and genetics interact in shaping the brain's functional and structural architecture. Many studies have shown that sensory deprivation can lead to cross-modal functional recruitment of sensory-deprived cortices. Little is known, however, about how structural reorganization may support these functional changes. In this study, we examined early deaf, hearing signer, and hearing non-signer individuals using diffusion MRI to evaluate the potential structural connectivity linked to the functional recruitment of the temporal voice area by face stimuli in deaf individuals. More specifically, we characterized the structural connectivity between occipital, fusiform, and temporal regions typically supporting voice- and face-selective processing. Despite the extensive functional reorganization for face processing in the temporal cortex of the deaf, macroscopic properties of these connections did not differ across groups. However, both occipito-temporal and fusiform-temporal connections showed significant microstructural changes between groups (fractional anisotropy reduction, radial diffusivity increase). We propose that the reorganization of temporal regions after early auditory deprivation builds on intrinsic and largely preserved anatomical connectivity between functionally specific temporal and occipital regions.
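The diffusion metrics reported above (fractional anisotropy and radial diffusivity) are standard scalar summaries computed from the three eigenvalues of the diffusion tensor. As a minimal illustrative sketch (not the authors' analysis pipeline; the eigenvalues below are hypothetical), they can be computed as:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the diffusion tensor's three eigenvalues: 0 = isotropic, approaches 1 when
    diffusion is strongly directional (e.g., along a coherent white-matter tract)."""
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2)
    den = math.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den if den else 0.0

def radial_diffusivity(l2, l3):
    """RD: mean of the two eigenvalues perpendicular to the principal diffusion axis."""
    return (l2 + l3) / 2.0

# Hypothetical eigenvalues (in units of 10^-3 mm^2/s)
print(round(fractional_anisotropy(1.7, 0.2, 0.2), 3))  # 0.87 (highly anisotropic)
print(fractional_anisotropy(1.0, 1.0, 1.0))            # 0.0 (isotropic)
print(radial_diffusivity(0.2, 0.2))                    # 0.2
```

In this framing, the FA reduction and RD increase reported for the deaf group describe diffusion becoming less directional along the occipito-temporal and fusiform-temporal tracts, without any macroscopic change in the tracts themselves.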
The human capacity for semantic knowledge entails not only the representation of single concepts but also the capacity to combine these concepts into the increasingly complex ideas that underlie human thought. This process involves not only the combination of concepts from within the same semantic category but also, frequently, conceptual combination across semantic domains. In this fMRI study (N = 24) we investigate the cortical mechanisms underlying our ability to combine concepts across different semantic domains. Using five semantic domains (People, Places, Food, Objects, and Animals), we present sentences depicting concepts drawn from a single semantic domain as well as sentences that combine concepts from two of these domains. Contrasting single-category and combined-category sentences reveals that the precuneus is more active when concepts from different domains have to be combined. At the same time, we observe that distributed category-selective representations persist when higher-order meaning involves the combination of categories, and that this category-selective response is captured by the combination of the single categories composing the sentence. Collectively, these results suggest that the precuneus plays a role in the combination of concepts across different semantic domains, potentially functioning to link together category-selective representations distributed across the cortex.
In early deaf individuals, the auditory-deprived temporal brain regions become engaged in visual processing. In our study we further tested the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial, and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion showed an enhanced response in the ‘deaf’ mid-lateral planum temporale, a region selective to auditory motion as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while the visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic causal modelling revealed that the ‘deaf’ motion-selective temporal region shows a specific increase in its functional interactions with hMT+/V5 and is now part of a large-scale visual motion-selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the ‘deaf’ right superior temporal cortex region that also shows a preferential response to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.

Highlights:
- Auditory motion-sensitive regions respond to visual motion in the deaf
- Reorganized auditory cortex can discriminate between visual motion trajectories
- Part of the deaf auditory cortex shows preference for in-depth visual motion
- Deafness might lead to computational reallocation between auditory/visual regions