Humans have a unique ability to learn more than one language, a skill that is thought to be mediated by functional (rather than structural) plastic changes in the brain. Here we show that learning a second language increases the density of grey matter in the left inferior parietal cortex and that the degree of structural reorganization in this region is modulated by the proficiency attained and the age at acquisition. This relation between grey-matter density and performance may represent a general principle of brain organization.
To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world.
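The three computational stages described above (segregation, reliability-weighted forced fusion, and the full Bayesian Causal Inference combination) can be sketched numerically under standard Gaussian assumptions. The noise levels and signal locations below are illustrative placeholders, not the study's fitted values:

```python
# Illustrative noise levels (standard deviations, in degrees), not fitted values
sigma_a, sigma_v = 8.0, 2.0   # audition assumed less reliable than vision
x_a, x_v = 10.0, 2.0          # noisy auditory and visual location samples (deg)

# Segregation (bottom of the hierarchy): each signal is attributed to its
# own independent source, so each estimate is just the unisensory sample.
s_seg_a, s_seg_v = x_a, x_v

# Forced fusion (next stage): assume one common source and average the
# signals weighted by their reliabilities (inverse variances).
w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
s_fused = w_a * x_a + (1 - w_a) * x_v

print(f"segregated estimates: A={s_seg_a:.1f}, V={s_seg_v:.1f}")
print(f"fused estimate: {s_fused:.2f} (auditory weight {w_a:.3f})")
```

At the top of the hierarchy, Bayesian Causal Inference weights these two kinds of estimate by the posterior probability of each causal structure, so the final estimate falls between the segregated and fused ones depending on how likely a common source is.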
How does the bilingual brain distinguish and control which language is in use? Previous functional imaging experiments have not been able to answer this question because proficient bilinguals activate the same brain regions irrespective of the language being tested. Here, we reveal that neuronal responses within the left caudate are sensitive to changes in the language or the meaning of words. By demonstrating this effect in populations of German-English and Japanese-English bilinguals, we suggest that the left caudate plays a universal role in monitoring and controlling the language in use.
Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics-fMRI study used the musician's brain to investigate how sensory-motor experience molds temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisual asynchrony responses and effective connectivity selectively for music in a superior temporal sulcus-premotor-cerebellar circuitry. Critically, the premotor asynchrony effects predicted musicians' perceptual sensitivity to audiovisual asynchrony. Our results suggest that piano practicing fine-tunes an internal forward model mapping from action plans of piano playing onto visible finger movements and sounds. This internal forward model furnishes more precise estimates of the relative audiovisual timings and, hence, stronger prediction error signals specifically for asynchronous music in a premotor-cerebellar circuitry. Our findings show intimate links between action production and audiovisual temporal binding in perception.

Keywords: audiovisual synchrony | multisensory integration | sensorimotor learning | crossmodal integration | experience-dependent plasticity

Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. The musician's brain thus provides an ideal model to study experience-dependent plasticity in humans (1, 2). Previous research in musicians has focused on neural plasticity affecting unisensory and motor processing. Little is known about how musical expertise alters the integration of inputs from multiple senses. Because musical performance requires precise timing, musical expertise may specifically modulate the temporal binding of sensory signals.
Given the variability in physical and neural transmission times, sensory signals do not have to be precisely synchronous but must co-occur within a temporal window that flexibly adapts to the temporal statistics of the sensory inputs as a consequence of musical (3) or audiovisual (4) training. At the neural level, audiovisual (a)synchrony processing relies on a widespread neural system encompassing subcortical, primary sensory, higher-order association, cerebellar, and premotor areas (5-8).

This study used the musician's brain as a model to investigate how long-term sensory-motor experience (i.e., piano practicing) shapes the neural processes underlying temporal binding of auditory and visual signals. We presented subjects with synchronous and asynchronous speech and piano music as two stimulus classes that are both characterized by a rich hierarchical temporal structure but linked to different motor effectors (mouth vs. hand). Comparing the effect of musical expertise on synchrony perception of speech and music allowed us to dissociate generic and context-specific neural mechanisms ...
To obtain a coherent percept of the environment, the brain should integrate sensory signals from common sources and segregate those from independent sources. Recent research has demonstrated that humans integrate audiovisual information during spatial localization consistent with Bayesian Causal Inference (CI). However, the decision strategies that human observers employ for implicit and explicit CI remain unclear. Further, despite the key role of sensory reliability in multisensory integration, Bayesian CI has never been evaluated across a wide range of sensory reliabilities. This psychophysics study presented participants with spatially congruent and discrepant audiovisual signals at four levels of visual reliability. Participants localized the auditory signals (implicit CI) and judged whether auditory and visual signals came from common or independent sources (explicit CI). Our results demonstrate that humans employ model averaging as a decision strategy for implicit CI; they report an auditory spatial estimate that averages the spatial estimates under the two causal structures weighted by their posterior probabilities. Likewise, they explicitly infer a common source during the common-source judgment when the posterior probability for a common source exceeds a fixed threshold of 0.5. Critically, sensory reliability shapes multisensory integration in Bayesian CI via two distinct mechanisms: First, higher sensory reliability sensitizes humans to spatial disparity and thereby sharpens their multisensory integration window. Second, sensory reliability determines the relative signal weights in multisensory integration under the assumption of a common source. In conclusion, our results demonstrate that Bayesian CI is fundamental for integrating signals of variable reliabilities.
The cognitive and neural mechanisms mediating category-selective responses in the human brain remain controversial. Using functional magnetic resonance imaging and effective connectivity analyses (Dynamic Causal Modelling), we investigated animal- and tool-selective responses by manipulating stimulus modality (pictures versus words) and task (implicit versus explicit semantic). We dissociated two distinct mechanisms that engender category selectivity: in the ventral occipito-temporal cortex, tool-selective responses were observed irrespective of task, greater for pictures and mediated by bottom-up effects. In a left temporo-parietal action system, tool-selective responses were observed irrespective of modality, greater for explicit semantic tasks and mediated by top-down modulation from the left prefrontal cortex. These distinct activation and connectivity patterns suggest that the two systems support different cognitive operations, with the ventral occipito-temporal regions engaged in structural processing and the dorsal visuo-motor system in strategic semantic processing. Consistent with current semantic theories, explicit semantic processing of tools might thus rely on reactivating their associated action representations via top-down modulation. In terms of neuronal mechanisms, the category selectivity may be mediated by distinct top-down (task-dependent) and bottom-up (stimulus-dependent) mechanisms.