Although everyone has experienced that seeing lip movements can improve speech perception, little is known about the neural mechanisms by which audiovisual speech information is combined. Event-related potentials (ERPs) were recorded while subjects performed an auditory recognition task among four different natural syllables randomly presented in the auditory (A), visual (V), or congruent bimodal (AV) condition. We found that (1) bimodal syllables were identified more rapidly than auditory-alone stimuli; and (2) this behavioural facilitation was associated with crossmodal [AV − (A+V)] ERP effects at around 120–190 ms latency, expressed mainly as a decrease of unimodal N1 generator activities in the auditory cortex. This finding provides evidence for suppressive, speech-specific audiovisual integration mechanisms, which are likely related to the dominance of the auditory modality for speech perception. Furthermore, the latency of the effect indicates that integration operates at pre-representational stages of stimulus analysis, probably via feedback projections from visual and/or polymodal areas.
Previous studies raise the hypothesis that attentional bias in the phase of neocortical excitability fluctuations (oscillations) represents a fundamental mechanism for tuning the brain to the temporal dynamics of task-relevant event patterns. To evaluate this hypothesis, we recorded intracranial electrocortical activity in human epilepsy patients while they performed an audiovisual stream-selection task. Consistent with our hypothesis, (1) attentional modulation of oscillatory entrainment operates in a distinct network of areas including auditory, visual, posterior parietal, inferior motor, inferior frontal, and superior midline frontal cortex; (2) the degree of oscillatory entrainment depends on the predictability of the stimulus stream; and (3) the attentional phase shift of entrained oscillations co-occurs with classical attentional effects on phase-locked evoked activity in sensory-specific areas, but appears to operate on entrained low-frequency oscillations that cannot be explained by sensory activity evoked at the rate of stimulation. Thus, attentional entrainment appears to tune a network of brain areas to the temporal dynamics of behaviorally relevant event streams, contributing to their perceptual and behavioral selection.
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as when following one particular conversation at a cocktail party. The present electrophysiological study aims to decipher the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude-modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights into the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Hemodynamic studies have shown that the auditory cortex can be activated by visual lip movements and is a site of interactions between auditory and visual speech processing. However, they provide no information about the chronology and mechanisms of these crossmodal processes. We recorded intracranial event-related potentials to auditory, visual, and bimodal speech syllables from depth electrodes implanted in the temporal lobe of 10 epileptic patients (932 contacts altogether). We found that lip movements activate secondary auditory areas very shortly (≈10 ms) after the activation of the visual motion area MT/V5. After this putatively feedforward visual activation of the auditory cortex, audiovisual interactions took place in the secondary auditory cortex, from 30 ms after sound onset and before any activity in the polymodal areas. Audiovisual interactions in the auditory cortex, as estimated in a linear model, consisted of both a total suppression of the visual response to lipreading and a decrease of the auditory responses to the speech sound in the bimodal condition compared with the unimodal conditions. These findings demonstrate that audiovisual speech integration does not respect the classical hierarchy from sensory-specific to associative cortical areas, but rather engages multiple crossmodal mechanisms at the first stages of nonprimary auditory cortex activation.
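The additive model referred to above compares the bimodal response with the sum of the two unimodal responses: any deviation of AV from A + V is taken as evidence of crossmodal interaction. A minimal sketch of that comparison, using placeholder random waveforms and an assumed sampling rate (the variable names and data here are illustrative, not from the study):

```python
import numpy as np

fs = 1000  # assumed sampling rate in Hz
t = np.arange(-100, 400)  # time in ms relative to sound onset (1 sample per ms)

# Placeholder trial-averaged ERP waveforms for one channel, one per condition.
rng = np.random.default_rng(0)
erp_A = rng.normal(0.0, 0.1, t.size)   # auditory-alone response
erp_V = rng.normal(0.0, 0.1, t.size)   # visual-alone response
erp_AV = rng.normal(0.0, 0.1, t.size)  # bimodal response

# Additive-model interaction term: AV - (A + V). Under pure summation this
# is zero; a negative deflection indicates a suppressive interaction.
interaction = erp_AV - (erp_A + erp_V)

# Average the interaction term over the 120-190 ms window in which the
# N1-range crossmodal effect was reported.
window = (t >= 120) & (t <= 190)
mean_effect = interaction[window].mean()
```

In practice the interaction term would be computed per subject and channel and then tested statistically against zero within the window of interest.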
The primary somatosensory cortex (S1) can be subdivided cytoarchitectonically into four distinct Brodmann areas (3a, 3b, 1, and 2), but these areas have never been successfully delineated in vivo in single human subjects. Here, we demonstrate the functional parcellation of four areas of S1 in individual human subjects based on high-resolution functional MRI measurements made at 7 T using vibrotactile stimulation. By stimulating four sites along the length of the index finger, we were able to identify and locate map reversals of the base-to-tip representation of the index finger in S1. We suggest that these reversals correspond to the areal borders between the mirrored representations in the four Brodmann areas, as predicted from electrophysiology measurements in nonhuman primates. In all subjects, maps were highly reproducible across scanning sessions and stable over weeks. In four of the six subjects scanned, four mirrored within-finger somatotopic maps defining the extent of the Brodmann areas could be directly observed on the cortical surface. In addition, by using multivariate classification analysis, the location of stimulation on the index finger (four distinct sites) could be decoded with a mean accuracy of 65% across subjects. Our measurements thus show that within-finger topography is present at the millimeter scale in the cortex and is highly reproducible. The ability to identify functional areas of S1 in vivo in individual subjects will provide a framework for investigating more complex aspects of tactile representation in S1.
A desirable goal of functional MRI (fMRI), both clinically and for basic research, is to produce detailed maps of cortical function in individual subjects. Single-subject mapping of the somatotopic hand representation in the human primary somatosensory cortex (S1) has been performed using both phase-encoding and block/event-related designs. Here, we review the theoretical strengths and limits of each method and empirically compare high-resolution (1.5 mm isotropic) somatotopic maps obtained using fMRI at ultrahigh magnetic field (7 T) with phase-encoding and event-related designs in six subjects in response to vibrotactile stimulation of the five fingertips. Results show that the phase-encoding design is more efficient than the event-related design for mapping fingertip-specific responses and in particular allows us to describe a new additional somatotopic representation of fingertips on the precentral gyrus. However, with sufficient data, both designs yield very similar fingertip-specific maps in S1, which confirms that the assumption of local representational continuity underlying phase-encoding designs is largely valid at the level of the fingertips in S1. In addition, it is shown that the event-related design allows the mapping of overlapping cortical representations that are difficult to estimate using the phase-encoding design. The event-related data show a complex pattern of overlapping cortical representations for different fingertips within S1 and demonstrate that regions of S1 responding to several adjacent fingertips can incorrectly be identified as responding preferentially to one fingertip in the phase-encoding data.
Recent fMRI studies of the human primary somatosensory cortex have been able to differentiate the cortical representations of different fingertips at a single-subject level. These studies did not, however, investigate the expected overlap in cortical activation due to the stimulation of different fingers. Here, we used an event-related design in six subjects at 7 Tesla to explore the overlap in cortical responses elicited in S1 by vibrotactile stimulation of the five fingertips. We found that all parts of S1 show some degree of spatial overlap between the cortical representations of adjacent or even nonadjacent fingertips. In S1, the posterior bank of the central sulcus showed less overlap than regions in the post-central gyrus, which responded to up to five fingertips. The functional properties of these two areas are consistent with the known layout of cytoarchitectonically defined subareas, and we speculate that they correspond to subarea 3b (S1 proper) and subarea 1, respectively. In contrast with previous fMRI studies, however, we did not observe discrete activation clusters that could unequivocally be attributed to different subareas of S1. Venous maps based on T2*-weighted structural images suggest that the observed overlap is not driven by extra-vascular contributions from large veins. Hum Brain Mapp 35:2027–2043, 2014.