The functional organization of human auditory cortex has not yet been characterized beyond a rudimentary level of detail. Here, we use functional MRI to measure the microstructure of orthogonal tonotopic and periodotopic gradients forming complete auditory field maps (AFMs) in human core and belt auditory cortex. These AFMs show clear homologies to subfields of auditory cortex identified in nonhuman primates and in human cytoarchitectural studies. In addition, we present measurements of the macrostructural organization of these AFMs into "clover leaf" clusters, consistent with the macrostructural organization seen across human visual cortex. As auditory cortex sits at the interface between peripheral hearing and central processes, an improved understanding of the organization of this system could open the door to a better understanding of the transformation from auditory spectrotemporal signals to higher-order information such as speech categories.

Keywords: tonotopy | periodotopy | cochleotopy | temporal receptive field | traveling wave

Humans have evolved a highly sophisticated auditory system for the transduction and analysis of acoustic information, such as the spectral content of sounds and the temporal modulation of sound energy. The basilar membrane of the cochlea is organized tonotopically to represent the spectral content of sounds from high to low frequencies. This tonotopic (or cochleotopic) organization is preserved as auditory information is processed and passed on from the cochlea to the superior olive, the inferior colliculus, and the medial geniculate nucleus, and into primary auditory cortex. Such cortical preservation of the peripheral sensory topography creates a common topographic sensory matrix in hierarchically organized sensory systems, important for consistent sensory computations. The current state of knowledge of the functional organization of human auditory cortex indicates the existence of multiple cortical subfields organized tonotopically. However, the number of these human cortical subfields, their boundaries, and their orientations relative to anatomical landmarks remain equivocal, due in part to an inability to measure cortical representations of a second acoustic dimension, orthogonal to tonotopy, with which to accurately delineate them.

This ambiguity of human auditory subfield definitions contrasts dramatically with the current understanding of the functional organization of human visual cortex, in which detailed maps of the organization of the retina, called visual field maps (VFMs), have been well characterized (1-9). In vision, there are two orthogonal dimensions of visual space, eccentricity and polar angle, which together allow cortical representations to be mapped to unique locations in visual space and the boundaries of individual VFMs to be completely delineated. In audition, only one dimension of sensory topography has been clearly mapped in cortex, which makes it impossible to use sensory topography alone to accurately differentiate specific human cortical auditory field maps (AFMs).
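The "traveling wave" keyword above refers to phase-encoded mapping: the stimulus sweeps cyclically through the tone (or modulation-rate) range, and each voxel's response phase at the sweep frequency indicates its preferred value along that dimension. The sketch below illustrates only this core idea on a synthetic time series; the TR, sweep parameters, and frequency range are assumptions for illustration, not the study's actual acquisition or analysis pipeline.

```python
import numpy as np

# Phase-encoded ("traveling wave") analysis on one synthetic voxel.
# All parameters below are illustrative assumptions.
TR = 2.0                              # seconds per volume (assumed)
n_vols = 240                          # volumes per scan (assumed)
n_cycles = 8                          # full low-to-high frequency sweeps per scan
sweep_hz = n_cycles / (n_vols * TR)   # stimulus fundamental, in Hz

t = np.arange(n_vols) * TR
rng = np.random.default_rng(0)

# Synthetic voxel: responds most strongly partway through each sweep
# (phase 1.2 rad), plus Gaussian noise. Real input would be a
# preprocessed BOLD time series.
true_phase = 1.2
voxel = np.cos(2 * np.pi * sweep_hz * t - true_phase) + 0.5 * rng.standard_normal(n_vols)

# The Fourier component at the sweep frequency: its phase encodes where
# in the sweep (hence which tone frequency) the voxel responded most.
spectrum = np.fft.rfft(voxel)
freqs = np.fft.rfftfreq(n_vols, d=TR)
k = np.argmin(np.abs(freqs - sweep_hz))
phase = -np.angle(spectrum[k]) % (2 * np.pi)   # position within the sweep

# Map phase onto a hypothetical logarithmic sweep from 200 Hz to 6400 Hz.
f_lo, f_hi = 200.0, 6400.0
pref_freq = f_lo * (f_hi / f_lo) ** (phase / (2 * np.pi))
print(f"recovered phase {phase:.2f} rad -> preferred frequency ~{pref_freq:.0f} Hz")
```

In practice, opposite-direction sweeps are typically run and their phase estimates combined to cancel the hemodynamic delay; that step is omitted here for brevity.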
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of the STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants performed an oddball detection task while being presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli, in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in the STS, auditory, visual, and audiovisual stimuli produced the largest BOLD effects in anterior, posterior, and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects, and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as to speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual to multisensory to auditory, moving from posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of the STS is consistent with processing streams hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from the visual and auditory modalities.
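The post hoc test that distinguishes visual speech from nonspeech "based on multi-voxel activation patterns" is a pattern-decoding (MVPA) analysis. Below is a hedged sketch of how such a test is commonly run, on synthetic trial patterns; the trial counts, voxel counts, classifier, and cross-validation scheme are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic multi-voxel patterns: 40 trials x 200 voxels per condition,
# with a small mean offset so the two conditions are weakly separable.
n_trials, n_voxels = 40, 200
speech = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + 0.15
nonspeech = rng.normal(0.0, 1.0, (n_trials, n_voxels))

X = np.vstack([speech, nonspeech])
y = np.array([1] * n_trials + [0] * n_trials)   # 1 = speech, 0 = nonspeech

# Linear classifier with stratified cross-validation: decoding accuracy
# reliably above chance (0.50) implies the region's spatial pattern
# carries condition information even when mean amplitude does not differ.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```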
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex, and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as the STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in the pSTS, it remains unclear how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases activity in auditory cortex.
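Because the auditory stimulus is held constant, the central claim reduces to a within-subject contrast: auditory-cortex responses with congruent visual speech (AV) versus without it (A). A minimal sketch of that group-level test follows, using hypothetical per-subject ROI estimates; the values and effect size are invented for illustration.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)

# Hypothetical per-subject auditory-cortex ROI responses (percent signal
# change) for auditory-only (A) and audiovisual (AV) speech.
n_subjects = 20
a_only = rng.normal(0.60, 0.15, n_subjects)
av = a_only + rng.normal(0.08, 0.10, n_subjects)   # simulated AV enhancement

# Paired t-test: does adding congruent visual speech raise auditory-cortex
# activity beyond the response to the identical auditory stimulus alone?
t_stat, p_val = ttest_rel(av, a_only)
print(f"AV - A: mean diff = {np.mean(av - a_only):.3f} %sc, "
      f"t({n_subjects - 1}) = {t_stat:.2f}, p = {p_val:.4f}")
```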
Recent evidence suggests that the speech motor system may play a significant role in speech perception. Repetitive transcranial magnetic stimulation (TMS) applied to a speech region of premotor cortex impaired syllable identification, while stimulation of motor areas for different articulators selectively facilitated identification of phonemes relying on those articulators. However, in these experiments performance was not corrected for response bias, and it is not currently known how response bias modulates activity in these networks. The present functional magnetic resonance imaging experiment was designed to produce specific, measurable changes in response bias in a speech perception task. Minimal consonant-vowel stimulus pairs were presented between volume acquisitions for same-different discrimination. Speech stimuli were embedded in Gaussian noise at the psychophysically determined threshold level. We manipulated bias by changing the ratio of same-to-different trials across five levels: 1:3, 1:2, 1:1, 2:1, and 3:1. Ratios were blocked by run, and subjects were cued to the upcoming ratio at the beginning of each run; the stimuli were physically identical across runs. Response bias (criterion, C) was measured in individual subjects for each ratio condition, and group mean bias varied in the expected direction. We predicted that activation in frontal but not temporal brain regions would covary with bias. Group-level regression of bias scores on percent signal change revealed a fronto-parietal network of motor and sensory-motor brain regions that were sensitive to changes in response bias. We identified several pre- and post-central clusters in the left hemisphere that overlap well with TMS targets from the aforementioned studies. Importantly, activity in these regions covaried with response bias even while the perceptual targets remained constant. Thus, previous results suggesting that speech motor cortex participates directly in the perceptual analysis of speech should be called into question.
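The bias measure named in this abstract, criterion C, comes from signal detection theory: with z the inverse standard normal CDF, C = -(z(H) + z(F)) / 2, where H and F are the hit and false-alarm rates, and d' = z(H) - z(F) is the companion sensitivity measure. A small sketch follows; the trial counts are hypothetical, and the log-linear correction used here is one common convention rather than necessarily the study's choice.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d', C) from raw trial counts.

    Uses the standard formulas d' = z(H) - z(F) and C = -(z(H) + z(F)) / 2,
    with a log-linear correction (add 0.5 per cell) so rates of exactly
    0 or 1 stay finite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f
    criterion = -0.5 * (z_h + z_f)   # C > 0: conservative; C < 0: liberal
    return d_prime, criterion

# Hypothetical subject in a 3:1 same:different block, responding liberally.
d, c = sdt_measures(hits=55, misses=5, false_alarms=12, correct_rejections=8)
print(f"d' = {d:.2f}, criterion C = {c:.2f}")
```

A negative C here reflects the expected liberal shift under a 3:1 ratio; regressing each subject's C against percent signal change is the kind of covariation the group-level analysis tests.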