Visual speech influences the perception of heard speech. A classic example is the McGurk effect, whereby an auditory /pa/ overlaid onto a visual /ka/ induces the fusion percept /ta/. Recent behavioral and neuroimaging research has highlighted the importance of both articulatory representations and motor speech regions of the brain, particularly Broca’s area, in audiovisual (AV) speech integration. Alternatively, AV speech integration may be accomplished by the sensory system through multisensory integration in the posterior superior temporal sulcus (pSTS). We assessed the claims regarding the involvement of the motor system in AV integration in two experiments: (i) examining the effect of articulatory suppression on the McGurk effect, and (ii) determining whether motor speech regions show an AV integration profile. The hypothesis of experiment (i) was that if the motor system plays a role in McGurk fusion, distracting the motor system through articulatory suppression should reduce McGurk fusion. Experiment (i) showed no such reduction, suggesting that the motor system is not responsible for the McGurk effect. The hypothesis of experiment (ii) was that if brain activation to AV speech in motor regions (such as Broca’s area) reflects AV integration, the profile of activity should satisfy both AV > AO (auditory-only) and AV > VO (visual-only). Experiment (ii) demonstrated that motor speech regions do not show this integration profile, whereas the pSTS does; instead, activity in motor regions is task-dependent. The combined results suggest that AV speech integration does not rely on the motor system.
Visual features such as edges and corners are carried by high-order statistics. Previous analysis of discrimination of "isodipole" textures, which isolate specific high-order statistics, demonstrates visual sensitivity to these statistics but stops short of analyzing the underlying computations. Here we use a new "texture centroid" paradigm to probe these computations. We focus on two canonical isodipole textures, the "even" and "odd" textures: any 2×2 block of even (odd) texture contains an even (odd) number of black (and white) checks. Each stimulus comprised a spatially random array of black-and-white texture-disks (background = mean gray) that varied in their fourth-order statistics. In the Even (Odd) condition, disks varied along the continuum between random "coinflip" texture and pure (highly structured) even (odd) target texture. The task was to mouse-click the centroid of the disk array, weighting each disk location by the target structure level of the disk-texture (ranging from 0 for coinflip to 1 for even or odd). For each of block-sizes S = 2×2, 2×3, 2×4, and 3×3, a linear model was used to estimate the weight exerted on the subject's responses by the differently patterned blocks of size S. Only the models with 2×4 and 3×3 blocks were consistent with the data. In the Even condition, homogeneous blocks exerted the most weight; in the Odd condition, block-pattern symmetry was important. These findings show that visual mechanisms sensitive to four-point correlations do not compute "evenness" or "oddness" per se, but rather are activated selectively by features whose frequency varies across isodipole textures.
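The centroid task and the linear-weight analysis can be sketched in a few lines. The following is a hypothetical illustration, not the authors' actual analysis code: disk structure levels, influence weights, and trial counts are all assumed for demonstration. The modeled response on each trial is the centroid of the disk locations weighted by each disk's influence, and those influence weights are then recovered from the trial data by ordinary least squares, mirroring the linear-model estimation described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: one disk per target-structure level on each trial,
# with influence weights proportional to structure level (an assumption,
# not the paper's finding).
levels = np.array([0.0, 1/3, 2/3, 1.0])   # structure level of each disk
true_w = levels / levels.sum()            # assumed normalized influence

n_trials = 500
locs = rng.uniform(0, 1, size=(n_trials, len(levels), 2))  # (x, y) per disk
resp = np.einsum('k,tkc->tc', true_w, locs)                # weighted centroid

# Recover the weights: stack x- and y-coordinates into one regression,
# resp ≈ locs @ w, and solve by ordinary least squares.
X = locs.transpose(0, 2, 1).reshape(-1, len(levels))
y = resp.reshape(-1)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w_hat, 3))   # recovers true_w: [0. 0.167 0.333 0.5]
```

In the actual paradigm the regressors are block patterns of a given size S rather than whole-disk structure levels, but the estimation logic (linear weights fit to observed click positions) is the same.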