Hearing one's own speech is important for language learning and for maintaining accurate articulation. For example, people with postlinguistically acquired deafness often show a gradual deterioration of many aspects of speech production. In this manuscript, data are presented that address the role played by acoustic feedback in the control of voice fundamental frequency (F0). Eighteen subjects produced vowels under a control condition (normal F0 feedback) and two experimental conditions: F0 shifted up and F0 shifted down. In each experimental condition, subjects produced vowels during a training period in which their F0 was slowly shifted without their awareness. Following this exposure to transformed F0, their acoustic feedback was returned to normal. Two effects were observed: subjects compensated for the change in F0, and they showed negative aftereffects, modifying their produced F0 in the direction opposite to the shift once feedback was returned to normal. The results suggest that fundamental frequency is controlled using auditory feedback and with reference to an internal pitch representation. This is consistent with current work on internal models of speech motor control.
This fMRI study explores brain regions involved in the perceptual enhancement afforded by observation of visual speech gesture information. Subjects passively identified words presented in the following conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual-only. The brain may use concordant audio and visual information to enhance perception by integrating the information in a converging multisensory site. Consistent with the response properties of multisensory integration sites, enhanced activity in the middle and superior temporal gyrus/sulcus was greatest when concordant audiovisual stimuli were presented with acoustic noise. Activity in brain regions involved in planning and executing speech production, observed in response to visual speech presented with degraded or absent auditory stimulation, is consistent with an additional pathway through which speech perception is facilitated by internally simulating the intended speech act of the observed speaker.
A unique matrix metalloproteinase and tissue inhibitor of metalloproteinase portfolio was observed in ascending thoracic aortic aneurysms from patients with bicuspid aortic valve compared with patients with tricuspid aortic valve. These differences, which suggest disparate mechanisms of extracellular matrix remodeling, may provide unique biochemical targets for ascending thoracic aortic aneurysm prognostication and treatment in these two groups of patients.
fMRI was used to assess the relationship between brain activation and the degree of audiovisual integration of speech information during a phoneme categorization task. Twelve subjects heard a speaker say the syllable /aba/ paired with video of the speaker saying either the same consonant or a different one (/ava/). To manipulate the degree of audiovisual integration, the audio was either synchronous with the visual stimulus or offset by +/- 400 ms. Subjects reported whether they heard the consonant /b/ or another consonant; fewer /b/ responses when the audio and visual stimuli were mismatched indicated higher levels of visual influence on speech perception (McGurk effect). Active brain regions during presentation of the incongruent stimuli included the superior temporal and inferior frontal gyri, as well as extrastriate, premotor, and posterior parietal cortex. A regression analysis related the strength of the McGurk effect to levels of brain activation. Paradoxically, higher numbers of /b/ responses were positively correlated with activation in the left occipito-temporal junction, an area often associated with processing visual motion. This activation suggests that auditory information modulates visual processing to affect perception.
Evidence regarding visually guided limb movements suggests that the motor system learns and maintains neural maps between motor commands and sensory feedback. Such systems are hypothesized to be used in a feed-forward control strategy that permits precision and stability without the delays of direct feedback control. Human vocalizations involve precise control over vocal and respiratory muscles. However, little is known about the sensorimotor representations underlying speech production. Here, we manipulated the heard fundamental frequency of the voice during speech to demonstrate learning of auditory-motor maps. Mandarin speakers repeatedly produced words with specific pitch patterns (tone categories). On each successive utterance, the frequency of their auditory feedback was increased by 1/100 of a semitone until they heard their feedback one full semitone above their true pitch. Subjects automatically compensated for these changes by lowering their vocal pitch. When feedback was unexpectedly returned to normal, speakers significantly increased the pitch of their productions beyond their initial baseline frequency. This adaptation generalized to the production of another tone category, although adaptation was more robust for the tone spoken during feedback alteration. The immediate aftereffects suggest a global remapping of the auditory-motor relationship after an extremely brief training period. However, this learning does not represent a complete transformation of the mapping; rather, it is in part target dependent.
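The feedback ramp described above is stated in cents (1/100 of a semitone per utterance, accumulating to one full semitone). A minimal sketch of the underlying arithmetic, using the standard equal-temperament relation between cents and frequency ratio (this is an illustration, not code from the study):

```python
def shift_ratio(cents: float) -> float:
    """Multiplicative frequency ratio for a pitch shift given in cents.

    1 semitone = 100 cents, and an equal-tempered semitone is a factor
    of 2 ** (1/12), so a shift of c cents scales frequency by 2 ** (c/1200).
    """
    return 2 ** (cents / 1200)

# Per-utterance increment in the study: 1/100 semitone = 1 cent.
per_utterance = shift_ratio(1)

# After 100 such increments the cumulative shift is one full semitone;
# compounding 1-cent steps multiplies the ratios together.
full_semitone = shift_ratio(100)
assert abs(per_utterance ** 100 - full_semitone) < 1e-9

# Example: a 200 Hz voice heard one semitone higher.
f0 = 200.0
print(round(f0 * full_semitone, 2))  # ~211.89 Hz
```

Because the ratios compound multiplicatively, each 1-cent step is perceptually uniform, which is what lets the shift accumulate to a full semitone without the speaker noticing any single increment.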
Little is known about the basic processes underlying the behavior of singing. This experiment was designed to examine differences between singers and nonsingers in the representation of the mapping between fundamental frequency (F0) feedback and the vocal production system. Auditory feedback regarding F0 was shifted down in frequency while participants sang the consonant-vowel /ta/. During the initial frequency-altered trials, singers compensated to a lesser degree than nonsingers, but this difference was reduced with continued exposure to frequency-altered feedback. After brief exposure to frequency-altered auditory feedback, both singers and nonsingers suddenly heard their F0 unaltered. When participants received this unaltered feedback, only singers' F0 values were found to be significantly higher than their F0 values produced during baseline and control trials. These aftereffects in singers were replicated when participants sang a different note than the one they had produced while hearing altered feedback. Together, these results suggest that singers rely more than nonsingers on internal models, rather than real-time auditory feedback, to regulate their vocal productions.