The present functional magnetic resonance imaging (fMRI) study examined the neurophysiological processing of voice information. The impact of the major acoustic parameters as well as the roles of the listener's and the speaker's gender were investigated. Natural and manipulated male and female voices were presented to 16 young adults, who were asked to judge the naturalness of each voice. The hemodynamic responses were acquired with a 3T Bruker scanner using an event-related design. Activation was generally stronger in response to female voices as well as to manipulated voice signals, and there was no interaction with the listener's gender. Most importantly, the results suggest a functional segregation of the right superior temporal cortex for the processing of different voice parameters, whereby (1) voice pitch is processed in regions close and anterior to Heschl's gyrus, (2) voice spectral information is processed in posterior parts of the superior temporal gyrus (STG) and in areas surrounding the planum parietale (PP) bilaterally, and (3) information about prototypicality is predominantly processed in anterior parts of the right STG. By identifying distinct functional regions in the right STG, our study supports the notion of a fundamental role of the right hemisphere in spoken language comprehension.
The present study investigated the influence of implicit speaker information on sentence interpretation. We auditorily presented sentences that consisted of either stereotypically male or stereotypically female self-referent utterances. In the congruent condition, these utterances were produced by speakers whose gender matched the semantic content; in the incongruent condition, stereotypically male utterances were produced by female speakers and vice versa. The event-related brain potentials (ERPs) of 32 listeners exhibited a late positivity (P600) for the incongruent condition. No significant differences were observed between male and female listeners. In the absence of any ERP effect in an earlier time range, it was concluded that access to the semantic information as such is independent of the speaker's voice, but that speaker properties, semantic content, and stereotypical knowledge are integrated at a later processing stage.

In speech communication the listener not only decodes the speaker's intended linguistic message from the acoustic signal but at the same time extracts information about the age, gender, and other properties of the speaker. In the discourse situation, this implicit information serves as part of the context knowledge. The aim of the present experiment was to investigate how this information influences the interpretation of an utterance. Previous studies recording event-related brain potentials (ERPs) while subjects were presented with sentences have shown that a violation of explicit context knowledge about a person leads to a distinct brain response. In one study [3], subjects had learned facts about fictitious people (e.g. Mary is a lawyer). In the test session they were presented with statements confirming or contradicting these learned facts. A contradicting target utterance (e.g. Mary is a chemist) led to a negative ERP component, the N400, which is time-locked to the presentation of the contradicting word.
Similar effects have been reported in relation to preceding discourse information (e.g. 'Jane told her brother that he was exceptionally slow', presented in a context where he was described as being very fast) [14]. As the N400 is typically interpreted as reflecting difficulties in semantic/pragmatic integration [1,6,9-11,15], it was concluded from the latter result that rapid word integration is influenced by a broad range of context factors, including explicit knowledge as well as discourse information. However, another study demonstrated that a target word violating stereotypical assumptions (e.g. 'The driver of the wrecked car pulled herself through the window') does not lead to an N400 effect but to a late positive deflection (P600) of the ERP [12]. The P600 is currently assumed to reflect processes of repair or reanalysis, particularly in response to grammatical violations [4,5,7,8]. It has therefore been suggested that the P600 effect observed in response to stereotype violations may reflect similar processes involving re-integration of semantic meaning and stereotypical be...
The present study investigates the relationship between linguistic (phonetic) and extralinguistic (voice) information in preattentive auditory processing. We provide neurophysiological data which show, for the first time, that both kinds of information are processed in parallel at an early preattentive stage. To establish the temporal and spatial organization of the underlying neuronal processes, we studied the conjunction of voice and word deviations in a mismatch negativity experiment in which the listeners' brain responses were recorded using magnetoencephalography. The stimuli consisted of single spoken words, and the deviants manifested a change of the word, of the voice, or of both word and voice simultaneously (combined). First, we identified the N100m (overlain by the mismatch field, MMF) and localized its generators, analyzing N100m/MMF latency, dipole localization, and dipole strength. While the responses evoked by deviant stimuli were localized more anteriorly than those evoked by the standard, localization differences between the deviants could not be shown. The dipole strength was larger for the deviants than for the standard stimulus, but again, no differences between the deviants could be established. There was no difference in the hemispheric lateralization of the responses. However, a difference between the deviants was observed in the latencies: the N100m/MMF revealed a significantly shorter and less variable latency for the combined stimulus compared with all other experimental conditions. The data suggest an integral parallel processing model, which describes the early extraction of phonetic and voice information from the speech signal as parallel and contingent processes. © 2002 Elsevier Science (USA)