Spoken language, in contrast to written text, provides prosodic information such as rhythm, pauses, accents, and amplitude and pitch variations. However, little is known about when and how these features are used by the listener to interpret the speech signal. Here we use event-related brain potentials (ERPs) to demonstrate that intonational phrasing guides the initial analysis of sentence structure. Our finding of a positive shift in the ERP at intonational phrase boundaries suggests a specific on-line brain response to prosodic processing. Additional ERP components indicate that a false prosodic boundary is sufficient to mislead the listener's sentence processor. Thus, the application of ERP measures is a promising approach for revealing the time course and neural basis of prosodic information processing.
To investigate the lateralization of emotional speech, we recorded brain responses to three emotional intonations in two conditions: "normal" speech and "prosodic" speech (speech with no linguistic meaning that retains the 'slow prosodic modulations' of speech). Participants listened to semantically neutral sentences spoken with a positive, neutral, or negative intonation in both conditions and judged on a five-point scale how positive, negative, or neutral the intonation was. Core peri-sylvian language areas, as well as some frontal and subcortical areas, were activated bilaterally in the normal speech condition. In contrast, a bilateral fronto-opercular region was active when participants listened to prosodic speech. Positive and negative intonations elicited a bilateral fronto-temporal and subcortical pattern in the normal speech condition, and more frontal activation in the prosodic speech condition. The current results call into question an exclusive right-hemisphere lateralization of emotional prosody and extend patient data on the functional role of the basal ganglia during the perception of emotional prosody.
By means of fMRI measurements, the present study identifies brain regions in left and right peri-sylvian areas that subserve grammatical or prosodic processing. Normal volunteers heard 1) normal sentences; 2) so-called syntactic sentences, i.e., pseudo sentences comprising syntactic but no lexical-semantic information; and 3) manipulated speech signals comprising only prosodic information, i.e., speech melody. For all conditions, significant blood oxygenation signals were recorded from the supratemporal plane bilaterally. Left-hemisphere areas surrounding Heschl's gyrus responded more strongly to the two sentence conditions than to speech melody. This finding suggests that the anterior and posterior portions of the superior temporal region (STR) support lexical-semantic and syntactic aspects of sentence processing. In contrast, the right superior temporal region, especially the planum temporale, responded more strongly to speech melody. Significant activation in the fronto-opercular cortices was observed when participants heard the pseudo sentences and was strongest during the speech melody condition; the fronto-opercular area was not prominently involved in listening to normal sentences. Thus, functional activation in fronto-opercular regions increases as the grammatical information available in the sentence decreases. Generally, brain responses to speech melody were stronger at right- than left-hemisphere sites, suggesting a particular role for right cortical areas in the processing of slow prosodic modulations. Hum. Brain Mapping 17:73-88, 2002.