The extent to which sound identification and sound localization depend on specialized auditory pathways was examined by using functional magnetic resonance imaging and event-related brain potentials. Participants performed an S1-S2 match-to-sample task in which S1 differed from S2 in its pitch and/or location. In the pitch task, participants indicated whether S2 was lower, identical, or higher in pitch than S1. In the location task, participants were asked to localize S2 relative to S1 (i.e., leftward, same, or rightward). Relative to location, pitch processing generated greater activation in auditory cortex and the inferior frontal gyrus. Conversely, identifying the location of S2 relative to S1 generated greater activation in posterior temporal cortex, parietal cortex, and the superior frontal sulcus. Differential task-related effects on event-related brain potentials (ERPs) were seen in anterior and posterior brain regions beginning at 300 ms poststimulus and lasting for several hundred milliseconds. The converging evidence from two independent measurements of dissociable brain activity during identification and localization of identical stimuli provides strong support for specialized auditory streams in the human brain. These findings are analogous to the "what" and "where" segregation of visual information processing, and suggest that a similar functional organization exists for processing information from the auditory modality.

Auditory scene analysis involves identifying the content ("what") and the location ("where") of sounds in the environment. Evidence from anatomical and neurophysiological studies in non-human primates (1-5) suggests that identification and localization of auditory events may be functionally segregated in specialized auditory streams. Combining anatomical and electrophysiological recording methods in non-human primates, Romanski et al. (5) have recently identified two separate auditory streams that originate in caudal and rostral auditory cortex, respectively, and project to different regions within the frontal lobe. The functional significance of these separate pathways has not been determined, although they suggest functional dissociations for auditory processes analogous to the "what" and "where" (or ventral and dorsal) cortical information streams for identifying and localizing visual (6, 7) and somatosensory (8) stimuli.

Auditory neuroimaging studies employing positron emission tomography or functional magnetic resonance imaging (fMRI) have revealed enhanced blood flow in parietal areas during sound localization (9-11). In comparison, tasks requiring individuals to make tone discriminations (12) or identify auditory stimuli (e.g., words or environmental sounds) show enhanced activation in inferior frontal cortex (13, 14). Although these results suggest that the processing of sound identity and sound location is functionally separable, the segregation in auditory information processing has yet to be demonstrated within the same individuals when using the same set of stimuli.
The mismatch negativity (MMN) is a frontal negative deflection in the human event-related potential that typically occurs when a repeating auditory stimulus changes in some manner. The MMN can be elicited by many kinds of stimulus change, varying from simple changes in a single stimulus feature to abstract changes in the relationship between stimuli. The main intracerebral sources for the MMN are located in the auditory cortices of the temporal lobe. Since it occurs whether or not stimuli are being attended, the MMN represents an automatic cerebral process for detecting change. The MMN is clinically helpful in terms of demonstrating disordered sensory processing or disordered memory in groups of patients. Improvements in the techniques for measuring the MMN and in the paradigms for eliciting it will be needed before the MMN can become clinically useful as an objective measurement of such disorders in individual patients.
Unlike most other objects that are processed analytically, faces are processed configurally. This configural processing is reflected early in visual processing following face inversion and contrast reversal, as an increase in the N170 amplitude, a scalp-recorded event-related potential. Here, we show that these face-specific effects are mediated by the eye region. That is, they occurred only when the eyes were present, but not when eyes were removed from the face. The N170 recorded to inverted and negative faces likely reflects the processing of the eyes. We propose a neural model of face processing in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context. This dynamic response modulation accounts for the N170 variations reported in the literature. The eyes may be central to what makes faces so special.
A general assumption underlying auditory scene analysis is that the initial grouping of acoustic elements is independent of attention. The effects of attention on auditory stream segregation were investigated by recording event-related potentials (ERPs) while participants either attended to sound stimuli and indicated whether they heard one or two streams or watched a muted movie. The stimuli were pure-tone ABA− patterns that repeated for 10.8 sec with a stimulus onset asynchrony between A and B tones of 100 msec, in which the A tone was fixed at 500 Hz, the B tone could be 500, 625, 750, or 1000 Hz, and − was a silence. In both listening conditions, an enhancement of the auditory-evoked response (P1-N1-P2 and N1c) to the B tone varied with Δf and correlated with perception of streaming. The ERP from 150 to 250 msec after the beginning of the repeating ABA− patterns became more positive during the course of the trial and was diminished when participants ignored the tones, consistent with behavioral studies indicating that streaming takes several seconds to build up. The N1c enhancement and the buildup over time were larger at right than left temporal electrodes, suggesting a right-hemisphere dominance for stream segregation. Sources in Heschl's gyrus accounted for the ERP modulations related to Δf-based segregation and buildup. These findings provide evidence for two cortical mechanisms of streaming: automatic segregation of sounds and an attention-dependent buildup process that integrates successive tones within streams over several seconds.
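The ABA− streaming stimulus described above can be sketched in a few lines of signal-generation code. The abstract specifies the SOA (100 msec), trial duration (10.8 sec), and A/B frequencies; the individual tone duration, ramp length, and sampling rate below are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def tone(freq_hz, dur_s, fs=16000):
    """Pure tone with 10-ms raised-cosine onset/offset ramps (assumed values)."""
    t = np.arange(int(dur_s * fs)) / fs
    x = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.01 * fs)
    env = np.ones_like(x)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

def aba_pattern(a_hz=500, b_hz=750, soa_s=0.1, tone_dur_s=0.05, fs=16000):
    """One ABA- cycle: A, B, A tones at the given SOA, then a silent slot (-)."""
    soa = int(soa_s * fs)
    cycle = np.zeros(4 * soa)  # four slots: A, B, A, silence
    for slot, f in enumerate([a_hz, b_hz, a_hz]):
        seg = tone(f, tone_dur_s, fs)
        cycle[slot * soa : slot * soa + len(seg)] += seg
    return cycle

def trial(total_s=10.8, **kw):
    """Repeat ABA- cycles to fill a 10.8-s trial, as in the paradigm."""
    fs = kw.get("fs", 16000)
    cycle = aba_pattern(**kw)
    n_cycles = int(total_s * fs) // len(cycle)
    return np.tile(cycle, n_cycles)
```

Varying `b_hz` over 500, 625, 750, and 1000 Hz reproduces the Δf manipulation: at Δf = 0 the sequence is a single uniform stream, while larger Δf values promote hearing the A and B tones as two separate streams.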
The 'temporality' hypothesis of confabulation posits that confabulations are true memories displaced in time, while the 'strategic retrieval' hypothesis suggests a general retrieval failure of which temporal confusion is a common symptom. Four confabulating patients with rupture of an anterior communicating artery (ACoA) aneurysm, eight non-confabulating ACoA controls and 16 normal controls participated in three experiments designed to test the two hypotheses. In Experiment 1, participants were tested on two continuous recognition tasks: one requiring temporal context distinctions, previously shown to be sensitive to confabulation, and another requiring only content distinctions. Both manipulations were sensitive to confabulation, but not specific to it. Temporal context and content confusions (TCCs and CCs) can be explained as failures to make fine-grained distinctions within memory. In Experiment 2, free recall of semantic narratives that require strategic retrieval but are independent of temporal context was used to induce confabulations associated with remote memory, acquired before the onset of amnesia. Confabulators produced significantly more errors. Thus, when retrieval demands are equated, confabulations can be induced in the absence of temporal confusions. Only confabulators conflated semantic content from different remote semantic narratives and introduced idiosyncratic content, suggesting that qualitatively different mechanisms are responsible for distortions due to normal memory failure and for confabulation. Lesion analyses revealed that damage to ventromedial prefrontal cortex is sufficient for temporal context errors to occur, but additional orbitofrontal damage is crucial for spontaneous confabulation. In Experiment 3, we tested whether failure in memory monitoring is crucial for confabulation. Recognition of details from semantic and autobiographical narratives was used to minimize the initiation and search components of strategic retrieval.
Only confabulators made more false alarms on both tasks, endorsed even highly implausible lures related to autobiographical events and were indiscriminately confident about their choices. These findings support a strategic retrieval account of confabulation of which monitoring is a critical component. Post-retrieval monitoring has at least two components: one is early, rapid and pre-conscious and the other is conscious and elaborate. Failure of at least the former is necessary and sufficient for confabulation. Other deficits, including TCC and CC, may be required for spontaneous confabulations to arise. The confluence of different sub-components of strategic retrieval would determine the content of confabulation and exacerbate its occurrence.
The physiological processes underlying the segregation of concurrent sounds were investigated through the use of event-related brain potentials. The stimuli were complex sounds containing multiple harmonics, one of which could be mistuned so that it was no longer an integer multiple of the fundamental. Perception of concurrent auditory objects increased with degree of mistuning and was accompanied by negative and positive waves that peaked at 180 and 400 ms poststimulus, respectively. The negative wave, referred to as object-related negativity, was present during passive listening, but the positive wave was not. These findings indicate bottom-up and top-down influences during auditory scene analysis. Brain electrical source analyses showed that distinguishing simultaneous auditory objects involved a widely distributed neural network that included auditory cortices, the medial temporal lobe, and posterior association cortices.
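The mistuned-harmonic stimulus central to this abstract can be sketched as follows. The abstract specifies only the manipulation (one harmonic shifted off an integer multiple of the fundamental); the fundamental frequency, number of harmonics, mistuning percentage, and duration below are hypothetical parameters chosen for illustration.

```python
import numpy as np

def harmonic_complex(f0=220.0, n_harmonics=10, mistuned=None,
                     mistune_pct=8.0, dur_s=0.15, fs=16000):
    """Sum of sinusoidal harmonics of f0. If `mistuned` names a harmonic
    number, that component is shifted by `mistune_pct` percent so it is no
    longer an integer multiple of the fundamental (all values illustrative)."""
    t = np.arange(int(dur_s * fs)) / fs
    x = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == mistuned:
            f *= 1 + mistune_pct / 100.0  # detune this one partial
        x += np.sin(2 * np.pi * f * t)
    return x / n_harmonics  # normalize to keep the waveform within [-1, 1]
```

With `mistuned=None` all partials fuse into a single perceived tone; as the mistuning of one partial grows, listeners increasingly hear it pop out as a second concurrent auditory object, the percept tracked by the object-related negativity.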