Most work on how pitch is encoded in the auditory cortex has focused on tonotopic (absolute) pitch maps. However, melodic information is thought to be encoded in the brain in two different "relative pitch" forms: a domain-general contour code (the up/down pattern of pitch changes) and a music-specific interval code (the exact pitch distances between notes). Event-related potentials were analyzed in nonmusicians from both passive and active oddball tasks in which either the contour or the interval of melody-final notes was occasionally altered. The occasional deviant notes generated a right frontal positivity peaking around 350 msec and a central parietal P3b peaking around 580 msec that were present only when participants focused their attention on the auditory stimuli. Both types of melodic information were encoded automatically in the absence of absolute pitch cues, as indexed by a mismatch negativity wave recorded during the passive conditions. The results indicate that, even without musical training, the brain automatically encodes music-specific melodic information when absolute pitch cues are unavailable.
The neural processes underlying concurrent sound segregation were examined by using event-related brain potentials. Participants were presented with complex sounds composed of multiple harmonics, one of which could be mistuned so that it was no longer an integer multiple of the fundamental. In separate blocks of trials, short-, middle-, and long-duration sounds were presented, and participants indicated whether they heard one sound (i.e., a buzz) or two sounds (i.e., a buzz plus another sound with a pure-tone quality). The auditory stimuli were also presented while participants watched a silent movie in order to evaluate the extent to which the mistuned harmonic could be automatically detected. The perception of the mistuned harmonic as a separate sound was associated with a biphasic negative-positive potential that peaked at about 150 and 350 ms after sound onset, respectively. Long-duration sounds also elicited a sustained potential that was greater in amplitude when the mistuned harmonic was perceptually segregated from the complex sound. The early negative wave, referred to as the object-related negativity (ORN), was present during both active and passive listening, whereas the positive wave and the mistuning-related changes in sustained potentials were present only when participants attended to the stimuli. These results are consistent with a two-stage model of auditory scene analysis in which the acoustic wave is automatically decomposed into perceptual groups that can be identified by higher executive functions. The ORN and the positive waves were little affected by sound duration, indicating that concurrent sound segregation depends on transient neural responses elicited by the discrepancy between the mistuned harmonic and the harmonic frequency expected on the basis of the fundamental frequency of the incoming stimulus.
In this article, the authors show that aging differentially affects people's ability to automatically and voluntarily process auditory information. Young, middle-aged, and older adults matched behaviorally in an auditory discrimination task showed similar patterns of neural activity indexing the voluntary and conscious detection of deviant (i.e., target) stimuli. In contrast, a negative wave indexing automatic processing (the mismatch negativity) was elicited only in young adults for near-threshold stimuli. These results indicate that aging affects the ability to automatically register small changes in a stream of homogeneous stimuli. However, this age-related decline in the automatic detection of small changes in the auditory environment can be compensated for by top-down controlled processes.
Deficits in parsing concurrent auditory events are believed to contribute to older adults' difficulties in understanding speech in adverse listening conditions (e.g., a cocktail party). To explore the level at which aging impairs sound segregation, we measured auditory evoked fields (AEFs) using magnetoencephalography while young, middle-aged, and older adults were presented with complex sounds that either had all of their harmonics in tune or had the third harmonic mistuned by 4 or 16% of its original value. During the recording, participants were asked to ignore the stimuli and watch a muted subtitled movie of their choice. For each participant, the AEFs were modeled with a pair of dipoles in the superior temporal plane, and the effects of age and mistuning were examined on the amplitude and latency of the resulting source waveforms. Mistuned stimuli generated an early positivity (60–100 ms), an object-related negativity (ORN) (140–180 ms) that overlapped the N1 and P2 waves, and a positive displacement that peaked at ~230 ms (P230) after sound onset. The early mistuning-related enhancement was similar in all three age groups, whereas the subsequent modulations (ORN and P230) were reduced in older adults. These age differences in auditory cortical activity were associated with a reduced likelihood of hearing two sounds as a function of mistuning. The results reveal that inharmonicity is rapidly and automatically registered in all three age groups but that the perception of concurrent sounds declines with age.