Microsecond differences in the arrival time of a sound at the two ears (interaural time differences, ITDs) are the main cue for localizing low-frequency sounds in space. Traditionally, ITDs are thought to be encoded by an array of coincidence-detector neurons, receiving excitatory inputs from the two ears via axons of variable length ('delay lines'), to create a topographic map of azimuthal auditory space. Compelling evidence for the existence of such a map in the mammalian ITD detector, the medial superior olive (MSO), however, is lacking. Equally puzzling is the role of a temporally very precise, glycine-mediated inhibitory input to MSO neurons. Using in vivo recordings from the MSO of the Mongolian gerbil, we found the responses of ITD-sensitive neurons to be inconsistent with the idea of a topographic map of auditory space. Moreover, local application of glycine and its antagonist strychnine by iontophoresis (drug delivery through glass pipette electrodes by means of an electric current) revealed that precisely timed, glycine-controlled inhibition is a critical part of the mechanism by which the physiologically relevant range of ITDs is encoded in the MSO. A computer model, simulating the response of a coincidence-detector neuron with bilateral excitatory inputs and a temporally precise contralateral inhibitory input, supports this conclusion.
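The modeling idea in this abstract can be illustrated with a toy coincidence detector — a minimal sketch, not the authors' actual model. It combines half-wave-rectified, phase-locked excitation from both ears with a contralateral inhibition that leads the contralateral excitation; all parameters (500 Hz carrier, 200 µs inhibitory lead, weights, threshold) are illustrative assumptions:

```python
import numpy as np

def coincidence_rate(itd_us, freq=500.0, inhib_lead_us=0.0, inhib_weight=0.0):
    """Toy MSO-style coincidence detector: bilateral excitation plus a
    contralateral inhibitory input that leads the contralateral excitation
    by `inhib_lead_us`. Positive itd_us = contralateral signal delayed;
    negative = contralateral ear leading. Returns a scalar response
    (mean supra-threshold drive) for one ITD."""
    fs = 100_000.0                          # 100 kHz simulation rate
    t = np.arange(0, 0.1, 1 / fs)           # 100 ms = 50 full carrier cycles
    itd = itd_us * 1e-6
    # Phase-locked, half-wave-rectified excitatory drives from the two ears
    exc_ipsi = np.maximum(np.sin(2 * np.pi * freq * t), 0.0)
    exc_contra = np.maximum(np.sin(2 * np.pi * freq * (t - itd)), 0.0)
    # Contralateral inhibition arrives slightly *before* contralateral excitation
    inh = inhib_weight * np.maximum(
        np.sin(2 * np.pi * freq * (t - itd + inhib_lead_us * 1e-6)), 0.0)
    drive = exc_ipsi + exc_contra - inh
    return np.mean(np.maximum(drive - 1.2, 0.0))  # soft spike threshold

itds = np.arange(-500, 501, 20)  # physiological ITD range, microseconds
no_inh = [coincidence_rate(i) for i in itds]
with_inh = [coincidence_rate(i, inhib_lead_us=200.0, inhib_weight=0.6) for i in itds]
best_no_inh = itds[int(np.argmax(no_inh))]      # peaks at 0 µs
best_with_inh = itds[int(np.argmax(with_inh))]  # shifts to contra-leading ITDs
```

With inhibition switched off, the cell responds best at 0 µs ITD; adding the leading contralateral inhibition shifts the best ITD away from zero toward contralateral-leading values, the qualitative effect the abstract attributes to glycinergic inhibition.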
Interaural time difference (ITD) is a critical cue to sound-source localization. Traditional models assume that sounds leading at one ear, and perceived on that side, are processed in the opposite midbrain. Using functional magnetic resonance imaging we demonstrate that as the ITDs of sounds increase, midbrain activity can switch sides, even though perceived location remains on the same side. The data require a new model for human ITD processing.
The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues characteristic of reverberant speech. This stimulus, named the amplitude-modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Using magnetoencephalography and psychoacoustics, we demonstrate that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of "glimpsing" low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments.

spatial hearing | binaural processing | auditory system | psychoacoustics | auditory MEG

Human listeners are able to determine the location of a talker against a background of competing voices, even in rooms where walls generate reflections that, taken together, can be more intense than sounds arriving directly from the source. The dominant cues for localization in such complex sound fields are the interaural time differences (ITDs) conveyed in the temporal fine structure (TFS) of low-frequency (<1,500 Hz) sounds (1); normal-hearing listeners can discriminate ITDs as low as 10-20 μs in 500- and 1,000-Hz pure tones to judge the source location (2).
In addition to source localization-the focus of the current study-sensitivity to ITDs is also reported to contribute to "spatial unmasking": different spatial configurations of the signal and background noise enable sources to be heard out, increasing their intelligibility (3, 4). The majority of real-world sounds are strongly modulated in amplitude. Without these modulations humans are completely insensitive to the sound source location after its onset in reverberant environments [the Franssen effect (5)]. Human speech, for example, contains amplitude modulation (AM) rates ranging from those of syllables and phonemes to those conveying information about voice pitch (i.e., from 2 Hz up to about 300 Hz). These modulations act as potent grouping cues, enabling listeners to fuse sounds originating from a single talker, segregating them from competing talkers (6). Despite the importance of AM in real-world listening, however, behavioral measures of ITD sensitivity are commonly assessed for stimuli in which the amplitude is unmodulated. This is especially so when the focus of interest concerns ITDs conveyed in the TFS. Fig. 1 il...
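The stimulus idea described above — an amplitude-modulated binaural beat in which the interaural phase sweeps through a full cycle once per AM period — can be sketched as follows. This is an illustrative reconstruction of the concept only; the function name and all parameter values (500 Hz carrier, 8 Hz modulation, raised-cosine envelope) are assumptions, not the paper's exact stimulus specification:

```python
import numpy as np

def ambb(fc=500.0, fm=8.0, dur=1.0, fs=44100, start_phase=0.0):
    """Amplitude-modulated binaural beat (conceptual sketch): the two ears
    receive carriers offset by the modulation rate, so the interaural phase
    difference (IPD) traverses a full cycle once per AM period, while both
    ears share the same amplitude envelope. Returns an (n_samples, 2) array
    with left and right channels."""
    t = np.arange(int(dur * fs)) / fs
    env = 0.5 * (1.0 - np.cos(2 * np.pi * fm * t))        # raised-cosine AM
    left = env * np.sin(2 * np.pi * fc * t + start_phase)
    right = env * np.sin(2 * np.pi * (fc + fm) * t)        # beats at fm
    return np.column_stack([left, right])

sig = ambb()  # 1 s of stimulus at 44.1 kHz
```

Shifting `start_phase` controls which IPD coincides with the rising portion of the envelope, which is the manipulation that would isolate "glimpsed" binaural cues in a paradigm like the one described.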
This study examines the relation between a static and a dynamic measure of interaural correlation discrimination: (1) the just noticeable difference (JND) in interaural correlation and (2) the minimum detectable duration of a fixed interaural correlation change embedded within a single noise-burst of a given reference correlation. For the first task, JNDs were obtained from reference interaural correlations of + 1, -1, and from 0 interaural correlation in either the positive or negative direction. For the dynamic task, duration thresholds were obtained for a brief target noise of +1, -1, and 0 interaural correlation embedded in reference marker noise of +1, -1, and 0 interaural correlation. Performance with a reference interaural correlation of +1 was significantly better than with a reference correlation of -1. Similarly, when the reference noise was interaurally uncorrelated, discrimination was significantly better for a target correlation change towards +1 than towards -1. Thus, for both static and dynamic tasks, interaural correlation discrimination in the positive range was significantly better than in the negative range. Using the two measures, the length of a binaural temporal window was estimated. Its equivalent rectangular duration (ERD) was approximately 86 ms and independent of the interaural correlation configuration.
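The reference correlations of +1, -1, and 0 used in tasks like these are conventionally produced by mixing a shared and an independent Gaussian noise. A minimal sketch of that standard construction (not the authors' stimulus code; parameters are illustrative):

```python
import numpy as np

def correlated_noise(rho, n, rng):
    """Binaural noise pair with target interaural correlation rho
    (-1 <= rho <= 1): the right channel mixes the left-channel noise
    with an independent noise so that the expected correlation is rho."""
    n1 = rng.standard_normal(n)
    n2 = rng.standard_normal(n)
    left = n1
    right = rho * n1 + np.sqrt(1.0 - rho ** 2) * n2
    return left, right

rng = np.random.default_rng(0)
l, r = correlated_noise(0.5, 200_000, rng)
measured = np.corrcoef(l, r)[0, 1]          # close to 0.5 for long noises

# rho = -1 gives an interaurally phase-inverted pair (right = -left)
l_neg, r_neg = correlated_noise(-1.0, 1000, rng)
```

For short noise bursts like those used to estimate the binaural temporal window, the correlation actually realized in a given token fluctuates around the target value, which is one reason duration thresholds and JNDs must be measured over many trials.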
Previous physiological studies investigating the transfer of low-frequency sound into the cochlea have been invasive. Predictions about the human cochlea are based on anatomical similarities with animal cochleae but no direct comparison has been possible. This paper presents a noninvasive method of observing low frequency cochlear vibration using distortion product otoacoustic emissions (DPOAE) modulated by low-frequency tones. For various frequencies (15-480 Hz), the level was adjusted to maintain an equal DPOAE-modulation depth, interpreted as a constant basilar membrane displacement amplitude. The resulting modulator level curves from four human ears match equal-loudness contours (ISO226:2003) except for an irregularity consisting of a notch and a peak at 45 Hz and 60 Hz, respectively, suggesting a cochlear resonance. This resonator interacts with the middle ear stiffness. The irregularity separates two regions of the middle ear transfer function in humans: A slope of 12 dB/octave below the irregularity suggests mass-controlled impedance resulting from perilymph movement through the helicotrema; a 6-dB/octave slope above the irregularity suggests resistive cochlear impedance and the existence of a traveling wave. The results from four guinea pig ears showed a 6-dB/octave slope on either side of an irregularity around 120 Hz, and agree with published data.
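The two transfer-function regimes in this abstract are distinguished by their slopes on a log-frequency axis. A small helper makes the dB-per-octave computation concrete (the frequency/level pairs below are hypothetical illustrations, not data from the study):

```python
import numpy as np

def slope_db_per_octave(f1_hz, f2_hz, level1_db, level2_db):
    """Slope of a transfer-function segment in dB per octave:
    level change divided by the number of octaves between the frequencies."""
    return (level2_db - level1_db) / np.log2(f2_hz / f1_hz)

# Hypothetical points: a 12 dB rise over one octave below the irregularity
# (mass-controlled impedance, perilymph flow through the helicotrema)
mass_slope = slope_db_per_octave(15.0, 30.0, 40.0, 52.0)

# A 6 dB rise over one octave above the irregularity (resistive impedance)
resistive_slope = slope_db_per_octave(120.0, 240.0, 70.0, 76.0)
```

Fitting this slope to modulator-level curves on either side of the notch/peak irregularity is what lets the two impedance regimes be separated.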