Two of the primary cues used to localize the sources of sounds are interaural level differences (ILDs) and interaural time differences (ITDs). We conducted two experiments to explore how practice affects the human discrimination of values of ILDs and ongoing ITDs presented over headphones. We measured discrimination thresholds of 13 to 32 naive listeners in a variety of conditions during a pretest and again, 2 weeks later, during a posttest. Between those two tests, we trained a subset of listeners 1 h per day for 9 days on a single ILD or ITD condition. Listeners improved on both ILD and ITD discrimination. Improvement was initially rapid for both cue types and appeared to generalize broadly across conditions, indicating conceptual or procedural learning. A subsequent slower-improvement stage, which occurred solely for the ILD cue, only affected conditions with the trained stimulus frequency, suggesting that stimulus processing had fundamentally changed. These different learning patterns indicate that practice affects the attention to, or low-level encoding of, ILDs and ITDs at sites at which the two cue types are processed separately. Thus, these data reveal differences in the effect of practice on ILD and ITD discrimination, and provide insight into the encoding of these two cues to sound-source location in humans.

A listener who determines the position of a singing bird concealed among tree leaves, a jet passing overhead hidden by clouds, or a car approaching from behind, does so by using several auditory cues to the location of sound sources. Here we report the results of two investigations into how practice influences the ability of human listeners to discriminate small differences in each of two of these cues, interaural level differences (ILDs) and interaural time differences (ITDs).

In humans, the horizontal location, or azimuth, of sound sources is computed from differences in the information that arrives at the two ears. At frequencies above about 1.5 kHz, listeners determine sound azimuth primarily from sensitivity to differences in sound level at the two ears. These interaural level differences occur because the head forms a sound barrier between the two ears, so sounds are attenuated at the ear farthest from the source relative to the ear nearest to the source (1, 2). At frequencies below about 1.5 kHz, listeners determine sound azimuth primarily from sensitivity to differences in the arrival time of the sound at the two ears. These interaural time differences arise because there is distance between the two ears, so sounds reach the ear nearest to the sound source first and the other ear later (1, 2). For sound durations greater than about 150 ms, listeners are far more sensitive to differences at the two ears in the ongoing fine time structure of the sound than in the onset time of the sound (3, 4). These ongoing time differences are equivalent to interaural phase differences (IPDs) for tonal stimuli, but will be referred to as ITDs in this paper.

There have been many investigations into whether huma...
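As background for the ITD cue described above, the following minimal Python sketch illustrates why ongoing ITDs are on the order of hundreds of microseconds. It is not taken from the paper: the head radius, speed of sound, and the Woodworth spherical-head formula are standard textbook assumptions, used here purely for illustration.

```python
# Illustrative sketch (not from the study): approximate ITD under a
# spherical-head model using Woodworth's formula,
#   ITD = (a / c) * (theta + sin(theta)),
# where a is an assumed head radius, c the speed of sound, and
# theta the source azimuth in radians.
import math

HEAD_RADIUS_M = 0.0875      # assumed average adult head radius
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at ~20 degrees C

def woodworth_itd(azimuth_deg: float) -> float:
    """Approximate interaural time difference (seconds) for a distant
    source at the given azimuth (0 = straight ahead, 90 = opposite one ear)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

for az in (5, 15, 45, 90):
    print(f"{az:3d} deg -> {woodworth_itd(az) * 1e6:6.0f} microseconds")
```

Under these assumptions the maximum ITD, for a source directly opposite one ear, comes out near 660 microseconds, which is why ITD discrimination experiments probe sensitivity to differences of tens of microseconds.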
Perceptual skills can be improved even in adulthood, but this learning seldom occurs through stimulus exposure alone. Instead, it requires considerable practice performing a perceptual task with relevant stimuli. It is thought that task performance permits the stimuli to drive learning. A corresponding assumption is that the same stimuli do not contribute to improvement when encountered separately from relevant task performance, because this permissive signal is absent. However, these ideas are based on only two types of studies, in which the task was either always performed or never performed. Here we demonstrate enhanced perceptual learning on an auditory frequency-discrimination task in human listeners when practice on that target task was combined with additional stimulation. Learning was enhanced regardless of whether the periods of additional stimulation were interleaved with, or provided exclusively before or after, target-task performance, and even though that stimulation occurred during the performance of an irrelevant (auditory or written) task. The additional exposures were beneficial only when they shared the same frequency as the stimuli used during target-task performance, although they did not need to be identical to those stimuli. Their effectiveness was also diminished when they were presented 15 minutes after practice on the target task, and was eliminated when that separation was increased to 4 hours. These data show that exposure to an acoustic stimulus can facilitate learning even when that exposure occurs outside the time of practice on a perceptual task. Used appropriately, additional stimulation may markedly improve the efficiency of perceptual-training regimens.
Normal perception depends, in part, on accurate judgments of the temporal relationships between sensory events. Two such relative-timing skills are the ability to detect stimulus asynchrony and to discriminate stimulus order. Here we investigated the neural processes contributing to the performance of auditory asynchrony and order tasks in humans, using a perceptual-learning paradigm. In each of two parallel experiments, we tested listeners on a pretest and a posttest consisting of auditory relative-timing conditions. Between these two tests, we trained a subset of listeners ~1 h/d for 6-8 d on a single relative-timing condition. The trained listeners practiced asynchrony detection in one experiment and order discrimination in the other. Both groups were trained at sound onset with tones at 0.25 and 4.0 kHz. The remaining listeners in each experiment, who served as controls, did not receive multihour training during the 8-10 d between the pretest and posttest. These controls improved even without intervening training, adding to evidence that a single session of exposure to perceptual tasks can yield learning. Most importantly, each of the two groups of trained listeners learned more on their respective trained conditions than controls, but this learning occurred only on the two trained conditions. Neither group of trained listeners generalized their learning to the other task (order or asynchrony), an untrained temporal position (sound offset), or untrained frequency pairs. Thus, it appears that multihour training on relative-timing skills affects task-specific neural circuits that are tuned to a given temporal position and combination of stimulus components.
Objectives: To what extent can adults adapt to sharply different frequency-to-place maps across ears? This question was investigated in two bilateral cochlear implant users who had a full electrode insertion in one ear, a much shallower insertion in the other ear, and standard frequency-to-electrode maps in both ears. Method: Three methods were used to assess adaptation to the frequency-to-electrode maps in each ear: (1) pitch matching of electrodes in opposite ears, (2) listener-driven selection of the most intelligible frequency-to-electrode map, and (3) speech perception tests. Based on these measurements, one subject was fitted with an alternative frequency-to-electrode map, which sought to compensate for her incomplete adaptation to the standard frequency-to-electrode map. Results: Both listeners showed a remarkable ability to adapt, but such adaptation remained incomplete for the ear with the shallower electrode insertion, even after extended experience. The alternative frequency-to-electrode map that was tested resulted in substantial increases in speech perception for one subject in the short-insertion ear. Conclusion: The human frequency-to-place map may be modified by experience, even in adult listeners. However, such plasticity has limitations. Knowledge of the extent and the limitations of human auditory plasticity can help optimize parameter settings in users of auditory prostheses.
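As background for the frequency-to-place maps discussed above, here is a minimal Python sketch of the Greenwood function, a standard approximation of the normal human cochlear frequency-to-place map. It is not taken from the study: the constants are the published human values, and how any relative cochlear position relates to a given user's electrode positions is an assumption made purely for illustration.

```python
# Illustrative sketch (not from the study): the Greenwood
# frequency-to-place function for the human cochlea,
#   F = A * (10**(a * x) - k),
# with the standard human constants A = 165.4, a = 2.1, k = 0.88.
def greenwood_frequency_hz(x: float) -> float:
    """Characteristic frequency at relative cochlear position x,
    where x = 0 is the apex (low frequencies) and x = 1 is the base
    (high frequencies)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# A shallow electrode insertion reaches only basal (high-frequency)
# places, so a standard speech-frequency map arrives shifted relative
# to the normal place code:
for x in (0.4, 0.6, 0.8, 1.0):
    print(f"x = {x:.1f} -> {greenwood_frequency_hz(x):8.0f} Hz")
```

This makes the adaptation problem concrete: an electrode near x = 0.8 sits at a place normally tuned near 7.8 kHz, so delivering, say, 1 kHz speech information there requires the kind of perceptual remapping the study probes.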
What is the time course of human attention in a simple auditory detection task? To investigate this question, we determined the detectability of a 20-msec, 1000-Hz tone presented at expected and unexpected times. Twelve listeners who expected the tone to occur at a specific time after a 300-msec narrowband noise rarely detected signals presented 150-375 msec before or 100-200 msec after that expected time. The shape of this temporal-attention window depended on the expected presentation time of the tone and on the temporal markers available in the trials. Further, though they expected the signal to occur in silence, listeners often detected signals presented at unexpected times during the noise. Combined with previous data, these results further clarify the listening strategy humans use when trying to detect an expected sound: humans seem to listen specifically for that sound around the time when it is expected to occur, while ignoring the background in which it is presented.