Some of the most common interfering background sounds a listener experiences are the sounds of other talkers. In Experiment 1, recognition for natural Institute of Electrical and Electronics Engineers (IEEE) sentences was measured in normal-hearing adults at two fixed signal-to-noise ratios (SNRs) in 16 backgrounds with the same long-term spectrum: unprocessed speech babble (1, 2, 4, 8, and 16 talkers), noise-vocoded versions of the babbles (12 channels), noise modulated with the wide-band envelope of the speech babbles, and unmodulated noise. All talkers were adult males. For a given number of talkers, natural speech was always the most effective masker. The greatest changes in performance occurred as the number of talkers in the maskers increased from 1 to 2 or 4, with small changes thereafter. In Experiment 2, the same targets and maskers (1, 2, and 16 talkers) were used to measure speech reception thresholds (SRTs) adaptively. Periodicity in the target was also manipulated by noise-vocoding, which led to considerably higher SRTs. The greatest masking effect always occurred for the masker type most similar to the target, while the effects of the number of talkers were generally small. Implications are drawn with reference to glimpsing, informational vs energetic masking, overall SNR, and aspects of periodicity.
Hearing aid amplification can be used as a model for studying the effects of auditory stimulation on the central auditory system (CAS). We examined the effects of stimulus presentation level on the physiological detection of sound in unaided and aided conditions. P1, N1, P2, and N2 cortical evoked potentials were recorded in sound field from 13 normal-hearing young adults in response to a 1000-Hz tone presented at seven stimulus intensity levels. As expected, peak amplitudes increased and peak latencies decreased with increasing intensity for unaided and aided conditions. However, there was no significant effect of amplification on latencies or amplitudes. Taken together, these results demonstrate that 20 dB of hearing aid gain affects neural responses differently than 20 dB of stimulus intensity change. Hearing aid signal processing is discussed as a possible contributor to these results. This study demonstrates (1) the importance of controlling for stimulus intensity when evoking responses in aided conditions, and (2) the need to better understand the interaction between the hearing aid and the CAS.
time, use of compression hearing aids has increased dramatically, from half of hearing aids dispensed only 5 years ago to four out of five hearing aids dispensed today (Strom, 2002b). Most of today's digital and digitally programmable hearing aids are compression devices (Strom, 2002a). It is probable that within a few years, very few patients will be fit with linear hearing aids. Furthermore, compression has increased in complexity, with greater numbers of parameters under the clinician's control. Ideally, these changes will translate to greater flexibility and precision in fitting and selection. However, they also increase the need for information about the effects of compression amplification on speech perception and speech quality. As evidenced by the large number of sessions at professional conferences on fitting compression hearing aids, clinicians continue to have questions about compression technology and when and how it should be used. How does compression work? Who are the best candidates for this technology? How should adjustable parameters be set to provide optimal speech recognition? What effect will compression have on speech quality? These and other questions continue to drive our interest in this technology. This article reviews the effects of compression on the speech signal and the implications for speech intelligibility, quality, and design of clinical procedures.

Categorizing Compression

With a linear hearing aid, a constant gain is applied to all input levels until the hearing aid's saturation limit is reached. Because daily speech includes such a wide range of intensity levels, from low-intensity consonants such as /f/ to high-intensity vowels such as /i/, and from whispered speech to shouting, the benefit of a linear hearing aid is restricted when the amplification needed to make low-intensity sounds audible amplifies high-intensity sounds to the point of discomfort. In other words, linear hearing aids have a limited capacity to maximize audibility across a range of input levels.
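The contrast between linear amplification and compression can be illustrated with a static input-output function. The sketch below is a generic illustration; the 20-dB gain, 50-dB SPL kneepoint, and 3:1 ratio are assumed example values, not parameters of any particular device discussed here.

```python
def output_level_db(input_db, gain_db=20.0, kneepoint_db=50.0, ratio=3.0):
    """Static input-output function of a simple single-channel compressor.

    Below the kneepoint the device behaves linearly (constant gain_db);
    above it, each 1-dB increase in input level yields only 1/ratio dB
    more output. All parameter values are illustrative assumptions.
    """
    if input_db <= kneepoint_db:
        return input_db + gain_db  # linear region: constant gain
    # compressed region: output grows at 1/ratio dB per dB of input
    return kneepoint_db + gain_db + (input_db - kneepoint_db) / ratio
```

With these example settings, a 40-dB SPL input receives the full 20 dB of gain, whereas an 80-dB SPL input, 30 dB above the kneepoint, gains only 30/3 = 10 dB above the kneepoint output; this is how compression keeps low-intensity sounds audible without pushing high-intensity sounds toward discomfort.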
Speech-evoked cortical potentials can be recorded reliably in individuals during hearing aid use. A better understanding of how amplification (and device settings) affects neural response patterns is still needed.
Compression hearing aids have the inherent, and often adjustable, feature of release time from compression. Research to date does not provide a consensus on how to choose or set release time. The current study had 2 purposes: (a) a comprehensive evaluation of the acoustic effects of release time for a single-channel compression system in quiet and (b) an evaluation of the relation between the acoustic changes and speech recognition. The release times under study were 12, 100, and 800 ms. All of the stimuli were VC syllables from the Nonsense Syllable Task spoken by a female talker. The stimuli were processed through a hearing aid simulator at 3 input levels. Two acoustic measures were made on individual syllables: the envelope-difference index and CV ratio. These measurements allowed for quantification of the short-term amplitude characteristics of the speech signal and the changes to these amplitude characteristics caused by compression. The acoustic analyses revealed statistically significant effects among the 3 release times. The size of the effect was dependent on characteristics of the phoneme. Twelve listeners with moderate sensorineural hearing loss were tested for their speech recognition for the same stimuli. Although release time for this single-channel, 3:1 compression ratio system did not directly predict overall intelligibility for these nonsense syllables in quiet, the acoustic measurements reflecting the changes due to release time were significant predictors of phoneme recognition. Increased temporal-envelope distortion was predictive of reduced recognition for some individual phonemes, which is consistent with previous research on the importance of relative amplitude as a cue to syllable recognition for some phonemes.
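The envelope-difference index used in the acoustic analysis above can be sketched as follows. This implements one common formulation from the hearing-aid literature (each envelope normalized to its own mean, then the mean absolute difference scaled to a 0-1 range); the exact measure used in the study may differ in detail.

```python
import numpy as np

def envelope_difference_index(env1, env2):
    """Envelope-difference index (EDI) between two amplitude envelopes.

    One common formulation, assumed here: each envelope is normalized
    to its own mean, and the summed absolute difference is scaled so
    that EDI = 0 for identical envelopes and EDI = 1 for envelopes
    that never overlap in time.
    """
    e1 = np.asarray(env1, dtype=float)
    e2 = np.asarray(env2, dtype=float)
    e1 = e1 / e1.mean()  # mean-normalize (mean becomes 1)
    e2 = e2 / e2.mean()
    return np.abs(e1 - e2).sum() / (2.0 * len(e1))
```

Comparing the envelope of an unprocessed syllable with the envelope of the same syllable after compression with a given release time yields a single distortion score per token, which is the kind of quantity that can then be related to phoneme recognition.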
In older veterans, screening for hearing loss led to significantly more hearing aid use. Screening with the tone-emitting otoscope was more efficient. The results are most applicable to older populations with few cost barriers to hearing aids.