This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial, and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance.
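The analysis pipeline described in this abstract, a principal-components reduction of correlated measures followed by multiple regression on the resulting factors, can be sketched with synthetic data. Everything below is a stand-in: the random matrices, the factor count of five, and the single outcome score are illustrative only and do not reproduce the study's measures or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 98 listeners x 17 psychophysical measures
# (dimensions echo the abstract; the values are random, not the study's data).
n_listeners, n_measures = 98, 17
X = rng.standard_normal((n_listeners, n_measures))

# Principal-components step: project the measures onto a few components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
n_factors = 5
factors = Xc @ Vt[:n_factors].T        # factor scores, one column per factor

# Multiple-regression step: predict a (synthetic) speech-understanding score
# from the factor scores plus an intercept, and report variance accounted for.
y = rng.standard_normal(n_listeners)
A = np.column_stack([np.ones(n_listeners), factors])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the predictors here are random, `r_squared` will be small; in the study, the analogous regression on real factor scores accounted for about 60% of the variance.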
Harmonic complexes with identical component frequencies and amplitudes but different phase spectra may be differentially effective as maskers. Such harmonic waveforms, constructed with positive or negative Schroeder phases, have similar envelopes and identical long-term power spectra, but the positive Schroeder-phase waveform is typically a less effective masker than the negative Schroeder-phase waveform. These masking differences have been attributed to an interaction between the masker phase spectrum and the phase characteristic of the basilar membrane. To [...] In some cases, small modifications to the gammachirp filter produced better quantitative predictions of curvature changes across frequency, but this filter, as implemented here, was unable to accurately represent all the data. Keywords: masking, phase, auditory filters, harmonic complexes
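The Schroeder-phase construction referred to above follows the standard recipe phi_n = ±pi·n(n+1)/N. The sketch below builds a positive- and a negative-phase complex and confirms the abstract's point that the two have identical long-term power spectra; the sampling rate, fundamental, and component count are illustrative choices, not the study's stimulus parameters.

```python
import numpy as np

fs = 48000                      # sample rate in Hz (illustrative choice)
f0, n_comp = 100.0, 40          # fundamental and harmonic count (assumed)
t = np.arange(int(fs * 0.2)) / fs   # 0.2 s = an integer number of periods

def schroeder_complex(sign):
    """Harmonic complex with Schroeder phases phi_n = sign*pi*n*(n+1)/N."""
    x = np.zeros_like(t)
    for n in range(1, n_comp + 1):
        phi = sign * np.pi * n * (n + 1) / n_comp
        x += np.cos(2 * np.pi * n * f0 * t + phi)
    return x

pos = schroeder_complex(+1)     # positive Schroeder phase
neg = schroeder_complex(-1)     # negative Schroeder phase

# Same magnitude spectrum, bin for bin, despite different waveforms:
same_spectra = np.allclose(np.abs(np.fft.rfft(pos)),
                           np.abs(np.fft.rfft(neg)), rtol=1e-6, atol=1e-6)
```

The negative-phase waveform is simply the time reversal of the positive-phase one, which is why the magnitude spectra match while the envelopes sweep in opposite directions within each period.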
This study determined whether listeners with hearing loss received reduced benefit from an onset asynchrony between sounds. Seven normal-hearing listeners and 7 listeners with hearing impairment (HI) were presented with 2 synthetic, steady-state vowels. One vowel (the late-arriving vowel) was 250 ms in duration, and the other (the early-arriving vowel) varied in duration between 350 and 550 ms. The vowels had simultaneous offsets, so the onset asynchrony between the 2 vowels ranged between 100 and 300 ms. The early-arriving and late-arriving vowels also had either the same or different fundamental frequencies. Increases in onset asynchrony and differences in fundamental frequency led to better vowel-identification performance for both groups, with listeners with HI benefiting less from onset asynchrony than normal-hearing listeners. The presence of fundamental frequency differences did not influence the benefit received from onset asynchrony for either group. Excitation-pattern modeling indicated that the reduced benefit received from onset asynchrony was not easily predicted by the reduced audibility of the vowel sounds for listeners with HI. Therefore, suprathreshold factors such as loss of the cochlear nonlinearity, reduced temporal integration, and the perception of vowel dominance probably play a greater role in the reduced benefit received from onset asynchrony in listeners with HI.
Auditory filter bandwidths were estimated in three experiments. The first experiment was a profile-analysis experiment. The stimuli were composed of sinusoidal components ranging in frequency from 200 to 5000 Hz. The standard stimulus was the sum of equal-amplitude tones, and the signal stimulus had a power spectrum that varied up-down ... up-down. The number of components ranged from four to 60. Interval-by-interval level randomization prevented the change in level of a single component from reliably indicating the change from standard to signal. The second experiment was a notched-noise experiment in which the 1000-Hz tone to be detected was added to a noise with a notch arithmetically centered at 1000 Hz. Detection thresholds were estimated both in the presence of and in the absence of level randomization. In the third, hybrid, experiment a 1000-Hz tone was to be detected, and the masker was composed of equal-amplitude sinusoidal components ranging in frequency from 200 to 5000 Hz. For this experiment, thresholds were estimated both in the presence and absence of level variation. For both the notched-noise and hybrid experiments, only modest effects of level randomization were obtained. A variant of Durlach et al.'s channel model ["Towards a model for discrimination of broadband signals," J. Acoust. Soc. Am. 80, 63-72 (1986)] was used to estimate auditory filter bandwidths for all three experiments. When a two-parameter roex(p,r) filter weighting function was used to fit the data, bandwidth estimates were approximately two to three times as large for the two detection tasks as for the profile-analysis task.
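The roex(p, r) weighting function used for these fits has a simple closed form, W(g) = (1 - r)(1 + pg)e^(-pg) + r, with g the deviation from the center frequency as a proportion of that frequency. The sketch below evaluates it and integrates numerically to an equivalent rectangular bandwidth; the parameter values are illustrative, not the fitted values from the study.

```python
import numpy as np

fc = 1000.0        # center frequency in Hz
p, r = 25.0, 1e-4  # illustrative roex(p, r) parameters, not fitted values

def roex_weight(f):
    """roex(p, r) weight W(g) = (1 - r)(1 + p*g)*exp(-p*g) + r,
    where g = |f - fc| / fc is the normalized frequency deviation."""
    g = np.abs(f - fc) / fc
    return (1 - r) * (1 + p * g) * np.exp(-p * g) + r

# Equivalent rectangular bandwidth by numerical integration; for small r
# this approaches the analytic value 4*fc/p of the one-parameter roex(p).
f = np.linspace(1.0, 5000.0, 200001)
erb = np.sum(roex_weight(f)) * (f[1] - f[0])
```

With these numbers the integral lands near 4*fc/p = 160 Hz; a two-to-three-fold difference in fitted bandwidth, as reported between the detection and profile-analysis tasks, corresponds to a two-to-three-fold difference in the fitted p.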
Masking by harmonic complexes depends on the frequency content of the masker and its phase spectrum. Harmonic complexes created with negative Schroeder phases (component phases decreasing with increasing frequency) produce more masking than those with positive Schroeder phases (increasing phase) in humans, but not in birds. The masking differences in humans have been attributed to interactions between the masker phase spectrum and the phase characteristic of the basilar membrane. In birds, the similarity in masking by positive and negative Schroeder maskers, and reduced masking by cosine-phase maskers (constant phase), suggests a phase characteristic that does not change much along the basilar papilla. To evaluate this possibility, the rate of phase change across masker bandwidth was varied by systematically altering the Schroeder algorithm. Humans and three species of birds detected tones added in phase to a single component of a harmonic complex. As observed in earlier studies, the minimum amount of masking in humans occurred for positive phase gradients. However, minimum masking in birds occurred for a shallow negative phase gradient. These results suggest a cochlear delay in birds that is reduced compared to that found in humans, probably related to the shorter avian basilar epithelia.
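One common way to "systematically alter the Schroeder algorithm" in this literature is to scale the phase term by a constant C between -1 and +1, which varies the phase gradient from fully negative through cosine phase to fully positive. The sketch below assumes that manipulation and illustrative stimulus parameters, and shows how the waveform envelope changes while the power spectrum does not.

```python
import numpy as np

fs, f0, n_comp = 48000, 100.0, 40    # assumed stimulus parameters
t = np.arange(int(fs / f0)) / fs     # one fundamental period

def altered_schroeder(c):
    """Harmonic complex with scaled Schroeder phases phi_n = c*pi*n*(n+1)/N.
    c = +1 and c = -1 give the standard positive and negative Schroeder
    maskers; intermediate c values vary the phase gradient."""
    n = np.arange(1, n_comp + 1)[:, None]
    phi = c * np.pi * n * (n + 1) / n_comp
    return np.cos(2 * np.pi * n * f0 * t + phi).sum(axis=0)

def crest(x):
    """Crest factor (peak / RMS): low for flat-envelope waveforms."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

# c = 0 is the cosine-phase (constant-phase) complex and is highly peaked;
# c = +/-1 sweep the phase across the period and have nearly flat envelopes.
crests = {c: crest(altered_schroeder(c)) for c in (-1.0, -0.5, 0.0, 0.5, 1.0)}
```

Sweeping c in this way is what lets an experiment locate the phase gradient that produces minimum masking, which the abstract reports as positive in humans but shallow and negative in birds.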