Recent work has demonstrated that extended high-frequency (EHF; >8 kHz) hearing is valuable for speech-in-noise recognition. These findings contradict the broadly accepted “speech bandwidth,” which has historically been limited to frequencies below 8 kHz. Several studies also indicate that EHF pure-tone thresholds predict speech-in-noise performance. One open question is whether the observed association between EHF pure-tone thresholds and speech-in-noise recognition is causal: that is, whether loss of audibility of EHF cues in speech degrades speech-in-noise recognition. This effect has indeed been demonstrated using low-pass filtering, but whether elevated EHF thresholds would produce a similar effect is not certain. Another possibility is that EHF thresholds are a marker for subclinical dysfunction at lower frequencies that degrades speech recognition. These two possibilities are not mutually exclusive (nor exhaustive), and each could contribute to the observed relationship. Here we present a reanalysis of data previously collected in our lab, the results of which suggest that 16-kHz pure-tone thresholds are consistent predictors of speech-in-speech recognition regardless of whether EHF cues are present in the speech signal. These findings suggest that elevated EHF thresholds may indicate subclinical auditory dysfunction that impairs speech-in-speech recognition.
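The low-pass-filtering manipulation mentioned above (removing EHF cues from a stimulus) can be sketched as follows. This is an illustrative fragment, not the processing chain used in any of the cited studies; the 8-kHz cutoff matches the EHF boundary, but the filter order and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def remove_ehf(signal, fs, cutoff_hz=8000.0, order=8):
    """Low-pass filter a signal to remove extended high-frequency
    (EHF) content above cutoff_hz, with zero-phase filtering."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# Synthetic check: a tone below the cutoff (4 kHz) plus an EHF tone (12 kHz)
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 4000 * t) + np.sin(2 * np.pi * 12000 * t)
y = remove_ehf(x, fs)

# After filtering, the 12-kHz component should be strongly attenuated
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
ehf_ratio = spec[np.argmin(np.abs(freqs - 12000))] / spec[np.argmin(np.abs(freqs - 4000))]
print(ehf_ratio)
```

Zero-phase filtering (`sosfiltfilt`) is used here so the manipulation attenuates EHF energy without introducing phase distortion at lower frequencies.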
Children born preterm are at risk for delays and disorders of speech and language development. It has been proposed that adverse auditory exposure during the neonatal intensive care unit (NICU) stay may contribute to this risk, as it is well established that the preterm infant’s developing auditory system is sensitive to acoustic input during this period. While reduced speech exposure, noxious noise levels, and excessive silence in the NICU are of concern, another potential cause for concern is abrupt change in auditory exposure during the NICU stay. Our previous data indicate that NICU incubator walls have a low-pass filtering effect, attenuating external sounds at frequencies above 200 Hz. However, internal sounds generated by the incubator or other life-saving devices still produce noise exposure. Furthermore, when infants transition from the incubator to an open crib as their overall health improves, they may be at risk of increased exposure to higher frequencies. Here we present spectral analyses of auditory exposure recordings made for several preterm infants throughout their NICU stay. Our analysis reveals that for frequencies above 500 Hz, sound levels are significantly higher in the open crib; below 500 Hz, levels are generally higher in the incubator. These data point to yet another potentially disruptive feature of the NICU environment: an abrupt change in auditory exposure.
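The kind of above-versus-below-500-Hz comparison described here can be sketched by integrating a Welch power spectral density over each band. This is a minimal illustration on synthetic noise, not the study’s analysis pipeline; the band edges, filter order, and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

def band_level_db(x, fs, f_lo, f_hi):
    """Level (dB re full scale) of x within [f_lo, f_hi), from a Welch PSD."""
    f, psd = welch(x, fs=fs, nperseg=2048)
    band = (f >= f_lo) & (f < f_hi)
    power = np.sum(psd[band]) * (f[1] - f[0])  # integrate PSD over the band
    return 10 * np.log10(power)

# Synthetic stand-in for an incubator recording: white noise low-pass
# filtered at 500 Hz, so energy is concentrated below that edge
fs = 16000
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs * 10)
incubator_like = sosfiltfilt(butter(4, 500, fs=fs, output="sos"), noise)

below = band_level_db(incubator_like, fs, 50, 500)
above = band_level_db(incubator_like, fs, 500, 8000)
print(below > above)
```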
Infants born preterm are at greater risk for auditory dysfunction than full-term infants. To better understand and characterize the neonatal intensive care unit (NICU) auditory experience, we sought to examine the sound pressure levels (SPLs) in the NICU and the presence of a circadian pattern of sound level exposure. Data were collected for very preterm infants (born ≤ 32 weeks’ gestation; n = 36) during NICU stay. Audio recordings were collected over 24-hour intervals, three times per week for each subject using a LENA recorder that was adhered to the inside wall of the infant’s incubator or crib. Average hourly SPL values were calculated from the raw recordings. Preliminary analysis indicates that the highest hourly exposures occurred during the hours of 8–9 AM and 8–9 PM, presumably corresponding to a shift change of the NICU nursing staff. Ongoing analyses are examining whether 24-hour patterns of exposure are affected by bed type and location in the NICU. It is hoped that this line of study will lead to interventions designed to prevent audiological impairments associated with preterm birth and NICU environmental exposures. [Work supported by NIH Grant R21-DC017820.]
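Computing average hourly levels from a continuous recording, as described above, can be sketched as an RMS-per-hour calculation. This is a toy fragment, not the study’s pipeline: the `calib_db` offset that maps digital full scale to dB SPL is a hypothetical placeholder for whatever calibration the LENA recorder would require, and the demo sample rate is reduced so the example runs quickly.

```python
import numpy as np

def hourly_spl(samples, fs, calib_db=120.0):
    """Average level per hour of a full-scale-normalized recording.
    calib_db is a hypothetical calibration offset mapping digital
    full scale to dB SPL."""
    hour = int(fs * 3600)
    levels = []
    for h in range(len(samples) // hour):
        seg = samples[h * hour:(h + 1) * hour]
        rms = np.sqrt(np.mean(seg ** 2))
        levels.append(20 * np.log10(rms) + calib_db)
    return levels

# Toy demo at a reduced rate: a quiet hour followed by a loud hour
fs = 100  # real recordings would use an audio rate, e.g. 16 kHz
quiet = np.full(fs * 3600, 0.01)
loud = np.full(fs * 3600, 0.1)
levels = hourly_spl(np.concatenate([quiet, loud]), fs)
print(round(levels[1] - levels[0]))  # 10x amplitude -> +20 dB
```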
Extended high-frequency (EHF; 8–20 kHz) cues support speech recognition in noisy backgrounds, particularly when the masker has reduced EHF levels relative to the target. This scenario can occur in natural auditory scenes when the target talker is facing the listener but the masker talkers are not. The EHF benefit stands in contrast to past studies that focused on lower frequencies and presumed that EHFs play no role in speech intelligibility. Although EHF cues improve speech recognition, it is unclear how the magnitude of this benefit compares to that of other portions of the speech spectrum. In this ongoing study, we measure band importance functions (BIFs) for a female target and a two-talker masker by notch filtering individual contiguous bands from 40 to 20 000 Hz. With the target facing the listener, two masking conditions were tested: (1) masker facing the listener; (2) masker rotated 56.25° away from the listener. Preliminary data indicate an interaction between the filtered band and masker head orientation. For the facing condition, the BIF shows a peak between 0.4 and 3 kHz and drops sharply at higher frequencies, resembling previous data. When the masker faces away, however, the benefit of EHFs increases relative to the lower bands, somewhat flattening the BIF. [Work supported by NIH Grant R01-DC019745.]
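The notch-filtering step at the heart of a BIF measurement (removing one contiguous band while leaving the rest of the spectrum intact) can be sketched with a band-stop filter. This is an illustrative fragment under assumed parameters (band edges, filter order, sampling rate), not the study’s actual filtering.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def notch_band(x, fs, f_lo, f_hi, order=4):
    """Remove one contiguous band [f_lo, f_hi] from the signal
    with a zero-phase Butterworth band-stop filter."""
    sos = butter(order, [f_lo, f_hi], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Synthetic check: a tone inside the notch (1 kHz) is attenuated,
# while a tone outside it (5 kHz) passes through
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 5000 * t)
y = notch_band(x, fs, 800, 1250)

spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
notch_ratio = spec[np.argmin(np.abs(freqs - 1000))] / spec[np.argmin(np.abs(freqs - 5000))]
print(notch_ratio)
```

Sweeping such a notch across contiguous bands and measuring the drop in recognition for each removed band yields the band importance function.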
Current evidence suggests that extended high-frequency (EHF) speech cues support speech perception. Audibility of these cues likely depends on speech spectral levels at EHFs, which may vary across talker genders and speech materials. In this study, we investigated the effect of talker gender and speech material on EHF levels in speech. A group of 30 native speakers of American English (15 female) was recruited to participate in this study. A three-minute spontaneous narrative was recorded for each participant, along with a subset of the Bamford-Kowal-Bench (BKB) sentences. An ERB-scaled long-term average speech spectrum was calculated for the narrative and for the BKB sentences for each subject. Linear mixed-effects models were used to test intersubject and intrasubject variability in 8 EHF ERB bands. There was a significant effect of gender, with female EHF levels ∼4 dB higher than male EHF levels. Within-subject comparison of the BKB sentences and narratives revealed no significant difference in EHF levels between speech materials. These findings highlight the possibility that EHFs could play a more prominent role in the perception of female speech than of male speech, and suggest that EHF levels are relatively stable across speech materials for a given talker. [Work supported by NIH under Grant No. R01-DC019745.]
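Computing levels in 1-ERB-wide bands, as in the analysis described above, can be sketched using the Glasberg and Moore ERB-number scale. The code below is a minimal illustration on white noise, not the study’s analysis: the starting frequency, number of bands, and Welch parameters are assumptions chosen to mirror the 8 EHF ERB bands mentioned in the abstract.

```python
import numpy as np
from scipy.signal import welch

def erb_number(f_hz):
    """Glasberg & Moore ERB-number (Cam) scale, f in Hz."""
    return 21.4 * np.log10(1 + 0.00437 * f_hz)

def erb_to_hz(cam):
    """Inverse of erb_number."""
    return (10 ** (cam / 21.4) - 1) / 0.00437

def erb_band_levels(x, fs, f_start=8000.0, n_bands=8):
    """Levels (dB re full scale) in n_bands contiguous 1-ERB-wide
    bands starting at f_start, integrated from a Welch PSD."""
    f, psd = welch(x, fs=fs, nperseg=4096)
    edges = erb_to_hz(erb_number(f_start) + np.arange(n_bands + 1))
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (f >= lo) & (f < hi)
        power = np.sum(psd[band]) * (f[1] - f[0])
        levels.append(10 * np.log10(power))
    return edges, np.array(levels)

# White-noise sanity check: higher ERB bands are wider in Hz,
# so with a flat spectrum they collect more total power
fs = 44100
rng = np.random.default_rng(1)
edges, levels = erb_band_levels(rng.standard_normal(fs * 5), fs)
print(len(levels))  # 8 bands
```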