Abstract: It is commonly assumed that difficulty in listening to speech in noise is at least partly due to deficits in neural temporal processing. Given that noise reduces the temporal fidelity of the auditory brainstem response (ABR) to speech, it has been suggested that the speech ABR may serve as an index of such neural deficits. However, the temporal fidelity of ABRs, to both speech and non-speech sounds, is also known to be influenced by the cochlear origin of the response, as responses from higher-frequency cochle…
“…Results from the present study are not directly comparable with the majority of previous EFR studies evaluating the effects of noise and reverberation due to the use of broadband vowel stimuli. In the two known studies that used a frequency‐specific approach, EFRs at f0 generated by F1, but not higher formants, were found to be robust or unaffected by noise (Boer et al, 2020; Laroche et al, 2013). Evidence from guinea pigs suggests that the robustness of f0 encoding in the F1 region is supported by phase‐locking to individual resolved harmonics (i.e., fine structure).…”
Section: Discussion
confidence: 99%
“…Given the good correspondence between our experimental data and simulations with identical stimuli, at least for F1 EFRs (Figure 6), we speculate that the discrepancies between our experimental data and previous studies arise, at least in part, from multiple methodological differences. Influential methodological factors may include varied excitation patterns for filtered formants presented independently (Laroche et al, 2013), noise type (white noise in Laroche et al, 2013, vs. equal‐excitation noise per auditory filter in Boer et al, 2020, vs. speech-shaped noise in the present study) and SNR (−5 dB in Laroche et al, 2013, vs. 20 and 10 dB in Boer et al, 2020, vs. 5 dB in the present study).…”
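The SNR comparison above rests on how a masker is scaled relative to the speech. As a generic illustration (not the authors' code, and with hypothetical signal variables), mixing noise with speech at a target SNR in dB can be sketched as:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the mixture. SNR (dB) = 10*log10(P_speech / P_noise)."""
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Noise power needed so that p_speech / p_noise_target = 10**(snr_db/10)
    target_noise_power = p_speech / (10.0 ** (snr_db / 10.0))
    noise_scaled = noise * np.sqrt(target_noise_power / p_noise)
    return speech + noise_scaled
```

With this convention, −5 dB SNR (Laroche et al, 2013) means the noise carries about three times the power of the speech, whereas at 20 dB SNR (Boer et al, 2020) the noise power is one hundredth of the speech power, which helps explain why results across these studies are hard to compare directly.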
Section: Discussion
confidence: 99%
“…A constraint with EFRs elicited by vowels that are naturally broadband is the difficulty of ascertaining the cause of noise‐induced deterioration in EFR amplitude. The attenuation may not only reflect neural desynchronization but may also reflect a shift in the dominant cochlear place of EFR initiation (Boer et al, 2020). While EFRs in quiet conditions are mostly dominated by contributions from mid‐to‐high frequency harmonics that are unresolved in the cochlea (i.e., multiple harmonics pass through an auditory filter; Easwar et al, 2018; Nuttall et al, 2018), the presence of noise could shift the dominance to low-frequency resolved harmonics (i.e., only one harmonic passes through an auditory filter) because EFRs from the resolved harmonics may be less affected or attenuated by noise (Boer et al, 2020; Laroche et al, 2013).…”
Environmental noise and reverberation challenge speech understanding more significantly in children than in adults. However, the neural/sensory basis for the difference is poorly understood. We evaluated the impact of noise and reverberation on the neural processing of the fundamental frequency of voice (f0), an important cue to tag or recognize a speaker. In a group of 39 6- to 15-year-old children and 26 adults with normal hearing, envelope following responses (EFRs) were elicited by a male-spoken /i/ in quiet, noise, reverberation, and both noise and reverberation. Because harmonics are more resolvable at lower than at higher vowel formants, which may affect susceptibility to noise and/or reverberation, the /i/ was modified to elicit two EFRs: one initiated by the low-frequency first formant (F1) and the other initiated by the mid-to-high-frequency second and higher formants (F2+), with predominantly resolved and unresolved harmonics, respectively. F1 EFRs were more susceptible to noise, whereas F2+ EFRs were more susceptible to reverberation. Reverberation resulted in greater attenuation of F1 EFRs in adults than in children, and greater attenuation of F2+ EFRs in older than in younger children. Reduced modulation depth caused by reverberation and noise explained changes in F2+ EFRs but was not the primary determinant for F1 EFRs. Experimental data paralleled modelled EFRs, especially for F1. Together, the data suggest that the influence of noise or reverberation on the robustness of f0 encoding depends on the resolvability of vowel harmonics, and that maturation of the processing of temporal/envelope information of voice is delayed in reverberation, particularly for low-frequency stimuli.
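The abstract quantifies f0 encoding via EFR amplitude. As a hedged sketch of the generic analysis idea (not the authors' pipeline, which averages many epochs and applies statistical response-detection tests), the response amplitude at f0 can be read off the spectrum of the recorded signal:

```python
import numpy as np

def efr_amplitude_at_f0(eeg, fs, f0):
    """Estimate envelope-following-response amplitude at the voice f0:
    magnitude of the FFT bin nearest f0, scaled to sinusoid amplitude.
    `eeg` is a single averaged epoch sampled at `fs` Hz (illustrative)."""
    eeg = np.asarray(eeg, dtype=float)
    spectrum = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    bin_idx = int(np.argmin(np.abs(freqs - f0)))
    # 2/N converts a one-sided FFT magnitude to peak sinusoid amplitude
    return 2.0 * np.abs(spectrum[bin_idx]) / len(eeg)
```

Under this view, "attenuation of EFRs" by noise or reverberation corresponds to a smaller spectral peak at f0, which is why reduced stimulus modulation depth is a natural candidate explanation for the F2+ results.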