Abstract: Studies in multiple species, including in post-mortem human tissue, have shown that normal aging and/or acoustic overexposure can lead to a significant loss of afferent synapses innervating the cochlea. Hypothetically, this cochlear synaptopathy can lead to perceptual deficits in challenging environments and can contribute to central neural effects such as tinnitus. However, because cochlear synaptopathy can occur without any measurable changes in audiometric thresholds, synaptopathy can remain hidden from sta…
“…This may suggest that either cochlear synaptopathy has little influence on auditory difficulty or that it has no significant prevalence, at least in normal-hearing people. However, in this study we did not test for high-frequency hearing loss above 8 kHz, which may serve as an early indicator of hearing loss at lower frequencies and may indicate cochlear synaptopathy in a broader frequency range 73,74. Moreover, the measures that we have employed may not have been optimal for detecting cochlear synaptopathy: the latency shift of wave V in noise, for instance, has recently been shown to have only moderate test-retest reliability 55.…”
People with normal hearing thresholds can nonetheless have difficulty with understanding speech in noisy backgrounds. The origins of such supra-threshold hearing deficits remain largely unclear. Previously we showed that the auditory brainstem response to running speech is modulated by selective attention, evidencing a subcortical mechanism that contributes to speech-in-noise comprehension. We observed, however, significant variation in the magnitude of the brainstem's attentional modulation between the different volunteers. Here we show that this variability relates to the ability of the subjects to understand speech in background noise. In particular, we assessed 43 young human volunteers with normal hearing thresholds for their speech-in-noise comprehension. We also recorded their auditory brainstem responses to running speech when selectively attending to one of two competing voices. To control for potential peripheral hearing deficits, and in particular for cochlear synaptopathy, we further assessed noise exposure, the temporal sensitivity threshold, the middle-ear muscle reflex, and the auditory-brainstem response to clicks in various levels of background noise. These tests did not show evidence for cochlear synaptopathy amongst the volunteers. Furthermore, we found that only the attentional modulation of the brainstem response to speech was significantly related to speech-in-noise comprehension. 
Our results therefore evidence an impact of top-down modulation of brainstem activity on the variability in speech-in-noise comprehension amongst the subjects.

Understanding speech in noisy backgrounds such as other competing speakers is a challenging task at which humans excel 1,2. It requires the separation of different sound sources, selective attention to the target speaker, and the processing of degraded signals 3-5. Hearing impairment, such as that resulting from noise exposure, often leads to an increase in hearing thresholds, a reduction in the information conveyed about a sound to the central auditory system, and thus to greater difficulty in understanding speech in noise 6-8. However, even listeners with normal hearing thresholds can have problems with understanding speech in noisy environments 9,10. An extensive neural network of efferent fibers can feed information from the auditory cortex back to the auditory brainstem and even to the cochlea 11,12. Research on the role of these neural feedback loops in speech-in-noise listening has mostly focused on the medial olivocochlear reflex (MOCR), in which stimulation of the medial olivocochlear fibers that synapse on the outer hair cells in the cochlea reduces cochlear amplification across a wide frequency band 13. Computational modelling as well as animal studies have shown that such reduced broad-band amplification can improve the signal-to-noise ratio of a transient signal embedded in background noise 14-17. However, it remains debated whether the reduction of cochlear amplification through the MOCR contributes to better speech-in-noise comprehe...
“…employed stimuli to elicit electrophysiological responses that may be better suited to identify CS-related AN degeneration than those utilized by many other human studies [see Table 2 in Supplemental Digital Content 1, http://links.lww.com/EANDH/A935; Bharadwaj et al. 2019; Vasilkov et al. 2021; see discussions in Grant et al. (2020), Mepani et al. (2020), and Mepani et al. (2021) for further details]. These findings thus highlight the importance of developing both electrophysiological proxies of CS and speech perception tasks in difficult listening conditions that are sensitive to CS.…”
Section: Summary and Implications
“…The lack of direct assessments of CS in living humans complicates attempts to link synaptic damage to auditory perceptual impairments. Some proxies of CS may also be more sensitive than others; heterogeneous methods to predict CS levels likely contribute to the inconsistent results of prior studies, as has been discussed in several recent reviews (Bharadwaj et al. 2019; Bramhall et al. 2019; Le Prell et al. 2019). Further, individual differences in synapse counts arising from genetic and/or developmental factors may be a source of variability that obscures correlations between noise exposure history and speech perception in challenging listening conditions.…”
“…The human FFR is an auditory-evoked potential that reflects synchronous neural activity originating in the auditory brainstem. However, it should be noted that recent evidence suggests that the FFR may additionally have a cortical contribution (Coffey, Herholz, Chepesiuk, Baillet, & Zatorre, 2016; Coffey, Musacchia, & Zatorre, 2017), though these contributions are likely weak (Bharadwaj et al., 2019; Bidelman, 2018) and unnecessary for FFR generation (White-Schwoch, Anderson, Krizman, Nicol, & Kraus, 2019). Unlike other electrophysiological measures such as the auditory brainstem response (ABR), the FFR is unique in that it accurately represents auditory characteristics of the stimulus, including temporal and spectral properties below ~1500 Hz.…”
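The spectral measure described in this quote — reading the amplitude of the trial-averaged response at the stimulus frequency — can be sketched in a few lines. This is a minimal illustration, not any study's actual pipeline: the sampling rate, F0, epoch count, and noise level below are all assumed values chosen for the demo.

```python
import numpy as np

FS = 16_000   # sampling rate in Hz (assumed for this sketch)
F0 = 100      # stimulus fundamental frequency in Hz (assumed)

def ffr_f0_amplitude(trials: np.ndarray, fs: int = FS, f0: float = F0) -> float:
    """Estimate the FFR amplitude at the stimulus F0.

    `trials` has shape (n_trials, n_samples): single-channel EEG epochs
    time-locked to stimulus onset. Averaging across trials suppresses
    activity that is not phase-locked to the stimulus.
    """
    avg = trials.mean(axis=0)
    # Single-sided amplitude spectrum of the averaged response.
    spectrum = 2.0 * np.abs(np.fft.rfft(avg)) / avg.size
    freqs = np.fft.rfftfreq(avg.size, d=1.0 / fs)
    return float(spectrum[np.argmin(np.abs(freqs - f0))])

# Synthetic demo: 50 noisy epochs containing a weak 100 Hz phase-locked
# component (amplitude 0.5) over a 170 ms window.
rng = np.random.default_rng(0)
t = np.arange(int(0.170 * FS)) / FS
trials = 0.5 * np.sin(2 * np.pi * F0 * t) + rng.normal(0.0, 2.0, (50, t.size))
print(ffr_f0_amplitude(trials))   # recovers roughly the true 0.5 amplitude
```

The key point the quote makes survives in the sketch: because the FFR follows the stimulus waveform, the stimulus F0 is directly visible in the response spectrum, which is not true of transient measures like the ABR.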
Multiple studies have shown significant speech recognition benefit when acoustic hearing is combined with a cochlear implant (CI) for a bimodal hearing configuration. However, this benefit varies greatly between individuals. There are few clinical measures correlated with bimodal benefit, and those correlations are driven by extreme values, prohibiting data-driven clinical counseling. This study evaluated the relationship between bimodal benefit for speech recognition in quiet and noise and (a) the neural representation of fundamental frequency (F0) and temporal fine structure via the frequency following response (FFR) in the nonimplanted ear and (b) the spectral and temporal resolution of the nonimplanted ear. Participants included 14 unilateral CI users who wore a hearing aid (HA) in the nonimplanted ear. Testing included speech recognition in quiet and in noise with the HA alone, the CI alone, and in the bimodal condition (i.e., CI + HA); measures of spectral and temporal resolution in the nonimplanted ear; and FFR recording for a 170-ms /da/ stimulus in the nonimplanted ear. Even after controlling for the four-frequency pure-tone average, there was a significant correlation (r = .83) between FFR F0 amplitude in the nonimplanted ear and bimodal benefit. Other measures of auditory function of the nonimplanted ear were not significantly correlated with bimodal benefit. The FFR holds potential as an objective tool that may allow data-driven counseling regarding expected benefit from the nonimplanted ear. It is possible that this information may eventually be used for clinical decision-making, particularly in difficult-to-test populations such as young children, regarding the effectiveness of bimodal hearing versus bilateral CI candidacy.
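The abstract's "correlation after controlling for pure-tone average" is a partial correlation. A minimal sketch of that statistic, on synthetic data: all variable names and numbers here are illustrative stand-ins, not the study's data or code.

```python
import numpy as np

def partial_correlation(x, y, covariate):
    """Pearson correlation between x and y after regressing the covariate
    out of both (e.g., FFR F0 amplitude vs. bimodal benefit, holding the
    four-frequency pure-tone average constant)."""
    x, y, c = (np.asarray(v, dtype=float) for v in (x, y, covariate))
    design = np.column_stack([np.ones_like(c), c])  # intercept + covariate

    def residualize(v):
        coef, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ coef  # the part the covariate cannot explain

    return float(np.corrcoef(residualize(x), residualize(y))[0, 1])

# Synthetic demo with n = 14, matching the study's sample size.
rng = np.random.default_rng(1)
cov = rng.normal(size=14)   # stand-in for pure-tone average
x = rng.normal(size=14)     # stand-in for FFR F0 amplitude
y = x + 2.0 * cov           # "benefit" driven by x plus the covariate
print(np.corrcoef(x, y)[0, 1])        # raw correlation, diluted by cov
print(partial_correlation(x, y, cov)) # ~1.0 once cov is removed
```

Reporting the partial rather than the raw correlation is what licenses the abstract's claim that the FFR-benefit relationship is not merely a by-product of audibility differences between listeners.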