Abstract: Hearing loss is often asymmetric, such that hearing thresholds differ substantially between the two ears. The extreme case of such asymmetric hearing is single-sided deafness. A unilateral cochlear implant (CI) on the more severely impaired ear is an effective treatment to restore hearing. The neuro-cognitive cost of listening with a unilateral CI in multi-talker situations is at present unclear. Here, we simulated listening with a unilateral CI in young, normal-hearing listeners (N = 22) who were presented wi…
“…In several studies, it has been shown that NH listeners show an increased neural response latency with increasing task demand due to lower stimulus intensity, increasing background noise or stimulus vocoding. This is the case for neural processing of continuous speech (Kraus et al., 2021; Mirkovic et al., 2019; Verschueren et al., 2021) as well as simple sounds (Billings et al., 2015; Maamor & Billings, 2017; McClannahan et al., 2019; Van Dun et al., 2016). In the current study, we observed the same effect for NH listeners: When speech understanding decreased, NH listeners showed a prominent increase in the latency of the neural responses.…”
Section: Discussion
confidence: 99%
“…Hence, they might allocate more effort to listen to the stimulus. Listening to degraded speech (e.g., Kraus et al., 2021; Mirkovic et al., 2019; Verschueren et al., 2021) or spending more effort to listen to the stimulus (e.g., Dimitrijevic et al., 2019) affects the neural responses to continuous speech. By providing amplification, we aimed to make the peripheral activation levels as similar as possible in both groups, which motivates our choice to model the neural responses to the non‐amplified speech features for HI listeners.…”
We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age‐matched normal‐hearing peers. Compared with their normal‐hearing peers, hearing‐impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal‐hearing listeners, the latency increased with increasing background noise level. However, for hearing‐impaired listeners, this increase was not observed. Our results support the idea that the neural response latency indicates the efficiency of neural speech processing: More or different brain regions are involved in processing speech, which causes longer communication pathways in the brain. These longer communication pathways hamper the information integration among these brain regions, reflected in longer processing times. Altogether, this suggests decreased neural speech processing efficiency in HI listeners as more time and more or different brain regions are required to process speech. Our results suggest that this reduction in neural speech processing efficiency occurs gradually as hearing deteriorates. From our results, it is apparent that sound amplification does not solve hearing loss. Even when listening to speech in silence at a comfortable loudness, hearing‐impaired listeners process speech less efficiently.
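The forward modelling approach mentioned above is commonly implemented as a temporal response function (TRF): a ridge-regressed linear mapping from a speech feature (such as the envelope) to the EEG, where the prediction correlation quantifies neural tracking and the peak of the TRF weights gives the response latency. The sketch below is a minimal illustration on synthetic data; the sampling rate, lag window, regularisation value, and all variable names are assumptions for the example, not the authors' actual pipeline.

```python
import numpy as np

def lagged_design(stimulus, lags):
    """Build a time-lagged design matrix from a 1-D stimulus feature."""
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stimulus[:n - lag]  # column j holds stimulus delayed by `lag`
    return X

def neural_tracking(stimulus, eeg, lags, alpha=1.0):
    """Ridge-regressed forward model (TRF); returns tracking r and TRF weights."""
    X = lagged_design(stimulus, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    pred = X @ w
    return np.corrcoef(pred, eeg)[0, 1], w

# Synthetic example: "EEG" is a delayed copy of the envelope plus noise.
fs = 64                                          # Hz (hypothetical)
rng = np.random.default_rng(0)
env = np.abs(rng.standard_normal(fs * 60))       # stand-in speech envelope, 60 s
true_delay = int(0.1 * fs)                       # simulated 100 ms response latency
eeg = np.roll(env, true_delay) + 0.5 * rng.standard_normal(len(env))

lags = np.arange(0, int(0.4 * fs))               # lags from 0 up to ~400 ms
r, w = neural_tracking(env, eeg, lags)
latency_ms = lags[np.argmax(w)] / fs * 1000      # peak of the TRF = response latency
```

On this toy recording, the recovered peak latency falls near the simulated 100 ms delay, and `r` gives the "neural tracking" measure that the abstracts report as increasing or decreasing across conditions.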
“…The latency of the first peak increases with increasing speech rate. Increased latencies are often observed in more complex conditions with a higher task demand, such as lower stimulus intensity, vocoded speech or speech in noise (Mirkovic et al., 2019; Verschueren et al., 2021; Kraus et al., 2020). The latency of the neural responses can also be related to neural processing efficiency (Bidelman et al., 2019; Gillis et al., 2021a).…”
When listening to continuous speech, the human brain can track features of the presented speech signal. It has been shown that neural tracking of acoustic features is a prerequisite for speech understanding and can predict speech understanding in controlled circumstances. However, the brain also tracks linguistic features of speech, which may be more directly related to speech understanding. We investigated acoustic and linguistic speech processing as a function of varying speech understanding by manipulating the speech rate. In this paradigm, acoustic and linguistic speech processing are affected simultaneously but in opposite directions: When the speech rate increases, more acoustic information per second is present. In contrast, linguistic information decreases as speech becomes less intelligible at higher speech rates. We measured the EEG of 18 participants who listened to speech at various speech rates. As expected and confirmed by the behavioral results, speech understanding decreased with increasing speech rate. Accordingly, linguistic neural tracking decreased with increasing speech rate, but acoustic neural tracking increased. This indicates that neural tracking of linguistic representations can capture the gradual effect of decreasing speech understanding. In addition, increased acoustic neural tracking does not necessarily imply better speech understanding. This suggests that, although more challenging to measure due to the low signal-to-noise ratio, linguistic neural tracking may be a more direct predictor of speech understanding.
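The contrast this abstract draws between acoustic and linguistic neural tracking can be illustrated by fitting one forward model on an acoustic feature alone and another that adds a linguistic feature: the gain in prediction correlation isolates the linguistic contribution. The snippet below is a schematic sketch on synthetic data; the feature names (`envelope`, `surprisal`), weights, and the in-sample evaluation are illustrative assumptions, not the study's actual features or cross-validated procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
envelope = np.abs(rng.standard_normal(n))    # acoustic feature (stand-in)
surprisal = rng.standard_normal(n)           # linguistic feature (stand-in)
# Toy "EEG" driven by both features plus noise.
eeg = 1.0 * envelope + 0.8 * surprisal + rng.standard_normal(n)

def prediction_r(features, eeg, alpha=1.0):
    """Ridge forward model; correlation of predicted vs recorded EEG.
    In-sample for brevity; real analyses cross-validate."""
    X = np.column_stack(features)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    return np.corrcoef(X @ w, eeg)[0, 1]

r_acoustic = prediction_r([envelope], eeg)
r_both = prediction_r([envelope, surprisal], eeg)
# r_both - r_acoustic: the added predictive power of the linguistic feature.
```

Under the speech-rate manipulation described above, the acoustic-only correlation and this linguistic gain would be expected to move in opposite directions as intelligibility drops.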
“…In several studies, it has been shown that NH listeners show an increased neural response latency with increasing task demand due to lower stimulus intensity, increasing background noise or stimulus vocoding. This is the case for neural processing of continuous speech (Mirkovic et al., 2019; Verschueren et al., 2020; Kraus et al., 2020) as well as simple sounds (Billings et al., 2015; Van Dun et al., 2016; Maamor & Billings, 2017; McClannahan et al., 2019). Our results show that this increase in latency is absent for adults with a higher degree of hearing loss (Figure 5.C).…”
Section: Discussion
confidence: 99%
“…Listening to degraded speech (e.g., Mirkovic et al., 2019; Verschueren et al., 2021; Kraus et al., 2020) or spending more effort to listen to the stimulus (e.g., Dimitrijevic et al., 2019) affects the neural responses to continuous speech.…”
We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared to their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level. However, for hearing-impaired listeners, this increase was not observed. Our results support the idea that the neural response latency indicates the efficiency of neural speech processing. Hearing-impaired listeners process speech in silence less efficiently than normal-hearing listeners. Our results suggest that this reduction in neural speech processing efficiency is a gradual effect which occurs as hearing deteriorates. Moreover, the efficiency of neural speech processing in hearing-impaired listeners is already at its lowest level when listening to speech in quiet, while normal-hearing listeners show a further decrease in efficiency when the noise level increases. From our results, it is apparent that sound amplification does not solve hearing loss. Even when intelligibility is apparently perfect, hearing-impaired listeners process speech less efficiently.