2020
DOI: 10.1523/jneurosci.1936-19.2020
Effects of Sensorineural Hearing Loss on Cortical Synchronization to Competing Speech during Selective Attention

Abstract: When selectively attending to a speech stream in multi-talker scenarios, low-frequency cortical activity is known to synchronize selectively to fluctuations in the attended speech signal. Older listeners with age-related sensorineural hearing loss (presbycusis) often struggle to understand speech in such situations, even when wearing a hearing aid. Yet, it is unclear whether a peripheral hearing loss degrades the attentional modulation of cortical speech tracking. Here, we used psychoacoustics and electroencephalography…


Cited by 95 publications (117 citation statements)
References 87 publications
“…Petersen et al (2017) reported that adults with a higher degree of hearing loss showed higher neural tracking of the ignored speech and no change in the attended stream, suggesting that they experience more difficulties inhibiting irrelevant information. Although Mirkovic et al (2019) and Presacco et al (2019) did not report a neural difference between the two populations, Decruy et al (2020) and Fuglsang et al (2020) observed, in contrast to Petersen et al (2017), enhanced neural tracking of the attended speech in HI listeners compared to their normal-hearing peers. This enhancement may indicate a compensation mechanism: HI listeners need to compensate for the degraded auditory input and therefore show increased cortical neural responses.…”
Section: Introduction
confidence: 60%
“…Delta oscillations are linked to parsing speech at the level of words and phrases, as well as to processing prosodic cues that are vital for constructing higher-level linguistic structures (Ding and Simon, 2014; Ding et al, 2017; Teoh et al, 2019); theta oscillations track the primary energetic rhythm in speech that is driven by low-level segmental features (syllables) (Ghitza, 2017). TRF analyses examining cortical tracking of the speech envelope by delta and theta oscillations have been applied to studies of continuous speech perception in neurotypical (Di Liberto et al, 2015) and clinical (Di Liberto et al, 2018; Fuglsang et al, 2020) populations.…”
Section: Introduction
confidence: 99%
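To make the band-specific TRF idea in the statement above concrete, here is a minimal sketch of a forward temporal response function fitted separately in the delta (1-4 Hz) and theta (4-8 Hz) bands. The synthetic envelope and EEG, the sampling rate, the lag range, and the ridge parameter are all illustrative assumptions, not the settings used in the cited studies.

```python
# Sketch: band-limited forward TRF via ridge regression (synthetic data).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 64                                   # assumed post-downsampling rate (Hz)
n_samples, n_channels = fs * 60, 32

rng = np.random.default_rng(0)
envelope = rng.standard_normal(n_samples)            # stand-in for a speech envelope
eeg = rng.standard_normal((n_samples, n_channels))   # stand-in for preprocessed EEG

def bandpass(x, lo, hi, fs, order=3):
    """Zero-phase band-pass filter along the time axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)

def lagged_design(stim, lags):
    """Stack time-lagged copies of the stimulus into a design matrix."""
    X = np.zeros((len(stim), len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:len(stim) - lag]
    return X

def fit_trf(stim, resp, lags, ridge=1.0):
    """Forward TRF: map lagged stimulus to each EEG channel with ridge regression."""
    X = lagged_design(stim, lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ resp)           # shape: (n_lags, n_channels)

lags = np.arange(0, int(0.4 * fs))                    # 0-400 ms, an assumed lag range
for band, (lo, hi) in {"delta": (1, 4), "theta": (4, 8)}.items():
    trf = fit_trf(bandpass(envelope, lo, hi, fs), bandpass(eeg, lo, hi, fs), lags)
    print(band, trf.shape)
```

In practice the TRF weights per band would be inspected as a time course over lags and channels; the point of the sketch is only the band-pass-then-regress structure the statement refers to.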
“…Identifying the degree and direction of attention in near real-time requires that this information can be extracted from short time intervals. Several studies have shown that attention can be reliably decoded from single-trial EEG data in the two-competing-speaker paradigm (Horton et al, 2014; Mirkovic et al, 2015, 2016; O'Sullivan et al, 2015; Biesmans et al, 2017; Fiedler et al, 2017; Fuglsang et al, 2017, 2020; Haghighi et al, 2017) using various auditory attention decoding (AAD) methods (for a review, see Alickovic et al, 2019). In these studies, AAD procedures demonstrated above chance-level accuracy for evaluation windows ranging from 2 to 60 s. In a neurofeedback application, features should be obtained as quickly as possible.…”
Section: Introduction
confidence: 99%
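A common family of AAD methods referenced in the statement above is correlation-based stimulus reconstruction: a linear backward model reconstructs the speech envelope from EEG, and each short window is assigned to whichever talker's envelope correlates better with the reconstruction. The sketch below illustrates that scheme only; the synthetic data, the lag-free decoder, and the 10-second window are assumptions for brevity, not the pipelines used in the cited studies (which typically use time-lagged EEG and held-out training data).

```python
# Sketch: correlation-based auditory attention decoding (synthetic data).
import numpy as np

fs = 64
n_samples, n_channels = fs * 120, 32
rng = np.random.default_rng(1)

env_attended = rng.standard_normal(n_samples)
env_ignored = rng.standard_normal(n_samples)
# Fake EEG that weakly follows the attended envelope.
eeg = 0.1 * env_attended[:, None] + rng.standard_normal((n_samples, n_channels))

def fit_decoder(eeg, env, ridge=1.0):
    """Backward model: ridge regression from EEG channels to the envelope."""
    XtX = eeg.T @ eeg + ridge * np.eye(eeg.shape[1])
    return np.linalg.solve(XtX, eeg.T @ env)

def decode_windows(eeg, decoder, env_a, env_b, win_s, fs):
    """Per window, return True if envelope A correlates better than envelope B."""
    win = int(win_s * fs)
    recon = eeg @ decoder
    picks = []
    for start in range(0, len(recon) - win + 1, win):
        seg = slice(start, start + win)
        r_a = np.corrcoef(recon[seg], env_a[seg])[0, 1]
        r_b = np.corrcoef(recon[seg], env_b[seg])[0, 1]
        picks.append(r_a > r_b)
    return np.array(picks)

decoder = fit_decoder(eeg, env_attended)   # in real use, trained on held-out data
picks = decode_windows(eeg, decoder, env_attended, env_ignored, win_s=10, fs=fs)
print(f"fraction of windows decoded as attended: {picks.mean():.2f}")
```

Shorter windows give faster decisions but noisier correlation estimates, which is the accuracy-versus-latency trade-off behind the 2-60 s evaluation windows mentioned in the statement.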