Abstract: The results of the present study suggest that neural encoding of speech sounds at the brainstem level may be mediated differently in good hearing aid performers than in poor hearing aid performers. Thus, it can be inferred that subtle physiological differences are evident at the auditory brainstem between persons who are willing to accept noise and those who are not.
“…The brainstem more generally has been demonstrated to encode acoustic features of speech (LeBel & D’Mello, 2023; Russo et al., 2004) and to play a supporting role in speech-in-noise processing (Bramhall et al., 2015; Shetty & Puttabasappa, 2017). This supporting process appears to be exclusive to challenges presented by masked speech, with auditory brainstem and inferior colliculus activity indicative of ‘cochlear gain’ assisting speech processing only when speech is degraded by external noise, rather than by intrinsic degradation such as vocoded speech (Hernández-Pérez et al., 2021).…”
Decoding affect information encoded within a vocally produced signal is a key part of daily communication. The acoustic channels that carry the affect information, however, are not uniformly distributed across a spectrotemporal space, meaning that natural listening environments with dynamic, competing noise may unpredictably obscure some spectrotemporal regions of the vocalisation, reducing the potential information available to the listener. In this study, we utilise behavioural and functional MRI investigations to first assess which spectrotemporal regions of a human vocalisation contribute to affect perception in the listener, and then use a reverse-correlation fMRI analysis to identify which structures underpin this perceptually challenging task when categorisation-relevant acoustic information is unmasked by noise. Our results show that, despite the challenging task and the non-uniformity of contributing spectral regions of affective vocalisations, a distributed network of (non-primary auditory) brain regions in the frontal cortex, basal ganglia, and lateral limbic regions supports affect processing in noise. Given the conditions for recruitment and the previously established functional contributions of these regions, we propose that this task is underpinned by a reciprocal network between frontal cortical regions and ventral limbic regions that assists in flexible adaptation and tuning to stimuli, while hippocampal and parahippocampal regions support the auditory system’s processing of the degraded auditory information via associative and contextual processing.
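The reverse-correlation logic summarised above can be made concrete with a short sketch. The following Python example is purely illustrative (a hypothetical "bubbles"-style analysis with simulated masks and responses, not the study's actual pipeline): per-trial noise masks reveal random spectrotemporal regions of the vocalisation, and the regions whose visibility co-varies with correct categorisation form a classification image.

```python
# Hypothetical sketch of spectrotemporal reverse correlation ("bubbles" style).
# Per-trial binary masks (freq x time) and correctness labels are simulated;
# all names, shapes, and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_freq, n_time = 500, 32, 64

# Random spectrotemporal masks: True where the vocalisation was unmasked by noise.
masks = rng.random((n_trials, n_freq, n_time)) < 0.15

# Simulated behaviour: trials succeed more often when a "diagnostic" region
# (here, low frequencies early in the call) happens to be unmasked.
diagnostic = np.zeros((n_freq, n_time), dtype=bool)
diagnostic[:8, :16] = True
evidence = masks[:, diagnostic].mean(axis=1)
correct = rng.random(n_trials) < 0.5 + 2.0 * evidence

# Classification image: mask regions that co-vary with correct categorisation.
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
print("peak diagnostic weight at (freq, time):",
      np.unravel_index(np.argmax(ci), ci.shape))
```

The same contrast (stimulus information present on correct versus incorrect trials) can in principle be applied voxel-wise to BOLD responses, which is the spirit of the reverse-correlation fMRI analysis the abstract describes.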
“…5 and 6). We thus extended prior findings on phoneme-evoked EEG response amplitude (PEA) in hearing-aid users (Shetty and Puttabasappa, 2017) to listeners with no or only mild hearing impairment and across a wide age range. Our results highlight the importance of temporal delays in neuroelectric responses for speech comprehension.…”
“…One candidate for a more precise method to measure brain correlates of speech comprehension is EEG in response to individual phonemes and series of phonemes, revealing neural responses at middle and longer latencies (>6 ms). It has been shown that the amplitude of speech EEG responses to syllables is a good predictor of speech comprehension in hearing aid users (Shetty and Puttabasappa, 2017).…”
The comprehension of phonemes is a fundamental component of speech processing which relies on both temporal fine structure (TFS) and temporal envelope (TE) coding. EEG amplitude in response to phonemes has been identified as an indicator of speech performance in hearing aid users. Presbyacusis may also alter neuro-electric responses to phonemes, even when hearing thresholds are minimally affected or unaffected. Elevated speech reception thresholds (SRT) in the absence of pure-tone threshold (PTT) elevation suggest central processing deficits. We therefore collected audiometric data (PTT, SRT) and EEG during passive listening in 80 subjects, ranging in age from 18 to 76 years. We confirm phoneme-evoked EEG response amplitude (PEA) as an indicator of speech comprehension. Specifically, PEA decreased with elevated SRT, elevated PTT, and increased age. As a novel observation, we report that the temporal delay of phoneme-evoked EEG responses (PED) increases with age and PTT. The absolute duration of PED, its correlation with age, and the lack of PEA lateralization, combined with the frequency of the phoneme stimuli used here, suggest a predominantly thalamic generator of phoneme-evoked EEG responses. Hearing loss at extended high frequencies affects PED more than PEA. In our sample, neural compensation for increased PTT came at the cost of decreased temporal processing speed. Most importantly, PED correlates with SRT and explains SRT variance in quiet and in ipsilateral noise that PTT cannot. PED was a better predictor of TFS coding in quiet and of TE coding in ipsilateral noise. As PED reflects both TFS and TE coding, thalamic activity may provide integrated information at the gate of the neocortex.

Significance Statement
Intact speech comprehension is essential for social participation, which protects against depression and dementia. Age-related hearing loss is a growing problem in aging societies, as hearing deficits constitute the third most important modifiable risk factor for cognitive decline. This work uses electrical brain responses to phonemes in a cohort covering ages 18 to 76 years. As the temporal delay of phoneme responses showed the most significant correlations with age and high-frequency thresholds, we demonstrate that the speed of neural processing seems essential for speech comprehension. The observed neural signals likely originate from the thalamus, which receives feedback from neocortex and is embedded in cognitive processing. Developing objective markers for speech processing is key for ensuring cognitive fitness in aging.
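The variance-partitioning claim above (PED explaining SRT variance that PTT cannot) corresponds to a hierarchical regression: fit SRT from PTT alone, then from PTT plus PED, and compare the explained variance. The following Python sketch uses simulated data; the variable names, units, and effect sizes are hypothetical assumptions, not values from the study.

```python
# Hypothetical sketch: does phoneme-evoked EEG delay (PED) explain
# speech-reception-threshold (SRT) variance beyond the pure-tone
# threshold (PTT)? Simulated data, not the study's analysis pipeline.
import numpy as np

rng = np.random.default_rng(1)
n = 80  # subjects, matching the cohort size reported above

ptt = rng.normal(15, 10, n)                        # dB HL, hypothetical
ped = 0.1 * ptt + rng.normal(8, 1.5, n)            # ms, partly PTT-related
srt = 0.3 * ptt + 2.0 * ped + rng.normal(0, 2, n)  # dB SNR, depends on both

def r_squared(predictors, y):
    """Ordinary least squares R^2 with an intercept column."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_ptt = r_squared([ptt], srt)
r2_both = r_squared([ptt, ped], srt)
print(f"R^2 (PTT only):  {r2_ptt:.2f}")
print(f"R^2 (PTT + PED): {r2_both:.2f}")
print(f"SRT variance explained by PED beyond PTT: {r2_both - r2_ptt:.2f}")
```

A positive increment in R^2 for the second model is the pattern the abstract reports: PED carries predictive information about SRT over and above PTT.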