Perceiving speech in noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding, but this has never been confirmed in humans and remains highly controversial. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention via active vs. passive listening scenarios and manipulated task difficulty via additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming that attentional gain extends to lower, subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and to characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show that attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
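The abstract does not name the directed connectivity estimator used to separate "bottom-up" from "top-down" signaling. One common family of approaches is Granger-style prediction: the influence of signal x on signal y is the extra predictability of y gained by adding x's past to y's own past. The sketch below is a minimal NumPy illustration on simulated stand-in waveforms (the variable names `bs` and `ctx` and the toy signals are hypothetical), not the authors' analysis pipeline.

```python
import numpy as np

def granger_strength(x, y, order=5):
    """Directed influence x -> y in a simple bivariate Granger sense.

    Returns log(var_restricted / var_full): values > 0 mean the past of
    x helps predict y beyond what y's own history already explains.
    """
    n = len(y)
    Y = y[order:]
    # Lagged design matrices: y's own past (restricted) and x's past (added)
    own = np.column_stack([y[order - k: n - k] for k in range(1, order + 1)])
    cross = np.column_stack([x[order - k: n - k] for k in range(1, order + 1)])

    def resid_var(X):
        X = np.column_stack([np.ones(len(Y)), X])   # intercept term
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return np.var(Y - X @ beta)

    return np.log(resid_var(own) / resid_var(np.column_stack([own, cross])))

# Toy example: a "cortical" signal that lags a "brainstem" signal by 3 samples
rng = np.random.default_rng(0)
bs = rng.standard_normal(2000)
ctx = np.roll(bs, 3) + 0.5 * rng.standard_normal(2000)

afferent = granger_strength(bs, ctx)   # bottom-up: strong by construction
efferent = granger_strength(ctx, bs)   # top-down: near zero here
print(afferent > efferent)
```

With real data, such strengths would be compared across active vs. passive conditions per direction; the asymmetry above simply demonstrates that the estimator recovers the direction built into the simulation.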
Age-related hearing loss leads to poorer speech comprehension, particularly in noise. Speech-in-noise (SIN) deficits among the elderly could result from weaker neural activity within, or poorer signal transmission between, brainstem and auditory cortices. By recording neuroelectric responses from brainstem (BS) and primary auditory cortex (PAC), we show that beyond simply attenuating neural activity, hearing loss in older adults compromises the transmission of speech information between subcortical and cortical hubs of the auditory system. The strength of afferent BS→PAC neural signaling (but not the reverse efferent flow, PAC→BS) varied with mild declines in hearing acuity, and this "bottom-up" functional connectivity robustly predicted older adults' SIN perception. Our neuroimaging findings underscore the importance of brain connectivity, particularly afferent neural communication, in understanding the biological basis of age-related hearing deficits in real-world listening environments.

Keywords: Aging; auditory evoked potentials; auditory cortex; frequency-following response (FFR); functional connectivity; source waveform analysis; neural speech processing

Importantly, besides hearing, the groups were otherwise matched in age (NH: 66.2±6.1 years, HL: 70.4±4.9 years; t2.22 = -2.05, p = 0.052) and gender balance (NH: 5/8 male/female; HL: 11/8; Fisher's exact test, p = 0.47). Age and hearing loss were not correlated in our sample (Pearson's r = 0.29, p = 0.10). Participants were compensated for their time and gave written informed consent in compliance with a protocol approved by the Baycrest Centre research ethics committee.
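The group-matching checks reported above (a t-test on age, Fisher's exact test on the male/female counts, and a Pearson correlation) can be sketched with `scipy.stats`. The age samples below are simulated from the reported group means and SDs, so the resulting statistics will not match the paper's values exactly; only the sex counts are taken directly from the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical age samples drawn to match the reported means/SDs
age_nh = rng.normal(66.2, 6.1, 13)   # NH: 5 male / 8 female -> n = 13
age_hl = rng.normal(70.4, 4.9, 19)   # HL: 11 male / 8 female -> n = 19

# Welch t-test on age (does not assume equal group variances)
t, p_age = stats.ttest_ind(age_nh, age_hl, equal_var=False)

# Fisher's exact test on the 2x2 male/female contingency table
odds, p_sex = stats.fisher_exact([[5, 8], [11, 8]])

print(f"age: t = {t:.2f}, p = {p_age:.3f}; sex balance: p = {p_sex:.2f}")
```

The Pearson correlation between age and hearing thresholds would follow the same pattern via `stats.pearsonr`, given per-participant audiometric averages.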
Stimuli and task

Three tokens from the standardized UCLA version of the Nonsense Syllable Test were used in this study (Dubno and Schaefer, 1992). These tokens were naturally produced English consonant-vowel phonemes (/ba/, /pa/, and /ta/), spoken by a female talker. Each phoneme was 100 ms in duration and matched in terms of average root-mean-square sound pressure level (SPL). Each had a common voice fundamental frequency (F0 = 150 Hz) and first and second formants (F1 = 885 Hz, F2 = 1389 Hz). This
Experimental evidence in animals demonstrates that cortical neurons innervate subcortex bilaterally to tune brainstem auditory coding. Yet, the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized that brainstem speech coding is tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay of brainstem-cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to their α states, low-α FFRs in noise were weaker, correlated positively with behavioral response times, were more "decodable" via classifiers, and were indirectly associated with other signal-in-noise perceptual performance. Our data provide evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to the ebb and flow of cortical states to dynamically update perceptual processing.
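To give a concrete sense of what "decoding" FFRs via classifiers can look like: the toy simulation below (not the authors' data or pipeline; all signal parameters are invented for illustration) generates F0-locked trials from two hypothetical α states that differ only in phase-locking strength, then classifies single trials from their spectra with scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
fs, f0, n_trials = 2000, 150, 200        # sampling rate, voice F0, trials/class
t = np.arange(0, 0.1, 1 / fs)            # 100-ms epochs, like the CV tokens

def simulate(gain):
    """Toy FFR trials: an F0-locked sinusoid of given gain buried in noise."""
    return np.array([gain * np.sin(2 * np.pi * f0 * t)
                     + rng.standard_normal(len(t)) for _ in range(n_trials)])

low_alpha = simulate(0.8)                # hypothetically stronger F0 coding
high_alpha = simulate(0.4)

# Spectral magnitudes as features; labels mark the simulated α state
X = np.abs(np.fft.rfft(np.vstack([low_alpha, high_alpha])))
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```

Above-chance accuracy here simply reflects the gain difference built into the simulation; with real FFRs, decodability of α-sorted trials is the evidence that cortical state modulates the brainstem representation.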