Objective: We investigated the neural basis of speech-in-noise perception in older adults. Hearing loss, the third most common chronic condition in older adults, most often manifests as difficulty understanding speech in background noise. This difficulty, which occurs even in individuals with normal hearing thresholds, may arise in part from age-related declines in central auditory processing of the temporal and spectral components of speech. We hypothesized that older adults with poorer speech-in-noise (SIN) perception show impairments in the subcortical representation of speech.

Design: In all participants (28 adults, ages 60 to 73 years), average hearing thresholds calculated from 500 to 4000 Hz were ≤ 25 dB HL. Participants were evaluated behaviorally with the Hearing in Noise Test (HINT) and neurophysiologically using speech-evoked auditory brainstem responses recorded in quiet and in background noise. Based on their HINT scores, participants were divided into top- and bottom-performing groups matched for audiometric thresholds and IQ. We compared brainstem responses between the two groups, specifically the average spectral magnitudes of the neural response and the degree to which background noise affected response morphology.

Results: In the quiet condition, the bottom SIN group had reduced neural representation of the fundamental frequency of the speech stimulus and an overall reduction in response magnitude. In the noise condition, the bottom SIN group showed greater effects of noise, which may reflect reduced neural synchrony. All physiologic measures correlated with SIN perception.

Conclusion: Adults in the bottom SIN group differed from the audiometrically matched top SIN group in how speech was neurally encoded. The strength of subcortical encoding of the fundamental frequency appears to be a factor in successful speech-in-noise perception in older adults. Given the limitations of amplification for improving central auditory processing, our results indicate the need to include auditory training in intervention plans for older adults with SIN perception difficulties.
The human superior temporal gyrus (STG) is critical for extracting meaningful linguistic features from speech input. Local neural populations are tuned to acoustic-phonetic features of all consonants and vowels and to dynamic cues for intonational pitch. These populations are embedded throughout broader functional zones that are sensitive to amplitude-based temporal cues. Beyond speech features, STG representations are strongly modulated by learned knowledge and perceptual goals. Currently, a major challenge is to understand how these features are integrated across space and time in the brain during natural speech comprehension. We present a theory that temporally recurrent connections within STG generate context-dependent phonological representations, spanning longer temporal sequences relevant for coherent percepts of syllables, words, and phrases.
Dual-systems models of visual category learning posit the existence of an explicit, hypothesis-testing 'reflective' system, as well as an implicit, procedural-based 'reflexive' system. The reflective and reflexive learning systems are competitive and neurally dissociable. Relatively little is known about the role of these domain-general learning systems in speech category learning. Given the multidimensional, redundant, and variable nature of acoustic cues in speech categories, our working hypothesis is that speech categories are learned reflexively. To this end, we examined the relative contribution of these learning systems to speech learning in adults. Native English speakers learned to categorize Mandarin tone categories over 480 trials. The training protocol involved trial-by-trial feedback and multiple talkers. Experiments 1 and 2 examined the effects of manipulating the timing (immediate vs. delayed) and information content (full vs. minimal) of feedback. Dual-systems models of visual category learning predict that delayed, informationally rich feedback enhances reflective learning, while immediate, minimally informative feedback enhances reflexive learning. Across the two experiments, our results show that feedback manipulations targeting reflexive learning enhanced category learning success. In Experiment 3, we examined the role of trial-to-trial talker information (mixed vs. blocked presentation) in speech category learning success. We hypothesized that the mixed condition would enhance reflexive learning by preventing an association between talker-related acoustic cues and speech categories. Our results show that the mixed talker condition led to relatively greater accuracy. Our experiments demonstrate that speech categories are optimally learned by training methods that target the reflexive learning system.