The noise-induced and age-related loss of synaptic connections between auditory-nerve fibers and cochlear hair cells is well-established from histopathology in several mammalian species; however, its prevalence in humans, as inferred from electrophysiological measures, remains controversial. Here we look for cochlear neuropathy in a temporal-bone study of "normal-aging" humans, using autopsy material from 20 subjects aged 0-89 yrs, with no history of otologic disease. Cochleas were immunostained to allow accurate quantification of surviving hair cells in the organ of Corti and peripheral axons of auditory-nerve fibers. Mean loss of outer hair cells was 30-40% throughout the audiometric frequency range (0.25-8.0 kHz) in subjects over 60 yrs, with even greater losses at both apical (low-frequency) and basal (high-frequency) ends. In contrast, mean inner hair cell loss across audiometric frequencies was rarely >15% at any age. Neural loss greatly exceeded inner hair cell loss, with 7/11 subjects over 60 yrs showing >60% loss of peripheral axons relative to the youngest subjects, and with the age-related slope of axonal loss outstripping the age-related loss of inner hair cells by almost 3:1. The results suggest that a large number of auditory neurons in the aging ear are disconnected from their hair cell targets. This primary neural degeneration would not affect the audiogram, but likely contributes to age-related hearing impairment, especially in noisy environments. Thus, therapies designed to regrow peripheral axons could provide clinically meaningful improvement in the aged ear.
In social settings, speech waveforms from nearby speakers mix together in our ear canals. Normally, the brain unmixes the attended speech stream from the chorus of background speakers using a combination of fast temporal processing and cognitive active listening mechanisms. Of >100,000 patient records, ~10% of adults visited our clinic because of reduced hearing, only to learn that their hearing was clinically normal and should not cause communication difficulties. We found that multi-talker speech intelligibility thresholds varied widely in normal-hearing adults, but could be predicted from neural phase-locking to frequency modulation (FM) cues measured with ear canal EEG recordings. Combining neural temporal fine structure processing, pupil-indexed listening effort, and behavioral FM thresholds accounted for 78% of the variability in multi-talker speech intelligibility. The disordered bottom-up and top-down markers of poor multi-talker speech perception identified here could inform the design of next-generation clinical tests for hidden hearing disorders.
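The claim above is a variance-explained result: several predictors, combined in one model, account for 78% of individual differences in speech intelligibility. As a minimal illustration of that kind of analysis (not the authors' code, and using entirely synthetic stand-in data), the following sketch fits an ordinary least-squares model with three predictors and reports R²:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # hypothetical number of listeners

# Synthetic stand-ins for the three predictor classes (not real study data):
fm_phase_locking = rng.normal(size=n)   # neural TFS / FM phase-locking strength
pupil_effort = rng.normal(size=n)       # pupil-indexed listening effort
fm_threshold = rng.normal(size=n)       # behavioral FM detection threshold

# Simulated speech-in-noise threshold driven by all three, plus residual noise:
srt = (1.0 * fm_phase_locking
       - 0.8 * pupil_effort
       + 0.6 * fm_threshold
       + rng.normal(scale=0.8, size=n))

# Ordinary least squares with an intercept column:
X = np.column_stack([np.ones(n), fm_phase_locking, pupil_effort, fm_threshold])
beta, *_ = np.linalg.lstsq(X, srt, rcond=None)

resid = srt - X @ beta
r_squared = 1 - resid.var() / srt.var()  # fraction of variance explained
```

With these made-up effect sizes the combined model recovers most of the variance, mirroring the structure (though not the data) of the multi-predictor result described above.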
Objectives: Permanent threshold elevation after noise exposure, ototoxic drugs, or aging is caused by loss of sensory cells; however, animal studies show that hair cell loss is often preceded by degeneration of synapses between sensory cells and auditory nerve fibers. The silencing of these neurons, especially those with high thresholds and low spontaneous rates, degrades auditory processing and may contribute to difficulties understanding speech in noise. Although cochlear synaptopathy can be diagnosed in animals by measuring suprathreshold auditory brainstem responses, its diagnosis in humans remains a challenge. In mice, cochlear synaptopathy is also correlated with measures of middle-ear muscle (MEM) reflex strength, possibly because the missing high-threshold neurons are important drivers of this reflex. We hypothesized that measures of the MEM reflex might be better than other assays of peripheral function in predicting difficulties hearing in difficult listening environments in human subjects. Design: We recruited 165 normal-hearing healthy subjects, between the ages of 18 and 63, with no history of ear or hearing problems, no history of neurologic disorders, and unremarkable otoscopic examinations. Word recognition in quiet and in difficult listening situations was measured in four ways: using isolated words from the NU-6 corpus with (a) a 0 dB signal-to-noise ratio, (b) 45% time compression with reverberation, or (c) 65% time compression with reverberation, and (d) with a modified version of the QuickSIN. Audiometric thresholds were assessed at standard and extended high frequencies (EHFs). Outer hair cell function was assessed by distortion product otoacoustic emissions (DPOAEs). Middle-ear function and reflexes were assessed using three methods: the acoustic reflex threshold as measured clinically, wideband tympanometry as measured clinically, and a custom wideband method that uses a pair of click probes flanking an ipsilateral noise elicitor.
Other aspects of peripheral auditory function were
Permanent threshold elevation after noise exposure or aging is caused by loss of sensory cells; however, animal studies show that hair cell loss is often preceded by degeneration of the synapses between sensory cells and auditory nerve fibers. Silencing these neurons is likely to degrade auditory processing and may contribute to difficulties understanding speech in noisy backgrounds. Reduction of suprathreshold ABR amplitudes can be used to quantify synaptopathy in inbred mice. However, ABR amplitudes are highly variable in humans and thus more challenging to use. Since noise-induced neuropathy preferentially targets fibers with high thresholds and low spontaneous rates, and because phase locking to temporal envelopes is particularly strong in these fibers, measuring envelope following responses (EFRs) might be a more robust measure of cochlear synaptopathy. A recent auditory model further suggests that modulation of carrier tones with rectangular envelopes should be less sensitive to cochlear amplifier dysfunction and therefore a better metric of cochlear neural damage than sinusoidal amplitude modulation. Here, we measure performance scores on a variety of difficult word-recognition tasks among listeners with normal audiograms and assess correlations with EFR magnitudes to rectangular vs. sinusoidal modulation. Higher harmonics of EFR magnitudes evoked by a rectangular-envelope stimulus were significantly correlated with word scores, whereas those evoked by sinusoidally modulated tones were not. These results support previous reports that individual differences in synaptopathy may be a source of speech recognition variability despite the presence of normal thresholds at standard audiometric frequencies.
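The contrast above between rectangular and sinusoidal envelopes rests on a basic spectral fact: a rectangular (square-wave) envelope has substantial energy at higher harmonics of the modulation frequency, while sinusoidal AM concentrates all envelope energy at the fundamental. As a hedged sketch (not the study's analysis pipeline, with arbitrary parameter choices), the following compares magnitudes at the first few harmonics for the two envelope shapes:

```python
import numpy as np

fs = 10_000              # sampling rate, Hz (illustrative choice)
fm = 100                 # modulation (envelope) frequency, Hz
t = np.arange(0, 1.0, 1 / fs)

# Two envelope shapes (unit depth), as would be applied to a carrier tone:
sam_env = 0.5 * (1 + np.sin(2 * np.pi * fm * t))           # sinusoidal AM
rect_env = (np.sin(2 * np.pi * fm * t) > 0).astype(float)  # rectangular envelope

def harmonic_magnitudes(x, fs, f0, n_harmonics=4):
    """Spectral magnitude of x at f0, 2*f0, ..., n_harmonics*f0."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return [spec[np.argmin(np.abs(freqs - k * f0))]
            for k in range(1, n_harmonics + 1)]

sam_h = harmonic_magnitudes(sam_env, fs, fm)
rect_h = harmonic_magnitudes(rect_env, fs, fm)
# sam_h: energy only at the fundamental (fm)
# rect_h: energy at fm plus odd harmonics (3*fm, ...), per the square-wave series
```

The odd-harmonic energy of the rectangular envelope is what makes "higher harmonics of EFR magnitudes" a meaningful measure for that stimulus but not for sinusoidal AM.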
In social settings, speech waveforms from nearby speakers mix together in our ear canals. The brain unmixes the attended speech stream from the chorus of background speakers using a combination of fast temporal processing and cognitive active listening mechanisms. Multi-talker speech perception is vulnerable to aging or auditory abuse. We found that ~10% of adult visitors to our clinic have no measurable hearing loss, yet offer a primary complaint of poor hearing. Multi-talker speech intelligibility in these adults was strongly correlated with neural phase locking to frequency modulation (FM) cues, as determined from ear canal EEG recordings. Combining neural temporal fine structure (TFS) processing with pupil-indexed measures of cognitive listening effort could predict most of the individual variance in speech intelligibility thresholds. These findings identify a confluence of disordered bottom-up and top-down processes that predict poor multi-talker speech perception and could be useful in next-generation tests of hidden hearing disorders.

Here, we apply parallel psychophysical and neurophysiological tests of sTFS processing in combination with physiological measures of effortful listening to converge on a set of neural biomarkers that identify poor multi-talker speech intelligibility in adults with clinically normal hearing.

Results

Many individuals seek medical care for poor hearing but have no evidence of hearing loss

We identified the first-visit records of English-speaking adult patients from the Mass. Eye and Ear audiology database over a 16-year period, with complete bilateral audiometric records at six octave frequencies from 250 Hz to 8000 Hz, according to the inclusion criteria in Figure 1A. Of the 106,787 patient records that met these criteria, we found that approximately one out of every five individuals had no clinical evidence of hearing loss, hearing loss being defined as thresholds > 20 dB HL at test frequencies up to 8 kHz (19,952, 19%; Figure 1B). The majority of these individuals were between 20 and 50 years old (Figure 1C) and had no conductive hearing impairment, nor focal threshold shifts or "notches" in their audiograms greater than 10 dB (Figure 1 - figure supplement 1A). Thresholds in the left and right ears were also symmetrical within 10 dB for >95% of these patients (Figure 1 - figure supplement 1B). Despite these clinically normal measures of hearing, 45% of these individuals presented to the clinic reporting a primary complaint of decreased hearing or hearing loss (Figure 1D). Absent any objective measure of hearing difficulty, these patients are typically informed that their hearing is "normal" and that they are not expected to experience communication problems.

Speech-in-noise intelligibility varies widely in individuals with clinically normal hearing