Noise-induced cochlear synaptopathy has been demonstrated in numerous rodent studies. In these animal models, the disorder is characterized by a reduction in the amplitude of wave I of the auditory brainstem response (ABR) to high-level stimuli, whereas the response at threshold is unaffected. The aim of the present study was to determine whether this disorder is prevalent in young adult humans with normal audiometric hearing. One hundred and twenty-six participants (75 female) aged 18–36 were tested. Participants had a wide range of lifetime noise exposures, as estimated by a structured interview. Audiometric thresholds did not differ across noise exposures up to 8 kHz, although 16-kHz audiometric thresholds were elevated with increasing noise exposure for females but not for males. ABRs were measured in response to high-pass (1.5 kHz) filtered clicks at 80 and 100 dB peSPL. Frequency-following responses (FFRs) were measured to 80 dB SPL pure tones from 240 to 285 Hz, and to 80 dB SPL 4-kHz pure tones amplitude modulated at frequencies from 240 to 285 Hz (transposed tones). The bandwidth of the ABR stimuli and the carrier frequency of the transposed tones were chosen to target the 3–6 kHz characteristic-frequency region usually associated with noise damage in humans. The results indicate no relation between noise exposure and the amplitude of the ABR. In particular, wave I of the ABR did not decrease with increasing noise exposure as predicted. ABR wave V latency increased with increasing noise exposure for the 80 dB peSPL click. High-carrier-frequency (envelope) FFR signal-to-noise ratios decreased as a function of noise exposure in males but not females. However, these correlations were not significant after the effects of age were controlled.
The results suggest either that noise-induced cochlear synaptopathy is not a significant problem in young, audiometrically normal adults, or that the ABR and FFR are relatively insensitive to this disorder in young humans, although it is possible that the effects become more pronounced with age.
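The transposed-tone stimuli described above (4-kHz carriers amplitude modulated at 240–285 Hz) can be sketched with one common construction: a low-frequency modulator is half-wave rectified, low-pass filtered to remove rectification harmonics, and imposed on the high-frequency carrier. This is a minimal illustration only; the exact sampling rate, filter order, and durations here are assumptions, not the parameters of the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def transposed_tone(fc, fm, dur, fs):
    """Transposed tone: a carrier at fc whose envelope follows the
    half-wave-rectified waveform of a tone at fm (one common construction)."""
    t = np.arange(int(dur * fs)) / fs
    mod = np.maximum(np.sin(2 * np.pi * fm * t), 0.0)  # half-wave rectify modulator
    b, a = butter(4, 0.2 * fc / (fs / 2))              # low-pass to limit modulator bandwidth
    mod = filtfilt(b, a, mod)                          # zero-phase filtering
    return mod * np.sin(2 * np.pi * fc * t)

# Illustrative values: 4-kHz carrier, 240-Hz modulation, 0.5 s at 48 kHz
x = transposed_tone(fc=4000, fm=240, dur=0.5, fs=48000)
```

Because the envelope rather than the fine structure carries the low-frequency timing information, such stimuli can evoke envelope-locked FFRs from a high characteristic-frequency cochlear region.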
Cochlear synaptopathy (or hidden hearing loss), due to noise exposure or aging, has been demonstrated in animal models using histological techniques. However, diagnosis of the condition in individual humans is problematic because of (a) test reliability and (b) lack of a gold standard validation measure. Wave I of the transient-evoked auditory brainstem response is a noninvasive electrophysiological measure of auditory nerve function and has been validated in the animal models. However, in humans, Wave I amplitude shows high variability both between and within individuals. The frequency-following response, a sustained evoked potential reflecting synchronous neural activity in the rostral brainstem, is potentially more robust than auditory brainstem response Wave I. However, the frequency-following response is a measure of central activity and may be dependent on individual differences in central processing. Psychophysical measures are also affected by intersubject variability in central processing. Differential measures may help to reduce intersubject variability due to unrelated factors. A measure can be compared, within an individual, between conditions that are affected differently by cochlear synaptopathy. Validation of the metrics is also an issue. Comparisons with animal models, computational modeling, auditory nerve imaging, and human temporal bone histology are all potential options for validation, but there are technical and practical hurdles and difficulties in interpretation. Despite the obstacles, a diagnostic test for hidden hearing loss is a worthwhile goal, with important implications for clinical practice and health surveillance.
"Masking release" (MR), the improvement of speech intelligibility in modulated compared with unmodulated maskers, is typically smaller than normal for hearing-impaired listeners. The extent to which this is due to reduced audibility or to suprathreshold processing deficits is unclear. Here, the effects of audibility were controlled by using stimuli restricted to the low- (≤1.5 kHz) or mid-frequency (1-3 kHz) region for normal-hearing listeners and hearing-impaired listeners with near-normal hearing in the tested region. Previous work suggests that the latter may have suprathreshold deficits. Both spectral and temporal MR were measured. Consonant identification was measured in quiet and in the presence of unmodulated, amplitude-modulated, and spectrally modulated noise at three signal-to-noise ratios (the same ratios for the two groups). For both frequency regions, consonant identification was poorer for the hearing-impaired than for the normal-hearing listeners in all conditions. The results suggest the presence of suprathreshold deficits for the hearing-impaired listeners, despite near-normal audiometric thresholds over the tested frequency regions. However, spectral MR and temporal MR were similar for the two groups. Thus, the suprathreshold deficits for the hearing-impaired group did not lead to reduced MR.
This is the first study to systematically investigate the clinical feasibility of speech-ABRs in terms of stimulus duration, background noise, and number of epochs. Speech-ABRs can be reliably recorded to the 40-msec [da] without compromising response quality, even when presented in background noise. Because fewer epochs were needed for the 40-msec [da], this would be the optimal stimulus for clinical use. Finally, given that there was no effect of consonant-vowel on speech-ABR peak latencies, there is no evidence that speech-ABRs are suitable for assessing auditory discrimination of the stimuli used.
Sensitivity to slow amplitude modulations is correlated with vowel and consonant perception in CI users. However, reduced sensitivity to slow modulations does not entirely explain the limited capacity of CI recipients to understand speech in noise.
Consonant-identification ability was examined in normal-hearing (NH) and hearing-impaired (HI) listeners in the presence of steady-state and 10-Hz square-wave interrupted speech-shaped noise. The Hilbert transform was used to process speech stimuli (16 consonants in a-C-a syllables) to present envelope cues, temporal fine-structure (TFS) cues, or envelope cues recovered from TFS speech. The performance of the HI listeners was inferior to that of the NH listeners both in terms of lower levels of performance in the baseline condition and in the need for higher signal-to-noise ratio to yield a given level of performance. For NH listeners, scores were higher in interrupted noise than in steady-state noise for all speech types (indicating substantial masking release). For HI listeners, masking release was typically observed for TFS and recovered-envelope speech but not for unprocessed and envelope speech. For both groups of listeners, TFS and recovered-envelope speech yielded similar levels of performance and consonant confusion patterns. The masking release observed for TFS and recovered-envelope speech may be related to level effects associated with the manner in which the TFS processing interacts with the interrupted noise signal, rather than to the contributions of TFS cues per se.
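The envelope/TFS decomposition described above is conventionally obtained via the Hilbert transform: the magnitude of the analytic signal gives the envelope, and the cosine of its phase gives a unit-amplitude fine-structure waveform. A minimal single-band sketch follows; the study's actual processing (band-splitting into channels before decomposition, and envelope recovery from TFS speech) is omitted, and the test signal here is an illustrative assumption.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_tfs(x):
    """Split a signal into its Hilbert envelope and temporal fine structure (TFS)."""
    analytic = hilbert(x)
    env = np.abs(analytic)            # slowly varying envelope
    tfs = np.cos(np.angle(analytic))  # unit-amplitude fine structure
    return env, tfs

# Illustrative test signal: a 1-kHz tone amplitude modulated at 10 Hz
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
x = (1 + 0.5 * np.sin(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 1000 * t)
env, tfs = envelope_tfs(x)
```

In vocoder-style speech processing, envelope speech is typically resynthesized by applying each band's envelope to a carrier, while TFS speech retains only the unit-amplitude fine structure in each band.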
Several previous studies have demonstrated the existence of an otolith-ocular reflex (OOR) in humans, although it is much less sensitive than the canal-ocular reflex. The present paper confirms these previous results. Nystagmic eye movements (L-nystagmus) appear in the seated subject during horizontal acceleration along the interaural axis in the dark, at an acceleration level (1 m/s²) about ten times the perception threshold, with a sensitivity of about 0.035 rad/m. When sinusoidal linear acceleration is combined with optokinetic stimulation, the recorded nystagmus slow-phase velocity exhibits strong periodic modulation related to subject motion. This marked effect of linear acceleration on the optokinetic nystagmus (OKN) appears at a level (0.1 m/s²) close to the acceleration perception threshold and has a four-fold higher sensitivity than L-nystagmus. Modulation of OKN can reach a peak-to-peak amplitude as great as 20 degrees/s for a given optokinetic field size; it increases with the velocity of the optokinetic stimulus, i.e. with the slow-phase eye velocity. In parallel with the changes in OKN slow-phase velocity, linear acceleration induces a motion-related decrease in the perceived velocity of the visual scene and modifications in self-motion perception. The results are interpreted in terms of a mathematical model of visual-vestibular interaction. They show that sensory interaction processes can magnify the otolithic system's contribution to the control of eye movements and provide a way of exploring its function at low levels of acceleration.