Otoacoustic emissions (OAEs) are useful for studying medial olivocochlear (MOC) efferents, but several unresolved methodological issues cloud the interpretation of the data they produce. Most efferent assays use a "probe stimulus" to produce an OAE and an "elicitor stimulus" to evoke efferent activity and thereby change the OAE. However, little attention has been given to whether the probe stimulus itself elicits efferent activity. In addition, most studies use only contralateral (re the probe) elicitors and do not include measurements to rule out middle-ear muscle (MEM) contractions. Here we describe methods to deal with these problems and present a new efferent assay based on stimulus-frequency OAEs (SFOAEs) that incorporates these methods. By using a postelicitor window, we make measurements in individual subjects of efferent effects from contralateral, ipsilateral, and bilateral elicitors. Using our SFOAE assay, we demonstrate that commonly used probe sounds (clicks, tone pips, and tone pairs) elicit efferent activity by themselves. Thus, results of efferent assays using these probe stimuli can be confounded by unwanted efferent activation. In contrast, the single 40 dB SPL tone used as the probe sound for SFOAE-based measurements evoked little or no efferent activity. Because they evoke efferent activation, clicks, tone pips, and tone pairs can be used in an adaptation efferent assay, but such paradigms are limited in measurement scope compared to paradigms that separate probe and elicitor stimuli. Finally, we describe tests to distinguish MEM effects from MOC effects for a number of OAE assays and show results from SFOAE-based tests. The SFOAE assay used in this study provides a sensitive, flexible, frequency-specific assay of medial efferent activation that uses a low-level probe sound eliciting little or no efferent activity, and thus provides results that can be interpreted without the confound of unintended efferent activation.
The time course of the human medial olivocochlear reflex (MOCR) was measured via its suppression of stimulus-frequency otoacoustic emissions (SFOAEs) in nine ears. MOCR effects were elicited by contralateral, ipsilateral, or bilateral wideband acoustic stimulation. As a first approximation, MOCR effects increased like a saturating exponential with a time constant of 277 ± 62 ms, and decayed exponentially with a time constant of 159 ± 54 ms. However, in the ears with the highest signal-to-noise ratios (4/9), onset time constants could be separated into "fast" (τ ≈ 70 ms), "medium" (τ ≈ 330 ms), and "slow" (τ ≈ 25 s) components, and the decay showed an overshoot like an under-damped sinusoid. Both the buildup and decay could be modeled by a second-order differential equation, and the differences between the buildup and decay could be accounted for by decreasing one coefficient by a factor of 2. The reflex onset and offset delays were both approximately 25 ms. Although changing elicitor level over a 20 dB SPL range produced a consistent systematic change in response amplitude, the time course did not show a consistent dependence on elicitor level, nor did the time courses of ipsilaterally, contralaterally, and bilaterally activated MOCR responses differ significantly. Given the MOCR's time course, it is best suited to operate on acoustic changes that persist for hundreds of milliseconds.
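The first-approximation time course described above can be sketched numerically. The following is a minimal illustration, assuming unit steady-state suppression, the reported mean time constants (277 ms onset, 159 ms offset), and a single 25 ms delay applied to both edges; the multi-component fast/medium/slow behavior and decay overshoot are not modeled here.

```python
import numpy as np

# First-approximation MOCR time course: saturating-exponential onset
# followed by exponential decay (mean time constants from the text).
TAU_ON = 0.277   # onset time constant, s
TAU_OFF = 0.159  # offset time constant, s
DELAY = 0.025    # reflex onset/offset delay, s

def mocr_response(t, elicitor_off, amplitude=1.0):
    """Normalized MOCR effect at time t (s); the elicitor starts at t = 0
    and turns off at elicitor_off (s). The same delay lags both edges."""
    t_on = t - DELAY
    if t_on <= 0:
        return 0.0                       # response has not started yet
    if t <= elicitor_off + DELAY:
        # saturating-exponential buildup while the elicitor is effective
        return amplitude * (1.0 - np.exp(-t_on / TAU_ON))
    # level reached when the (delayed) elicitor offset takes effect,
    # followed by simple exponential decay
    level = amplitude * (1.0 - np.exp(-elicitor_off / TAU_ON))
    return level * np.exp(-(t - elicitor_off - DELAY) / TAU_OFF)

# After ~4 onset time constants the effect is close to saturation
print(mocr_response(1.2, elicitor_off=1.5))
```

A second-order (mass-spring-damper-like) formulation, as used in the study, would additionally reproduce the under-damped overshoot in the decay; the first-order sketch above captures only the dominant exponential behavior.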
A clinical test for the strength of the medial olivocochlear reflex (MOCR) might be valuable as a predictor of individuals at risk for acoustic trauma or for explaining why some people have trouble understanding speech in noise. A first step in developing a clinical test for MOCR strength is to determine the range and variation of MOCR strength in a research setting. A measure of MOCR strength near 1 kHz was made across a normal-hearing population (N=25) by monitoring stimulus-frequency otoacoustic emissions (SFOAEs) while activating the MOCR with 60 dB SPL wideband contralateral noise. Statistically significant MOCR effects were measured in all 25 subjects, but not all SFOAE frequencies tested produced significant effects within the time allotted. To get a metric of MOCR strength, MOCR-induced changes in SFOAEs were normalized by the SFOAE amplitude obtained by two-tone suppression. We found this "normalized MOCR effect" varied across frequency and time within the same subject, sometimes with significant differences between measurements made as little as 40 Hz apart or as little as a few minutes apart. Averaging several single-frequency measures spanning 200 Hz in each subject reduced the frequency- and time-dependent variations enough to produce correlated measures indicative of the true MOCR strength near 1 kHz for each subject. The distribution of MOCR strengths, in terms of SFOAE suppression near 1 kHz, across our normal-hearing subject pool was reasonably approximated by a normal distribution with mean suppression of approximately 35% and standard deviation of approximately 12%. The range of MOCR strengths spanned a factor of 4, suggesting that whatever function the MOCR plays in hearing (e.g., enhancing signal detection in noise, reducing acoustic trauma), different people will have corresponding differences in their abilities to perform that function.
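The normalization and averaging steps above can be illustrated with a short sketch. All numeric values below are hypothetical; the sketch assumes the MOCR-induced change and the two-tone-suppression SFOAE amplitude are available as magnitudes at each probe frequency.

```python
import numpy as np

def normalized_mocr_effect(delta_sfoae, sfoae_amplitude):
    """Single-frequency metric: magnitude of the MOCR-induced SFOAE
    change, divided by the SFOAE amplitude from two-tone suppression."""
    return abs(delta_sfoae) / abs(sfoae_amplitude)

# Several single-frequency measures spanning ~200 Hz near 1 kHz
# (magnitudes in arbitrary pressure units; values are illustrative):
freqs_hz = [900, 940, 980, 1020, 1060, 1100]
delta =    [0.9, 1.4, 0.7, 1.1, 1.3, 0.8]   # MOCR-induced change
sfoae =    [3.0, 3.5, 2.8, 3.2, 3.6, 2.9]   # two-tone-suppression SFOAE

# Averaging across nearby frequencies reduces the frequency- and
# time-dependent variation in the single-frequency metric:
per_freq = [normalized_mocr_effect(d, s) for d, s in zip(delta, sfoae)]
subject_strength = float(np.mean(per_freq))
print(f"MOCR strength near 1 kHz: {subject_strength:.0%}")
```

In this illustration the averaged value falls near the study's reported population mean of roughly 35% suppression, but the inputs are invented for the example only.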
Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted the device-related issues by measuring sound localization performance of 11 NH listeners, listening to free-field stimuli processed by a real-time CI vocoder. The use of a real-time vocoder is a new approach, which enables testing in a free-field environment. For the NH listening condition, all listeners accurately and precisely localized sounds according to a linear stimulus–response relationship with an optimal gain and a minimal bias both in the azimuth and in the elevation directions. In contrast, when listening with bilateral real-time vocoders, listeners tended to orient either to the left or to the right in azimuth and were unable to determine sound source elevation. When listening with an NH ear and a unilateral vocoder, localization was impoverished on the vocoder side but improved toward the NH side. Localization performance was also reflected by systematic variations in reaction times across listening conditions. We conclude that perturbation of interaural temporal cues, reduction of interaural level cues, and removal of spectral pinna cues by the vocoder impairs sound localization. Listeners seem to ignore cues that were made unreliable by the vocoder, leading to acute reweighting of available localization cues. We discuss how current CI processors prevent CI users from localizing sounds in everyday environments.
Measurements of otoacoustic emission (OAE) magnitude are often made at low signal/noise ratios (SNRs), where measurement noise generates bias and variability errors that have led to the misinterpretation of OAE data. To gain an understanding of these errors and their effects, a two-part investigation was carried out. First, the nature of OAE measurement noise was investigated using human data from 50 stimulus-frequency OAE experiments involving medial olivocochlear reflex (MOCR) activation. The noise was found to be reasonably approximated by circular Gaussian noise. Furthermore, when bias errors were taken into account, measurement variability was not found to be affected by MOCR activation, as had been previously reported. Second, to quantify the errors circular Gaussian noise produces for different methods of OAE magnitude estimation for distortion-product, stimulus-frequency, and spontaneous OAEs, simulated OAE measurements were analyzed via four different magnitude estimation methods and compared. At low SNRs (below -6 dB), estimators involving Rice probability density functions produced less biased estimates of OAE magnitudes than conventional estimation methods, and less total rms error, particularly for spontaneous OAEs. They also enabled the calculation of probability density functions for OAE magnitudes from experimental data.
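The magnitude bias that circular Gaussian noise produces at low SNR can be demonstrated with a short simulation. This is a sketch only: it contrasts a naive magnitude-averaging estimator (whose samples follow a Rice distribution) with a simple coherent complex average; the Rice-PDF-based estimators described in the study are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# True OAE: a complex phasor of magnitude 1. Measurement noise is
# circular Gaussian: independent Gaussian real and imaginary parts.
true_mag = 1.0
snr_db = -6.0
# Per-component sigma such that signal power / noise power = 10^(snr/10)
noise_sigma = true_mag / (np.sqrt(2) * 10 ** (snr_db / 20))

n = 100_000
noise = noise_sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
measured = true_mag + noise

# Naive estimator: average the measured magnitudes. These magnitudes are
# Rice-distributed, so at low SNR this badly overestimates the true
# magnitude (noise power adds to signal power on average).
naive = np.abs(measured).mean()
print(f"true = {true_mag}, naive magnitude average = {naive:.3f}")

# Averaging the complex values first (coherent averaging) lets the
# zero-mean noise cancel, recovering the true magnitude:
coherent = abs(measured.mean())
print(f"coherent estimate = {coherent:.3f}")
```

At -6 dB SNR the naive average lands near twice the true magnitude, which is the kind of bias error the abstract warns can lead to misinterpretation of OAE data.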
Determining a cochlear implant user's electrical stimulation levels is often a difficult and time-consuming task because they are normally determined behaviorally - a particular challenge when dealing with pediatric patients. The evoked stapedius reflex threshold and the evoked compound action potential have already been shown to provide reasonable estimates of the comfort (C) and threshold (T) levels, although these estimates tend to overestimate the C- and T-levels. The aim of this study was to investigate whether the evoked auditory brainstem response (eABR) can also be used to reliably estimate a patient's C- and T-levels. The correlation between eABR detection thresholds and behaviorally measured perceptual thresholds was statistically significant (r = 0.71; P < 0.001). In addition, eABR Wave-V amplitude increased with increasing stimulation level for the three loudness levels tested. These results show that the eABR detection threshold can be used to estimate a patient's T-levels. In addition, Wave-V amplitude could provide a method for estimating C-levels in the future. The eABR objective measure may provide a useful cochlear implant fitting method - particularly for pediatric patients.
We present the first portable, binaural, real-time research platform compatible with Oticon Medical SP and XP generation cochlear implants. The platform consists of (a) a pair of behind-the-ear devices, each containing front and rear calibrated microphones, (b) a four-channel USB analog-to-digital converter, (c) real-time PC-based sound processing software called the Master Hearing Aid, and (d) USB-connected hardware and output coils capable of driving two implants simultaneously. The platform is capable of processing signals from the four microphones simultaneously and producing synchronized binaural cochlear implant outputs that drive two (bilaterally implanted) SP or XP implants. Both audio signal preprocessing algorithms (such as binaural beamforming) and novel binaural stimulation strategies (within the implant limitations) can be programmed by researchers. When the whole research platform is combined with Oticon Medical SP implants, interaural electrode timing can be controlled on individual electrodes to within ±1 µs and interaural electrode energy differences can be controlled to within ±2%. Hence, this new platform is particularly well suited to performing experiments on interaural time differences in combination with interaural level differences in real time. The platform also supports instantaneously variable stimulation rates and thereby enables investigations such as the effect of changing the stimulation rate on pitch perception. Because the processing can be changed on the fly, researchers can use this platform to study perceptual changes resulting from different processing strategies acutely.
PRS together with the CAPT provides a sensitive measure for in situ speech perception testing within the classroom. Vocabulary age has a large effect on a child's ability to perceive the speech signal. SFA leads to improved speech perception when the speech signal has been degraded by poor acoustics or background noise, and has a particularly large effect for children with lower vocabulary ages.