This paper tests key predictions of the "two-mechanism model" for the generation of distortion-product otoacoustic emissions (DPOAEs). The two-mechanism model asserts that lower-sideband DPOAEs constitute a mixture of emissions arising not simply at two distinct cochlear locations (as is now well established) but, more importantly, via two fundamentally different mechanisms: nonlinear distortion induced by the traveling wave and linear coherent reflection off pre-existing micromechanical impedance perturbations. The model predicts that (1) DPOAEs evoked by frequency-scaled stimuli (e.g., at fixed f2/f1) can be unmixed into putative distortion- and reflection-source components whose phases vary with frequency in ways consistent with the presumed mechanisms of generation; and (2) the putative reflection-source component of the total DPOAE closely matches the reflection-source emission (e.g., the low-level stimulus-frequency emission) measured at the same frequency under similar conditions. These predictions were tested by unmixing DPOAEs into components using two completely different methods: (a) selective suppression of the putative reflection source using a third tone near the distortion-product frequency and (b) spectral smoothing (or, equivalently, time-domain windowing). Although the two methods unmix in very different ways, they yield similar DPOAE components. The properties of the two DPOAE components are consistent with the predictions of the two-mechanism model.
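The spectral-smoothing (time-domain windowing) unmixing described above can be sketched in a few lines. In this minimal illustration, all amplitudes and latencies are assumed purely for demonstration: the measured DPOAE spectrum is modeled as a short-latency distortion component plus a long-latency reflection component, and windowing in the latency (inverse-FFT) domain isolates the distortion part.

```python
import numpy as np

# Sketch of unmixing by spectral smoothing / time-domain windowing.
# The DPOAE spectrum is modeled as a short-latency distortion component
# plus a long-latency reflection component (amplitudes and latencies
# assumed for illustration only).

n, df = 512, 2.0
f = 1000.0 + np.arange(n) * df               # DP frequency axis (Hz)

# Latencies chosen on the latency grid (multiples of 1/(n*df)) so the
# toy separation is exact.
tau_d = 1.0 / (n * df)                       # ~1 ms "distortion" latency
tau_r = 10.0 / (n * df)                      # ~10 ms "reflection" latency
distortion = 1.0 * np.exp(-2j * np.pi * f * tau_d)
reflection = 0.5 * np.exp(-2j * np.pi * f * tau_r)
total = distortion + reflection

# Transform the complex spectrum to the latency domain ...
latency_response = np.fft.ifft(total)
t = np.fft.fftfreq(n, d=df)                  # latency axis (seconds)

# ... and keep only short latencies (|t| < 5 ms) to isolate the
# distortion-source component.
unmixed = np.fft.fft(latency_response * (np.abs(t) < 0.005))

print(np.max(np.abs(unmixed - distortion)))  # ~0: components separated
```

Windowing in the latency domain is equivalent to convolving (smoothing) the ear-canal spectrum with the window's transform, which is why the two descriptions in the abstract are interchangeable.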
Kalluri R, Xue J, Eatock RA. Ion channels set spike timing regularity of mammalian vestibular afferent neurons. J Neurophysiol 104: 2034-2051, 2010. First published July 21, 2010 doi:10.1152/jn.00396.2010. In the mammalian vestibular nerve, some afferents have highly irregular interspike intervals and others have highly regular intervals. To investigate whether spike timing is determined by the afferents' ion channels, we studied spiking activity in their cell bodies, isolated from the vestibular ganglia of young rats. Whole cell recordings were made with the perforated-patch method. As previously reported, depolarizing current steps revealed distinct firing patterns. Transient neurons fired one or two onset spikes, independent of current level. Sustained neurons were more heterogeneous, firing either trains of spikes or a spike followed by large voltage oscillations. We show that the firing pattern categories are robust, occurring at different temperatures and ages, both in mice and in rats. A difference in average resting potential did not cause the difference in firing patterns, but contributed to differences in afterhyperpolarizations. A low-voltage-activated potassium current (I_LV) was previously implicated in the transient firing pattern. We show that I_LV grew from the first to second postnatal week and by the second week comprised Kv1 and Kv7 (KCNQ) components. Blocking I_LV converted step-evoked firing patterns from transient to sustained. Separated from their normal synaptic inputs, the neurons did not spike spontaneously. To test whether the firing-pattern categories might correspond to afferent populations of different regularity, we injected simulated excitatory postsynaptic currents at pseudorandom intervals. Sustained neurons responded to a given pattern of input with more regular firing than did transient neurons. Pharmacological block of I_LV made firing more regular.
Thus ion channel differences that produce transient and sustained firing patterns in response to depolarizing current steps can also produce irregular and regular spike timing.
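Firing regularity of the kind discussed above is conventionally quantified by the coefficient of variation (CV) of the interspike intervals: low CV corresponds to regular afferents, high CV to irregular ones. A minimal sketch with synthetic spike trains (the interval statistics are assumed for illustration, not taken from the paper):

```python
import numpy as np

# Quantifying spike-timing regularity with the coefficient of variation
# (CV) of interspike intervals: regular afferents -> low CV, irregular
# afferents -> high CV. Spike trains here are synthetic.

def isi_cv(spike_times):
    """CV = std / mean of the interspike intervals."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()

rng = np.random.default_rng(0)

# A "regular" train: near-constant 10 ms intervals with small jitter.
regular = np.cumsum(rng.normal(0.010, 0.0005, size=500))

# An "irregular" train: exponentially distributed (Poisson-like) intervals.
irregular = np.cumsum(rng.exponential(0.010, size=500))

print(f"regular CV:   {isi_cv(regular):.3f}")   # ~0.05
print(f"irregular CV: {isi_cv(irregular):.3f}")  # ~1.0
```

In an experiment like the one described, the same metric would be applied to spike times evoked by the pseudorandom simulated EPSC trains, with and without I_LV block.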
Otoacoustic emissions (OAEs) evoked by broadband clicks and by single tones are widely regarded as originating via different mechanisms within the cochlea. Whereas the properties of stimulus-frequency OAEs (SFOAEs) evoked by tones are consistent with an origin via linear mechanisms involving coherent wave scattering by preexisting perturbations in the mechanics, OAEs evoked by broadband clicks (CEOAEs) have been suggested to originate via nonlinear interactions among the different frequency components of the stimulus (e.g., intermodulation distortion). The experiments reported here test for bandwidth-dependent differences in mechanisms of OAE generation. Click-evoked and stimulus-frequency OAE input/output transfer functions were obtained and compared as a function of stimulus frequency and intensity. At low and moderate intensities human CEOAE and SFOAE transfer functions are nearly identical. When stimulus intensity is measured in "bandwidth-compensated" sound-pressure level (cSPL), CEOAE and SFOAE transfer functions have equivalent growth functions at fixed frequency and equivalent spectral characteristics at fixed intensity. This equivalence suggests that CEOAEs and SFOAEs are generated by the same mechanism. Although CEOAEs and SFOAEs are known by different names because of the different stimuli used to evoke them, the two OAE "types" are evidently best understood as members of the same emission family.
Frequency selectivity in the inner ear is fundamental to hearing and is traditionally thought to be similar across mammals. Although direct measurements are not possible in humans, estimates of frequency tuning based on noninvasive recordings of sound evoked from the cochlea (otoacoustic emissions) have suggested substantially sharper tuning in humans but remain controversial. We report measurements of frequency tuning in macaque monkeys, Old World primates phylogenetically closer to humans than the laboratory animals often taken as models of human hearing (e.g., cats, guinea pigs, chinchillas). We find that measurements of tuning obtained directly from individual auditory-nerve fibers and indirectly using otoacoustic emissions both indicate that at characteristic frequencies above about 500 Hz, peripheral frequency selectivity in macaques is significantly sharper than in these common laboratory animals, matching that inferred for humans above 4-5 kHz. Compared with the macaque, the human otoacoustic estimates thus appear neither prohibitively sharp nor exceptional. Our results validate the use of otoacoustic emissions for noninvasive measurement of cochlear tuning and corroborate the finding of sharp tuning in humans. The results have important implications for understanding the mechanical and neural coding of sound in the human cochlea, and thus for developing strategies to compensate for the degradation of tuning in the hearing-impaired.
Keywords: auditory filters | comparative hearing
Sound waveforms consist of pressure fluctuations in time and space. In the process of transducing mechanical vibrations into neural signals, the cochlea performs a mechanical frequency analysis that decomposes sounds into constituent frequencies (1, 2). The frequency tuning of the cochlear filters plays a critical role in the ability to distinguish and segregate different sounds perceptually.
For example, sounds that radiate from different sources superpose in the air, and are thus "mixed up" before striking the eardrums. Based on the output of the cochlear filters, and by comparing responses from the two ears, the nervous system is capable of disentangling the various sounds, grouping related frequency components to identify auditory objects and localize their sources in space (3). The critical role of peripheral frequency selectivity is perhaps best illustrated by the consequences of damage to the inner ear, which typically leads to a degradation of the cochlear filters. The loss of sharp filtering results in an impaired ability to detect signals in noise and to separate different sounds (4). Frequency selectivity is therefore crucial to everyday human communication. The study of the cochlea is hampered by its fragility and inaccessibility. Direct measurements of mechanical or neural frequency tuning in healthy cochleae are only possible in laboratory animals. To date, measurements of the mechanical vibration of the cochlea's basilar membrane have been largely restricted to the basal high-frequency end of the cochlea, where surgical access…
The mechanoreceptive sensory hair cells in the inner ear are selectively vulnerable to numerous genetic and environmental insults. In mammals, hair cells lack regenerative capacity, and their death leads to permanent hearing loss and vestibular dysfunction. Their paucity and inaccessibility have limited the search for otoprotective and regenerative strategies. Growing hair cells in vitro would provide a route to overcome this experimental bottleneck. We report a combination of four transcription factors (Six1, Atoh1, Pou4f3, and Gfi1) that can convert mouse embryonic fibroblasts, adult tail-tip fibroblasts, and postnatal supporting cells into induced hair cell-like cells (iHCs). iHCs exhibit hair cell-like morphology, transcriptomic and epigenetic profiles, electrophysiological properties, mechanosensory channel expression, and vulnerability to ototoxin in a high-content phenotypic screening system. Thus, direct reprogramming provides a platform to identify causes and treatments for hair cell loss, and may help identify future gene therapy approaches for restoring hearing.
Stimulus-frequency otoacoustic emissions (SFOAEs) have been measured in several different ways, including (1) nonlinear compression, (2) two-tone suppression, and (3) spectral smoothing. Each of the three methods exploits a different cochlear phenomenon or signal-processing technique to extract the emission. The compression method makes use of the compressive growth of emission amplitude relative to the linear growth of the stimulus. The emission is defined as the complex difference between ear-canal pressure measured at one intensity and the rescaled pressure measured at a higher intensity for which the emission is presumed negligible. The suppression method defines the SFOAE as the complex difference between the ear-canal pressure measured with and without a suppressor tone at a nearby frequency. The suppressor tone is presumed to substantially reduce or eliminate the emission. The spectral smoothing method involves convolving the complex ear-canal pressure spectrum with a smoothing function. The analysis exploits the differing latencies of stimulus and emission and is equivalent to windowing in the corresponding latency domain. Although the three methods are generally assumed to yield identical emissions, no equivalence has ever been established. This paper compares human SFOAEs measured with the three methods using procedures that control for temporal drifts, contamination of the calibration by evoked emissions, and other potential confounds. At low stimulus intensities, SFOAEs measured using all three methods are nearly identical. At higher intensities, limitations of the procedures contribute to small differences, although the general spectral shape and phase of the three SFOAEs remain similar. The near equivalence of SFOAEs measured by compression, suppression, and spectral smoothing indicates that SFOAE characteristics are not mere artifacts of measurement methodology.
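The compression and suppression definitions above can be illustrated with a toy model. Everything numerical here is assumed for demonstration (a linear stimulus path H, a small emission growing compressively as A**0.3); only the subtraction logic mirrors the methods described.

```python
import numpy as np

# Toy model (assumed): ear-canal pressure at stimulus amplitude A is
# P(A) = A*H + E(A), with a linear stimulus path H and an emission E(A)
# that grows compressively. Both extraction methods recover E by
# complex subtraction of two pressure measurements.

H = 1.0 * np.exp(1j * 0.2)                   # linear stimulus transfer (assumed)

def emission(A):
    return 0.05 * A**0.3 * np.exp(-1j * 1.5)  # compressive emission (assumed)

def pressure(A, suppressed=False):
    return A * H + (0 if suppressed else emission(A))

A_low, A_high = 1.0, 100.0

# Suppression method: the suppressor tone removes the emission, so the
# with/without difference is the emission itself.
sfoae_supp = pressure(A_low) - pressure(A_low, suppressed=True)

# Compression method: rescale the high-level response (where the emission
# is relatively negligible) and subtract; the residual approximates E.
sfoae_comp = pressure(A_low) - (A_low / A_high) * pressure(A_high)

print(abs(sfoae_supp - emission(A_low)))     # 0: exact in this toy model
print(abs(sfoae_comp - emission(A_low)))     # small residual from rescaled E(A_high)
```

The small compression-method residual, the rescaled high-level emission, is one reason the paper finds larger differences among methods at higher intensities, where that term is no longer negligible.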
Although stimulus-frequency otoacoustic emissions (SFOAEs) offer compelling advantages as noninvasive probes of cochlear function, they remain underutilized compared to other evoked emission types, such as distortion products (DPOAEs), whose measurement methods are less complex and time-consuming. Motivated by similar advances in the measurement of DPOAEs, this paper develops and characterizes a more efficient SFOAE measurement paradigm based on swept tones. In contrast to standard SFOAE measurement methods, in which the emissions are measured in the sinusoidal steady state using discrete tones of well-defined frequency, the swept-tone method sweeps rapidly across frequency (typically at rates of 1 Hz/ms or greater) using a chirp-like stimulus. Measurements obtained using both swept- and discrete-tone methods in an interleaved suppression paradigm demonstrate that the two methods of measuring SFOAEs yield nearly equivalent results, the differences between them being comparable to the run-to-run variability encountered using either method alone. The match appears robust to variations in measurement parameters, such as sweep rate and direction. The near equivalence of the SFOAEs obtained using the two measurement methods enables the interpretation of swept-tone SFOAEs within existing theoretical frameworks. Furthermore, the data demonstrate that SFOAE phase-gradient delays, including their large and irregular fluctuations across frequency, reflect actual physical time delays at different frequencies, showing that the physical emission latency, not merely the phase gradient, is inherently irregular.
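The phase-gradient delay discussed above is the negative slope of emission phase versus frequency divided by 2*pi. A minimal sketch on a simulated SFOAE with a known latency (the 8 ms value is assumed purely for illustration):

```python
import numpy as np

# Estimating the phase-gradient delay tau(f) = -(1/(2*pi)) * dphi/df
# from a complex emission spectrum. The SFOAE is simulated with a known
# 8 ms latency so the estimate can be checked.

f = np.linspace(1000.0, 2000.0, 1001)        # frequency axis (Hz)
tau_true = 0.008                             # assumed emission latency (s)
sfoae = 0.1 * np.exp(-2j * np.pi * f * tau_true)

phi = np.unwrap(np.angle(sfoae))             # unwrapped phase (radians)
tau_est = -np.gradient(phi, f) / (2.0 * np.pi)

print(f"mean delay estimate: {tau_est.mean() * 1e3:.2f} ms")  # -> 8.00 ms
```

For real data the frequency step must be fine enough that the phase advances by less than half a cycle per step; otherwise np.unwrap cannot recover the true phase slope, and the delay estimate aliases.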
Summary
Rodent vestibular afferent neurons offer several advantages as a model system for investigating the significance and origins of regularity in neuronal firing intervals. Their regularity has a bimodal distribution that defines regular and irregular afferent classes. Factors likely to be involved in setting firing regularity include the morphology and physiology of the afferents' contacts with hair cells, which may influence the averaging of synaptic noise and the afferents' intrinsic electrical properties. In vitro patch clamp studies on the cell bodies of primary vestibular afferents reveal a rich diversity of ion channels, with indications of at least two neuronal populations. Here we suggest that firing patterns of isolated vestibular ganglion somata reflect intrinsic ion channel properties, which in vivo combine with hair cell synaptic drive to produce regular and irregular firing.