Objectives-The goal of this study was to evaluate the ability of a threshold measure, made with a restricted electrode configuration, to identify channels exhibiting relatively poor spatial selectivity. With a restricted electrode configuration, channel-to-channel variability in threshold may reflect variations in the interface between the electrodes and auditory neurons (i.e., nerve survival, electrode placement, tissue impedance). These variations in the electrode-neuron interface should also be reflected in psychophysical tuning curve measurements. Specifically, it is hypothesized that high single-channel thresholds obtained with the spatially focused partial tripolar electrode configuration are predictive of wide or tip-shifted psychophysical tuning curves. Design-Data were collected from five cochlear implant listeners implanted with the HiRes 90k cochlear implant (Advanced Bionics). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the partial tripolar configuration, for which a fraction of current (σ) from a center active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. Forward-masked psychophysical tuning curves were obtained for channels with the highest, lowest, and median tripolar (σ=1 or 0.9) thresholds. The probe channel and level were fixed and presented with either the monopolar (σ=0) or a more focused partial tripolar (σ ≥ 0.55) configuration. The masker channel and level were varied while the configuration was fixed to σ = 0.5. A standard, three-interval, two-alternative forced choice procedure was used for thresholds and masked levels. Results-Single-channel threshold and variability in threshold across channels systematically increased as the compensating current, σ, increased and the presumed electrical field became more focused. 
Across subjects, channels with the highest single-channel thresholds, when measured with a narrow, partial tripolar stimulus, had significantly broader psychophysical tuning curves than the lowest threshold channels. In two subjects, the tips of the tuning curves were shifted away from the probe channel. Tuning curves were also wider for monopolar probes than for partial tripolar probes, for both the highest and lowest threshold channels. Conclusions-These results suggest that single-channel thresholds measured with a restricted stimulus can be used to identify cochlear implant channels with poor spatial selectivity. Channels having wide or tip-shifted tuning characteristics would likely not deliver the appropriate spectral information to the intended auditory neurons, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception. Correspondence to: Julie Arenberg
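The partial tripolar current split described above (a fraction σ of the active electrode's current returned through the two neighboring electrodes, the remainder through a distant indifferent electrode) can be written out directly. The sketch below is illustrative only; the function name and the equal division of the compensating current between the two flanking electrodes are assumptions, not details taken from the study.

```python
def partial_tripolar_currents(i_center, sigma):
    """Return-current distribution for a partial tripolar configuration.

    A fraction `sigma` of the center electrode's current returns through
    the two flanking electrodes (assumed split equally); the remaining
    fraction (1 - sigma) returns through a distant indifferent electrode.
    sigma = 0 is monopolar; sigma = 1 is fully tripolar.
    """
    if not 0.0 <= sigma <= 1.0:
        raise ValueError("sigma must lie in [0, 1]")
    i_flank_each = -sigma * i_center / 2.0   # each of the two neighbors
    i_distant = -(1.0 - sigma) * i_center    # distant return electrode
    return {"center": i_center, "flank_each": i_flank_each, "distant": i_distant}
```

By construction the currents sum to zero, so the configuration is charge-balanced at every value of σ.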
Objectives The goal of this study was to compare cochlear implant behavioral measures and electrically evoked auditory brainstem responses (EABRs) obtained with a spatially focused electrode configuration. It has been shown previously that channels with high thresholds, when measured with the tripolar configuration, exhibit relatively broad psychophysical tuning curves (Bierer and Faulkner, 2010). The elevated threshold and degraded spatial/spectral selectivity of such channels are consistent with a poor electrode-neuron interface, such as suboptimal electrode placement or reduced nerve survival. However, the psychophysical methods required to obtain these data are time intensive and may not be practical during a clinical mapping procedure, especially for young children. Here we have extended the previous investigation to determine if a physiological approach could provide a similar assessment of channel functionality. We hypothesized that, in accordance with the perceptual measures, higher EABR thresholds would correlate with steeper EABR amplitude growth functions, reflecting a degraded electrode-neuron interface. Design Data were collected from six cochlear implant listeners implanted with the HiRes 90k cochlear implant (Advanced Bionics). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the partial tripolar configuration, for which a fraction of current (σ) from a center active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. EABRs were obtained in each subject for the two channels having the highest and lowest tripolar (σ=1 or 0.9) behavioral threshold. Evoked potentials were measured with both the monopolar (σ=0) and a more focused partial tripolar (σ ≥ 0.50) configuration. 
Results Consistent with previous studies, EABR thresholds were highly and positively correlated with behavioral thresholds obtained with both the monopolar and partial tripolar configurations. The Wave V amplitude growth functions with increasing stimulus level showed the predicted effect of shallower growth for the partial tripolar than for the monopolar configuration, but this was observed only for the low-threshold channels. In contrast, the high-threshold channels showed the opposite effect: steeper growth functions were seen for the partial tripolar configuration. Conclusions These results suggest that behavioral thresholds or EABRs measured with a restricted stimulus can be used to identify potentially impaired cochlear implant channels. Channels having high thresholds and steep growth functions would likely not activate the appropriate spatially restricted region of the cochlea, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception.
At the present time, cochlear implantation is the only available medical intervention for patients with profound hearing loss and is considered the "standard of care" for both prelingually deaf infants and post-lingually deaf adults. It has been suggested recently that cochlear implants are one of the greatest accomplishments of auditory neuroscience. Despite the enormous success of cochlear implantation for the treatment of profound deafness, especially in young prelingually deaf children, several pressing unresolved clinical issues have emerged that are at the forefront of current research efforts in the field. In this commentary we briefly review how a cochlear implant works and then discuss five of the most critical clinical and basic research issues: (1) individual differences in outcome and benefit, (2) speech perception in noise, (3) music perception, (4) neuroplasticity and perceptual learning, and (5) binaural hearing.
Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002), who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination, and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2), measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS)), a talker discrimination task (TD), and measures of short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. 
However, children who scored higher on the EVT-2 recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of auditory attention and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness, and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes as well as additional contrib...
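The stimuli above were noise-vocoded to four spectral channels. The standard noise-vocoding procedure can be sketched as follows: split the signal into contiguous frequency bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise carriers. This is a minimal illustration with assumed band edges and envelope smoothing, not the exact processing used in the study.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=8000.0, env_cutoff_hz=50.0):
    """Minimal noise-vocoder sketch (illustrative parameters).

    Splits `signal` into `n_channels` log-spaced bands, extracts each
    band's amplitude envelope with a crude moving-average smoother, and
    uses it to modulate band-limited noise in the same frequency range.
    """
    n = len(signal)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    win = max(1, int(fs / env_cutoff_hz))              # envelope smoothing window
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * band_mask, n)       # band-pass the signal
        envelope = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        carrier = np.fft.irfft(noise_spec * band_mask, n)  # band-limited noise
        out += envelope * carrier
    return out
```

With four channels, the output preserves coarse spectral and temporal-envelope cues while discarding fine spectral detail, which is what makes it a useful model of spectrally degraded (cochlear-implant-like) hearing.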
Background: There is a pressing clinical need for the development of ecologically valid and robust assessment measures of speech recognition. Perceptually Robust English Sentence Test Open-set (PRESTO) is a new high-variability sentence recognition test that is sensitive to individual differences and was designed for use with several different clinical populations. PRESTO differs from other sentence recognition tests because the target sentences differ in talker, gender, and regional dialect. Increasing interest in using PRESTO as a clinical test of spoken word recognition dictates the need to establish equivalence across test lists. Purpose: The purpose of this study was to establish list equivalency of PRESTO for clinical use. Research Design: PRESTO sentence lists were presented to three groups of normal-hearing listeners in noise (multitalker babble [MTB] at 0 dB signal-to-noise ratio) or under eight-channel cochlear implant simulation (CI-Sim). Study Sample: Ninety-one young native speakers of English who were undergraduate students from the Indiana University community participated in this study. Data Collection and Analysis: Participants completed a sentence recognition task using different PRESTO sentence lists. They listened to sentences presented over headphones and typed in the words they heard on a computer. Keyword scoring was completed offline. Equivalency for sentence lists was determined based on the list intelligibility (mean keyword accuracy for each list compared with all other lists) and listener consistency (the relation between mean keyword accuracy on each list for each listener). Results: Based on measures of list equivalency and listener consistency, ten PRESTO lists were found to be equivalent in the MTB condition, nine lists were equivalent in the CI-Sim condition, and six PRESTO lists were equivalent in both conditions. 
Conclusions: PRESTO is a valuable addition to the clinical toolbox for assessing sentence recognition across different populations. Because the test condition influenced the overall intelligibility of lists, researchers and clinicians should take the presentation conditions into consideration when selecting the best PRESTO lists for their research or clinical protocols.
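Keyword scoring and per-list intelligibility of the kind described above can be sketched in a few lines. The scoring rule and function names below are hypothetical illustrations, not PRESTO's actual offline scoring procedure.

```python
def keyword_accuracy(typed_response, keywords):
    """Proportion of scoring keywords present in a typed response.

    Hypothetical scoring rule: case-insensitive exact word match against
    the set of words the listener typed.
    """
    words = set(typed_response.lower().split())
    hits = sum(1 for k in keywords if k.lower() in words)
    return hits / len(keywords)

def list_intelligibility(scores_by_list):
    """Mean keyword accuracy per sentence list, the quantity compared
    across lists when assessing list equivalency."""
    return {lst: sum(v) / len(v) for lst, v in scores_by_list.items()}
```

In practice, list equivalency would then be judged by comparing each list's mean against the others, and listener consistency by correlating per-list accuracies across listeners.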
The present study examined the morphological development of the otolith vestibular receptors in quail. Here we describe epithelial growth, hair cell density, stereocilia polarization, and afferent nerve innervation during development. The otolith maculae epithelial areas increased exponentially throughout embryonic development, reaching asymptotic values near post-hatch day P7. Increases in hair cell density were dependent upon macular location; striolar hair cells developed first, followed by hair cells in extrastriola regions. Stereocilia polarization was initiated early, with defining reversal zones forming at E8. Fewer than half of all immature hair cells observed had non-polarized internal kinocilia, with the remainder exhibiting planar polarity. Immunohistochemistry and neural tracing techniques were employed to examine the shape and location of the striolar regions. Initial innervation of the maculae was by small fibers with terminal growth cones at E6, followed by collateral branches with apparent bouton terminals at E8. Calyceal terminal formation began at E10; however, no mature calyces were observed until E12, when all fibers appeared to be dimorphs. Calyx afferents innervating only type I hair cells did not develop until E14. Finally, the topographic organization of afferent macular innervation in the adult quail utricle was quantified. Calyx and dimorph afferents were primarily confined to the striolar regions, while bouton fibers were located in the extrastriola and type II band. Calyx fibers were the least complex, followed by dimorph units. Bouton fibers had large innervation fields, with arborous branches and many terminal boutons.
Music listening experiences can be enhanced with tactile vibrations. However, it is not known which parameters of the tactile vibration must be congruent with the music to enhance it. Devices that aim to enhance music with tactile vibrations often require coding an acoustic signal into a congruent vibrotactile signal; understanding which of these audio-tactile congruences matter is therefore crucial. Participants were presented with a simple sine wave melody through supra-aural headphones and a haptic actuator held between the thumb and forefinger. Incongruent versions of the stimuli were made by randomizing physical parameters of the tactile stimulus independently of the auditory stimulus. Participants were instructed to rate the stimuli against the incongruent stimuli based on preference. It was found that making the intensity of the tactile stimulus incongruent with the intensity of the auditory stimulus, as well as misaligning the two modalities in time, had the largest negative effect on ratings for the melody used. Future vibrotactile music enhancement devices can use time alignment and intensity congruence as a baseline coding strategy against which improved strategies can be tested.
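The two incongruence manipulations that mattered most, randomizing intensity and misaligning the modalities in time, can be illustrated on a framewise amplitude envelope. This is a hypothetical sketch of such manipulations; the function names, frame representation, and use of shuffling and rotation are assumptions, not the study's exact stimulus-generation method.

```python
import random

def shuffle_intensity(envelope_frames, seed=0):
    """Make tactile intensity incongruent with the audio by shuffling the
    per-frame amplitude values, preserving the overall level distribution
    while destroying its correlation with the auditory envelope."""
    rng = random.Random(seed)
    frames = list(envelope_frames)
    rng.shuffle(frames)
    return frames

def misalign_in_time(envelope_frames, shift_frames):
    """Misalign the tactile channel in time by rotating the frame
    sequence, so tactile events no longer coincide with auditory ones."""
    frames = list(envelope_frames)
    s = shift_frames % len(frames)
    return frames[-s:] + frames[:-s]
```

Either manipulation leaves the tactile signal's long-term statistics intact, isolating the loss of audio-tactile congruence as the variable under test.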
Many older adults report difficulty when listening to speech in background noise. These difficulties may arise from some combination of factors, including age-related hearing loss, auditory sensory processing difficulties, and/or general cognitive decline. To perform well in everyday noisy environments, listeners must quickly adapt, switch attention, and adjust to multiple sources of variability in both the signal and listening environments. Sentence recognition tests in noise have been useful for assessing speech understanding abilities because they require a combination of basic sensory/perceptual abilities as well as cognitive resources and processing operations. This study was designed to explore several factors underlying individual differences in aided speech understanding in older adults. We examined the relations between measures of speech perception, cognition, and self-reported listening difficulties in a group of aging adults (N = 40, age range 60–86) and a group of young normal hearing listeners (N = 28, age range 18–30). All participants completed a comprehensive battery of tests, including cognitive, psychophysical, and speech understanding measures, as well as the SSQ self-report scale. While controlling for audibility, speech understanding declined with age and was strongly correlated with psychophysical measures, cognition, and self-reported speech understanding difficulties. [Work supported by NIH: NIDCD grant T32-DC00012 and NIA grant R01-AG008293 to Indiana University.]