The aim of this study was to relate the pitch of high-rate electrical stimulation delivered to individual cochlear implant electrodes to electrode insertion depth and insertion angle. The patient (CH1) was able to provide pitch matches between electric and acoustic stimulation because he had auditory thresholds in his nonimplanted ear ranging between 30 and 60 dB HL over the range of 250 Hz to 8 kHz. Electrode depth and insertion angle were measured from high-resolution computed tomography (CT) scans of the patient's temporal bones. The scans were used to create a 3D image volume reconstruction of the cochlea, which allowed visualization of electrode position within the scala. The method of limits was used to establish pitch matches between acoustic pure tones and electric stimulation (a 1652-pps, unmodulated pulse train). The pitch matching data demonstrated that, for insertion angles greater than 450 degrees, or insertion depths greater than approximately 20 mm, pitch saturated at approximately 420 Hz. From 20 to 15 mm insertion depth, pitch estimates were about one-half octave lower than the Greenwood function. From 13 to 3 mm insertion depth, the pitch estimates were approximately one octave lower than the Greenwood function. The pitch match for an electrode only 3.4 mm into the cochlea was 3,447 Hz. These data are consistent with other reports, e.g., Boëx et al. (2006), of a frequency-to-place map for the electrically stimulated cochlea in which perceived pitches for stimulation on individual electrodes are significantly lower than those predicted by the Greenwood function for stimulation at the level of the hair cell.
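The Greenwood function referenced above maps cochlear place to characteristic frequency. A minimal sketch of that map, using Greenwood's published human constants (A = 165.4, a = 2.1, k = 0.88) and an assumed 35-mm basilar membrane length; the conversion from insertion depth (measured from the base) to distance from the apex is a simplifying assumption for illustration, since depth measured along the scala does not correspond exactly to distance along the organ of Corti:

```python
import math

def greenwood_frequency(distance_from_apex_mm, length_mm=35.0,
                        A=165.4, a=2.1, k=0.88):
    """Greenwood place-to-frequency map for the human cochlea.

    distance_from_apex_mm: position along the basilar membrane,
    measured from the apex. A, a, and k are Greenwood's published
    constants for humans.
    """
    x = distance_from_apex_mm / length_mm  # proportional distance from apex
    return A * (10 ** (a * x) - k)

# An electrode inserted d mm from the base sits roughly (length - d) mm
# from the apex, assuming a 35-mm cochlear duct (an idealization).
for depth_mm in (3.4, 15.0, 20.0):
    f = greenwood_frequency(35.0 - depth_mm)
    print(f"{depth_mm:5.1f} mm insertion -> Greenwood prediction {f:8.0f} Hz")
```

Comparing such predictions against electric-acoustic pitch matches is how the half-octave and one-octave shifts reported above are expressed.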
The intelligibility of speech with either a single "hole" in various bands or two "holes" in disjoint or adjacent bands of the spectrum was assessed with normal-hearing listeners. In experiment 1, the effect of spectral "holes" on vowel and consonant recognition was evaluated using speech processed through six frequency bands and synthesized as a sum of sine waves. Results showed a modest decrease in vowel and consonant recognition performance when a single hole was introduced in the low- and high-frequency regions of the spectrum, respectively. When two spectral holes were introduced, vowel recognition was sensitive to the location of the holes, while consonant recognition remained constant around 70% correct, even when the middle- and high-frequency speech information was missing. The data from experiment 1 were used in experiment 2 to derive frequency-importance functions based on a least-squares approach. The shapes of the frequency-importance functions were found to be different for consonants and vowels, in agreement with the notion that listeners use different cues to identify consonants and vowels. For vowels, there was unequal weighting across the various channels, while for consonants the frequency-importance function was relatively flat, suggesting that all bands contributed equally to consonant identification.
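The least-squares approach to frequency-importance functions can be sketched as a linear model in which each condition's score is approximated by the summed importance weights of the bands left intact. The condition matrix and scores below are illustrative placeholders, not data from the study:

```python
import numpy as np

# Each row flags which of six bands are intact in a hole condition
# (1 = band present, 0 = spectral hole). Scores are illustrative
# proportions correct, not the experiment's data.
conditions = np.array([
    [1, 1, 1, 1, 1, 1],   # no hole
    [0, 1, 1, 1, 1, 1],   # hole in band 1
    [1, 0, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 1, 1],   # two adjacent holes
    [1, 1, 1, 1, 0, 0],
], dtype=float)
scores = np.array([0.95, 0.80, 0.82, 0.85, 0.84, 0.79, 0.74,
                   0.62, 0.58])

# Least-squares fit: score ~ sum of importance weights of intact bands.
weights, *_ = np.linalg.lstsq(conditions, scores, rcond=None)
print("band importance weights:", np.round(weights, 3))
```

A flat weight vector would correspond to the consonant result described above (all bands contributing equally), while unequal weights would correspond to the vowel result.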
Fundamental frequency (F0) variation is one of a number of acoustic cues normal hearing listeners use for guiding lexical segmentation of degraded speech. This study examined whether F0 contour facilitates lexical segmentation by listeners fitted with cochlear implants (CIs). Lexical boundary error patterns elicited under unaltered and flattened F0 conditions were compared across three groups: listeners with conventional CI, listeners with CI and preserved low-frequency acoustic hearing, and normal hearing listeners subjected to CI simulations. Results indicate that all groups attended to syllabic stress cues to guide lexical segmentation, and that F0 contours facilitated performance for listeners with low-frequency hearing.
The importance of intensity resolution in terms of the number of intensity steps needed for speech recognition was assessed for normal-hearing and cochlear implant listeners. In experiment 1, the channel amplitudes extracted from a six-channel continuous interleaved sampling (CIS) processor were quantized into 2, 4, 8, 16, or 32 steps. Consonant recognition was assessed for five cochlear implant listeners, using the Med-El/CIS-link device, as a function of the number of steps in the electrical dynamic range. Results showed that eight steps within the dynamic range are sufficient for reaching asymptotic performance in consonant recognition. These results suggest that amplitude resolution is not a major factor in determining consonant identification. In experiment 2, the relationship between spectral resolution (number of channels) and intensity resolution (number of steps) in normal-hearing listeners was investigated. Speech was filtered through 4-20 frequency bands, synthesized as a linear combination of sine waves with amplitudes extracted from the envelopes of the bandpassed waveforms, and then quantized into 2-32 levels to produce stimuli with varying degrees of intensity resolution. Results showed that the number of steps needed to achieve asymptotic performance was a function of the number of channels and the speech material used. For vowels, asymptotic performance was obtained with four steps, while for consonants, eight steps were needed for most channel conditions, consistent with our findings in experiment 1. For sentences processed through 4 channels, 16 steps were needed to reach asymptotic performance, while for sentences processed through 16 channels, 4 steps were needed. The results with normal-hearing listeners on sentence recognition point to an inverse relationship between spectral resolution and intensity resolution.
When spectral resolution is poor (i.e., a small number of channels is available) a relatively fine intensity resolution is needed to achieve high levels of understanding. Conversely, when the intensity resolution is poor, a high degree of spectral resolution is needed to achieve asymptotic performance. The results of this study, taken together with previous findings on the effect of reduced dynamic range, suggest that the performance of cochlear implant subjects is primarily limited by the small number (four to six) of channels received, and not by the small number of intensity steps or reduced dynamic range.
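The amplitude quantization manipulated in these experiments can be sketched as mapping channel envelope values onto a small set of discrete levels spanning the dynamic range. The 30-dB range and the dB-domain quantization below are assumptions for illustration, not the processors' actual parameters:

```python
import numpy as np

def quantize_envelope(env, n_steps, dr_db=30.0):
    """Quantize channel envelope amplitudes into n_steps discrete levels
    spanning a dr_db-dB dynamic range below the envelope peak.
    (The 30-dB range is an illustrative assumption.)"""
    env = np.maximum(np.asarray(env, dtype=float), 1e-12)
    peak = env.max()
    # express each sample in dB relative to the peak, clipped to the range
    level_db = np.clip(20 * np.log10(env / peak), -dr_db, 0.0)
    # map the continuous range onto n_steps evenly spaced levels
    step = dr_db / (n_steps - 1)
    quantized_db = np.round((level_db + dr_db) / step) * step - dr_db
    return peak * 10 ** (quantized_db / 20)

# Example: an 8-sample envelope quantized into 8 intensity steps.
env = np.abs(np.sin(np.linspace(0, np.pi, 8)))
print(quantize_envelope(env, n_steps=8))
```

With n_steps set to 2 through 32, this kind of mapping produces the graded intensity-resolution conditions described above.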
Objective: To create and validate a Spanish sentence test for evaluation of speech understanding of Spanish-speaking listeners with hearing loss or cochlear implants (CI). Study Design: Two thousand sentences were recorded from two male and two female speakers. The average intelligibility of each sentence was estimated as the mean score achieved by five listeners presented with a five-channel cochlear implant simulation. The mean scores of each sentence were used to construct 42 lists of 20 sentences with similar mean scores. List equivalency was then validated by presenting all lists to 10 CI users and in a 2-list comparison in a clinical setting to 38 CI patients. Setting: Tertiary referral center. Patients: Normal-hearing listeners (n = 5), CI users in a research study (n = 10), and CI patients (n = 38) in routine clinical follow-up. Intervention: Multiple sentence lists from a newly minted speech perception test. Main Outcome Measures: List intelligibility and equivalence across sentence lists. Results: Forty-two lists of sentences were equivalent when all lists were presented in random order to 10 adult CI recipients. The variability of scores observed on lists presented to the same listener in the same condition was captured using a binomial distribution model based on a 40-item list for 38 adult implant recipients. Conclusion: The Spanish AzBio Sentence Test includes 42 lists of 20 sentences. These sentences are roughly equivalent in terms of overall difficulty and confidence limits have been provided to assess the significance of variability in list scores observed within or across conditions. These materials will be of benefit when assessing native Spanish speakers in both research and clinical settings.
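The binomial model used to capture list-score variability implies confidence limits that depend only on the score and the number of items. A minimal sketch for a 40-item list, using the normal approximation to the binomial (the approximation choice and the 1.96 critical value are assumptions for illustration; the published tables may use an exact method):

```python
import math

def binomial_ci(p_hat, n=40, z=1.96):
    """Normal-approximation 95% confidence interval for a
    proportion-correct score on an n-item list under a binomial
    model of word-level scoring."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    lo = max(0.0, p_hat - z * se)
    hi = min(1.0, p_hat + z * se)
    return lo, hi

lo, hi = binomial_ci(0.70)
print(f"score 70% on 40 items: 95% CI {lo:.0%} to {hi:.0%}")
```

Intervals like this are what allow a clinician to judge whether the difference between two list scores from the same listener reflects a real change in performance or only test-retest variability.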