Objectives: The impact of the newly introduced cochlear implantation criteria of the United Kingdom and Flanders (the Dutch-speaking part of Belgium) was examined in the patient population of a tertiary referral center in the Netherlands. We compared the patients who would be included or excluded under the new versus the old criteria in relation to the actual improvement in speech understanding after implantation in our center. We also performed a sensitivity analysis to examine the effectiveness of the different preoperative assessment approaches used in the United Kingdom and Flanders. Design: The selection criteria were based on preoperative pure-tone audiometry at 0.5, 1, 2, and 4 kHz and a speech perception test (SPT) with and without best-aided hearing aids. Postoperatively, the same SPT was conducted to assess the benefit in speech understanding. Results: The newly introduced criteria in Flanders and the United Kingdom were less restrictive, resulting in a greater percentage of patients implanted with a CI (an increase of 30%) and a 31% increase in sensitivity. The preoperative best-aided SPT, used by both countries, had the highest diagnostic ability to indicate a postoperative improvement in speech understanding. Patient selection was previously dominated by the pure-tone audiometry criteria in both countries, whereas speech understanding became more important in the new criteria. Among patients excluded by the new criteria, seven of eight (the United Kingdom and Flanders combined) nevertheless exhibited improved postoperative speech understanding. Conclusions: The new selection criteria of the United Kingdom and Flanders led to increased numbers of postlingually deafened adults benefiting from CI. The new British and Flemish criteria rely on the best-aided SPT, the measure with the highest diagnostic ability. Notably, the new criteria still led to the rejection of candidates who would be expected to gain considerably in speech understanding after implantation.
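The sensitivity figure reported above can be illustrated with a minimal computation: the sensitivity of a selection criterion is the fraction of patients who actually improved after implantation that the criterion would have selected. The patient data below are hypothetical, purely for illustration; they are not taken from the study.

```python
def sensitivity(selected, improved):
    """Fraction of truly improved patients that the criterion selects
    (true positives / all patients with postoperative improvement)."""
    true_positives = sum(1 for s, i in zip(selected, improved) if s and i)
    return true_positives / sum(improved)

# Hypothetical cohort: 1 = improved / selected, 0 = not.
improved     = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
old_criteria = [1, 1, 1, 1, 1, 0, 0, 0, 1, 0]
new_criteria = [1, 1, 1, 1, 1, 1, 1, 0, 1, 0]

print(sensitivity(old_criteria, improved))  # 0.625
print(sensitivity(new_criteria, improved))  # 0.875
```

In this toy cohort the less restrictive criterion captures more of the patients who would benefit, which is the pattern the abstract describes for the new British and Flemish criteria.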
Sensory deprivation can lead to cross-modal cortical changes, whereby sensory brain regions deprived of input may be recruited to perform atypical function. Enhanced cross-modal responses to visual stimuli observed in auditory cortex of postlingually deaf cochlear implant (CI) users are hypothesized to reflect increased activation of cortical language regions, but it is unclear if this cross-modal activity is “adaptive” or “mal-adaptive” for speech understanding. To determine if increased activation of language regions is correlated with better speech understanding in CI users, we assessed task-related activation and functional connectivity of auditory and visual cortices to auditory and visual speech and non-speech stimuli in CI users (n = 14) and normal-hearing listeners (n = 17) and used functional near-infrared spectroscopy to measure hemodynamic responses. We used visually presented speech and non-speech to investigate neural processes related to linguistic content and observed that CI users show beneficial cross-modal effects. Specifically, an increase in connectivity between the left auditory and visual cortices—presumed primary sites of cortical language processing—was positively correlated with CI users’ abilities to understand speech in background noise. Cross-modal activity in auditory cortex of postlingually deaf CI users may reflect adaptive activity of a distributed, multimodal speech network, recruited to enhance speech understanding.
Goal: Advances in computational models of biological systems and artificial neural networks enable rapid virtual prototyping of neuroprostheses, accelerating innovation in the field. Here, we present an end-to-end computational model for predicting speech perception with cochlear implants (CI), the most widely used neuroprosthesis. Methods: The model integrates CI signal processing, a finite element model of the electrically stimulated cochlea, and an auditory nerve model to predict neural responses to speech stimuli. An automatic speech recognition neural network is then used to extract phoneme-level speech perception from these neural response patterns. Results: Compared to human CI listener data, the model predicts similar patterns of speech perception and misperception, captures between-phoneme differences in perceptibility, and replicates effects of stimulation parameters and noise on speech recognition. Information transmission analysis at different stages along the CI processing chain indicates that the bottleneck of information flow occurs at the electrode-neural interface, corroborating studies in CI listeners. Conclusion: An end-to-end model of CI speech perception replicated phoneme-level CI speech perception patterns and was used to quantify information degradation through the CI processing chain. Significance: This type of model shows great promise for developing and optimizing new and existing neuroprostheses.
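The information transmission analysis mentioned in the Results is commonly computed as the mutual information between presented and perceived phonemes, estimated from a confusion matrix (the Miller-Nicely approach). A minimal sketch with a hypothetical two-phoneme confusion matrix, not the paper's actual pipeline:

```python
import math

def transmitted_information(confusions):
    """Mutual information (bits) between presented and perceived categories,
    estimated from a confusion-count matrix (rows: presented, cols: perceived)."""
    total = sum(sum(row) for row in confusions)
    row_p = [sum(row) / total for row in confusions]
    col_p = [sum(confusions[r][c] for r in range(len(confusions))) / total
             for c in range(len(confusions[0]))]
    bits = 0.0
    for r, row in enumerate(confusions):
        for c, n in enumerate(row):
            if n:  # skip empty cells (0 * log 0 = 0 by convention)
                p = n / total
                bits += p * math.log2(p / (row_p[r] * col_p[c]))
    return bits

# Perfect identification of two equally frequent phonemes transmits 1 bit;
# chance-level confusion transmits 0 bits.
print(transmitted_information([[5, 0], [0, 5]]))  # 1.0
print(transmitted_information([[5, 5], [5, 5]]))  # 0.0
```

Computing this quantity at successive stages of a processing chain, as the abstract describes, shows where information is lost: the stage with the largest drop is the bottleneck.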