The aim of this study is to define an automated and reproducible framework for cochlear anatomical analysis from high-resolution segmented images and to provide a comprehensive, objective study of shape variability suitable for cochlear implant design and surgery planning. For the scala tympani (ST), the scala vestibuli (SV) and the whole cochlea, we study the variability of the arc lengths and of the radial and longitudinal components of the lateral, central and modiolar paths. The robustness of the automated cochlear coordinate system estimation is validated on synthetic and real data. Cochlear cross-sections are statistically analyzed using area, height and width measurements. The cross-section tilt angle is objectively measured, providing a quantitative feature relevant to the occurrence of surgical trauma.
To assess the quality of insertion of cochlear implants (CI) after surgery, it is important to analyze the positions of the electrodes with respect to the cochlea based on post-operative CT imaging. Yet these images suffer from metal artifacts, which often make such analysis difficult. In this work, we propose a 3D metal artifact reduction method using convolutional neural networks for post-operative cochlear implant imaging. Our approach is based on a 3D generative adversarial network (MARGANs) that creates an image with reduced metal artifacts. The generative model is trained on a large number of pre-operative "artifact-free" images on which simulated metal artifacts are created. This simulation involves the segmentation of the scala tympani, the virtual insertion of electrode arrays and the simulation of beam hardening based on the Beer-Lambert law. Quantitative and qualitative evaluations against two classical metal artifact reduction algorithms show the effectiveness of our method.
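The beam-hardening mechanism invoked above can be illustrated numerically. The sketch below is not the paper's code: the energy bins, source spectrum and attenuation coefficients are invented toy values. It shows why a polychromatic X-ray beam passing through metal produces the characteristic artifact: applying the Beer-Lambert law per energy bin and summing, the apparent attenuation per unit length decreases with path length, because low-energy photons are absorbed first.

```python
import numpy as np

# Hypothetical illustration (not the MARGAN pipeline): polychromatic
# Beer-Lambert attenuation through a metallic electrode of a given
# path length (mm). All numeric values below are toy assumptions.
energies = np.array([40.0, 60.0, 80.0, 100.0])   # energy bins (keV)
spectrum = np.array([0.2, 0.4, 0.3, 0.1])        # toy source weights (sum to 1)

# Toy linear attenuation coefficients (1/mm) of a platinum-like metal,
# decreasing with energy -- the physical cause of beam hardening.
mu_metal = np.array([8.0, 4.0, 2.5, 1.8])

def transmitted_intensity(path_len_mm: float) -> float:
    """Fraction of incident intensity surviving the metal path,
    summed over energy bins (Beer-Lambert per bin)."""
    return float(np.sum(spectrum * np.exp(-mu_metal * path_len_mm)))

def apparent_mu(path_len_mm: float) -> float:
    """Effective attenuation coefficient -ln(I/I0)/L. Under a
    polychromatic beam this is sub-linear in L: the spectrum
    'hardens' as soft photons are filtered out."""
    return -np.log(transmitted_intensity(path_len_mm)) / path_len_mm

# A CT reconstruction assuming a single effective mu will therefore
# underestimate attenuation along long metal paths -> dark streaks.
print(apparent_mu(0.1), apparent_mu(1.0))
```

A reconstruction algorithm that assumes monochromatic attenuation misreads this non-linearity as inconsistent projection data, which is what surfaces as streaks and shadows around the electrode array.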
The robust delineation of the cochlea and its inner structures, combined with the detection of the electrodes of a cochlear implant within those structures, is essential for envisaging a safer, more individualized, routine image-guided cochlear implant therapy. We present Nautilus—a web-based research platform for automated pre- and post-implantation cochlear analysis. Nautilus delineates cochlear structures from pre-operative clinical CT images by combining deep learning and Bayesian inference approaches. It extracts electrode locations from a post-operative CT image using convolutional neural networks and geometrical inference. By fusing pre- and post-operative images, Nautilus provides a set of personalized pre- and post-operative metrics that can serve the exploration of clinically relevant questions in cochlear implantation therapy. In addition, Nautilus embeds a self-assessment module providing a confidence rating on the outputs of its pipeline. We present detailed accuracy and robustness analyses of the tool on a carefully designed dataset. The results of these analyses provide legitimate grounds for envisaging the implementation of image-guided cochlear implant practices into routine clinical workflows.
The aim of the present study was to investigate the pupillary response to word identification in cochlear implant (CI) patients. The authors hypothesized that when task difficulty increased (i.e., with the addition of background noise), pupil dilation markers such as the peak dilation or the latency of the peak dilation would increase in CI users, as already observed in normal-hearing and hearing-impaired subjects. Methods: Pupillometric measures in 10 CI patients were combined with standard speech recognition scores used to evaluate CI outcomes, namely, speech audiometry in quiet and in noise at +10 dB signal-to-noise ratio (SNR). The main outcome measures of pupillometry were mean pupil dilation, maximal pupil dilation, dilation latency, and mean dilation during return to baseline or retention interval. Subjective hearing quality was evaluated by means of a self-reported fatigue questionnaire and the Speech, Spatial, and Qualities of Hearing scale (SSQ). Results: All pupil dilation data were transformed to percent change in event-related pupil dilation (ERPD, %). Analyses show that the peak amplitudes for both mean pupil dilation and maximal pupil dilation were higher during the speech-in-noise test. Mean peak dilation was measured at 3.47 ± 2.29% in noise vs. 2.19 ± 2.46% in quiet, and the maximal peak value was 9.17 ± 3.25% in noise vs. 8.72 ± 2.93% in quiet. Concerning the questionnaires, the mean pupil dilation during the retention interval was significantly correlated with the spatial subscale score of the SSQ scale [r(8) = −0.84, p = 0.0023] and with the global score [r(8) = −0.78, p = 0.0018]. Conclusion: The analysis of pupillometric traces, obtained during speech audiometry in quiet and in noise in CI users, provided informative insight into the different processes engaged in this task.
Pupillometric measures could be indicative of listening difficulty and phoneme intelligibility, and were correlated with general hearing experience as evaluated by the SSQ scale. These preliminary results show that pupillometry is a promising tool for improving the objective quantification of CI performance in clinical settings.
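The ERPD transform mentioned in the results can be sketched concretely. The code below is an illustrative reconstruction, not the study's analysis pipeline: the sampling rate, stimulus timing and pupil trace are invented, and the baseline window is an assumption. It shows the standard steps of converting a raw pupil-diameter trace to percent change relative to a pre-stimulus baseline, then extracting the peak dilation and its latency.

```python
import numpy as np

# Hypothetical sketch (all signal parameters are invented): transform a
# raw pupil-diameter trace into event-related pupil dilation (ERPD, %)
# relative to a pre-stimulus baseline, then extract peak and latency.
fs = 60.0                         # assumed eye-tracker sampling rate (Hz)
t = np.arange(0.0, 4.0, 1.0 / fs) # 4 s trace; stimulus onset at t = 1 s
onset = 1.0

# Toy pupil trace (mm): flat baseline plus a dilation peaking at t = 2 s.
pupil = 4.0 + 0.3 * np.exp(-((t - 2.0) ** 2) / 0.2)

# Baseline: mean diameter over the pre-stimulus window (assumption).
baseline = pupil[t < onset].mean()

# ERPD: percent change from baseline at every sample.
erpd = 100.0 * (pupil - baseline) / baseline

peak_dilation = erpd.max()                  # peak ERPD (%)
peak_latency = t[np.argmax(erpd)] - onset   # latency from stimulus onset (s)
print(peak_dilation, peak_latency)
```

Peak dilation and peak latency computed this way correspond to the dilation markers the study compares between the quiet and noise conditions.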