Due to limited spectral resolution, cochlear implants (CIs) do not convey pitch information very well. Pitch cues are important for perception of music and tonal language, and music training may improve performance in both listening tasks. In this study, we investigated music training outcomes in terms of perception of music, lexical tones, and sentences in 22 young (4.8 to 9.3 years old), prelingually deaf Mandarin-speaking CI users. Music perception was measured using a melodic contour identification (MCI) task. Speech perception was measured for lexical tones and sentences presented in quiet. Subjects received 8 weeks of MCI training using pitch ranges not used for testing. Music and speech perception were measured at 2, 4, and 8 weeks after training began; follow-up measures were made 4 weeks after training was stopped. Mean baseline performance was 33.2%, 76.9%, and 45.8% correct for MCI, lexical tone recognition, and sentence recognition, respectively. After 8 weeks of MCI training, mean performance significantly improved by 22.9, 14.4, and 14.5 percentage points for MCI, lexical tone recognition, and sentence recognition, respectively (p < .05 in all cases). Four weeks after training was stopped, there was no significant change in posttraining music and speech performance. The results suggest that music training can significantly improve pediatric Mandarin-speaking CI users’ music and speech perception.
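The MCI task asks listeners to identify the pitch contour traced by a short note sequence and is scored as percent correct. As an illustration only (the contour shapes, five-note length, and semitone spacing below are common choices in the MCI literature, not necessarily the study's actual stimuli), a contour generator and scorer might look like:

```python
# Hypothetical sketch of melodic contour identification (MCI) stimuli and scoring.
# Contour shapes, note count, and semitone step are illustrative assumptions.

def make_contour(shape, root=60, step=2, length=5):
    """Return MIDI note numbers tracing the named pitch contour."""
    if shape == "rising":
        return [root + i * step for i in range(length)]
    if shape == "falling":
        return [root - i * step for i in range(length)]
    if shape == "flat":
        return [root] * length
    if shape == "rising-falling":
        up = [root + i * step for i in range(length // 2 + 1)]
        return up + up[-2::-1]          # mirror back down, sharing the peak note
    if shape == "falling-rising":
        down = [root - i * step for i in range(length // 2 + 1)]
        return down + down[-2::-1]      # mirror back up, sharing the trough note
    raise ValueError(f"unknown contour shape: {shape}")

def percent_correct(responses, answers):
    """Score an MCI run as the percentage of contours correctly identified."""
    hits = sum(r == a for r, a in zip(responses, answers))
    return 100.0 * hits / len(answers)
```

Because the study trained with pitch ranges not used at test, a generator like this would be called with different `root` and `step` values for training and testing stimuli.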
Objectives: While fundamental frequency (F0) cues are important to both lexical tone perception and multitalker segregation, F0 cues are poorly perceived by cochlear implant (CI) users. Adding low-frequency acoustic hearing via a hearing aid in the contralateral ear may improve CI users’ F0 perception. For English-speaking CI users, contralateral acoustic hearing has been shown to improve perception of target speech in noise and in competing talkers. For tonal languages such as Mandarin Chinese, F0 information is lexically meaningful. Given competing F0 information from multiple talkers and lexical tones, contralateral acoustic hearing may be especially beneficial for Mandarin-speaking CI users’ perception of competing speech. Design: Bimodal benefit (CI+hearing aid – CI-only) was evaluated in 11 pediatric Mandarin-speaking Chinese CI users. In experiment 1, speech recognition thresholds (SRTs) were adaptively measured using a modified coordinated response measure test; subjects were required to correctly identify 2 keywords from among 10 choices in each category. SRTs were measured with CI-only or bimodal listening in the presence of steady-state noise (SSN) or competing speech with the same (M+M) or different voice gender (M+F). Unaided thresholds in the non-CI ear and demographic factors were compared with speech performance. In experiment 2, SRTs were adaptively measured in SSN for recognition of 5 keywords, a more difficult listening task than the 2-keyword recognition task in experiment 1. Results: In experiment 1, SRTs were significantly lower for SSN than for competing speech in both the CI-only and bimodal listening conditions. There was no significant difference between CI-only and bimodal listening for SSN and M+F (p > 0.05); SRTs were significantly lower for CI-only than for bimodal listening for M+M (p < 0.05), suggesting bimodal interference.
Subjects were able to make use of voice gender differences for bimodal listening (p < 0.05) but not for CI-only listening (p > 0.05). Unaided thresholds in the non-CI ear were positively correlated with bimodal SRTs for M+M (p < 0.006) but not for SSN or M+F. No significant correlations were observed between any demographic variables and SRTs (p > 0.05 in all cases). In experiment 2, SRTs were significantly lower with two than with five keywords (p < 0.05). A significant bimodal benefit was observed only for the 5-keyword condition (p < 0.05). Conclusions: With the CI alone, subjects experienced greater interference with competing speech than with SSN and were unable to use voice gender differences to segregate talkers. For the coordinated response measure task, subjects experienced no bimodal benefit, and even bimodal interference, when competing talkers were the same voice gender. A bimodal benefit in SSN was observed for the five-keyword condition but not for the two-keyword condition, suggesting that bimodal listening may be more beneficial as the difficulty of the listening task increases. The present data suggest that bimodal benefit may depend on the type of masker and/or the difficulty of the listening task.
Patients with single-sided deafness (SSD) often experience poor sound localization, reduced speech understanding in noise, reduced quality of life, and tinnitus. The present study aimed to evaluate the effects of tinnitus and duration of deafness on sound localization and speech recognition in noise by SSD subjects. Sound localization and speech recognition in noise were measured in 26 SSD and 10 normal-hearing (NH) subjects. Speech was always presented directly in front of the listener. Noise was presented to the deaf ear, in front of the listener, or to the better hearing ear. Tinnitus severity was measured using a visual analog scale and the Tinnitus Handicap Inventory. Relative to NH subjects, SSD subjects had significant deficits in sound localization and speech recognition in all listening conditions (p < .001). For SSD subjects, speech recognition in noise was correlated with mean hearing thresholds in the better hearing ear (p < .001) but not in the deaf ear. SSD subjects with tinnitus performed more poorly in sound localization and speech recognition in noise than those without tinnitus. Shorter duration of deafness was associated with greater tinnitus and sound localization difficulty. Tinnitus visual analog scale and Tinnitus Handicap Inventory scores were highly correlated; the degree of tinnitus was negatively correlated with sound localization and speech recognition in noise. Those experiencing noticeable tinnitus may benefit more from cochlear implantation than those without; subjective tinnitus reduction may be correlated with improved sound localization and speech recognition in noise. Subjects with longer duration of deafness demonstrated better sound localization, suggesting long-term compensation for loss of binaural cues.
Due to poor perception of fundamental frequency (F0) cues that are important for lexical tone perception and talker segregation, pediatric Chinese cochlear implant (CI) users may be especially susceptible to informational masking. Here, speech recognition thresholds (SRTs) were measured in steady noise or competing speech in Mandarin-speaking CI and normal-hearing (NH) children. CI children were more susceptible to informational masking and were unable to use F0 cues to segregate talkers. SRTs were significantly correlated with chronological age in NH children and with duration of deafness in CI children, suggesting that auditory deprivation may limit developmental processes important for talker segregation.