Objectives This study examined the language outcomes of children with mild to severe hearing loss during the preschool years. The longitudinal design was leveraged to test whether language growth trajectories were associated with degree of hearing loss and whether aided hearing influenced language growth in a systematic manner. The study also explored the influence of the timing of hearing aid fitting and extent of use on children's language growth. Finally, the study tested the hypothesis that morphosyntax may be at particular risk due to the demands it places on the processing of fine details in the linguistic input. Design The full cohort in this study comprised 290 children who were hard of hearing (CHH) and 112 children with normal hearing (CNH) who participated in the Outcomes of Children with Hearing Loss (OCHL) study between the ages of 2 and 6 years. CHH had a mean better-ear pure-tone average of 47.66 dB HL (SD = 13.35). All children received a comprehensive battery of language measures at annual intervals, including standardized tests, parent report measures, and spontaneous and elicited language samples. Principal components analysis supported the use of a single composite language score at each age level (2, 3, 4, 5, and 6 years). Measures of unaided (better-ear pure-tone average, Speech Intelligibility Index) and aided (residualized Speech Intelligibility Index) hearing were collected, along with parent report measures of daily hearing aid use time. Mixed modeling procedures were applied to examine the rate of change (227 CHH; 94 CNH) in language ability over time in relation to (1) degree of hearing loss, (2) aided hearing, (3) age of hearing aid fitting and duration of use, and (4) daily hearing aid use. Principal components analysis was also employed to examine factor loadings from spontaneous language samples and to test their correspondence with standardized measures.
Multiple regression analysis was used to test for differential effects of hearing loss on morphosyntactic and lexical development. Results Children with mild to severe hearing loss, on average, showed depressed language levels compared to peers with normal hearing who were matched on age and socioeconomic status. The degree to which CHH fell behind increased with greater severity of hearing loss. The amount of improved audibility with hearing aids was associated with differential rates of language growth: better audibility was associated with faster language growth in the preschool years. Children fit early with hearing aids had better early language achievement than children fit later. However, children who were fit after 18 months of age improved in their language abilities as a function of the amount of hearing aid use. These results suggest that the language learning system remains open to the experience provided by improved access to linguistic input. Morphosyntactic abilities were more delayed in CHH than their semantic abilities. Conclusion The data obtained in thi...
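The composite language score described above combines several measures into one index per child. A minimal sketch of that idea (not the OCHL study's actual principal-components procedure; the measure names and values below are invented for illustration) z-scores each measure and averages the z-scores per child:

```python
from statistics import mean, stdev

def composite_score(measures):
    """Average z-scores across measures to form one composite per child.

    `measures` maps a measure name to a list of raw scores, one per child.
    Hypothetical stand-in for combining standardized tests, parent report,
    and language-sample scores at a single age level.
    """
    names = list(measures)
    n_children = len(measures[names[0]])
    z = {}
    for name in names:
        vals = measures[name]
        m, s = mean(vals), stdev(vals)
        z[name] = [(v - m) / s for v in vals]  # z-score within measure
    # Composite = mean z-score across measures for each child
    return [mean(z[name][i] for name in names) for i in range(n_children)]

# Invented scores for four children on three measure types
scores = composite_score({
    "standardized_test": [85, 100, 110, 95],
    "parent_report":     [80, 105, 115, 90],
    "language_sample":   [88, 98, 112, 97],
})
```

A principal-components composite would instead weight measures by their first-component loadings; equal weighting is the simplest defensible stand-in when loadings are similar across measures.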
Purpose This study investigated predictors of hearing aid (HA) use time for children with mild to severe hearing loss. Barriers to consistent HA use and the reliability of parent report measures were also examined. Method Participants included parents of 272 children with hearing loss. Parents estimated the amount of time the child used HAs daily. Regression analysis examined the relationships among independent variables and HA use time. To determine the accuracy of parent-reported HA use time, datalogging from the HA was compared with parental estimates. Results Longer HA use was related to older age, poorer hearing, and higher maternal education. Parental consistency ratings revealed similar findings; younger children and children with milder hearing losses wore HAs less consistently than older children and children with more severe hearing loss. Parents' estimates and datalogging were significantly correlated; however, results suggested that parents overestimate the amount of time their children wear their hearing aids. Conclusions The findings provide evidence that certain variables were significantly related to the amount of time children wore their HAs. Consistency rating scales provided insight into circumstances that were challenging for families. Use of both parental reports and datalogging may allow clinicians and researchers to obtain a general estimate of HA use time.
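The parent-estimate versus datalogging comparison reduces to two statistics: a correlation (do parents rank children's use consistently with the devices?) and a mean signed difference (do parents overestimate on average?). A minimal sketch with invented hours-per-day values, not the study's data:

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented daily HA use (hours): parent estimates vs. device datalogging
parent_est = [10.0, 8.0, 12.0, 6.0, 9.0, 11.0]
datalog    = [ 8.5, 6.0, 10.0, 4.5, 7.0,  9.5]

r = pearson_r(parent_est, datalog)
# Positive mean difference = parents overestimate, as the study reports
bias = mean(p - d for p, d in zip(parent_est, datalog))
```

A high correlation can coexist with a systematic bias, which is exactly the pattern the abstract describes: the two measures track each other, but parent estimates sit above the datalogged hours.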
Low-frequency acoustic hearing improves pitch discrimination as compared with traditional, electric-only cochlear implants. These findings have implications for musical tasks such as familiar melody recognition.
The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, the limitations of speech perception in predicting music perception and appraisal underscore the value of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music.

Keywords: Cochlear implant; cognitive; music; speech perception

The cochlear implant (CI) is a prosthetic hearing device developed primarily to assist persons who are severely to profoundly deaf with verbal communication. The device picks up acoustic signals through an externally worn microphone, and these signals are then processed to filter and extract those components of sound critically important for speech perception.
Those components are conveyed via electrical signals to an array of electrodes in the cochlea, resulting in electrical stimulation of the auditory nerve. This signal is then transmitted to the central auditory pathway for interpretation. (Kate Gfeller, Department of Otolaryngology, 200 Hawkins Drive, 21201 PFP, Iowa City, IA 52242-1078. Portions of this article were presented in the keynote address for the Music Perception for Cochlear Implant Workshops, University of Washington, Seattle, October 17, 2006.) Although the device does not provide an exact replica of normal hearing, the majority of postlingually deafened implant recipients using modern CIs score above 80% on high-context sentences in quiet listening conditions, even without visual cues (Wilson, 2000). Although CIs have been quite successful in providing implant recipients with speech perception, they are less effective in transmitting the fine structural features of sound that contribute to music perception (e.g., Gfeller et al., 2000a, 2002a, 2003, 2005, 2007; Le...
Aim: The aims of this study were to examine the music perception abilities of Cochlear Nucleus Hybrid (acoustic plus electric stimulation) cochlear implant (CI) recipients and to compare their performance with that of normal-hearing (NH) adults and CI recipients using conventional long-electrode (LE) devices (Advanced Bionics: 90K, Clarion, CIIHF; Cochlear Corporation: CI24M, CI22, Contour; Ineraid). Hybrid CI recipients were compared with NH adults and LE CI recipients on recognition of (a) real-world melodies and (b) musical instruments. Patients and Methods: We tested 4 Hybrid CI recipients, 17 NH adults, and 39 LE CI recipients on open-set recognition of real-world songs presented with and without lyrics. We also tested 14 Hybrid CI recipients, 21 NH adults, and 174 LE CI recipients on closed-set recognition of 8 musical instruments playing a 7-note phrase. Results: On recognition of real-world songs, both the Hybrid recipients and NH listeners were significantly more accurate (p < 0.0001) than the LE CI recipients in the no lyrics condition, which required reliance on musical cues only. The LE group was significantly less accurate than either the Hybrid or NH group (p < 0.0001) on instrument recognition for low and high frequency ranges. Conclusions: These results, while preliminary in nature, suggest that preservation of low-frequency acoustic hearing is important for perception of real-world musical stimuli.
Importance Hearing loss (HL) in children can be deleterious to their speech and language development. The standard of practice has been early provision of hearing aids (HAs) to moderate these effects; however, there have been few empirical studies evaluating the effectiveness of this practice on speech and language development among children with mild-to-severe HL. Objective To investigate the contributions of aided hearing and duration of HA use to speech and language outcomes in children with mild-to-severe HL. Design, Setting, and Participants An observational cross-sectional design was used to examine the association of aided hearing levels and length of HA use with levels of speech and language outcomes. One hundred eighty 3- and 5-year-old children with HL were recruited through records of Universal Newborn Hearing Screening and referrals from clinical service providers in the general community in 6 US states. Interventions All but 4 children had been fitted with HAs, and measures of aided hearing and the duration of HA use were obtained. Main Outcomes and Measures Standardized measures of speech and language ability were obtained. Results Measures of the gain in hearing ability for speech provided by the HA were significantly correlated with levels of speech (ρ(179) = 0.20; P = .008) and language (ρ(155) = 0.21; P = .01) ability. These correlations were indicative of modest levels of association between aided hearing and speech and language outcomes. These benefits were found for children with mild and moderate-to-severe HL. In addition, the amount of benefit from aided hearing interacted with the duration of HA experience (speech: F(4,161) = 4.98; P < .001; language: F(4,138) = 2.91; P < .02). Longer duration of HA experience was most beneficial for children who had the best aided hearing. Conclusions and Relevance The degree of improved hearing provided by HAs was associated with better speech and language development in children.
In addition, the duration of HA experience interacted with the aided hearing to influence outcomes. These results provide support for the provision of well-fitted HAs to children with HL. In particular, the findings support early HA fitting and HA provision to children with mild HL.
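The ρ statistics reported above are Spearman rank correlations: Pearson correlations computed on midranks rather than raw scores, so they capture monotonic rather than strictly linear association. A self-contained sketch on invented aided-hearing and language values (not the study's data):

```python
from math import sqrt
from statistics import mean

def ranks(values):
    """1-based midranks: tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Invented aided-audibility gain (SII points) vs. language scores
gain = [0.10, 0.25, 0.15, 0.30, 0.20, 0.05]
lang = [88, 101, 95, 104, 92, 90]
rho = spearman_rho(gain, lang)
```

With no ties, this is equivalent to the familiar 1 − 6Σd²/(n(n² − 1)) shortcut; the rank-based Pearson form used here also handles ties correctly.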
Background Deficient vocabulary is a frequently reported symptom of developmental language impairment, but the nature of the deficit and its developmental course are not well documented. Aims We aimed to describe the nature of the deficit in terms of breadth and depth of vocabulary knowledge and to determine whether the nature and extent of the deficit change over the school years. Methods A total of 25,681 oral definitions produced by 177 children with developmental language impairment (LI) and 325 grade-mates with normally developing language (ND) in grades 2, 4, 8, and 10 were taken from an existing longitudinal database. We analyzed these for breadth by counting the number of words defined correctly and for depth by determining the amount of information in each correct definition. Via a linear mixed model, we determined whether breadth and depth varied with language diagnosis independent of nonverbal IQ, mothers' education level, race, gender, income, and (for depth only) word. Results Children with LI scored significantly lower than children with ND on breadth and depth of vocabulary knowledge in all grades. The extent of the deficit did not vary significantly across grades. Language diagnosis was an independent predictor of breadth and depth, and as strong a predictor as maternal education. For the LI group, growth in depth relative to breadth was slower than for the ND group. Conclusions Compared to their grade-mates, children with LI have fewer words in their vocabularies, and they have shallower knowledge of the words that are in their vocabularies. This deficit persists over developmental time.
Acoustic plus electric (electric-acoustic) speech processing has been successful in highlighting the important role of articulation information in consonant recognition in adults who have profound high-frequency hearing loss at frequencies greater than 1500 Hz and speech discrimination scores below 60%. Eighty-seven subjects were enrolled in an adult Hybrid multicenter Food and Drug Administration clinical trial. Immediate hearing preservation was accomplished in 85 of 87 subjects. Over time (3 months to 5 years), some hearing preservation was maintained in 91% of the group. Combined electric-acoustic processing enabled most of this group of volunteers to gain improved speech understanding, compared to their preoperative hearing with bilateral hearing aids. Most have preservation of low-frequency acoustic hearing within 15 dB of their preoperative pure-tone levels. Those with greater losses (>30 dB) also benefited from combined electric-acoustic speech processing. Postoperatively, in the electric-acoustic processing condition, loss of low-frequency hearing did not correlate with improvements in speech perception scores in quiet. Sixteen subjects were identified as poor performers in that they did not achieve a significant improvement through electric-acoustic processing. A multiple regression analysis determined that 91% of the variance in the poorly performing group could be explained by the preoperative speech recognition score and duration of deafness. Signal-to-noise ratios for speech understanding in noise improved by more than 9 dB in some individuals in the electric-acoustic processing condition. The relation between speech understanding in noise thresholds and residual low-frequency acoustic hearing was significant (r = 0.62; p < 0.05). The data suggest that, in general, the advantages gained for speech recognition in noise by preserving residual hearing exist unless the hearing loss approaches profound levels.
Preservation of residual low-frequency hearing should be considered when expanding candidate selection criteria for standard cochlear implants. Duration of profound high-frequency hearing loss appears to be an important variable when determining selection criteria for the Hybrid implant.
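The "91% of the variance explained" figure above is a multiple regression R². A minimal two-predictor ordinary-least-squares sketch on invented data (not the trial's data; the exact-fit example is chosen only to make the arithmetic checkable) shows how such an R² is computed from the normal equations:

```python
from statistics import mean

def ols_r2(X, y):
    """R^2 for y ~ intercept + predictors, via the normal equations.

    X is a list of predictor rows; an intercept column is prepended.
    Solves (A^T A) b = A^T y with Gaussian elimination, then computes
    R^2 = 1 - SS_res / SS_tot.
    """
    n, p = len(y), len(X[0]) + 1
    A = [[1.0] + list(row) for row in X]
    AtA = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(p)]
           for r in range(p)]
    Aty = [sum(A[i][r] * y[i] for i in range(n)) for r in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Aty[col], Aty[piv] = Aty[piv], Aty[col]
        for r in range(col + 1, p):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, p):
                AtA[r][c] -= f * AtA[col][c]
            Aty[r] -= f * Aty[col]
    b = [0.0] * p
    for r in range(p - 1, -1, -1):
        b[r] = (Aty[r] - sum(AtA[r][c] * b[c] for c in range(r + 1, p))) / AtA[r][r]
    yhat = [sum(bc * ac for bc, ac in zip(b, A[i])) for i in range(n)]
    my = mean(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Invented predictors (e.g., preoperative score, duration of deafness)
# and an outcome that happens to equal exactly 2 + 3*x1 - x2
X = [[1, 2], [2, 1], [3, 4], [4, 3], [5, 5]]
y = [3.0, 7.0, 7.0, 11.0, 12.0]
r2 = ols_r2(X, y)
```

Real data would of course yield R² < 1; the trial's reported value of 0.91 means the two predictors together accounted for 91% of the outcome variance within the poorly performing subgroup.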