Problem A key ingredient to academic success is being able to read. Deaf individuals have historically failed to develop literacy skills comparable to those of their normal-hearing peers, but early identification and cochlear implants have improved prospects that these children can learn to read at the levels of their peers. The goal of this study was to examine early, or emergent, literacy in these children. Method Twenty-seven deaf children with cochlear implants (CIs) who had just completed kindergarten were tested on emergent literacy, as well as on cognitive and linguistic skills that support emergent literacy, specifically ones involving phonological awareness, executive functioning, and oral language. Seventeen kindergartners with normal hearing (NH) and 8 with hearing loss who used hearing aids (HAs) served as controls. Outcomes were compared for these three groups of children, regression analyses were performed to see if predictor variables for emergent literacy differed for children with NH and those with CIs, and factors related to the early treatment of hearing loss and prosthesis configuration were examined for children with CIs. Results Performance of children with CIs was roughly one or more standard deviations below the mean performance of children with NH on all tasks except syllable counting, reading fluency, and rapid serial naming. Oral language skills explained more variance in emergent literacy for children with CIs than for children with NH. Age of first implant explained moderate amounts of variance for several measures. Having one or two CIs had no effect, but children who had some amount of bimodal experience outperformed children who had none on several measures. Conclusions Even deaf children who have benefited from early identification, intervention, and implantation are still at risk for problems with emergent literacy that could affect their academic success.
This finding means that intensive language support needs to continue through at least the early elementary grades. Also, a period of bimodal stimulation during the preschool years can help boost emergent literacy skills to some extent.
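The "variance explained" comparisons in the regression analyses above are typically quantified as R² from an ordinary-least-squares fit of the outcome on the predictor, computed separately for each group. A minimal sketch with simulated scores (the data below are illustrative, not the study's; only the group size of 27 is taken from the abstract):

```python
import numpy as np

def r_squared(x, y):
    """R^2 from an ordinary-least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(0)
n = 27  # size of the CI group; the scores themselves are simulated
oral_language = rng.normal(100, 15, n)
# Simulated emergent-literacy scores that depend partly on oral language
literacy = 0.6 * oral_language + rng.normal(0, 10, n)
print(f"variance explained by oral language: {r_squared(oral_language, literacy):.2f}")
```

Fitting the same model in each listener group and comparing the resulting R² values is one common way to ask whether a predictor matters more for one group than another.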
Purpose Several acoustic cues specify any single phonemic contrast. Nonetheless, adult native speakers of a language share weighting strategies, showing preferential attention to some properties over others. Cochlear implant (CI) signal processing disrupts the salience of some cues: in general, amplitude structure remains readily available, but spectral structure less so. This study asked how well speech recognition is supported if CI users shift attention to salient cues not weighted strongly by native speakers. Method Twenty adults with CIs participated. The /bɑ/-/wɑ/ contrast was used because spectral and amplitude structure vary in correlated fashion for this contrast. Normal-hearing adults weight the spectral cue strongly, but the amplitude cue negligibly. Three measurements were made: labeling decisions, spectral and amplitude discrimination, and word recognition. Results Outcomes varied across listeners: some weighted the spectral cue strongly, some weighted the amplitude cue, and some weighted neither. Spectral discrimination predicted spectral weighting. Spectral weighting explained the most variance in word recognition. Age of onset of hearing loss predicted spectral weighting but did not explain unique variance in word recognition. Conclusions The weighting strategies of listeners with normal hearing likely support speech recognition best, so efforts in implant design, fitting, and training should focus on developing those strategies.
Objective: This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children’s abilities to recognize speech in noise. Design: Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow. Study sample: Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs. Results: Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects. Conclusion: These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms.
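Head shadow, one of the auditory mechanisms examined above, is commonly quantified as spatial release from masking: the improvement in speech-reception threshold (SRT) when the noise source is moved away from the speech. A toy calculation with hypothetical thresholds (the dB values below are illustrative, not the study's):

```python
# Hypothetical speech-reception thresholds in dB SNR (illustrative values)
srt_colocated = -2.0   # speech and noise both in front
srt_separated = -7.5   # noise moved to one side
# A lower (more negative) SRT means better performance, so the benefit
# from spatial separation is the difference between the two thresholds.
head_shadow_benefit = srt_colocated - srt_separated
print(f"spatial release from masking: {head_shadow_benefit:.1f} dB")  # → 5.5 dB
```

Comparing this benefit across listener groups (NH, HA, one CI, two CIs) is how an abstract like this one can report that head shadow was reduced for children with HAs.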
Purpose Previous research has demonstrated that children weight the acoustic cues to many phonemic decisions differently than do adults and gradually shift those strategies as they gain language experience. However, that research has focused on spectral and duration cues rather than on amplitude cues. In the current study, the authors examined amplitude rise time (ART; an amplitude cue) and formant rise time (FRT; a spectral cue) in the /bɑ/–/wɑ/ manner contrast for adults and children, and related those speech decisions to outcomes of nonspeech discrimination tasks. Method Twenty adults and 30 children (ages 4–5 years) labeled natural and synthetic speech stimuli manipulated to vary ARTs and FRTs, and discriminated nonspeech analogs that varied only by ART in an AX paradigm. Results Three primary results were obtained. First, listeners in both age groups based speech labeling judgments on FRT, not on ART. Second, the fundamental frequency of the natural speech samples did not influence labeling judgments. Third, discrimination performance for the nonspeech stimuli did not predict how listeners would perform with the speech stimuli. Conclusion Even though both adults and children are sensitive to ART, it was not weighted in phonemic judgments by these typical listeners.
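Cue weights in labeling tasks like the two above are often estimated by regressing listeners' binary labels on the standardized cue values; larger coefficients indicate stronger weighting. A sketch with simulated responses (the cue names FRT and ART follow the abstract, but the simulated listener, data, and hand-rolled fitting code are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
# Standardized cue values for each trial: FRT (spectral), ART (amplitude)
frt = rng.normal(0, 1, n)
art = rng.normal(0, 1, n)
# Simulated listener who weights FRT heavily and ART only weakly
logit = 2.5 * frt + 0.3 * art
labels = rng.random(n) < 1 / (1 + np.exp(-logit))  # True = labeled /wa/

# Logistic regression fit by gradient ascent on the log-likelihood;
# the fitted coefficients index the relative weight given to each cue.
X = np.column_stack([np.ones(n), frt, art])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.5 * X.T @ (labels - p) / n
print(f"estimated cue weights: FRT={w[1]:.2f}, ART={w[2]:.2f}")
```

A listener who, like the adults and children in this study, bases judgments on FRT rather than ART would show a large FRT coefficient and a near-zero ART coefficient.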
In three experiments, we tested the hypothesis that children are more obliged than adults to fuse components of speech signals and asked whether the principle of harmonicity could explain the effect or whether it is, instead, due to children’s implementing speech-based mechanisms. Coherence masking protection (CMP) was used, which involves labeling a phonetically relevant formant (the target) presented in noise, either alone or in combination with a stable spectral band (the cosignal) that provides no additional information about phonetic identity and is well outside the critical band of the target. Adults and children (8 and 5 years old) heard stimuli that were either synthetic speech or hybrids consisting of sine wave targets and synthetic cosignals. The target and cosignal either shared a common harmonic structure or did not. An adaptive procedure located listeners’ thresholds for accurate labeling. Lower thresholds when the cosignal is present indicate CMP. Younger children demonstrated CMP effects that were both larger in magnitude and less susceptible to disruptions in harmonicity than those observed for adults. The conclusion was that children are obliged to integrate spectral components of speech signals, a perceptual strategy based on their recognition of when all components come from the same generator.
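Adaptive procedures of the kind described typically use a transformed up-down rule; for example, a 1-up/2-down staircase converges on the stimulus level yielding about 70.7% correct, and the threshold is estimated from the mean of the later reversal levels. A minimal simulation (the listener model and all parameter values are hypothetical, not taken from the study):

```python
import math
import random

def listener_correct(level, threshold, slope=1.0):
    """Toy psychometric function: P(correct) grows as level exceeds threshold."""
    return random.random() < 1 / (1 + math.exp(-slope * (level - threshold)))

def two_down_one_up(threshold=5.0, start=15.0, step=2.0, n_reversals=8):
    """1-up/2-down staircase: two consecutive correct responses lower the
    level, one error raises it, tracking ~70.7% correct. The estimate is
    the mean level at the later reversals (the first two are discarded)."""
    level, run, last_dir = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if listener_correct(level, threshold):
            run += 1
            if run < 2:
                continue          # need two in a row before stepping down
            run, direction = 0, -1
        else:
            run, direction = 0, +1
        if last_dir and direction != last_dir:
            reversals.append(level)  # direction change = reversal
        last_dir = direction
        level += direction * step
    return sum(reversals[2:]) / len(reversals[2:])

random.seed(1)
print(f"estimated threshold: {two_down_one_up():.1f}")
```

In a labeling-in-noise task like the CMP paradigm, "level" would be the signal-to-noise ratio, and a lower estimated threshold with the cosignal present indicates masking protection.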