Purpose To evaluate the family environments of children with cochlear implants and to examine relationships between family environment and post-implant language development and executive function. Method Forty-five families of children with cochlear implants completed a self-report family environment questionnaire (FES) and an inventory of executive function (BRIEF/BRIEF-P). Children’s receptive vocabulary (PPVT-4) and global language skills (PLS-4/CELF-4) were also evaluated. Results The family environments of children with cochlear implants differed from those of normal-hearing children, but not in clinically significant ways. Language development and executive function were found to be atypical, but not uncharacteristic of this clinical population. Families with higher levels of self-reported control had children with smaller vocabularies. Families reporting a higher emphasis on achievement had children with fewer executive function and working memory problems. Finally, families reporting a higher emphasis on organization had children with fewer problems related to inhibition. Conclusions Some of the variability in cochlear implantation outcomes, which have protracted periods of development, is related to family environment. Because family environment can be modified and enhanced by therapy or education, these preliminary findings hold promise for future work in helping families to create robust language-learning environments that maximize their child’s potential with a cochlear implant.
Objective Valid and reliable methods for assessing speech perception in toddlers are lacking in the field, leading to conspicuous gaps in understanding how speech perception develops and limited clinical tools for assessing sensory aid benefit in toddlers. The objective of this investigation was to evaluate speech-sound discrimination in toddlers using modifications to the Change/No-Change procedure. Methods Normal-hearing 2- and 3-year-olds’ discrimination of acoustically dissimilar (“easy”) and similar (“hard”) speech-sound contrasts was evaluated in a combined repeated measures and factorial design. Performance was measured in d’. Effects of contrast difficulty and age were examined, as was test-retest reliability, using repeated measures ANOVAs, planned post-hoc tests, and correlation analyses. Results The easy contrast (M=2.53) was discriminated better than the hard contrast (M=1.72) across all ages (p < .0001). The oldest group of children (M=3.13) discriminated the contrasts better than the youngest (M=1.04; p < .0001) and the mid-age children (M=2.20; p = .037), who in turn discriminated the contrasts better than the youngest children (p = .010). Test-retest reliability was excellent (r = .886, p < .0001). Almost 90% of the children met the teaching criterion. The vast majority demonstrated the ability to be tested with the modified procedure and discriminated the contrasts. The few who did not were 2.5 years of age and younger. Conclusions The modifications implemented resulted, at least preliminarily, in a procedure that is reliable and sensitive to contrast difficulty and age in this young group of children, suggesting that these modifications are appropriate for this age group.
With further development, the procedure holds promise for use in clinical populations who are believed to have core deficits in rapid phonological encoding, such as children with hearing loss or specific language impairment, children who are struggling to read, and second-language learners.
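The abstract above reports discrimination performance in d’, the signal-detection sensitivity index computed from hit and false-alarm rates in Change/No-Change trials. As a minimal sketch of how such a score is derived (the correction method and trial counts below are illustrative assumptions, not taken from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (adding 0.5 to each cell) is one common way
    to guard against rates of exactly 0 or 1, for which z is undefined;
    whether the study used this correction is an assumption here.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical session: 18 hits in 20 change trials, 4 false alarms
# in 20 no-change trials.
score = d_prime(hits=18, misses=2, false_alarms=4, correct_rejections=16)
print(round(score, 2))
```

A d’ of 0 indicates chance-level discrimination, so the group means reported above (e.g., M=2.53 for the easy contrast) reflect sensitivity well above chance.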
Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit.
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.
Purpose This study assessed the extent to which 6- to 8.5-month-old infants and 18- to 30-year-old adults detect and discriminate auditory syllables in noise better in the presence of visual speech than in auditory-only conditions. In addition, we examined whether visual cues to the onset and offset of the auditory signal account for this benefit. Method Sixty infants and 24 adults were randomly assigned to speech detection or discrimination tasks and were tested using a modified observer-based psychoacoustic procedure. Each participant completed 1–3 conditions: auditory-only, with visual speech, and with a visual signal that only cued the onset and offset of the auditory syllable. Results Mixed linear modeling indicated that infants and adults benefited from visual speech on both tasks. Adults relied on the onset–offset cue for detection, but the same cue did not improve their discrimination. The onset–offset cue benefited infants for both detection and discrimination. Whereas the onset–offset cue improved detection similarly for infants and adults, the full visual speech signal benefited infants to a lesser extent than adults on the discrimination task. Conclusions These results suggest that infants' use of visual onset–offset cues is mature, but their ability to use more complex visual speech cues is still developing. Additional research is needed to explore differences in audiovisual enhancement (a) of speech discrimination across speech targets and (b) with increasingly complex tasks and stimuli.