Infants' representations of the sound patterns of words were explored by examining the effects of talker variability on the recognition of words in fluent speech. Infants were familiarized with isolated words (e.g., cup and dog) from 1 talker and then heard 4 passages produced by another talker, 2 of which included the familiarized words. At 7.5 months of age, infants attended longer to passages with the familiar words for materials produced by 2 female talkers or 2 male talkers but not for materials produced by a male and a female talker. These findings suggest a strong role for talker-voice similarity in infants' ability to generalize word tokens. By 10.5 months, infants could generalize different instances of the same word across talkers of the opposite sex. One implication of the present results is that infants' initial representations of the sound structure of words include not only phonetic information but also indexical properties relating to the vocal characteristics of particular talkers.
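To make the familiarization-and-test logic above concrete, the sketch below compares each infant's mean listening time to passages containing the familiarized words against control passages. It is a minimal illustration only: the listening-time values, the variable names, and the use of a paired t-test are assumptions, not the study's actual data or analysis.

```python
# Minimal sketch of a headturn-preference-style analysis, assuming each
# infant contributes a mean listening time (ms) to passages containing the
# familiarized words and to control passages. All values are illustrative.
import numpy as np
from scipy import stats

# Hypothetical per-infant mean listening times (ms), one value per infant.
familiar_ms = np.array([7200, 6800, 7900, 7100, 6500, 7600])  # passages with the familiarized words
control_ms  = np.array([6400, 6600, 7100, 6300, 6200, 6900])  # passages without them

# Longer mean listening times to familiar-word passages are taken as
# evidence that infants recognized the familiarized words in fluent speech.
t, p = stats.ttest_rel(familiar_ms, control_ms)
print(f"mean familiar = {familiar_ms.mean():.0f} ms, "
      f"mean control = {control_ms.mean():.0f} ms, t = {t:.2f}, p = {p:.3f}")
```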
Infant-directed speech (IDS), compared with adult-directed speech (ADS), is characterized by a slower rate, a higher fundamental frequency, greater pitch variations, longer pauses, repetitive intonational structures, and shorter sentences. Despite studies on the properties of IDS, there has been no direct demonstration of its effects on word learning in infants. This study examined whether 21- and 27-month-old children learned novel words better in IDS than in ADS. Two major findings emerged. First, 21-month-olds reliably learned words only in the IDS condition, although children with relatively larger vocabularies than their peers learned in the ADS condition as well. Second, 27-month-olds reliably learned the words in the ADS condition. These results support the implicitly held assumption that IDS does in fact facilitate word mapping at the start of lexical acquisition and that its influence wanes as language development proceeds.
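A simple way to picture the learning test in each register is sketched below: per-child learning scores (assumed here to be the proportion of looking to the named target object, chance = 0.5) are tested against chance separately for the IDS and ADS conditions. The scoring measure, the values, and the statistical test are illustrative assumptions rather than the study's procedure.

```python
# Illustrative sketch of testing whether a group of children learned the
# novel words in a given speech-register condition. The "learning score"
# (proportion of looking to the named target, chance = 0.5) and all values
# are assumptions, not the study's data.
import numpy as np
from scipy import stats

ids_scores = np.array([0.58, 0.62, 0.55, 0.60, 0.57, 0.63])  # infant-directed speech
ads_scores = np.array([0.51, 0.49, 0.53, 0.50, 0.52, 0.48])  # adult-directed speech

for label, scores in (("IDS", ids_scores), ("ADS", ads_scores)):
    t, p = stats.ttest_1samp(scores, popmean=0.5)  # above-chance test
    print(f"{label}: mean = {scores.mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```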
Assessing speech discrimination skills in individual infants from clinical populations (e.g., infants with hearing impairment) has important diagnostic value. However, most infant speech discrimination paradigms have been designed to test group effects rather than individual differences. Other procedures suffer from high attrition rates. In this study, we developed 4 variants of the Visual Habituation Procedure (VHP) and assessed their robustness in detecting individual 9-month-old infants' ability to discriminate highly contrastive nonwords. In each variant, infants were first habituated to audiovisual repetitions of a nonword (seepug) before entering the test phase. The test phase in Experiment 1 (extended variant) consisted of 7 old trials (seepug) and 7 novel trials (boodup) in alternating order. In Experiment 2, we tested 3 novel variants that incorporated methodological features of other behavioral paradigms. For the oddity variant, only 4 novel trials and 10 old trials were used. The stimulus alternation variant was identical to the extended variant except that novel trials presented alternating repetitions of the two nonwords (seepug and boodup) rather than the novel nonword alone.
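Because the goal is detecting discrimination in individual infants, the sketch below shows one plausible way to evaluate a single infant's data from the extended variant: compare looking times on the 7 novel trials against the 7 old trials. The looking times and the choice of a one-tailed Mann-Whitney test are assumptions for illustration, not the statistic used in the study.

```python
# Sketch of an individual-infant discrimination test for the extended
# variant: 7 "old" (seepug) and 7 "novel" (boodup) test trials presented
# in alternation. Looking times (s) below are invented.
import numpy as np
from scipy import stats

old_trials   = np.array([4.1, 3.8, 3.5, 3.2, 3.0, 2.9, 2.7])   # seepug
novel_trials = np.array([6.0, 5.4, 5.1, 4.8, 4.3, 4.0, 3.9])   # boodup

# Longer looking on novel trials suggests this infant discriminated the nonwords.
u, p = stats.mannwhitneyu(novel_trials, old_trials, alternative="greater")
print(f"novel > old looking: U = {u:.1f}, one-tailed p = {p:.3f}")
```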
Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later-implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children at high risk for poor vocabulary development.
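The IPLP learning measure can be illustrated with a short sketch: for each child, the proportion of looking directed to the target object when it is named is compared with that child's baseline preference for the same object. The window definitions, values, and paired t-test are assumptions, not the study's exact scoring procedure.

```python
# Minimal IPLP-style sketch: per-child proportion of looking to the named
# target during the test window versus a pre-labeling baseline window.
# All values are illustrative.
import numpy as np
from scipy import stats

baseline_prop = np.array([0.48, 0.52, 0.50, 0.47, 0.51, 0.49])  # before labeling
naming_prop   = np.array([0.61, 0.58, 0.55, 0.63, 0.54, 0.60])  # after the novel label is spoken

# An increase in looking to the target after labeling is taken as evidence
# that the novel-word/novel-object pairing was learned.
t, p = stats.ttest_rel(naming_prop, baseline_prop)
print(f"mean increase in looking to target = {np.mean(naming_prop - baseline_prop):.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```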
Objective: We adapted a behavioral procedure that has been used extensively with normal-hearing (NH) infants, the visual habituation (VH) procedure, to assess deaf infants' discrimination and attention to speech.
Methods: Twenty-four NH 6-month-olds, 24 NH 9-month-olds, and 16 deaf infants at various ages before and following cochlear implantation (CI) were tested in a sound booth on their caregiver's lap in front of a TV monitor. During the habituation phase, each infant was presented with a repeating speech sound (e.g., 'hop hop hop') paired with a visual display of a checkerboard pattern on half of the trials ('sound trials') and only the visual display on the other half ('silent trials'). When the infant's looking time decreased and reached a habituation criterion, a test phase began. This consisted of two trials: an 'old trial' that was identical to the 'sound trials' and a 'novel trial' that consisted of a different repeating speech sound (e.g., 'ahhh') paired with the same checkerboard pattern.
Results: During the habituation phase, NH infants looked significantly longer during the sound trials than during the silent trials. However, deaf infants who had received cochlear implants (CIs) displayed a much weaker preference for the sound trials. On the other hand, both NH infants and deaf infants with CIs attended significantly longer to the visual display during the novel trial than during the old trial, suggesting that they were able to discriminate the speech patterns. Before receiving CIs, deaf infants did not show any preferences.
Conclusions: Taken together, the findings suggest that deaf infants who receive CIs are able to detect and discriminate some speech patterns. However, their overall attention to speech sounds may be less than NH infants'. Attention to speech may impact other aspects of speech perception and spoken language development, such as segmenting words from fluent speech and learning novel words. Implications of the effects of early auditory deprivation and age at CI on speech perception and language development are discussed.
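The habituation criterion mentioned in the Methods (looking time decreasing until a criterion is reached) can be pictured as a simple sliding-window rule. The 3-trial window and 50% threshold below are common choices but are assumptions here; the study's actual criterion may differ.

```python
# Sketch of a common habituation criterion: end the habituation phase once
# the mean looking time over a sliding window of trials falls below a fixed
# proportion of the mean over the first trials. Window size and threshold
# are illustrative assumptions.
def habituated(looking_times, window=3, threshold=0.5):
    """Return the trial number at which habituation is reached, or None."""
    if len(looking_times) < 2 * window:
        return None
    baseline = sum(looking_times[:window]) / window
    for i in range(window, len(looking_times) - window + 1):
        recent = sum(looking_times[i:i + window]) / window
        if recent < threshold * baseline:
            return i + window  # number of trials completed when the criterion was met
    return None

# Example: looking times (s) per habituation trial for one hypothetical infant.
trials = [9.0, 8.5, 8.0, 6.5, 5.0, 3.9, 3.5, 3.0]
print(habituated(trials))  # -> 7 (criterion reached after the 7th trial)
```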
Research suggests that non-linguistic sequence learning abilities are an important contributor to language development (Conway, Bauernschmidt, Huang, & Pisoni, 2010). The current study investigated visual sequence learning as a possible predictor of vocabulary development in infants. Fifty-eight 8.5-month-old infants were presented with a three-location spatiotemporal sequence of multi-colored geometric shapes. Early language skills were assessed using the MacArthur-Bates CDI. Analyses of children’s reaction times to the stimuli suggest that the extent to which infants demonstrated learning was significantly correlated with their vocabulary comprehension at the time of test and with their gestural comprehension abilities 5 months later. These findings suggest that visual sequence learning may have both domain-general and domain-specific associations with language learning.
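The correlational logic of this study can be sketched in a few lines: each infant receives a sequence learning score (assumed here to be the reaction-time advantage for predictable over unpredictable locations), which is then correlated with CDI vocabulary comprehension. The scoring formula and all numbers are illustrative assumptions, not the study's data.

```python
# Illustrative sketch: correlate a per-infant visual sequence learning score
# with concurrent CDI vocabulary comprehension. All values are invented.
import numpy as np
from scipy import stats

learning_score    = np.array([35, 10, 55, 20, 48, 5, 60, 25])    # ms RT advantage for predictable locations
cdi_comprehension = np.array([42, 18, 75, 30, 66, 12, 80, 35])   # words understood (CDI)

r, p = stats.pearsonr(learning_score, cdi_comprehension)
print(f"r = {r:.2f}, p = {p:.3f}")
```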
Objective: Unexplained variability in speech recognition outcomes among postlingually deafened adults with cochlear implants (CIs) is an enormous clinical and research barrier to progress. This variability is only partially explained by patient factors (e.g., duration of deafness) and auditory sensitivity (e.g., spectral and temporal resolution). This study sought to determine whether non-auditory neurocognitive skills could explain speech recognition variability exhibited by adult CI users.
Study Design: Thirty postlingually deafened adults with CIs and thirty age-matched normal-hearing (NH) controls were enrolled.
Methods: Participants were assessed for recognition of words in sentences in noise and on several non-auditory measures of neurocognitive function. These non-auditory tasks assessed global intelligence (problem-solving), controlled fluency, working memory, and inhibition-concentration abilities.
Results: For CI users, faster response times during a non-auditory task of inhibition-concentration predicted better recognition of sentences in noise; however, similar effects were not evident for NH listeners.
Conclusions: Findings from this study suggest that inhibition-concentration skills play a role in speech recognition for CI users, but less so for NH listeners. Further research will be required to elucidate this role and its potential as a novel target for intervention.
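The group-specific relationship reported above can be pictured with a short sketch: within each group, response times on the inhibition-concentration task are correlated with sentence recognition in noise. Faster (smaller) RTs predicting better recognition would appear as a negative correlation. All data below are invented for illustration.

```python
# Sketch of within-group correlations between non-auditory inhibition-
# concentration response times and sentence recognition in noise.
# All values are illustrative assumptions.
import numpy as np
from scipy import stats

groups = {
    "CI": (np.array([620, 700, 540, 760, 580, 650]),   # inhibition task RT (ms)
           np.array([72,  55,  81,  48,  78,  63])),   # sentences in noise (% correct)
    "NH": (np.array([500, 530, 470, 560, 510, 490]),
           np.array([92,  90,  94,  88,  91,  93])),
}

for label, (rt, score) in groups.items():
    r, p = stats.pearsonr(rt, score)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")
```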