The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. 
Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

language acquisition | perception-production | infancy

The acquisition of language, arguably our most defining human capacity, relies on the seamless exchange of information between production and perception. In their seminal work, Eimas et al. (1) found that from 1 mo of age, human infants are equipped with perceptual sensitivities that enable them to discriminate speech sounds according to the boundaries used in human languages (see refs. 2, 3 for reviews of subsequent work). Within the first year, infant speech perception sensitivities adapt to the ambient language: The process of perceptual narrowing results in a decline in discrimination of nonnative distinctions (4, 5) and an improvement in the discrimination of native speech sound contrasts (6, 7). A similar trajectory is seen for audiovisual speech perception: Very young infants can match heard and seen speech (8-10), but by 9-10 mo, they do so reliably only for native speech sounds (11). Development of speech production progresses similarly. Although the infant vocal tract is anatomically immature and lacks the neuromuscular control of the adult vocal tract (12), the ability to produce communicative sounds (cries) is evident at birth (13, 14) and already reflects characteristics of the lang...
The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six and nine months, but not 11 months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds, but not for nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.
Does the acoustic input for bilingual infants equal the conjunction of the input heard by monolinguals of each separate language? The present letter tackles this question, focusing on maternal speech addressed to 11-month-old infants, on the cusp of perceptual attunement. The acoustic characteristics of the point vowels /a,i,u/ were measured in the spontaneous infant-directed speech of French-English bilingual mothers, as well as in the speech of French and English monolingual mothers. Bilingual caregivers produced their two languages with acoustic prosodic separation equal to that of the monolinguals, while also conveying distinct spectral characteristics of the point vowels in their two languages.
At the end of the target article, Keven & Akins (K&A) put forward a challenge to the developmental psychology community to consider the development of complex psychological processes - in particular, intermodal infant perception - across different levels of analysis. We take up that challenge and consider the possibility that early emerging stereotypies might help explain the foundations of the link between speech perception and speech production.
Infants are able to match seen and heard speech even in non-native languages, and familiarization to audiovisual speech appears to affect subsequent auditory-only discrimination of non-native speech sounds (Danielson et al., 2013; 2014). However, the robustness of these behaviors appears to change rapidly within the first year of life. In the current set of studies, conducted with six-, nine-, and 10-month-old English-learning infants, we examine the developmental trajectory of audiovisual speech perception of non-native speech sounds. First, we show that the tendency to detect a mismatch between heard and seen speech sounds in a non-native language changes across this short period in development, in tandem with the trajectory of auditory perceptual narrowing (Werker & Tees, 1984; Kuhl et al., 1992; inter alia). Furthermore, we demonstrate that infants' familiarization to matching and mismatching audiovisual speech affects their auditory speech perception differently at various ages. While six-month-old infants' auditory speech perception appears to be malleable in the face of prior audiovisual familiarization, this malleability declines with age. The current set of studies is among the first to utilize traditional looking-time measurements while also employing pupillometry as a correlate of infants' acoustic change detection (Hochmann & Papeo, 2014).
Infants are sensitive to the correspondence between visual and auditory speech. Infants exhibit the McGurk effect, and matching audiovisual information may facilitate discrimination of similar consonant sounds in an infant's native language (e.g., Teinonen et al., 2008). However, because most existing research in audiovisual speech perception has been conducted using native speech sounds with infants in their first year of life, little work has explored whether this link between the auditory and visual modalities of speech perception arises from experience with the native language. In the present set of studies, English-learning six- and ten-month-old infants are tested for discrimination of a non-English speech contrast following familiarization with matching and mismatching audiovisual speech. Furthermore, the visual fixation behaviors of the two age groups are compared between the two conditions. It has been demonstrated that infants in the younger age range attend preferentially to the eye region when viewing matched audiovisual speech and that infants in the older age range temporarily attend to the mouth region (Lewkowicz & Hansen-Tift, 2012); here, we examine deviations in this behavior for matching and mismatching non-native speech, a link previously explored only in the native language (Tomalski et al., 2013).