Abstract: It has been proposed that speech is specified by the eye, the ear, and even the skin. Kuhl and Meltzoff (1984) showed that 4-month-olds could lip-read to an extent. Given the age of the infants, it was not clear whether this was a learned skill or a by-product of the primary auditory process. This paper presents evidence that neonate infants (less than 33 h old) show virtually the same pattern of intermodal interaction as 4-month-olds. Since they are neonates, it is unlikely that learning was involved. The re…
“…When presented with two side-by-side images of the same woman's face articulating "ee" and "ouu" respectively, infants of 4 months (Kuhl & Meltzoff 1982) and 2 months (Patterson & Werker 2003), and possibly even newborns (Aldridge et al 1999), look longer to the side that is articulating the sound that they hear. Infants this young can also match heard and seen consonants (MacKain et al 1983) and do so best when the matching face is on the right side, indicating involvement of the left hemisphere language areas.…”
Section: Audiovisual Matching and Integration
A continuing debate in language acquisition research is whether there are critical periods (CPs) in development during which the system is most responsive to environmental input. Recent advances in neurobiology provide a mechanistic explanation of CPs, with the balance between excitatory and inhibitory processes establishing the onset and molecular brakes establishing the offset of windows of plasticity. In this article, we review the literature on human speech perception development within the context of this CP model, highlighting research that reveals the interplay of maturational and experiential influences at key junctures in development and presenting paradigmatic examples testing CP models in human subjects. We conclude with a discussion of how a mechanistic understanding of CP processes changes the nature of the debate: The question no longer is, "Are there CPs?" but rather what processes open them, keep them open, close them, and allow them to be reopened.
“…Similar to speech sounds and letters, optimized AV speech integration develops over the course of many years (McGurk & MacDonald, 1976; Ross et al, 2011; Sekiyama & Burnham, 2008). However, this process begins much earlier than reading, with some level of sensitivity to the congruency between the sounds of certain vowels and their corresponding articulations already present in infants as young as 2 months (Patterson & Werker, 2003) and even, it has been argued, in newborns (Aldridge, Braga, Walton, & Bower, 1999). We turn now to this literature, while keeping in mind that the acquisition of these multisensory associations might be considerably easier because 1) the speech sounds and mouth gestures are causally related, 2) audio-visual speech is encountered beginning in infancy and on a very regular basis, and 3) the learning of these relationships is largely implicit rather than explicit.…”
Section: Multisensory Processing, Reading and Dyslexia
Two sensory systems are intrinsic to learning to read. Written words enter the brain through the visual system and associated sounds through the auditory system. The task before the beginning reader is quite basic. She must learn correspondences between orthographic tokens and phonemic utterances, and she must do this to the point that there is seamless automatic ‘connection’ between these sensorially distinct units of language. It is self-evident then that learning to read requires formation of cross-sensory associations to the point that deeply encoded multisensory representations are attained. While the majority of individuals manage this task to a high degree of expertise, some struggle to attain even rudimentary capabilities. Why do dyslexic individuals, who learn well in myriad other domains, fail at this particular task? Here, we examine the literature as it pertains to multisensory processing in dyslexia. We find substantial support for multisensory deficits in dyslexia, and make the case that to fully understand its neurological basis, it will be necessary to thoroughly probe the integrity of auditory-visual integration mechanisms.
“…Recently, such auditory-visual vowel matching ability has been found for 2-month-old infants (Patterson & Werker, 2003), and there also is some evidence for auditory-visual matching in newborns (Aldridge, Braga, Walton, & Bower, 1999). While these studies suggest that auditory-visual matching appears early in development, both experiential and maturational influences are nevertheless evident.…”
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba], visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials: [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information.