The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6-, and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ ability to match native (German) and non-native (French) fluent speech across modalities was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native-language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, thereby providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech, indicating a facilitating effect of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that the audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results are discussed with regard to multisensory perceptual narrowing during the first year of life.
Unilateral spatial neglect is a disabling condition frequently occurring after stroke. People with neglect suffer from various spatial deficits in several modalities, which in many cases impair everyday functioning. A successful treatment is yet to be found. Several techniques have been proposed in the last decades, but only a few showed long-lasting effects and none could completely rehabilitate the condition. Diagnostic methods of neglect could be improved as well. The disorder is normally diagnosed with pen-and-paper methods, which generally do not assess patients in everyday tasks and do not address some forms of the disorder. Recently, promising new methods based on virtual reality have emerged. Virtual reality technologies hold great opportunities for the development of effective assessment and treatment techniques for neglect because they provide rich, multimodal, and highly controllable environments. In order to stimulate advancements in this domain, we present a review and an analysis of the current work. We describe past and ongoing research of virtual reality applications for unilateral neglect and discuss the existing problems and new directions for development.
One of the most salient social categories conveyed by human faces and voices is gender. We investigated the developmental emergence of the ability to perceive the coherence of auditory and visual attributes of gender in 6- and 9-month-old infants. Infants viewed two side-by-side video clips of a man and a woman singing a nursery rhyme and heard a synchronous male or female soundtrack. Results showed that 6-month-old infants did not match the audible and visible attributes of gender, and 9-month-old infants matched only female faces and voices. These findings indicate that the ability to perceive the multisensory coherence of gender emerges relatively late in infancy and that it reflects the greater experience that most infants have with female faces and voices.
Early in life, infants possess an effective face-processing system that becomes specialized according to the faces present in their environment. Infants are also exposed to the voices and sounds of caregivers. Previous studies have found that face–voice associations become progressively more tuned to the types of association most prevalent in the environment. The present study investigated whether 6-month-old infants associate own-race faces with their native language and other-race faces with a non-native language. In a habituation paradigm, infants were presented with pictures of own- and other-race faces simultaneously, paired with either a native or a non-native language. Results indicate that 6-month-olds are able to match other-race faces to a non-native language.