Infants learn language at an incredible speed, and one of the first steps in this journey is learning the basic sound units of their native language. It is widely thought that caregivers facilitate this task by hyperarticulating when speaking to their infants. Using state-of-the-art speech technology, we addressed this key theoretical question: Are sound categories clearer in infant-directed speech than in adult-directed speech? A comprehensive examination of sound contrasts in a large corpus of recorded, spontaneous Japanese speech demonstrates a small but significant tendency for contrasts in infant-directed speech to be less clear than those in adult-directed speech. This finding runs contrary to the idea that caregivers actively enhance phonetic categories in infant-directed speech. These results suggest that to be plausible, theories of infants' language acquisition must posit an ability to learn from noisy data.
Adult listeners systematically associate certain speech sounds with round or spiky shapes, a sound-symbolic phenomenon known as the "bouba-kiki effect." In this study, we investigate the respective influences of consonants and vowels in this phenomenon. French participants were asked to match auditorily presented pseudowords with one of two visually presented shapes, one round and one spiky. The pseudowords were created by crossing either two consonant pairs with a wide range of vowels (experiments 1 and 2) or two vowel pairs with a wide range of consonants (experiment 3). Analyses showed that consonants have a greater influence than vowels in the bouba-kiki effect. Importantly, this asymmetry cannot be due to an onset bias, as a strong consonantal influence is found with both CVCV (experiment 1) and VCV (experiment 2) stimuli. We discuss these results in terms of the differential roles of consonants and vowels in speech perception.
Previous research with artificial language learning paradigms has shown that infants are sensitive to statistical cues to word boundaries (Saffran, Aslin & Newport, 1996) and that they can use these cues to extract word-like units (Saffran, 2001). However, it is unknown whether infants use statistical information to construct a receptive lexicon when acquiring their native language. In order to investigate this issue, we rely on the fact that, besides real words, a statistical algorithm also extracts sound sequences that are highly frequent in infant-directed speech but constitute nonwords. In three experiments, we use a preferential listening paradigm to test French-learning 11-month-old infants' recognition of highly frequent disyllabic sequences from their native language. In Experiments 1 and 2, we use nonword stimuli and find that infants listen longer to high-frequency than to low-frequency sequences. In Experiment 3, we compare high-frequency nonwords to real words in the same frequency range, and find that infants show no preference. Thus, at 11 months, French-learning infants recognize highly frequent sound sequences from their native language but fail to differentiate between words and nonwords among these sequences. These results are evidence that infants have used statistical information to extract word candidates from their input and stored them in a 'protolexicon', containing both words and nonwords.
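To make the kind of statistical segmentation referenced above concrete, here is a minimal illustrative sketch (not the authors' actual algorithm): it computes transitional probabilities (TPs) between adjacent syllables and places word boundaries at local TP minima, in the spirit of Saffran et al. (1996). The syllable stream and the two "words" (bidaku, padoti) are toy examples; all function names and the minimum-detection rule are assumptions for illustration.

```python
# Illustrative sketch of TP-based word segmentation (Saffran-style).
# The stream, words, and boundary rule are toy assumptions, not the
# authors' method.
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = count(xy) / count(x), over adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps):
    """Place a boundary wherever the TP dips below both neighbors."""
    values = [tps[(a, b)] for a, b in zip(syllables, syllables[1:])]
    words, start = [], 0
    for i in range(1, len(values) - 1):
        if values[i] < values[i - 1] and values[i] < values[i + 1]:
            words.append(syllables[start:i + 1])  # word ends at syllable i
            start = i + 1
    words.append(syllables[start:])
    return ["".join(w) for w in words]

# Toy familiarization stream built from two made-up words in varying order:
# bidaku and padoti. Within-word TPs are 1.0; between-word TPs are lower.
stream = "bi da ku bi da ku pa do ti pa do ti bi da ku pa do ti".split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))
# → ['bidaku', 'bidaku', 'padoti', 'padoti', 'bidaku', 'padoti']
```

Note that such an algorithm segments any sufficiently frequent syllable sequence, whether or not it is a real word of the language, which is precisely why high-frequency nonwords can end up in the extracted inventory.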