Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations. Addressing these concerns, we report on a large-scale, multisite study aimed at (a) assessing the overall replicability of a single theoretically important phenomenon and (b) examining methodological, cultural, and developmental moderators. We focus on infants’ preference for infant-directed speech (IDS) over adult-directed speech (ADS). Stimuli of mothers speaking to their infants and to an adult in North American English were created using seminaturalistic laboratory-based audio recordings. Infants’ relative preference for IDS and ADS was assessed across 67 laboratories in North America, Europe, Australia, and Asia using the three common methods for measuring infants’ discrimination (head-turn preference, central fixation, and eye tracking). The overall meta-analytic effect size (Cohen’s d) was 0.35, 95% confidence interval = [0.29, 0.42], which was reliably above zero but smaller than the meta-analytic mean computed from previous literature (0.67). The IDS preference was significantly stronger in older children, in those children for whom the stimuli matched their native language and dialect, and in data from labs using the head-turn preference procedure. Together, these findings replicate the IDS preference but suggest that its magnitude is modulated by development, native-language experience, and testing procedure.
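The pooled effect size reported above comes from meta-analytically combining per-lab estimates. As a minimal sketch of how such pooling works, the following fixed-effect inverse-variance computation combines several Cohen's d values into one weighted estimate with a 95% confidence interval. The per-lab values and sample sizes below are purely illustrative, not the actual ManyBabies data, and the variance formula is a standard large-sample approximation, not necessarily the exact model used in the study.

```python
import math

def pooled_effect(ds, ns):
    """Fixed-effect inverse-variance pooling of Cohen's d values.

    ds: per-lab effect sizes (Cohen's d)
    ns: per-lab sample sizes
    """
    weights = []
    for d, n in zip(ds, ns):
        # Common large-sample approximation to the sampling variance of d
        # for a one-sample design: var(d) ~ 1/n + d^2 / (2n)
        var = 1.0 / n + d * d / (2.0 * n)
        weights.append(1.0 / var)
    d_pooled = sum(w * d for w, d in zip(weights, ds)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    # 95% CI from the normal approximation
    return d_pooled, (d_pooled - 1.96 * se, d_pooled + 1.96 * se)

# Illustrative (made-up) per-lab effect sizes and sample sizes
d_hat, (lo, hi) = pooled_effect([0.2, 0.4, 0.5], [30, 25, 40])
```

Labs with larger samples (smaller sampling variance) receive proportionally more weight, which is why a multisite pooled estimate can be far more precise than any single lab's result.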
In bilingual language environments, infants and toddlers listen to two separate languages during the same key years in which monolingual children listen to just one, and bilinguals rarely learn each of their two languages at the same rate. Learning to understand language requires them to cope with challenges not found in monolingual input, notably the use of two languages within the same utterance (e.g., Do you like the perro? or ¿Te gusta el doggy?). For bilinguals of all ages, switching between two languages can reduce the efficiency of real‐time language processing. But language switching is a dynamic phenomenon in bilingual environments, presenting the young learner with many junctures where comprehension can be derailed or even supported. In this study, we tested 20 Spanish–English bilingual toddlers (18 to 30 months old) who varied substantially in language dominance. Toddlers’ eye movements were monitored as they looked at familiar objects and listened to single‐language and mixed‐language sentences in both of their languages. We found asymmetrical switch costs when toddlers were tested in their dominant versus non‐dominant language, and critically, they benefited from hearing nouns produced in their dominant language, independent of switching. While bilingualism does present unique challenges, our results suggest a unified picture of early monolingual and bilingual learning. Just as for monolinguals, experience shapes bilingual toddlers’ word knowledge, and with more robust representations, toddlers are better able to recognize words in diverse sentences.
Accented speech poses a challenge for listeners, particularly those with limited knowledge of their language. In a series of studies, we explored the possibility that experience with variability, specifically the variability provided by multiple accents, would facilitate infants' comprehension of speech produced with an unfamiliar accent. Fifteen- and 18-month-old American-English-learning infants were exposed to brief passages of multi-talker speech and subsequently tested on their ability to distinguish between real, familiar words and nonsense words, produced in either their native accent or an unfamiliar (British) accent. Exposure passages were produced in a familiar (American) accent, a single unfamiliar (British) accent, or a variety of novel accents (Australian, Southern, Indian). While 15-month-olds successfully recognized real words spoken in a familiar accent, they never demonstrated comprehension of English words produced in the unfamiliar accent. Eighteen-month-olds also failed to recognize English words spoken in the unfamiliar accent after exposure to the familiar or single unfamiliar accent. However, they succeeded after exposure to multiple unfamiliar accents, suggesting that as they get older, infants are better able to exploit the cues provided by variable speech. Increased variability across multiple dimensions can be advantageous for young listeners.
Infants' looking behaviors are often used for measuring attention, real‐time processing, and learning—often using low‐resolution videos. Despite the ubiquity of gaze‐related methods in developmental science, current analysis techniques usually involve laborious post hoc coding, imprecise real‐time coding, or expensive eye trackers that may increase data loss and require a calibration phase. As an alternative, we propose using computer vision methods to perform automatic gaze estimation from low‐resolution videos. At the core of our approach is a neural network that classifies gaze directions in real time. We compared our method, called iCatcher, to manually annotated videos from a prior study in which infants looked at one of two pictures on a screen. We demonstrated that the accuracy of iCatcher approximates that of human annotators and that it replicates the prior study's results. Our method is publicly available as an open‐source repository at https://github.com/yoterel/iCatcher.
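A system like the one described classifies gaze direction frame by frame, and per-frame classifier output is typically noisy. The sketch below shows one generic post-processing idea, a sliding-window majority vote over frame-level labels; this is an assumption-laden illustration of the general approach, not iCatcher's actual pipeline (see the linked repository for the real implementation).

```python
from collections import Counter

def smooth_gaze(labels, window=5):
    """Sliding-window majority vote over per-frame gaze labels.

    labels: per-frame classifications, e.g. "left" / "right" / "away"
    window: odd window size; each frame takes the modal label of its
            surrounding window, suppressing one-frame classifier flickers
    """
    half = window // 2
    out = []
    for i in range(len(labels)):
        chunk = labels[max(0, i - half): i + half + 1]
        out.append(Counter(chunk).most_common(1)[0][0])
    return out

# A spurious one-frame "right" and "away" get voted out by their neighbors
frames = ["left", "left", "right", "left", "left", "away", "left"]
smoothed = smooth_gaze(frames)
```

Temporal smoothing like this trades a little latency for stability, which matters when looking times are aggregated over trials lasting many seconds.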
Recent research has begun to explore individual differences in statistical learning and how those differences may relate to other cognitive abilities, particularly language learning. In the present research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, six months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, while both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli.
From the earliest months of life, infants prefer listening to and learn better from infant-directed speech (IDS) compared with adult-directed speech (ADS). Yet IDS differs within communities, across languages, and across cultures, both in form and in prevalence. This large-scale, multisite study used the diversity of bilingual infant experiences to explore the impact of different types of linguistic experience on infants’ IDS preference. As part of the multilab ManyBabies 1 project, we compared preference for North American English (NAE) IDS in lab-matched samples of 333 bilingual and 384 monolingual infants tested in 17 labs in seven countries. The tested infants were in two age groups: 6 to 9 months and 12 to 15 months. We found that bilingual and monolingual infants both preferred IDS to ADS, and the two groups did not differ in terms of the overall magnitude of this preference. However, among bilingual infants who were acquiring NAE as a native language, greater exposure to NAE was associated with a stronger IDS preference. These findings extend the previous finding from ManyBabies 1 that monolinguals learning NAE as a native language showed a stronger IDS preference than infants unexposed to NAE. Together, our findings indicate that IDS preference likely makes similar contributions to monolingual and bilingual development, and that infants are exquisitely sensitive to the nature and frequency of different types of language input in their early environments.
Learning always happens from input that contains multiple structures and multiple sources of variability. Though infants possess learning mechanisms to locate structure in the world, lab-based experiments have rarely probed how infants contend with input that contains many different structures and cues. Two experiments explored infants’ use of two naturally occurring sources of variability – different sounds and different people – to detect regularities in language. Monolingual infants (9–10 months) heard a male and female talker produce two different speech streams, one of which followed a deterministic pattern (e.g., AAB, le-le-di) and one of which did not. For half of the infants, each speaker produced only one of the streams; for the other half of infants, each speaker produced 50% of each stream. In Experiment 1, each stream consisted of distinct sounds, and infants successfully demonstrated learning regardless of the correspondence between speaker and stream. In Experiment 2, each stream consisted of the same sounds, and infants failed to show learning, even when speakers provided a perfect cue for separating each stream. Thus, monolingual infants can learn in the presence of multiple speech streams, but these experiments suggest that infants may rely more on sound-based rather than speaker-based distinctions when breaking into the structure of incoming information. This selective use of some cues over others highlights infants’ ability to adaptively focus on distinctions that are most likely to be useful as they sort through their inherently multidimensional surroundings.
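The deterministic stream described above follows an identity rule over syllable positions (AAB: the first two syllables repeat, the third differs, as in le-le-di). As a small sketch of that stimulus structure, the generator below builds one trial from a pattern string; the syllable pool and helper are hypothetical illustrations, not the study's actual materials.

```python
import random

def make_trial(syllables, pattern="AAB", rng=None):
    """Build one trial following an identity rule such as AAB.

    syllables: pool of distinct syllables (hypothetical examples below)
    pattern:   positions sharing a letter repeat the same syllable;
               distinct letters get distinct syllables
    """
    rng = rng or random.Random(0)
    mapping = {}
    trial = []
    for slot in pattern:
        if slot not in mapping:
            # Draw a syllable not yet assigned within this trial
            mapping[slot] = rng.choice(
                [s for s in syllables if s not in mapping.values()]
            )
        trial.append(mapping[slot])
    return trial

# First two syllables match, third differs (AAB)
trial = make_trial(["le", "di", "wi", "je"])
```

The contrast in the experiments is between a stream whose trials all satisfy such a rule and a stream whose trials do not, so the learning question is whether infants can detect the rule despite the interleaved non-patterned input.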