Abstract: How do young children learn to organize the statistics of communicative input across milliseconds and months? Developmental science has made progress in elucidating how infants learn patterns in language and how infant-directed speech is engineered to ease short-timescale processing, but less is known about how children link perceptual experiences across multiple levels of processing within an interaction (from syllables to stories) and across development. In this article, we propose that three domains of rese…
“…Second, we measured pupil size synchrony (the alignment in pupillary responses across participants), which has been shown in prior studies to be positively associated with attention to, or similar processing of, a stimulus (Hasson et al., 2004, 2008; Kang & Wheatley, 2017; Nencheva et al., 2021; Piazza, Cohen, et al., 2021; Piazza, Nencheva, et al., 2021). This allowed us to probe whether infants find more vs. less frequent transitions more engaging.…”
Predicting others' feelings is a superpower that enables efficient social interactions. How do infants learn which emotions precede and follow each other? We propose that infants develop this ability by tuning into the dynamics of their socio-emotional environment. If so, we expect that the way in which infants process emotion transitions will reflect both general patterns seen in adults and the local statistics of observed emotion transitions. We measured 4-10-month-old U.S. infants' (N=70) pupillary responses to emotion transitions and surveyed primary caregivers on the frequency of their own emotion transitions. As expected, infants were attuned to adult patterns of emotion transitions, showing greater pupillary synchrony for frequent transitions. They were also sensitive to their caregiver’s specific transition frequencies, exhibiting pupillary responses similar to those of infants whose caregivers showed similar patterns. These findings suggest that infants learn about emotion dynamics by observing statistical patterns in the people around them.
“…The input to young learners reflects these complex goals. Child-directed input is multidimensional, incorporating a diverse set of communicative cues across multiple modalities, and this multidimensional input is highly variable over time and across individuals, communities, and cultures (Bergelson, Casillas, et al., 2019; Casillas et al., 2020; Holler & Levinson, 2019; Kosie & Lew-Williams, 2023; Piazza et al., 2021; Ryskin & Fang, 2021; Schatz et al., 2022; Suarez-Rivera et al., 2022a; Yu & Smith, 2012). There is no "one-size-fits-all" characterization of human input, and any model of learning (language learning included) needs to account for and be robust to this massive variation.…”
Children do not learn language from language alone. Instead, children learn from social interactions with multidimensional communicative cues that occur dynamically across timescales. A wealth of research using in-lab experiments and brief audio recordings has made progress in explaining early cognitive and communicative development, but these approaches are limited in their ability to capture the rich diversity of children’s early experience. Large language models represent a powerful approach for understanding how language can be learned from massive amounts of textual (and in some cases visual) data, but they have near-zero access to the actual, lived complexity of children’s everyday input. We assert the need for more descriptive research that densely samples the natural dynamics of children’s everyday communicative environments in order to grasp the long-standing mystery of how young children learn, including their language development. With the right multimodal data, researchers will be able to go beyond large language models to build developmentally grounded, efficient communication models that truly take into account the complexity of children’s diverse environments.
“…Beyond differences between the timescales and contexts of laboratory and real-world task activities, there are important domains of human activity that are themselves complexly organized, with component processes operating over quite different timescales despite interacting and depending on one another. An example is the development and deployment of language communication skills as described by Piazza et al. (2021, p. 459):…”
It is now possible for real-life activities, unfolding over their natural range of temporal and spatial scales, to become the primary targets of cognitive studies. Movement toward this type of research will require an integrated methodological approach currently uncommon in the field. When executed hand in hand with thorough and ecologically valid empirical description, properly developed laboratory tasks can serve as model systems to capture the essentials of a targeted real-life activity. When integrated together, data from these two kinds of studies can facilitate causal analysis and modeling of the mental and neural processes that govern that activity, enabling a fuller account than either method can provide on its own. The resulting account, situated in the activity’s natural environmental, social, and motivational context, can then enable effective and efficient development of interventions to support and improve the activity as it actually unfolds in real time. We believe that such an integrated multi-level research program should be common rather than rare and is necessary to achieve scientifically and societally important goals. The time is right to finally abandon the boundaries that separate the laboratory from the outside world.