The Child Language Data Exchange System (CHILDES) has played a critical role in research on child language development, particularly in characterizing the early language learning environment. Access to these data can be both complex for novices and difficult to automate for advanced users, however. To address these issues, we introduce childes-db, a database-formatted mirror of CHILDES that improves data accessibility and usability by offering novel interfaces, including browsable web applications and an R application programming interface (API). Along with versioned infrastructure that facilitates reproducibility of past analyses, these interfaces lower barriers to analyzing naturalistic parent-child language, allowing for a wider range of researchers in language and cognitive development to easily leverage CHILDES in their work.
The ability to process social information is a critical component of children’s early language and cognitive development. However, as children reach their first birthday, they begin to locomote themselves, dramatically affecting their visual access to this information. How do these postural and locomotor changes affect children’s access to the social information relevant for word-learning? Here, we explore this question by using head-mounted cameras to record 36 infants’ (8–16 months of age) egocentric visual perspective and use computer vision algorithms to estimate the proportion of faces and hands in infants’ environments. We find that infants’ posture and orientation to their caregiver modulate their access to social information, confirming previous work that suggests motoric developments play a significant role in the emergence of children’s linguistic and social capacities. We suggest that the combined use of head-mounted cameras and the application of new computer vision techniques is a promising avenue for understanding the statistics of infants’ visual and linguistic experience.
How do postural developments affect infants’ access to social information? We recorded egocentric and third-person video while infants and their caregivers (N = 36, 8- to 16-month-olds, N = 19 females) participated in naturalistic play sessions. We then validated the use of a neural network pose detection model to detect faces and hands in the infant view. We used this automated method to analyze our data and a prior egocentric video dataset (N = 17, 12-month-olds). Infants’ average posture and orientation with respect to their caregiver changed dramatically across this age range; both posture and orientation modulated access to social information. Together, these results confirm that infants’ ability to move and act on the world plays a significant role in shaping the social information in their view.
Word count: 2925

CHILDES-DB: AN INTERFACE TO CHILDES

childes-db: a flexible and reproducible interface to the Child Language Data Exchange System

Introduction

What are the representations that children learn about language, and how do they emerge from the interaction of learning mechanisms and environmental input? Developing facility with language requires learning a great many interlocking components: meaningful distinctions between sounds (phonology), names of particular objects and actions (word learning), meaningful sub-word structure (morphology), rules for how to organize words together (syntax), and context-dependent and context-independent aspects of meaning (semantics and pragmatics). Key to learning all of these systems is the contribution of the child's input, that is, exposure to linguistic and non-linguistic data, in the early environment. While in-lab experiments can shed light on linguistic knowledge and some of the implicated learning mechanisms, characterizing this early environment requires additional research methods and resources.
One of the key methods that has emerged to address this gap is the collection and annotation of speech to and by children, often in the context of the home. Starting with Roger Brown's (1973) work on Adam, Eve, and Sarah, audio recordings, and more recently video recordings, have been augmented with rich, searchable annotations to allow

Focusing on language learning in naturalistic contexts also reveals that children have, in many cases, productive and receptive abilities exceeding those demonstrated in experimental contexts. Often, children's most revealing and sophisticated uses of language emerge in the course of naturalistic play.

While corpora of early language acquisition are extremely useful, creating them requires significant resources. Collecting and transcribing audio and video is costly and extremely time consuming; even orthographic transcription (i.e., transcriptions with minimal phonetic detail) can take ten times the duration of the original recording (MacWhinney, 20...
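To make concrete the kind of analysis that searchable, tabular access to transcripts enables, here is a minimal sketch computing mean length of utterance (MLU) in words, a simplified version of the morpheme-based measure from Brown's (1973) work. The sample utterances and the whitespace tokenization are illustrative assumptions; a real analysis would draw utterances from a corpus interface such as the one childes-db provides.

```python
# Sketch: mean length of utterance (MLU) in words from transcribed
# child speech. The sample utterances below are invented for
# illustration, not drawn from CHILDES.

def mlu_in_words(utterances):
    """Mean length of utterance, measured in whitespace-separated words."""
    if not utterances:
        return 0.0
    word_counts = [len(u.split()) for u in utterances]
    return sum(word_counts) / len(word_counts)

# Hypothetical transcribed utterances from a child speaker.
sample = ["more juice", "mommy read book", "I want the big truck"]
print(round(mlu_in_words(sample), 2))  # (2 + 3 + 5) / 3 -> 3.33
```

Classic MLU is counted in morphemes rather than words, which requires the morphological annotation layers that transcription systems like CHAT provide; the word-based version above is only a first approximation.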