2023
DOI: 10.1111/cogs.13305
Finding Structure in One Child's Linguistic Experience

Abstract: Neural network models have recently made striking progress in natural language processing, but they are typically trained on orders of magnitude more language input than children receive. What can these neural networks, which are primarily distributional learners, learn from a naturalistic subset of a single child's experience? We examine this question using a recent longitudinal dataset collected from a single child, consisting of egocentric visual data paired with text transcripts. We train both language-only…

Cited by 3 publications (2 citation statements)
References 85 publications (150 reference statements)
“…The proof-of-concept approach can model how, in practice, some linguistic outcome can be learnt, offering support for these types of hypotheses. This approach has been one of the most fruitful in recent years and has been used to support multiple hypotheses about learnability: the shape bias (Ohmer, Marino, König, & Franke, 2021), basic syntactic dependencies (Huebner et al., 2021), syntactic and semantic categories (Wang, Vong, Kim, & Lake, 2023), associative word learning strategies (Vong & Lake, 2022), words from sustained attention (Tsutsui, Chandrasekaran, Reza, Crandall, & Yu, 2020), and logical reasoning words (Portelance, Frank, & Jurafsky, 2023).…”
Section: Models As Proofs-of-concepts
confidence: 99%
“…Lazaridou et al. (2017) and Chrupała et al. (2015) are notable for pioneering self-supervised training objectives for multimodal models several years before the advent of Transformer architectures trained on masking objectives. Wang et al. (2023) train LMs on data from the SAYCam dataset (Sullivan et al., 2021), pairing (written) child-directed utterances with visual data from the child's point of view. While this data domain is nearly ideal from a developmental plausibility perspective, the available data is too small to model anything past the first month of development.…”
Section: Cognitively Oriented Approaches
confidence: 99%