Oxford Handbooks Online 2016
DOI: 10.1093/oxfordhb/9780199601264.013.28
Statistical Learning, Inductive Bias, and Bayesian Inference in Language Acquisition

Abstract: Bayesian models of language acquisition are powerful tools for exploring how linguistic generalizations can be made. Notably, Bayesian models assume children leverage statistical information in sophisticated ways, and so it is important to demonstrate that children’s behavior is consistent with both the assumptions of the Bayesian framework and the predictions of specific models. We first provide a historical overview of behavioral evidence suggesting children utilize available statistical information to make …
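The abstract's core idea — that a learner combines an inductive bias (a prior over hypotheses) with statistical evidence (a likelihood) — can be illustrated with a toy calculation. This is a minimal sketch, not from the chapter itself: the hypothesis space, the noun classes, and the uniform "size principle" likelihood are all illustrative assumptions.

```python
# Toy Bayesian learner choosing between two hypothetical generalizations
# about which nouns take some morphological marker. The prior encodes an
# inductive bias; the likelihood implements the size principle (narrower
# hypotheses assign higher probability to data they are consistent with).

def posterior(hypotheses, data):
    """Return the normalized posterior P(h | data).

    hypotheses: name -> (extension set, prior probability)
    data: observed items, assumed sampled uniformly from the true extension
    """
    scores = {}
    for name, (extension, prior) in hypotheses.items():
        if all(item in extension for item in data):
            # Each observation is uniform over the hypothesis's extension.
            likelihood = (1.0 / len(extension)) ** len(data)
            scores[name] = prior * likelihood
        else:
            scores[name] = 0.0  # hypothesis inconsistent with the data
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

# Hypothetical hypothesis space: a narrow class vs. a broad class, equal priors.
hypotheses = {
    "animals":   ({"dog", "cat", "bird", "fish"}, 0.5),
    "all_nouns": ({"dog", "cat", "bird", "fish", "cup", "idea", "road", "song"}, 0.5),
}

# After three animal nouns, the narrower hypothesis dominates (8/9 vs. 1/9),
# even with equal priors, because it wastes no probability on unheard nouns.
print(posterior(hypotheses, ["dog", "cat", "bird"]))
```

The interplay the chapter discusses falls out directly: the prior supplies the inductive bias, while the likelihood rewards hypotheses that fit the statistics of the input tightly.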

Cited by 14 publications (11 citation statements)
References 108 publications
“…This analysis suggests that the differences between models could be more related to the choice of technical options than to their theoretical underpinnings. Several contributors to the Bayesian literature (e.g., Goldwater et al., ; Pearl & Goldwater, ; Qian, Jaeger, & Aslin, ), referring to Marr's () framework, have emphasized that Bayesian models provide a computational-level account of learners’ behavior, while most other models, whether symbolic or connectionist, aim to simulate the same behavior at the algorithmic level. Marr's framework provides an attractive solution to the heterogeneity of models.…”
Section: Models of Chunking
confidence: 99%
“…MaxEnt grammars), thus opening the door to the importation of the learning theories discussed in §4.1 (see Johnson 2013b for a more general introduction to statistical learning of grammars). It has been pointed out that statistical learning theory, especially Bayesian modeling, can permit a more rigorous assessment of claims about UG (see the overview in Pearl & Goldwater 2016). When neural network modeling is integrated with grammatical formalisms in the ways discussed in §4, we may be able to go further in assessing the extent to which grammatical representations can be learned from experience and which aspects of the grammar must be hardwired.…”
confidence: 99%
“…Human infants are not as tabula rasa as models like InferSent but rather encode useful inductive biases (Chomsky & Lightfoot, 2002; Lightfoot & Julia, 1984; Mitchell, 1980; Pearl & Goldwater, 2016; Seidenberg, 1997). Building such biases into our models (Battaglia et al., 2018; Dubey, Agrawal, Pathak, Griffiths, & Efros, 2018; Gandhi & Lake, 2019; Lake et al., 2017) is a promising direction toward scalably learning systematic representations.…”
Section: Discussion and Future Work
confidence: 99%