2021
DOI: 10.31234/osf.io/d2h5c
Preprint

Why is scaling up models of language evolution hard?

Abstract: Computational model simulations have been very fruitful for gaining insight into how the systematic structure we observe in the world’s natural languages could have emerged through cultural evolution. However, these model simulations operate on a toy scale compared to the size of actual human vocabularies, due to the prohibitive computational resource demands that simulations with larger lexicons would pose. Using computational complexity analysis, we show that this is not an implementational artifact, but ins…

Cited by 6 publications (9 citation statements)
References 28 publications (49 reference statements)
“…Together, the results proven here caution against intuitive notions about the complexity properties of computational problems driving empirical programs, and demonstrate the need and benefits of critically assessing their soundness (something that is too rarely done explicitly; but see, e.g., van de Braak, de Haan, van Rooij, and Blokpoel, 2022; Woensdregt et al., 2021; van de Pol, van Rooij, and Szymanik, 2018; Rich, Blokpoel, de Haan, and van Rooij, 2020; Zeppi and Blokpoel, 2017 for notable exceptions). Whenever intuitions are challenged, this enables researchers to reevaluate the current meta-theoretical calculus (cf.…”
Section: Discussion
confidence: 74%
“…While a fully Bayesian formulation elegantly formalizes the computational-level inference problem at the core of the CHAI account, this formulation faces a number of limitations. For one, it is clearly intractable (Van Rooij, 2008; Van Rooij et al., 2019): The posterior update step in Equation 5 grows increasingly intensive as the space of possible utterances and meanings grows (Woensdregt et al., 2021). The intractability problem also raises a scalability problem: Does CHAI provide any guidance toward building artificial agents that are actually able to adapt to human partners as humans do with one another?…”
Section: Discussion
confidence: 99%
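The intractability point quoted above can be made concrete with a small sketch (hypothetical illustration, not code from the cited papers: the lexicon representation and the names `uniform_prior`, `likelihood`, and `posterior_update` are ours). If a lexicon maps each of m meanings to one of s signals, an exact Bayesian learner maintains a posterior over all s**m candidate lexicons, so even one update touches an exponentially large hypothesis space:

```python
from itertools import product

def uniform_prior(num_meanings, num_signals):
    """Explicitly enumerate every meaning-to-signal mapping: s**m hypotheses."""
    hyps = list(product(range(num_signals), repeat=num_meanings))
    return {h: 1.0 / len(hyps) for h in hyps}

def likelihood(lexicon, obs, noise=0.05):
    """An observation is a (meaning, signal) pair produced with some noise."""
    meaning, signal = obs
    return 1.0 - noise if lexicon[meaning] == signal else noise

def posterior_update(prior, obs):
    """One exact Bayesian update: cost is linear in the (exponential) space."""
    unnorm = {h: p * likelihood(h, obs) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Toy scale: 5 meanings, 3 signals -> 3**5 = 243 hypotheses per update.
prior = uniform_prior(num_meanings=5, num_signals=3)
post = posterior_update(prior, obs=(0, 2))
print(len(prior))                     # 243
print(round(sum(post.values()), 6))   # 1.0
```

At human vocabulary scale (thousands of meanings), s**m is astronomically large, which is why exhaustive enumeration only works at toy scale and why approximation alone does not dissolve the complexity-theoretic worry raised in the quote.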
“…Computational modelling of capacities can help us to make our assumptions precise and explicit, and to draw out their consequences, without the need to simulate the postulated computations (though simulations have their uses; more on that next). For instance, with formal computational-level models and mathematical proof techniques at hand, one can critically assess claims of explanatory adequacy (Egan, 2017; van Rooij & Baggio, 2021), claims of intractability (Adolfi, Wareham, & van Rooij, 2023), claims of tractability (van Rooij, Evans, Muller, Gedge, & Wareham, 2008), claims of competing theories, claims of evolvability (Rich, Blokpoel, de Haan, & van Rooij, 2020; Woensdregt et al., 2021), and claims of approximability (Kwisthout & Van Rooij, 2013; Kwisthout, Wareham, & Van Rooij, 2011). We acknowledge that computational modelling can also contribute to productive theory development without committing to computationalism (Guest & Martin, 2021; Morgan & Morrison, 1999).…”
Section: Theory Without Making
confidence: 99%