2022
DOI: 10.1017/s0140525x22002849
The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences

Abstract: Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language of thought (LoT). We outline six core properties of LoTs: (i) discrete …

Cited by 88 publications (56 citation statements)
References 275 publications
“…In other words, the MFA assumes that structured meaningful representations can be available to a child (as well as to non-human animals) before learning a language. A recent review article by Quilty-Dunn et al (2022) discusses several pieces of evidence, drawn from studies of both infants and non-human animals among other data, in favor of a language of thought as one of the representational formats of cognition, in line with our view that structured meaningful representation can be available independently of language. In fact, research has accumulated showing that infants as young as 4 months old can use abstract content and reason about it under certain conditions (when their attention is drawn to the relevant dimension through priming, e.g., Lin et al, 2021, or when some categories are relevant or salient for them, e.g., Bonatti et al, 2002; Surian and Caldi, 2010), suggesting that the failures noted in the literature are due to testing conditions rather than to a lack of competence (Stavans et al, 2019).…”
Section: Meaning First Meets Language Acquisition
confidence: 92%
“…This way, symbolic conceptual/cognitive representations can facilitate the operation of cognitive tasks in perception, reasoning, planning, etc., in relation to things in the outside world (see [22]).…”
Section: An Overview of the Tension
confidence: 99%
“…Compositionality has long been touted as a key property of human cognition, enabling humans to exhibit flexible and abstract language processing and visual processing, among other cognitive processes (Marcus, 2003; Piantadosi et al, 2016; Lake et al, 2017; Smolensky et al, 2022). According to common definitions (Quilty-Dunn et al, 2022; Fodor & Lepore, 2002), a representation system is compositional if it implements a set of discrete constituent functions which exhibit some degree of modularity. That is, blue circle is represented compositionally if a system is able to entertain the concept blue independently of circle, and vice-versa.…”
Section: Introduction
confidence: 99%
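The definition quoted above can be made concrete with a minimal sketch. The code below (an illustration written for this page, not taken from any of the cited papers; all names such as `compose` and `iconic_memory` are hypothetical) contrasts a compositional system, in which discrete constituents like blue and circle are independently reusable and so generalize to unseen combinations, with an "iconic" system that only recognizes whole stimuli it has memorized.

```python
# Illustrative sketch: compositional vs. "iconic" representation.
from itertools import product

COLORS = {"blue", "red"}
SHAPES = {"circle", "square"}

def compose(color, shape):
    """Bind two independent constituents into one structured representation."""
    assert color in COLORS and shape in SHAPES
    return (color, shape)  # the structure preserves both constituents

# Compositional system: every combination of known parts comes for free.
compositional_repertoire = {compose(c, s) for c, s in product(COLORS, SHAPES)}

# Iconic system: only whole stimuli seen during "training" are representable.
iconic_memory = {"blue-circle", "red-square"}

print(("blue", "circle") in compositional_repertoire)  # True
print(("red", "circle") in compositional_repertoire)   # True: novel combination handled
print("red-circle" in iconic_memory)                   # False: unseen whole stimulus
```

The point of the contrast is the last two lines: the compositional system entertains red circle despite never having stored it, because red and circle are each available as separate constituents, whereas the iconic lookup fails on any combination outside its memorized set.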
“…It is an open question whether neural networks require explicit symbolic mechanisms to implement compositional solutions, or whether they implicitly learn compositional solutions during training. Historically, neural networks have been considered non-compositional systems, instead solving tasks by matching new inputs to memorized or "iconic" representations (Marcus, 2003;Quilty-Dunn et al, 2022). Neural networks' apparent lack of compositionality has served as a key point in favor of integrating explicit symbolic mechanisms into artificial intelligence systems (Andreas et al, 2016;Koh et al, 2020;Ellis et al, 2020).…”
Section: Introduction
confidence: 99%