2021
DOI: 10.31234/osf.io/k79nv
Preprint

How hard is cognitive science?

Abstract: Cognitive science is itself a cognitive activity. Yet, computational cognitive science tools are seldom used to study (limits of) cognitive scientists’ thinking. Here, we do so using computational-level modeling and complexity analysis. We present an idealized formal model of a core inference problem faced by cognitive scientists: Given observations of a system’s behaviors, infer cognitive processes that could plausibly produce the behavior. We consider variants of this problem at different levels of explanati…
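To make the abstract's core inference problem concrete, here is a minimal sketch in Python. It is not the paper's formal model: it assumes a toy hypothesis space of all Boolean functions over k binary inputs and brute-forces a candidate process consistent with observed input-output behaviour. The names `candidate_processes` and `infer_process` are invented for this illustration; the only point is that even this unconstrained toy space grows as 2^(2^k), which gives a flavour of why such inference is hard without further constraints.

```python
# Illustrative sketch only, not the paper's formal model: a brute-force
# "scientist" that, given observed input-output behaviour of a system,
# searches a toy hypothesis space (all Boolean functions over k binary
# inputs) for a candidate process consistent with the observations.
from itertools import product

def candidate_processes(k):
    """Enumerate every Boolean function over k binary inputs as a truth table."""
    inputs = list(product([0, 1], repeat=k))
    for outputs in product([0, 1], repeat=len(inputs)):
        yield dict(zip(inputs, outputs))

def infer_process(observations, k):
    """Return the first candidate process consistent with all observations.

    observations: iterable of (input_tuple, output_bit) pairs.
    """
    for table in candidate_processes(k):
        if all(table[x] == y for x, y in observations):
            return table
    return None  # nothing in this hypothesis space reproduces the behaviour

# Toy usage: behaviour generated by XOR over two binary inputs.
obs = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(infer_process(obs, k=2))  # recovers the XOR truth table
# The space has 2**(2**k) candidates: 16 at k = 2, already ~4.3e9 at k = 5.
```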

Cited by 32 publications (20 citation statements)
References 27 publications
“…We have mentioned the Interaction Engine Hypothesis, Coupled Oscillators models, and dyadic models of self‐regulation and adaptation, which are promising but currently underspecified to capture the key patterns we see in the meta‐analytic data. By providing computational implementations of these and other models, future research can be more explicit about the assumptions of the theories, as well as compare the predictions of different models and test them on the data available (Guest & Martin, 2021; Navarro, 2021; Rich et al, 2021). Furthermore, the development of better mechanistic models can help the field better capture the meaningful aspects of cross‐linguistic, sociodemographic, ethnic, and cultural diversity.…”
Section: Discussion (citation type: mentioning, confidence: 99%)
“…Thus, while Markov assumptions may be at play, algorithms designed merely to address a stationary Markov decision process will catastrophically fail in the more general settings considered here. Nonetheless, without further assumptions, the problem would be intractable (or even uncomputable) [4]. Thus, we further assume that the external world W is changing somewhat predictably over time.…”
Section: A Sketch of Our Framework (citation type: mentioning, confidence: 99%)
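The stationarity point in the passage above can be illustrated with a small toy example (mine, not from the citing paper): a two-armed bandit whose reward probabilities swap halfway through. A learner that averages over all history, implicitly assuming a stationary world, keeps exploiting stale estimates, while one that discounts old evidence, assuming the world changes slowly and somewhat predictably, re-adapts. All numbers (arm probabilities, the 0.95 decay, the 5% exploration rate) are arbitrary choices for the demo.

```python
# Toy illustration (mine, not from the citing paper): a two-armed bandit
# whose reward probabilities swap halfway through. decay=1.0 averages over
# all history (stationarity assumption); decay<1.0 discounts old evidence
# (slow, predictable change assumption) and so re-adapts after the shift.
import random

T = 4000

def reward(arm, t):
    p = [0.8, 0.2] if t < T // 2 else [0.2, 0.8]  # arm qualities swap at t = T/2
    return 1 if random.random() < p[arm] else 0

def run(decay, seed=0):
    random.seed(seed)
    est, weight, total = [0.0, 0.0], [0.0, 0.0], 0
    for t in range(T):
        # Mostly greedy, with 5% random exploration.
        arm = random.randrange(2) if random.random() < 0.05 else est.index(max(est))
        r = reward(arm, t)
        total += r
        weight[arm] = decay * weight[arm] + 1.0   # decay old evidence
        est[arm] += (r - est[arm]) / weight[arm]  # weighted running mean
    return total

print("stationary assumption :", run(decay=1.0))   # typically earns less after the shift
print("slow-change assumption:", run(decay=0.95))  # typically tracks the change
```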
“…The success of deep networks is a result of their ability to efficiently learn what counts as 'local' [159]. In prospective learning, in contrast to retrospective learning, what counts as local is also a function of potential future environments. Thus, the key difference between retrospective and prospective representation learning is that the internal representation for prospective learning must trade-off between being effective for the current scenario, and being effective for potential future scenarios.…”
Section: Putting It All Together (citation type: mentioning, confidence: 99%)