2013
DOI: 10.7551/mitpress/9339.001.0001
Explaining the Computational Mind

Abstract: A defense of the computational explanation of cognition that relies on mechanistic philosophy of science and advocates for explanatory pluralism. In this book, Marcin Milkowski argues that the mind can be explained computationally because it is itself computational—whether it engages in mental arithmetic, parses natural language, or processes the auditory signals that allow us to experience music. Defending the computational explanation against objections to it—from John Searle and Hilary Putnam…

Cited by 218 publications (140 citation statements)
References 0 publications
“…In particular, Varela, Thompson, and Rosch took computation to require representation (1991/2016, p. 40), and presumably thought that the automaton they described did not meet this requirement. However, more recent accounts of computation do not require representation (see, e.g., Egan; Fresco; Miłkowski; Piccinini), and so may be compatible with Varela, Thompson, and Rosch's enactive theory of cognition. In this section we will focus on just one of these theories, Piccinini's mechanistic account, and demonstrate that according to this account the enactive automaton described by Varela, Thompson, and Rosch straightforwardly qualifies as a (non‐representational) computing mechanism.…”
Section: The Enactive Automaton as a Computing Mechanism
Mentioning; confidence: 95%
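The automaton this citing passage refers to can be pictured as a ring of two-state cells, each updated by a fixed local rule from its own state and its two neighbours. The sketch below is my own illustration (not code from any cited work, and the specific rule chosen is arbitrary); the point it makes is the mechanistic one: state vehicles are manipulated according to a rule sensitive only to their formal properties, with no appeal to representation.

```python
def step(cells, rule):
    """Advance the ring one tick; `rule` maps (left, self, right) -> 0 or 1."""
    n = len(cells)
    return [rule((cells[(i - 1) % n], cells[i], cells[(i + 1) % n]))
            for i in range(n)]

# An arbitrary local rule (Wolfram's rule 110) as a lookup table;
# any table over the eight neighbourhoods would do for the argument.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

state = [0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(4):
    state = step(state, RULE_110.__getitem__)
```

Nothing in the update cycle mentions what any cell state is *about*, which is why such a system can qualify as a computing mechanism on a non-representational account.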
“…Computational modeling in cognitive development comes from two distinct traditions in computer science: symbolic and subsymbolic information processing (Boden; Klahr; Miłkowski). In the 1950s, symbolic computation emphasized automated theorem proving and, more generally, adult problem solving, an approach that gave rise to production systems in the 1970s.…”
Section: Modeling Size Seriation: A Window on Process
Mentioning; confidence: 99%
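A production system of the kind the symbolic tradition introduced is a set of condition–action rules that fire against a working memory until nothing new can be added. The toy below is a hypothetical illustration (the rules and fact names are mine, not from any cited model), flavoured with the size-seriation theme of this section:

```python
def run(rules, memory):
    """Fire condition-action rules to quiescence.

    rules: list of (condition_set, fact_to_add); a rule fires when its
    whole condition set is present in working memory.
    """
    memory = set(memory)
    changed = True
    while changed:
        changed = False
        for condition, fact in rules:
            if condition <= memory and fact not in memory:
                memory.add(fact)
                changed = True
    return memory

# Illustrative rules ordering three sizes by pairwise comparisons.
rules = [
    ({"A>B", "B>C"}, "A>C"),               # transitivity
    ({"A>B", "A>C", "B>C"}, "order:A,B,C"),  # full seriation
]
result = run(rules, {"A>B", "B>C"})
```

Real production-system architectures add conflict resolution and variable matching, but the match-fire-repeat cycle above is the core control structure.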
“…The computational methodology chosen combines aspects of Bayesian cognitive modeling (Lee), dynamical systems modeling (Van Geert), and the cognitive architectural approach, housed within a procedural simulation program (Yule et al.). We chose to build our own modeling framework out of concern that, had a ready‐made cognitive architecture (e.g., ACT‐R, SOAR) been used, the number of free parameters and of predefined, possibly hidden, theoretical constructs would have been very large (Miłkowski). Despite this concern, our model still has many parameters relative to the simple parameters defining the behavioral change.…”
Section: A Computational Model of Sequential Size Understanding
Mentioning; confidence: 99%
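The dynamical-systems ingredient mentioned above can be made concrete with the logistic growth equation often used in Van Geert's tradition to model developmental change. This is a generic sketch (parameter names are mine, not taken from the cited model); its point is the authors' methodological one: in a hand-built model every free parameter, here just the growth rate `r` and capacity `K`, is explicit rather than hidden inside a ready-made architecture.

```python
def logistic_growth(level, r, K):
    """One step of discrete logistic growth: skill level rises at rate r
    and saturates at carrying capacity K."""
    return level + r * level * (1 - level / K)

# Iterate the map from a low initial skill level.
trajectory = [0.1]
for _ in range(50):
    trajectory.append(logistic_growth(trajectory[-1], r=0.3, K=1.0))
```

A Bayesian treatment, as in the cited methodology, would then place priors on `r` and `K` and fit them to observed behavior; the model itself stays small enough to audit.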
“…The fourth and most controversial condition is “system‐detectable error”: systems dependent on S‐representations must be capable of detecting an error arising from an insufficient correspondence between the representation and its target. Advocates of system‐detectable error typically take it to be required because error matters to a system only if that system can itself detect a mismatch between the results of its own actions and some target (e.g., Gładziejewski; Miłkowski; following Bickhard). As we shall see below, system‐detectable error strengthens the idea that content adds something of epistemic value when describing the contribution of an S‐representation to the success or failure of a containing system.…”
Section: The Structural Representation Account
Mentioning; confidence: 99%
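The condition quoted above can be illustrated with a toy agent that acts on an internal map (a stand-in for an S-representation), predicts the outcome of its action, and flags the mismatch itself when the world disagrees. This is my own hypothetical example, not code from any of the cited papers; the names `world` and `internal_map` are invented for the sketch.

```python
def act_and_check(internal_map, world, position, move):
    """Predict the outcome of `move` from the internal map, act in the
    world, and return (new_position, error_detected). The error signal
    is generated by the system itself, not by an outside observer."""
    predicted = internal_map.get((position, move))
    actual = world[(position, move)]
    return actual, predicted != actual

# The map mis-describes one transition, so an error surfaces only when
# the system's own action runs through the faulty part of the map.
world = {("A", "go"): "B", ("B", "go"): "C"}
internal_map = {("A", "go"): "B", ("B", "go"): "A"}  # wrong at B
pos, err = act_and_check(internal_map, world, "A", "go")
pos, err = act_and_check(internal_map, world, pos, "go")
```

On the first step prediction and outcome agree; on the second they diverge, and the mismatch is detectable from within the system's own action cycle, which is the sense of "system-detectable" at issue.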