2017
DOI: 10.3390/philosophies2010005

Exploring the Computational Explanatory Gap

Abstract: While substantial progress has been made in the field known as artificial consciousness, at the present time there is no generally accepted phenomenally conscious machine, nor even a clear route to how one might be produced should we decide to try. Here, we take the position that, from our computer science perspective, a major reason for this is a computational explanatory gap: our inability to understand/explain the implementation of high-level cognitive algorithms in terms of neurocomputational processing. W…

Cited by 16 publications (26 citation statements)
References 67 publications
“…present-day robots and simulated life forms, e.g. Reggia, 2013; Reggia, Monner, & Sylvester, 2014). In essence, robots and computer simulations demonstrate how pain-like behaviour can be produced without the subjective experience of pain.…”
Section: Are Insects More Like Little People or Complicated Robots? (mentioning, confidence: 99%)
“…Top-down symbolic methods are largely just the opposite. This complementarity has been recognized in the past (Reggia et al., 2014) and leveraged effectively in a number of cognitive architectures (e.g., Sun and Naveh, 2004).…”
Section: Introduction (mentioning, confidence: 96%)
“…Top-down symbolic methods are largely just the opposite. This complementarity has been recognized in the past (Reggia et al., 2014) and leveraged effectively in a number of cognitive architectures (e.g., Sun and Naveh, 2004). The current limited abilities of neural architectures to capture critical aspects of high-level cognition put them at a tremendous disadvantage relative to symbolic AI techniques when trying to engineer neurocomputational systems for high-level problem-solving tasks. Such problem solving by people depends on cognitive control, the process of managing other cognitive processes (Schneider and Chein, 2003).…”
Mentioning, confidence: 99%
“…stimuli. Previous studies have noted that the gap between traditional artificial neural network architectures and symbolic systems is one of the great challenges to be overcome by artificial intelligence [86]. Previous neural network models that attempt to implement a similar approach to memory control have relied on predetermined, hand-coded sequences of memory operations hard-coded into the model ([87–89], but see [50]).…”
Mentioning, confidence: 99%