2022
DOI: 10.3390/biomimetics7030127
Explainable AI: A Neurally-Inspired Decision Stack Framework

Abstract: European law now requires AI to be explainable in the context of adverse decisions affecting European Union (EU) citizens. At the same time, we expect increasing instances of AI failure as it operates on imperfect data. This paper puts forward a neurally inspired theoretical framework called “decision stacks” that can provide a way forward in research to develop Explainable Artificial Intelligence (X-AI). By leveraging findings from the finest memory systems in biological brains, the decision stack framewo…

Cited by 6 publications (11 citation statements) | References 96 publications
“…In the past decade, with the application of AI in several autonomous systems and robots, we have seen a tremendous amount of research interest in X-AI methods. Currently, we can choose from a suite of X-AI methods to untangle deep learning opaque models (Lipton, 2017; Došilović et al., 2018; Xu et al., 2019; Holzinger et al., 2022; Khan et al., 2022). There are various categorizations of X-AI methods based on several criteria, including structure, design transparency, agnosticness, scope, supervision, explanation type, and data type, as listed in Table 1 (Khan et al., 2022).…”
Section: Explainability in AI Limits Explainability in Neuro-robots
Confidence: 99%
“…Overall, these excellent foundational methods [summarized in Table 1; for more details, readers may consult (Khan et al., 2022)] help produce some model understanding and present bits of human-interpretable understanding. However, there is still no comprehensive understanding of how an AI implements a decision while explaining the model decision (Khan et al., 2022). These methods are far from perfect.…”
Section: Explainability in AI Limits Explainability in Neuro-robots
Confidence: 99%