Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/670

Explanation Perspectives from the Cognitive Sciences---A Survey

Abstract: With growing adoption of AI across fields such as healthcare, finance, and the justice system, explaining an AI decision has become more important than ever before. Development of human-centric explainable AI (XAI) systems necessitates an understanding of the requirements of the human-in-the-loop seeking the explanation. This includes the cognitive behavioral purpose that the explanation serves for its recipients, and the structure that the explanation uses to reach those ends. An understanding of th…

Cited by 28 publications (21 citation statements)
References 13 publications
“…Hence, this property is about how something is explained instead of what is explained. Examples include the usage of higher-level information [9], abstractions [63,246] or suitable terminology [246], and not using explanations that are circular [238].…”
Section: Co-12, mentioning
confidence: 99%
“…Deep neural nets lack interpretability, as it is difficult to analyze which modalities or features are driving the predictions [26]. In real-world scenarios where automated algorithmic decision assistance impacts human lives, such as in legal, healthcare, finance, transport, military, and autonomous-vehicle settings, we expect AI systems to provide their predictions with proper evidence and justification [29]. The system's explanation should be human-interpretable and understandable, mapping to the human mental model to build trust, transparency, and reliability for success and failure, and to support robust, fair, and unbiased applications grounded in ethical machine learning principles [139], [140]. Explainability is also a legal concern: the EU General Data Protection Regulation (GDPR) grants users of an automated decision-making system a "right to explanation" [141].…”
Section: Explainable AI (XAI), mentioning
confidence: 99%
“…This means that the problem is not the complexity of the model or the design of methods such as SHAP, PDP, or ALE. The actual problem to be solved by the research community is to determine exactly what questions the model stakeholders can pose in explanatory model analysis, and what the desired properties of the answers to these questions are (Miller, 2019; Adebayo et al., 2020; Baniecki and Biecek, 2020; Barredo Arrieta et al., 2020; Sokol and Flach, 2020; Srinivasan and Chander, 2020).…”
Section: Conclusion: You Do Not Explain Without a Context, mentioning
confidence: 99%
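
For context on the methods named in this statement, the following is a minimal sketch of what a feature-attribution method such as SHAP produces, assuming the Python shap and scikit-learn packages are available; the synthetic data, model choice, and sample sizes are illustrative placeholders, not drawn from the cited papers.

# Minimal illustrative sketch: post-hoc feature attribution with SHAP.
# Data, model, and sizes are placeholders for illustration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # 200 samples, 4 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic binary label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Each attribution row credits one prediction to the four input features.
# These numbers are the "answers" such methods give; the open question the
# statement above raises is which stakeholder questions they actually answer.
print(shap_values)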