2021
DOI: 10.1016/j.artint.2021.103525
Levels of explainable artificial intelligence for human-aligned conversational explanations

Abstract: Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investments by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day and the public need to understand the decision-making process to accept the outcomes. However, the vast majority of the app…

Cited by 68 publications (69 citation statements)
References 203 publications (280 reference statements)
“…For example, it would not be useful to have an algorithm that recommended movies we do not like; likewise, it would be serious if a medical system made a faulty negative diagnosis when evaluating a patient with cancer. In this context, explainable artificial intelligence plays an important role, since it consists of tools, techniques, and algorithms that give the agent the ability to explain its actions to the human intuitively [6,7].…”
Section: Introduction (mentioning)
confidence: 99%
“…However, for AI to succeed, it must provide trusted and socially acceptable systems and, therefore, should be modelled on philosophical, psychological and cognitive-science models of human explanation. Dazeley et al. [58] identified a set of levels of explainability adapted from Animal Cognitive Ethology's levels of intentionality [91,45], combined with human social contexts [57]. After reviewing the literature across these levels, Dazeley et al. [58] show that the majority of XAI research is focused on the lowest level, Zero-order explanations.…”
Section: Introduction (mentioning)
confidence: 99%
“…Dazeley et al. [58] identified a set of levels of explainability adapted from Animal Cognitive Ethology's levels of intentionality [91,45], combined with human social contexts [57]. After reviewing the literature across these levels, Dazeley et al. [58] show that the majority of XAI research is focused on the lowest level, Zero-order explanations. Furthermore, they argue that a fully Broad-XAI system requires the full range of XAI levels to provide an integrated conversational explanation.…”
Section: Introduction (mentioning)
confidence: 99%
“…They often fail to understand the meaning of a particular observation, which may be obvious to a human observer, thus missing an opportunity for the agent to learn [31]. Conversely, agents that provide explanations for their actions are essential for humans to learn from experience [32]. Generating an explanation requires access to some internal representation of the task, the team, and the situational context.…”
(mentioning)
confidence: 99%