2018
DOI: 10.1007/978-3-319-99740-7_21
Explainable AI: The New 42?

Abstract: …their un-debuggability, and their inability to "explain" their decisions in a human-understandable and reconstructable way. So while AlphaGo or DeepStack can crush the best humans at Go or Poker, neither program has any internal model of its task; their representations defy interpretation by humans, there is no mechanism to explain their actions and behaviour, and there is no obvious instructional value: the high-performance systems cannot help humans improve. Even when we understand the underly…

Cited by 204 publications (91 citation statements)
References 15 publications
“…While in some domains automation is relevant, in others it is essential for human stakeholders to understand AI predictions and decisions [10], as explanations can impact the work of stakeholders who adopt such tools for decision-making [12]. For instance, in healthcare, doctors can adopt explanation methods to understand the diagnoses produced by AI model predictions [10].…”
Section: Introduction
confidence: 99%
“…Even though the concept of algorithm "transparency" is as old as recommendation systems, the present-day emergence and ubiquity of "black-box" learning algorithms, such as neural networks, have put the "transparency" of algorithms back in the limelight [14]. As detailed in Sect.…”
Section: Definition Of a "Transparent" Classification System
confidence: 99%
“…Thus, there is room for experts from different fields, including linguists, to identify sound and practical solutions, but interactive machine learning could also be of help here. Moreover, verbal explanations are extremely important for the emerging field of "explainable artificial intelligence" (Goebel et al 2018), which opens additional application fields.…”
Section: Linguistic Quality
confidence: 99%