2019
DOI: 10.1145/3282486
The challenge of crafting intelligible intelligence

Abstract: Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. To trust their behavior, we must make AI intelligible, either by using inherently interpretable models or by developing new methods for explaining and controlling otherwise overwhelmingly c…

Cited by 167 publications (114 citation statements)
References 34 publications
“…Given that early work on XAI aimed to explain symbolic approaches, the authors of such work would have more intuitively seen the link to interaction. Despite this, others in the AI community have recently re-discovered the importance of social interaction for explanation; for example, [186,163], and have noted that this is a problem that requires collaboration with HCI researchers.…”
Section: Social and Interactive Explanation
confidence: 99%
“…In many cases this is due to the increased ability of machines to work constantly, consistently, at scale, and at speed. In some areas, for example DeepMind's AlphaGo Zero, the results appear to exceed human ability; some moves are made that are novel and inexplicable to human Go-playing experts and yet are effective, leading to more wins and new insights into the game [16]. This raises the question of the limits of explainability.…”
Section: What Type Of Explanation Do People Need?
confidence: 99%
“…One motivation behind this is that scientists increasingly adopt ML for optimizing and producing scientific outcomes, where explainability is a prerequisite to ensure the scientific value of the outcome. In this context, research directions such as explainable artificial intelligence (AI), informed ML [von Rueden et al, 2019], or intelligible intelligence [Weld and Bansal, 2018] have emerged. Though related, the concepts, goals, and motivations vary, and core technical terms are defined in different ways.…”
Section: Introduction
confidence: 99%