2021
DOI: 10.1002/ail2.37
Abstraction, validation, and generalization for explainable artificial intelligence

Abstract: Neural network architectures are achieving superhuman performance on an expanding range of tasks. To effectively and safely deploy these systems, their decision‐making must be understandable to a wide range of stakeholders. Methods to explain artificial intelligence (AI) have been proposed to answer this challenge, but a lack of theory impedes the development of systematic abstractions, which are necessary for cumulative knowledge gains. We propose Bayesian Teaching as a framework for unifying explainable AI (…
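The abstract names Bayesian Teaching only in passing, so the sketch below is a hedged illustration of the framework's usual core idea: select the explanatory examples that drive a simulated learner's posterior toward the model's inference. The function names, the coin-flip toy problem, and all parameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of Bayesian Teaching style example selection (assumed setup,
# not the authors' code): pick the subset of candidate examples that maximizes
# a simulated Bayesian learner's posterior on the target hypothesis.
import itertools
import numpy as np

def learner_posterior(examples, hypotheses, likelihood, prior):
    """Posterior P(h | examples) for a simulated Bayesian learner."""
    log_post = np.log(np.asarray(prior, dtype=float))
    for x in examples:
        log_post = log_post + np.log([likelihood(x, h) for h in hypotheses])
    log_post -= np.logaddexp.reduce(log_post)  # normalize in log space
    return np.exp(log_post)

def select_teaching_set(candidates, k, target_idx, hypotheses, likelihood, prior):
    """Return the k candidates that best 'teach' the target hypothesis,
    i.e. maximize the learner's posterior probability of it."""
    best_set, best_score = None, -np.inf
    for subset in itertools.combinations(candidates, k):
        score = learner_posterior(subset, hypotheses, likelihood, prior)[target_idx]
        if score > best_score:
            best_set, best_score = subset, score
    return best_set, best_score

# Toy usage: two coin-bias hypotheses; teach that the coin is biased (h = 0.9).
hypotheses = [0.5, 0.9]
likelihood = lambda flip, h: h if flip == 1 else 1.0 - h
prior = [0.5, 0.5]
examples, score = select_teaching_set([1, 1, 1, 0, 0], k=2, target_idx=1,
                                      hypotheses=hypotheses,
                                      likelihood=likelihood, prior=prior)
```

In this toy setting the selected pair of examples is the one whose observation would most strongly convince the simulated learner of the biased-coin hypothesis, which mirrors how explanatory examples would be chosen for a human learner.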

Cited by 7 publications
References 28 publications