2021 · DOI: 10.6028/nist.ir.8312

Four principles of explainable artificial intelligence

Abstract: We introduce four principles for explainable artificial intelligence (AI) that comprise fundamental properties for explainable AI systems. We propose that explainable AI systems deliver accompanying evidence or reasons for outcomes and processes; provide explanations that are understandable to individual users; provide explanations that correctly reflect the system's process for generating the output; and that a system only operates under conditions for which it was designed and when it reaches sufficient conf…


Cited by 90 publications (70 citation statements) · References 86 publications
“…One of our key areas of interest in this study is the question of how to design for transparency in the context of fairness-aware algorithms. Transparency has become an essential feature of algorithmic design, especially in application areas that are socially sensitive due to (1) legal requirements for transparency such as the GDPR [24,41]; (2) the need to build users' trust in a system [11,29,38]; (3) the need for error detection to help mitigate bias and discrimination in a system [23,37,48,49]; (4) helping further public adoption of new technologies [1,3,17,25,30]; and/or (5) creating an environment of accountability for the platforms that host the algorithms [1,15]. We believe that transparency only increases in importance for fairness-aware systems [32].…”
Section: Transparency and Explanations (mentioning; confidence: 99%)
“…A recent taxonomy follows these definitions to include similar approaches of principles for explainable AI from the National Institute of Standards and Technology (NIST) [5]. The NIST principles of xAI focus on the user and are derived from the Defense Advanced Research Projects Agency (DARPA) program [6].…”
Section: Fig. 2 AI Principles (mentioning; confidence: 99%)
“…In assessing the connection between the user and the man-machine interaction, Phillips et al. [5] approached explainable AI (xAI) in a similar fashion with four principles for xAI. The explanations:…”
Section: Fig. 2 AI Principles (mentioning; confidence: 99%)
“…Explainability in ML is an important subject intensively studied in recent years. A wide range of topics has been discussed, including explainability for multiple stakeholders [5], requirements for explainability [6], [11], and systems for achieving explainability in ML models [10], [7]. Table I summarizes the main related work on explainability requirements.…”
Section: Related Work (mentioning; confidence: 99%)