2021
DOI: 10.6028/nist.ir.8367

Psychological foundations of explainability and interpretability in artificial intelligence

Abstract: In this paper, we make the case that interpretability and explainability are distinct requirements for machine learning systems. To make this case, we provide an overview of the literature in experimental psychology pertaining to interpretation (especially of numerical stimuli) and comprehension. We find that interpretation refers to the ability to contextualize a model's output in a manner that relates it to the system's designed functional purpose, and the goals, values, and preferences of end users. In cont…

Cited by 39 publications (25 citation statements)
References: 94 publications

“…This issue becomes particularly relevant when deploying systems for use by subject matter experts, who are less interested in how a system works and more concerned with why a system provided a given output. When system designers do not take these perceptual differences into consideration, the result can be misinterpretation of output, which is especially problematic in high-risk settings [247,248]. Coordinated guidance is necessary to ensure that transparency tools effectively support the professionals who use them and do not indirectly contribute to processes that could amplify bias.…”
Section: System and Procedural Transparency
confidence: 99%
“…In the process of analyzing key references related to the principles and characteristics of AI [13][14][15][16][17][18][22], as well as other sources provided in Table 1, the following are identified:…”
Section: Analysis of AI Principles
confidence: 99%
“…Among such technologies are the most complex and promising means of artificial intelligence (AI). Evidence of the growing dynamics of the implementation of AI systems (AIS) in various fields, as well as of the intensity of development and research, is the rapid increase in the number of publications during 2018-2021 [1] and in the accepted and developed standards and guides of the EU Commission [2,3], ISO/IEC [4][5][6][7][8][9][10], IEEE [11,12], NIST [13][14][15][16][17][18], OECD [19][20][21], and UNESCO [22].…”
Section: Introduction, 1. Motivation
confidence: 99%
“…According to [7], explainability is the model's ability to provide a description of how a model's outcome came to be, and interpretability refers to a human's ability to make sense of, or derive meaning from, a given stimulus so that the human can make a decision. Similar to [7], we propose that explainability and interpretability are two distinct ideas.…”
Section: Role of Transmitter
confidence: 99%