2021
DOI: 10.1109/tcds.2020.3044366
Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems

Abstract: The recent surge of interest in explainability in artificial intelligence (XAI) is propelled not only by technological advancements in machine learning, but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of understanding. In order to overcome these limitations, we present explanation as a soc…

Cited by 31 publications (25 citation statements). References 76 publications (105 reference statements).
“…Taking this further and echoing [9] on trust, Israelsen and Ahmed focus on trust-enhancing "algorithmic assurances," which echo traditional constructs like trustworthiness indicators in the trust literature (see Section 4.4) [11]. All of this comes together in positioning AI explainability as a co-construction of understanding between explainer (the advanced AI-enabled technology) and explainee (the user) [12]. This ongoing negotiation around explainability echoes my own trust-based alternative to the dialogue around informed consent below (Section 4.4).…”
Section: Responsible and Explainable AI
confidence: 99%
“…Much of the research above makes explicit a link between the motivation toward explainable or responsible AI and regulation and data subject rights [2,4,5,9,11,12]. With specific regard to big data, the Toronto Declaration puts the onus on data scientists, and to some degree governance structures, to protect individual rights [13].…”
Section: Responsible and Explainable AI
confidence: 99%