2019
DOI: 10.1007/s13347-019-00382-7

Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence

Abstract: Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. The Explainable Artificial Intelligence research program aims to develop analytic techniques with which to render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques' explanatory success. The aim of the present discussion is to develop such a framework, while paying particular attention to different stakeholders'…


Cited by 208 publications (157 citation statements)
References 39 publications
“…These additional selection methods may minimize the loss of features that could be influenced by human bias. The black box problem of NN models also makes it challenging to confirm which features were selected into the model [55]. Lastly, unlike past studies that utilized raw signal data captured by IMU sensors [28][29][30][31], the current study utilized pre-processed data (e.g., gait speed, sway area, cadence, etc.)…”
Section: Discussion
confidence: 99%
“…However, though a lot of interesting work has been done in the area of transparency, far less attention has been devoted to the role of transparency in terms of how those who are ultimately affected (i.e., the general public) come to perceive AI decision-making as being legitimate and worthy of acceptance. Researchers have noted the importance of public acceptance with regard to AI implementation (e.g., Zerilli et al 2018; Binns et al 2018) and there are several frameworks that can be used to make AI systems less biased, more fair, etc. (e.g., Binns 2018; Boscoe 2019; Zednik 2019), which might lead to an increase in perceived legitimacy. These frameworks, however, do not explicitly engage with the theories and empirical findings from the social sciences regarding how individuals' legitimacy perceptions are affected by different elements, such as the transparency of the process, decisions, or reasons behind said decisions.…”
Section: Introduction
confidence: 99%
“…Several philosophers have analyzed these concepts more carefully. Zednik (2019), for instance, offers a pragmatic account of opacity to provide a normative framework detailing different kinds of knowledge that ought to be required by different stakeholders. Explanations, on this view, are the basis for how such knowledge is acquired.…”
Section: Introduction
confidence: 99%