Published: 2018
DOI: 10.1007/s13347-018-0330-6
Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?

Cited by 193 publications (132 citation statements)
References 29 publications
“…McCarthy et al.; Mittelstadt et al.). However, information does not always lead to better acceptance; its effect in the context of automation seems to be especially complicated (Langer et al.), and with growing automation and more complex technologies underlying this automation (e.g., deep learning algorithms), providing information can become a challenge (Mittelstadt et al.; Zerilli, Knott, Maclaurin, & Gavaghan). Therefore, it is up to designers of automated tools as well as a challenge for interdisciplinary research to make these tools as controllable, social, and transparent as possible (see also the discussion on explainable artificial intelligence that is currently shaking the field of computer science; Biran & Cotton; Miller, Howe, & Sonenberg).…”
Section: Discussion (mentioning)
confidence: 99%
“…Alternatively, they can ignore the recommendation but are equally unable to explain why they chose to do so. In any case, hiring managers as well as applicants would be exposed to a nontransparent, nonexplainable, and nonchallengeable highly automated decision, probably leading to especially negative reactions as well as serious legal, moral, and ethical issues and challenges (Zerilli et al.).…”
Section: Discussion (mentioning)
confidence: 99%
“…1 To prevent negative outcomes and create accountable systems that individuals can trust, many have argued that we need to open up the "black box" of AI decision-making and make it more transparent (e.g., O'Neil 2016; Wachter et al. 2017; Floridi et al. 2018). This "opening up" will make it easier for us to understand (interpret) the functioning of the AI as well as possible to receive explanations for individual decisions (e.g., Zarsky 2016; Lepri et al. 2017; Zerilli et al. 2018; Binns 2018; De Laat 2018). 2 However, though a lot of interesting work has been done in the area of transparency, far less attention has been devoted to the role of transparency in terms of how those who are ultimately affected (i.e., the general public) come to perceive AI decision-making as being legitimate and worthy of acceptance.…”
Section: Introduction (mentioning)
confidence: 99%
“…For one, like many biological cognizers, the computing systems being developed in Artificial Intelligence can be viewed as information-processing systems in which inputs are systematically transformed into outputs. 9 For another, like the computers being programmed using Machine Learning, biological cognizers are opaque in the sense that we still do not know exactly why they do what they do or how they work (Zerilli et al., 2018).…”
Section: Toward a Marrian Framework for Explainable AI (mentioning)
confidence: 99%
“…Investigators within the Explainable AI (XAI) research program intend to ward off these consequences through the use of analytic techniques with which to render opaque computing systems transparent. 1 Although the XAI research program has already commanded significant attention (Burrell, 2016; Doran, Schulz, & Besold, 2017; Lipton, 2016; Ras, van Gerven, & Haselager, 2018; Zerilli, Knott, Maclaurin, & Gavaghan, 2018), important normative questions remain unanswered. Most fundamentally, it remains unclear how Explainable AI should explain: what is required to render opaque computing systems transparent?…”
Section: Introduction (mentioning)
confidence: 99%