2022
DOI: 10.1037/xge0001181

The human black-box: The illusion of understanding human better than algorithmic decision-making.

Abstract: As algorithms increasingly replace human decision-makers, concerns have been voiced about the black-box nature of algorithmic decision-making. These concerns raise an apparent paradox. In many cases, human decision-makers are just as much of a black-box as the algorithms that are meant to replace them. Yet, the inscrutability of human decision-making seems to raise fewer concerns. We suggest that one of the reasons for this paradox is that people foster an illusion of understanding human better than algorithmic…

Cited by 21 publications (8 citation statements); references 45 publications.
“…These findings join a recent wave of work using robots as models for studying social cognition (Malle et al, 2020; Wykowska, 2021; Wykowska et al, 2016). Going beyond work showing the different ways in which we judge robots in areas such as prejudice (Bigman, Gray, et al, 2023; Bonezzi & Ostinelli, 2021; Bonezzi et al, 2022) and attribution (Malle et al, 2020), this work introduces an impression formation and learning approach to examine how we learn about robots. We address two central questions about how people resolve inconsistent evidence for competence and how they prioritize different types of evidence.…”
Section: Discussion
confidence: 99%
“…Additionally, studies by Kaushal et al (2020) and Obermeyer et al (2019) suggest that algorithms can be biased due to the values and biases of their human developers. Bonezzi et al (2022) showed that people are concerned with the black-box nature of algorithms and can incorrectly believe they understand the reasoning behind a human judge's decision. Despite this, human DMs can be just as opaque as algorithms.…”
Section: Algorithms As Decision Aiding Tools
confidence: 99%
“…For the algorithms, we at least know how they work, even if we cannot explain why they have arrived at a particular decision. In the case of the human mind, we have only a tentative outline of the answer to the question of how it works (Bonezzi et al 2022).…”
Section: Minds As Black-Boxes
confidence: 99%
“…On the other hand, at least some mechanisms behind folk psychology seem to be inborn. In particular, as suggested by research in developmental psychology, it seems that folk psychology is deeply rooted in the human ability to spontaneously distinguish between two kinds of interactions (causality) in the world: physical and intentional (Bloom 2004). We perceive the interactions between physical objects as governed by a different set of laws than the intentional actions of other people.…”
Section: Stranger Things
confidence: 99%