2022
DOI: 10.1007/s13347-021-00495-y
People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency

Abstract: We explore aversion to the use of algorithms in moral decision-making. So far, this aversion has been explained mainly by the fear of opaque decisions that are potentially biased. Using incentivized experiments, we study the role that the desire for human discretion in moral decision-making plays. This seems justified in light of evidence suggesting that people might not doubt the quality of algorithmic decisions, but still reject them. In our first study, we found that people prefer humans with decision-making d…

Cited by 14 publications (8 citation statements)
References 41 publications
“…They have not yet been validated for specific applications based on real data, nor confirmed by including end users to unearth their true pertinency towards said tradeoff between performance vs. explainability. An empirical quantification of end-user explainability is necessary to provide first-hand knowledge to the engineers of intelligent systems for their development (Jauernig et al., 2022). Despite this apparent deficiency, they are commonly referenced as a motivation for user- or organization-centered XAI research or intelligent system deployment (e.g., Asatiani et al., 2021; Guo et al., 2019; Rudin, 2019).…”
Section: Machine Learning Tradeoffs
confidence: 99%
“…The dependent variable performance measures the objective performance of the algorithm. The perceived goodness of explanation is more subjective, and we base the choice of this second dependent variable on the proposed tradeoff, which requires a quantification of explanation as it is perceived by users and as it is known to influence the user's mindset towards algorithms (Berger et al., 2021; Jauernig et al., 2022). The moderating group variable data complexity is expressed through different cases using different datasets reflecting low complexity and high complexity.…”
Section: Performance Vs Explainability Tradeoff
confidence: 99%
“…In contexts of human-AI collaboration, the perception of an AI system has been found to be influenced by several variables, including communication direction and the nature of the model underpinning the AI system [28]. Beyond the usual human aversion to algorithms [29], research indicates that individuals prefer human decision-making discretion over algorithms that rigidly apply human-derived fairness principles to specific cases [30]. This preference stems from humans' capacity for independent judgment, which allows them to go beyond fairness principles when necessary.…”
Section: 2
confidence: 99%
“…London (2019) emphasizes that common medical interventions often involve mechanisms that are not … Although there are definitely other factors in play as well. See for example Krügel et al. (2022) and Jauernig et al. (2022).…”
Section: The Call For Transparency
confidence: 99%
“…More on this later. Although there are definitely other factors in play as well. See for example Krügel et al. (2022) and Jauernig et al. (2022). Though see Weller (2019, Section 3.1) for examples suggesting that transparency can lead us to overascribe reliability.…”
mentioning
confidence: 99%