2020
DOI: 10.31234/osf.io/m3nrp
Preprint

Algorithmic Discrimination Causes Less Moral Outrage than Human Discrimination

Abstract: The use of algorithms holds promise for overcoming human biases in decision making. Companies and governments are using algorithms to improve decision-making for hiring, medical treatments, and parole. Unfortunately, as with humans, some of these algorithms make persistently biased decisions, functionally discriminating against people based on their race and gender. Media coverage suggests that people are morally outraged by algorithmic discrimination, but here we examine whether people are less outraged by algorithmic d…

Cited by 20 publications (24 citation statements)
References 55 publications
“…Additionally, trustees can produce ethically questionable outcomes (i.e., discriminating against minority applicants) that can affect trustworthiness assessments (Kim et al, 2006). In cases where trust violations are based on violations of ethical considerations, people might have stronger negative reactions to human trustees, as they may believe that automated systems do not actively discriminate against specific groups of people (Bigman et al, 2020). However, previous trust in automation research would suggest that errors by automated systems result in strong negative effects regarding trustworthiness, which is why we propose: Hypothesis 1: After a trust violation, trustworthiness, trust, and reliance towards the automated system as trustee will decrease more compared to the human trustee.…”
Section: Implications Of Trust Violations
mentioning, confidence: 99%
“…K. Lee, 2018). In addition, for human trustees, trust violations associated with ethical considerations (e.g., a biased preselection) might have stronger effects, as people might be more outraged by such trust violations in the case of a human trustee (Bigman et al, 2020). For trust repair effects, it may be possible to assume stronger effects for human trustees, as people believe that humans can learn from their mistakes (Tomlinson & Mayer, 2009).…”
Section: Differences In Trustworthiness Facets
mentioning, confidence: 99%
“…We note that it is possible (and plausible) that learning about algorithm bias might increase people's aversion to algorithmic decision-making. However, at least currently, the role of algorithmic decision-making in medicine is still limited, and people are less likely to perceive algorithms as biased (Bigman et al, 2020; Lee, 2018).…”
Section: Discussion
mentioning, confidence: 99%
“…Moral character may also be important for evaluations of AI agents, as these are similar to evaluations of humans (Banks et al, 2021). Increasingly, people label AIs and machines in more humanlike terms, including character judgments, when they are highly anthropomorphic (Kiesler et al, 2008; Li & Sung, 2021; Schroeder & Epley, 2016; Waytz et al, 2014) or engage in unethical behavior (Bigman et al, 2020; Shank & Gott, 2020). If the process of making these character judgments differs from that for humans, the attribution of virtuous characteristics could have implications for explaining other human-agent differences across domains.…”
Section: Moral Character and Behavior Of Artificial Intelligence
mentioning, confidence: 99%