2021
DOI: 10.1016/j.chb.2021.106859

Threat of racial and economic inequality increases preference for algorithm decision-making

Abstract: Artificial intelligence (AI) algorithms hold promise to reduce inequalities across race and socioeconomic status. One of the most important domains of racial and economic inequality is medical outcomes; Black and low-income people are more likely to die from many diseases. Algorithms can help reduce these inequalities because they are less likely than human doctors to make biased decisions. Unfortunately, people are generally averse to algorithms making important moral decisions, including in medicine, undermin…

Cited by 33 publications (21 citation statements)
References 57 publications
“…We suggest that the implemented AI recommendation did not work as assumed because of algorithmic aversion (Berger et al., 2021; Dietvorst et al., 2015; Ochmann et al., 2021). Although algorithmic aversion usually occurs if people with domain knowledge can select between a human and an algorithm recommendation (Dietvorst et al., 2015), some previous research has suggested that there are cases of algorithm aversion occurring even if there is no human recommender alternative (Bigman et al., 2021). The qualitative analysis also provides evidence for this, suggesting that participants with domain knowledge in HR relied less on the AI recommendations compared to participants without domain knowledge.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Results from empirical studies provide some support for this approach. Drawing people's attention to racial inequalities in medical outcomes increased support for bias‐reduction interventions such as the use of algorithm decision‐making during the triage process in hospitals (Bigman et al., 2021). Other studies show that people do not perceive affirmative action to violate principles of fairness when they were presented with persuasive evidence of discrimination against the beneficiary group (Son Hing et al., 2002).…”
Section: Threat and Opposition to DEI Policies (mentioning)
Confidence: 99%
“…A review of chatbots and conversational agents used in mental health found a small number of academic psychiatric studies with limited heterogeneity; there is a lack of high-quality evidence for diagnosis, treatment or therapy, but there is a high potential for effective and agreeable mental health care if correctly and ethically implemented [95]. A major research constraint is that chatbots and predictive algorithms may be biased and perpetuate inequities in the underserved and the unserved [96][97][98][99]. The ethics of the patient-therapist relationship and the limited skills and emotional intelligence of chatbots require a solution [100].…”
Section: Artificial Intelligence (mentioning)
Confidence: 99%