2020
DOI: 10.1016/s2589-7500(20)30065-0
Ethical limitations of algorithmic fairness solutions in health care machine learning

Abstract (excerpt): …ethical analysis while bringing this crucial conversation to a new audience. We are at a watershed moment in health care. Ethical considerations have rarely been so integral and essential to maximising the success of a technology, both empirically and clinically. The time is right to partake in thoughtful and collaborative engagement on the challenge of bias to bring about lasting change. We declare no competing interests.

Cited by 153 publications (105 citation statements). References 9 publications.
“…It is imperative to acknowledge that data sources and algorithms represent just one dimension of bias, because overreliance on technical metrics can bring unexpected consequences. 13 Prediction algorithms almost always intend to influence the clinical decision-making process, and the biases of those developing and using them would have greater impact on whether they ultimately perpetuate inequality. 30,31 For example, a clinician may recognize the potential disparity in diagnosis and intentionally compensate for the algorithmic bias.…”
Section: Discussion
Confidence: 99%
“…[8][9][10][11] However, gaps between such advances and solutions for algorithmic bias in clinical prediction persist because of technical difficulties, complexities of high dimensional health data, lack of knowledge of underlying causal structures, and challenges to algorithm appraisal. 12,13 Few examples in health care to date use methods to reduce bias. Influential work by Obermeyer et al 7 suggests a relabeling approach, replacing erroneous or biased target outcomes of machine learning models with alternatives.…”
Section: Introduction
Confidence: 99%
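The relabeling approach attributed above to Obermeyer et al can be sketched in a few lines. The sketch below is illustrative only: the record fields (`cost` as a biased spending proxy, `n_conditions` as a more direct measure of health need) and the `relabel` helper are assumptions for demonstration, not the authors' actual pipeline.

```python
import numpy as np

def relabel(records, proxy_key, outcome_key):
    """Swap a biased proxy label for a more direct outcome label.

    records: list of dicts, each holding both candidate labels.
    Returns (targets, proxies): the replacement training targets
    and the original proxy labels they displace.
    """
    targets = np.array([r[outcome_key] for r in records], dtype=float)
    proxies = np.array([r[proxy_key] for r in records], dtype=float)
    return targets, proxies

# Hypothetical patient records: historical spending ('cost') encodes
# unequal access to care, so it is a biased proxy for health need;
# active chronic conditions ('n_conditions') is a more direct outcome.
records = [
    {"cost": 1200.0, "n_conditions": 1},
    {"cost": 300.0, "n_conditions": 4},   # high need despite low spending
    {"cost": 2500.0, "n_conditions": 2},
]

targets, proxies = relabel(records, proxy_key="cost", outcome_key="n_conditions")
# A risk model would now be trained on `targets` (need),
# not on `proxies` (spending).
```

Under this relabeling, the second patient, who spent the least, correctly receives the highest training target, which is exactly the ordering the biased cost label would have inverted.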
“…Thus, it is essential to put humans in the loop for accountability in decisions that affect patient care. 52,53 A social concern is the impact of AI on patient-provider relationships. The human touch, empathy, understanding, and judgment are critical components of healing and patient care.…”
Section: Trust
Confidence: 99%
“…and others use human actors such as nursing staff, whose activities cannot change quickly (e.g., process changes and policy implications take time). This often surfaces as model or algorithmic bias, or unfairness, when these biases become embedded in digital services [72]. For example, algorithms used to predict heart conditions may be biased if they undercount women, especially women of color, affecting diagnoses and treatment plans [73].…”
Section: Designing a Digital Platform To Support Agility
Confidence: 99%