2022
DOI: 10.1136/bmjhci-2021-100445
Can medical algorithms be fair? Three ethical quandaries and one dilemma

Abstract:
Objective: To demonstrate what it takes to reconcile the idea of fairness in medical algorithms and machine learning (ML) with the broader discourse of fairness and health equality in health research.
Method: The methodological approach used in this paper is theoretical and ethical analysis.
Result: We show that the question of ensuring comprehensive ML fairness is interrelated to three quandaries and one dilemma.
Discussion: As fairness in ML depends on a nexus of inherent justice and fairness concerns embedded in healt…

Cited by 12 publications (9 citation statements). References 22 publications.
“…There is a central dilemma when applying AI in medical treatment: who is ethically accountable for a decision which arises from the cooperation of a physician and an AI tool? 73 Due to the ‘black box’ character of most AI algorithms, it is difficult for physicians to understand how these algorithms create their recommendations.…”
Section: Discussion
Confidence: 99%
“…Other work has emphasised the need for operationalizing equity and fairness in AI for healthcare. 11,12 Recent work addressing this call has discussed ethical considerations of fairness and equity in the context of AI for healthcare, 42–47 suggested best practices to incorporate health equity in the algorithm development lifecycle, 14,26–29,48 and proposed operational definitions. 30–32 Existing operational definitions have largely borrowed from the AI fairness literature, 21–24 proposing metrics based on statistical parity in AI performance across subpopulations.…”
Section: Discussion
Confidence: 99%
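The statistical-parity metrics mentioned in the citation above can be illustrated with a minimal sketch. The function names and the toy data below are hypothetical, not drawn from the cited paper; the sketch assumes binary predictions (1 = positive outcome, e.g. flagged for treatment) and a single group attribute per patient.

```python
def positive_rate(preds, groups, group):
    """Fraction of positive predictions within one subpopulation."""
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)


def statistical_parity_difference(preds, groups, group_a, group_b):
    """Gap in positive-prediction rates between two groups; 0.0 means parity."""
    return positive_rate(preds, groups, group_a) - positive_rate(preds, groups, group_b)


# Illustrative predictions for two hypothetical subpopulations "a" and "b".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is flagged at rate 3/4, group "b" at rate 1/4.
print(statistical_parity_difference(preds, groups, "a", "b"))  # 0.5
```

A non-zero gap signals that the algorithm assigns positive outcomes unevenly across groups; whether that gap constitutes unfairness is exactly the ethical question the paper argues such parity metrics cannot settle on their own.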
“…Various biases can creep into algorithmic development and application, affecting the fairness of such algorithms. 9 A range of protected attributes, factors that should not influence health, have been chosen because of legal mandates or because of organizational values. 65 Some common protected attributes include race, ethnicity, religion, national origin, gender, marital status, age, and socioeconomic status.…”
Section: Introduction
Confidence: 99%