2023
DOI: 10.1038/s41551-023-01056-8
Algorithmic fairness in artificial intelligence for medicine and healthcare

Cited by 62 publications (29 citation statements)
References 274 publications
“…Several measures have been proposed, focusing on different aspects of the prediction. For example, while demographic parity aims to match the proportion of positive predictions across subgroups, equalized odds intends to homogenize both true-positive and false-positive rates. However, there is currently a lack of standard methods for measuring and mitigating discrimination in ML models…”
Section: Discussion
confidence: 99%
“…For example, while demographic parity aims to match the proportion of positive predictions across subgroups, equalized odds intends to homogenize both true-positive and false-positive rates. 29 However, there is currently a lack of standard methods for measuring and mitigating discrimination in ML models. [30][31][32] When assessing fairness, it is important to consider race and ethnicity beyond binary comparisons between privileged and unprivileged groups.…”
Section: JAMA Network Open | Public Health
confidence: 99%
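The two group-fairness criteria named in the quotes above can be made concrete with a short sketch. This is a minimal illustration, not any cited paper's method: it assumes binary predictions and a binary protected attribute (the second quote itself cautions that race and ethnicity should be assessed beyond binary comparisons), and the function names are hypothetical.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between the
    two subgroups (0 means demographic parity is satisfied)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_gaps(y_true, y_pred, group):
    """Absolute gaps in true-positive and false-positive rates between
    the two subgroups (both 0 means equalized odds is satisfied)."""
    gaps = []
    for label in (1, 0):  # label==1 gives the TPR gap, label==0 the FPR gap
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    tpr_gap, fpr_gap = gaps
    return tpr_gap, fpr_gap
```

Demographic parity compares only the rate of positive predictions, while equalized odds conditions on the true label, which is why the two criteria can disagree on the same model.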
“…These characteristics enable AI-based clinical decision support systems to help providers overcome inherent constraints of bounded rationality (Camerer, 2018; Rawson et al, 2019). We acknowledge that by virtue of being trained using human-generated data, AI might introduce pre-existing human bias into AI-informed decision-making (Chen, Chen, et al, 2021; Rajkomar et al, 2018). Underscoring this point, Obermeyer et al (2019) found evidence for racial bias in a widely used, commercially available algorithm where the health risks of Black patients were systematically underestimated compared to White patients.…”
Section: Susceptibility To Cognitive and Social Biases
confidence: 99%
“…Therefore, it is not hard to see why physicians may be reluctant to allow such decisions to be dictated by a machine. While studies suggest that doctors are not very worried about AI replacing them in their jobs (Chen, Chen, et al, 2021) and are actually eager to adopt AI in their work (AMA, 2022), claims from computer scientists that AI will replace physicians have clearly led to some negative sentiment towards AI (Kim, 2018). A final factor that may dampen beliefs in the benevolence of AI is the ethics reified in the technology.…”
Section: Benevolence
confidence: 99%
“…With the growth of artificial intelligence (AI) applications in medicine, concern over fairness and transparency has also grown. A growing concern highlighted by multiple studies [1][2][3][4] is the phenomenon of algorithmic shortcutting, wherein DL models grasp superficial correlations in training data, potentially leading to biased or unreliable predictions. This concern holds particular weight in orthopedics, where machine learning is deployed for various applications, ranging from the detection of rare fractures to the classification of injuries and prediction of patient outcomes 5,6.…”
Section: Introduction
confidence: 99%