2020
DOI: 10.1093/jamia/ocaa094
Latent bias and the implementation of artificial intelligence in medicine

Abstract: Increasing recognition of biases in artificial intelligence (AI) algorithms has motivated the quest to build fair models, free of biases. However, building fair models may be only half the challenge. A seemingly fair model could involve, directly or indirectly, what we call “latent biases.” Just as latent errors are generally described as errors “waiting to happen” in complex systems, latent biases are biases waiting to happen. Here we describe 3 major challenges related to bias in AI algorithms and propose se…

Cited by 110 publications (86 citation statements)
References 9 publications
“…Regulatory oversight of AI technologies is essential for reducing another type of bias, evaluative bias, particularly for continually evolving AI models (118).…”
Section: Journal Pre-proof (mentioning)
confidence: 99%
“…We expect that they operate with 100% sensitivity and a low rate of false positives. However, AI is not yet free from bias or errors, and an AI decision support tool could easily succumb to automation bias when its predictions are almost always followed by the endoscopist [78]. Machine learning systems can also unintentionally reproduce or magnify existing biases of their training data sets and exacerbate health disparities [79].…”
Section: Principal Findings (mentioning)
confidence: 99%
“…[55] An initially equitable algorithm can be made biased by prejudiced data/human decisions. [56] Some suggestions to prevent this bias from affecting AI performance include making it a fundamental requirement to be able to explain and interpret the output of every clinical AI system, to eliminate the "black box." [55] Additionally, transparency about removing existing biases in raw data used in an algorithm and avoiding adding new biases should be included in the model description.…”
Section: Future Directions and Conclusion (mentioning)
confidence: 99%