2022
DOI: 10.1371/journal.pdig.0000022

Sources of bias in artificial intelligence that perpetuate healthcare disparities—A global review

Abstract: Background While artificial intelligence (AI) offers possibilities of advanced clinical prediction and decision-making in healthcare, models trained on relatively homogeneous datasets, and on populations poorly representative of underlying diversity, limit generalisability and risk biased AI-based decisions. Here, we describe the landscape of AI in clinical medicine to delineate population and data-source disparities. Methods We performed a scoping review of clinical papers published in PubMed in 2019 using A…
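The review's search strategy is truncated above. Purely as an illustration of how a PubMed-based scoping review can be assembled, the minimal sketch below queries the public NCBI E-utilities esearch endpoint for records from a single publication year; the query string and result handling are assumptions for illustration, not the authors' actual protocol.

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(term, year=2019, retmax=200):
    """Return PMIDs matching `term`, restricted to the given publication year."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",      # filter on publication date
        "mindate": str(year),
        "maxdate": str(year),
        "retmax": retmax,
        "retmode": "json",
    }
    resp = requests.get(f"{EUTILS}/esearch.fcgi", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Hypothetical query string; the review's actual search terms are not shown above.
    pmids = search_pubmed('"artificial intelligence"[Title/Abstract] AND clinical[Title/Abstract]')
    print(f"{len(pmids)} PMIDs retrieved")
```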

Cited by 135 publications (106 citation statements) · References 48 publications
“…Enhancing interoperability is critical to enabling training and deployment of algorithms and a substantial barrier to scaling DHI nationally [ 92 ]. The lack of high-quality data in LMICs can perpetuate healthcare disparities [ 114 ]. Without access to high-quality large volumes of data, it is challenging to develop effective algorithms for populations in LMICs.…”
Section: Roadblocks and Solutions To Implement Digital Health Interve… (citation type: mentioning)
confidence: 99%
“…For instance, models fed relatively homogeneous data during training suffer from a lack of diversity in terms of underlying patient populations. They can severely limit the generalizability of results and yield biased AI-based decisions (Celi et al, 2022). Obermeyer et al (2019) provided an example of data bias where the algorithm showed Black patients to be healthier than they actually were, as the design of the algorithm used the cost of health as a proxy for the needs of patients.…”
Section: Bias (citation type: mentioning)
confidence: 99%
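To make the proxy-label mechanism concrete, here is a minimal toy sketch (synthetic data and hypothetical group labels, not a reproduction of Obermeyer et al.'s analysis): two groups have identical true care needs, but one incurs systematically lower healthcare costs, so a model trained to predict cost ranks that group as lower-risk at the same level of need.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical groups)
illness = rng.gamma(2.0, 1.0, n)     # true care need, identical distribution in both groups

# Healthcare cost is the training label, but it is only a proxy for need:
# group B incurs systematically lower costs (e.g. reduced access to care).
access = np.where(group == 1, 0.6, 1.0)
cost = illness * access + rng.normal(0.0, 0.05, n)

# The model sees utilisation-style features that carry the same access gap.
X = (illness * access + rng.normal(0.0, 0.2, n)).reshape(-1, 1)
risk = LinearRegression().fit(X, cost).predict(X)

# Flag the top decile of predicted "risk" (i.e. predicted cost) for extra care.
flagged = risk >= np.quantile(risk, 0.9)
for g, name in [(0, "group A"), (1, "group B")]:
    mask = flagged & (group == g)
    print(f"{name}: flagged={mask.sum()}, mean true need among flagged={illness[mask].mean():.2f}")
# Group B is flagged less often and only when substantially sicker: the cost
# proxy understates its true need, reproducing the pattern described above.
```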
“…As described in the Introduction section, a given AI solution should only be evaluated once against a given test dataset [17]. Datasets published in the context of challenges or studies (many of which are based on TCGA [4] and have regional biases [113]) should generally not be used as test datasets: it cannot be ruled out that they were taken into account in some form during development, e.g., inadvertently or as part of pretraining. Ideally, test datasets should not be published at all and the evaluation should be conducted by an independent body with no conflicts of interest [30].…”
Section: Independence (citation type: mentioning)
confidence: 99%
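The safeguards described in that statement can be made operational in code. The sketch below (hypothetical helper names, assuming records are JSON-serialisable dicts) shows two of them: checking that no held-out test record also appears in the training data, and treating the held-out set as a one-shot resource that cannot be evaluated against twice.

```python
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Stable hash of one data record, used to detect train/test overlap."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def check_no_overlap(train_records, test_records):
    """Raise if any held-out test record also appears in the training data."""
    train_hashes = {record_fingerprint(r) for r in train_records}
    leaked = [r for r in test_records if record_fingerprint(r) in train_hashes]
    if leaked:
        raise ValueError(f"{len(leaked)} test records also appear in the training data")

class OneShotTestSet:
    """Wraps a held-out test set and refuses to release it more than once."""
    def __init__(self, records):
        self._records = list(records)
        self._used = False

    def release(self):
        if self._used:
            raise RuntimeError("test set already used once; a fresh dataset is required")
        self._used = True
        return self._records

# Usage sketch:
# test = OneShotTestSet(held_out_records)
# check_no_overlap(train_records, test.release())   # then evaluate exactly once
```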