2022
DOI: 10.1126/sciadv.abj1812

Cross-ethnicity/race generalization failure of behavioral prediction from resting-state functional connectivity

Abstract: Algorithmic biases that favor majority populations pose a key challenge to the application of machine learning for precision medicine. Here, we assessed such bias in prediction models of behavioral phenotypes from brain functional magnetic resonance imaging. We examined the prediction bias using two independent datasets (preadolescent versus adult) of mixed ethnic/racial composition. When predictive models were trained on data dominated by white Americans (WA), out-of-sample prediction errors were generally hi…
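The abstract describes training a connectivity-based behavioral prediction model on majority-group data and comparing out-of-sample error across groups. Below is a minimal, purely illustrative sketch of that kind of check; the ridge model, synthetic data, group labels, and error metric are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic stand-ins: X = subjects x functional-connectivity edges,
# y = behavioral score, group = demographic label (all hypothetical)
n_subjects, n_edges = 600, 500
X = rng.standard_normal((n_subjects, n_edges))
y = X[:, :10].sum(axis=1) + rng.standard_normal(n_subjects)
group = rng.choice(["WA", "minority"], size=n_subjects, p=[0.8, 0.2])

# Hold out part of the majority group so its error is also out-of-sample
wa_idx = np.flatnonzero(group == "WA")
rng.shuffle(wa_idx)
split = int(0.8 * len(wa_idx))
train_idx, wa_test_idx = wa_idx[:split], wa_idx[split:]
min_test_idx = np.flatnonzero(group == "minority")

# Fit the prediction model on majority-group training data only
model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X[train_idx], y[train_idx])

# Compare out-of-sample prediction error within each group
for name, idx in [("WA (held out)", wa_test_idx), ("minority", min_test_idx)]:
    err = mean_absolute_error(y[idx], model.predict(X[idx]))
    print(f"{name}: MAE = {err:.3f}")
```

A gap in error between the two held-out groups is the kind of generalization failure the article reports; a real analysis would use proper cross-validation folds and the study's own preprocessing.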

Cited by 85 publications (89 citation statements)
References 97 publications
“…Moreover, models that are fit on a biased demographic group often do not generalize to more diverse samples (67). It is possible that sampling procedures impact the relationship between diffusion properties of the white matter and academic skills and may explain the seemingly contradictory results between the present study and past findings.…”
Section: Discussion (mentioning)
confidence: 99%
“…Diversity in the datasets becomes an increasingly important point that is being addressed by researchers to counteract bias that can be potentially harmful (Leavy, 2018). Nonetheless, ensuring diversity in and of itself is not enough (Li et al., 2022); more research is needed to understand how discrimination intersects with socioeconomic factors to keep bias from being introduced into healthcare algorithms through structural inequalities in society (Quinn et al., 2021). Anticipating structural bias in datasets and understanding the social implications of using AI systems before their implementation is considered best practice; some authors in the sample even propose that failing to do so should be qualified as scientific misconduct (Owens and Walker, 2020).…”
Section: Discussion (mentioning)
confidence: 99%
“…This literature is playing a crucial role in shaping policies for deployment of automated diagnostic models. Consequently, there is also a large amount of recent work on identifying such biases (19, 22, 25), and developing techniques to mitigate these biases (20, 36, 37). In this study, we have shown that when machine learning models are trained using well-established data preprocessing and hyper-parameter optimization techniques on data from large-scale multi-site studies for three neurological disorders, namely Alzheimer's disease, schizophrenia, and autism spectrum disorder, the predictions of these models need not be biased.…”
Section: Relationship Of This Work To Existing Literature On Identify... (mentioning)
confidence: 99%
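The citing study above audits trained diagnostic classifiers for group-level bias. A hedged sketch of such a per-group performance audit follows; the classifier, synthetic data, group labels, and metric are illustrative assumptions, not the cited work's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-ins: X = features, y = diagnosis label, group = demographic tag
n, d = 800, 50
X = rng.standard_normal((n, d))
y = (X[:, 0] + 0.5 * rng.standard_normal(n) > 0).astype(int)
group = rng.choice(["group_A", "group_B"], size=n, p=[0.7, 0.3])

# Train on mixed data, then evaluate the held-out set separately per group
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=y
)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# A large gap in balanced accuracy between groups would indicate bias
for g in np.unique(g_te):
    mask = g_te == g
    acc = balanced_accuracy_score(y_te[mask], clf.predict(X_te[mask]))
    print(f"{g}: balanced accuracy = {acc:.3f}")
```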