2020
DOI: 10.1016/j.bja.2020.07.040

Bias and ethical considerations in machine learning and the automation of perioperative risk assessment

Cited by 42 publications (25 citation statements)
References 17 publications

“…As AI becomes more pervasive in both public and personal health across diverse populations, there have been increasing concerns, and related examples, of AI solutions leading to inadvertent bias of modeling results (45)(46)(47)(48). Broadly, such bias can originate from the data used for model training and testing, as well as the mechanics of the model itself (49).…”
Section: Potential For Bias In ML (mentioning; confidence: 99%)
“…However, there is a growing awareness in the community that the presence of different sources of bias significantly decreases the overall generalisation ability of the models, leading to overestimated model performance reported in internal validation compared to evaluation on independent test data (Soneson, Gerster, Delorenzi, 2014; Cohen, Hashir, Brooks, Bertrand; Zech, Badgeley, Liu, Costa, Titano, Oermann, 2018; Maguolo, Nanni). In addition, numerous journal editorials are calling for better development, evaluation and reporting practices of machine learning models aimed for clinical application (Mateen, Liley, Denniston, Holmes, Vollmer, 2020; Nagendran, Chen, Lovejoy, Gordon, Komorowski, Harvey, Topol, Ioannidis, Collins, Maruthappu, 2020; Campbell, Lee, Abrmoff, Keane, Ting, Lum, Chiang, 2020; Health, 2020; O’Reilly-Shah, Gentry, Walters, Zivot, Anderson, Tighe, 2020; Health, 2019; Stevens, Mortazavi, Deo, Curtis, Kao, 2020). Underneath, there are growing concerns about ethics and the risk of harmful outcomes of using AI in medical applications (Campolo, Sanfilippo, Whittaker, Crawford, 2018; Geis, Brady, Wu, Spencer, Ranschaert, Jaremko, Langer, Borondy Kitts, Birch, Shields, van den Hoven van Genderen, Kotter, Wawira Gichoya, Cook, Morgan, Tang, Safdar, Kohli, 2019; Brady, Neri, 2020).…”
Section: Introduction (mentioning; confidence: 99%)
“…But in recent years, a competing perspective has emerged -- the perspective that algorithms often encode the biases of their developers or the surrounding society, producing predictions or inferences that are clearly discriminatory towards specific groups. Examples of algorithmic bias cross contexts, from criminal justice (Angwin et al, 2016), to medicine (O'Reilly-Shah et al, 2020), to computer vision (Klare et al, 2012), to hiring (Garcia, 2016). These limitations appear -- and are particularly salient -- for high-stakes decisions such as predicting recidivism (Angwin et al, 2016) or administering anaesthesia (O'Reilly-Shah et al, 2020).…”
Section: Introduction (mentioning; confidence: 99%)