Objective: To analyze which ethically relevant biases have been identified in the academic literature on artificial intelligence (AI) algorithms developed either for patient risk prediction and triage or for contact tracing during the COVID-19 pandemic, and to investigate specifically whether the role of social determinants of health (SDOH) has been considered in these AI developments. Methods: We conducted a scoping review of the literature covering publications from March 2020 to April 2021. Studies mentioning biases in AI algorithms developed for contact tracing and for medical triage or risk prediction regarding COVID-19 were included. Results: From 1054 identified articles, 20 studies were finally included. We propose a typology based on the biases, limitations, and other ethical issues identified in the literature for both areas of analysis. Findings on health disparities and SDOH were classified into five categories: racial disparities, biased data, socio-economic disparities, unequal accessibility and workforce, and information communication. Discussion: SDOH need to be considered in the clinical context, where they still seem underestimated. Epidemiological conditions depend on geographic location, so the use of local data in studies intended to produce international solutions may increase some biases. Gender bias was not specifically addressed in the included articles. Conclusions: The main biases are related to data collection and management. Ethical problems related to privacy, consent, and lack of regulation have been identified in contact tracing, while some bias-related health inequalities have been highlighted. Further research focusing on SDOH and these specific AI applications is needed.
A collection of five video essays on embodiment and social distancing, presenting the following projects.
ANNIE ABRAHAMS AND DANIEL PINHEIRO, "Why is the use of videoconferencing so exhausting? An analysis on the demands" (00:10): Video footage from the Distant Feeling(s) project run by Annie Abrahams and Daniel Pinheiro, a yearly reconnection, eyes closed, no talking.
MAURICIO CARRASCO AND DANIEL ZEA, "Vortex Decameron: Building narratologies in pandemic times" (05:22): Emerging from another plague, the fourteenth-century Black Death, Boccaccio's Decameron presents the reader with examples of pre-, mid- and post-plague societal perceptions and norms. This correspondence inspired Ensemble Vortex to propose a newly commissioned series of contemporary narrative works.
TINA LA PORTA, "Internet Art At The Turn Of The Millennium" (11:32): This body of work spans 1994 to 2005; during this time I made over twenty web-based works that explored the idea of "live-ness" on the internet.
ALICIA DE MANUEL, DAVID CASACUBERTA, AND PEP GATELL, "La Maldición de la Corona: Revisiting videoconference as a system to foster group creativity" (16:50): A collective and remote experiment based on William Shakespeare's classic Macbeth, carried out by Fundación Épica La Fura dels Baus during the lockdown. The experiment involved around thirty creatives and resulted in a pioneering work that transcends technological barriers and investigates the future of theater.
MELISSANDRE VARIN, "Freezing Elements of Research" (21:54): the visual is the abstract of the audio / the audio is the abstract of the visual, altogether, with you, it forms an assemblage into braids.
The main aim of this article is to reflect on the impact of biases related to artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide a conceptual clarification of some terms related to biases in this particular context. We focus mainly on non-racial biases, which tend to receive less attention in the existing literature on biases in AI systems. We find that bias in AI systems used for COVID-19 can result in algorithmic injustice, and that the legal frameworks and strategies developed to prevent the emergence of bias have failed to adequately consider social determinants of health. Finally, we make recommendations on how to include more diverse professional profiles in order to develop AI systems with the epistemic diversity needed to tackle AI biases during the COVID-19 pandemic and beyond.