The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made fairness in ML, and more specifically in healthcare ML algorithms (HMLA), an important and urgent concern. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently explored. Our paper aims to fill this gap and address the AI ethics principle of fairness from a conceptual standpoint, drawing insights from accounts of fairness elaborated in moral philosophy and using them to conceptualise fairness as an ethical value and to redefine fairness in HMLA accordingly. To achieve our goal, following a first section that clarifies the background, methodology and structure of the paper, in the second section we provide an overview of the discussion of the AI ethics principle of fairness in HMLA and show that the concept of fairness underlying this debate is framed in purely distributive terms and overlaps with non-discrimination, which is in turn defined as the absence of biases. After showing that this framing is inadequate, in the third section we pursue an ethical inquiry into the concept of fairness and argue that fairness ought to be conceived of as an ethical value. Following a clarification of the relationship between fairness and non-discrimination, we show that the two do not overlap and that fairness requires much more than mere non-discrimination. Moreover, we highlight that fairness has not only a distributive but also a socio-relational dimension. Finally, we pinpoint the constitutive components of fairness. In doing so, we base our arguments on a renewed reflection on the concept of respect, which goes beyond the idea of equal respect to include respect for individual persons.
In the fourth section, we analyse the implications of our conceptual redefinition of fairness as an ethical value for the discussion of fairness in HMLA. Here, we claim that fairness requires more than non-discrimination and the absence of biases, and more than just distribution; it needs to ensure that HMLA respect persons both as persons and as particular individuals. Finally, in the fifth section, we sketch some broader implications and show how our inquiry can contribute to making HMLA and, more generally, AI promote the social good and a fairer society.
Fairness is one of the most prominent values in the debate on the ethics of artificial intelligence (AI) and, specifically, in the discussion on algorithmic decision-making (ADM). However, while the need for fairness in ADM is widely acknowledged, the very concept of fairness has not been sufficiently explored so far. Our paper aims to fill this gap and claims that an ethically informed redefinition of fairness is needed to adequately investigate fairness in ADM. To achieve our goal, after an introductory section clarifying the aim and structure of the paper, in section “Fairness in algorithmic decision-making” we provide an overview of the state of the art of the discussion on fairness in ADM and show its shortcomings; in section “Fairness as an ethical value”, we pursue an ethical inquiry into the concept of fairness, drawing insights from accounts of fairness developed in moral philosophy, and define fairness as an ethical value. In particular, we argue that fairness is articulated in a distributive and a socio-relational dimension; that it comprises three main components: fair equality of opportunity, equal right to justification, and fair equality of relationship; and that these components are grounded in the need to respect persons both as persons and as particular individuals. In section “Fairness in algorithmic decision-making revised”, we analyze the implications of our redefinition of fairness as an ethical value for the discussion of fairness in ADM and show that each component of fairness has profound effects on the criteria that ADM ought to meet. Finally, in section “Concluding remarks”, we sketch some broader implications and conclude.
The use of artificial intelligence (AI) in the field of telemedicine has grown exponentially over the past decade, along with the adoption of AI-based telemedicine to support public health systems. Although AI-based telemedicine can open up novel opportunities for the delivery of clinical health and care and become a strong aid to public health systems worldwide, it also comes with ethical risks that should be detected, prevented, or mitigated to ensure the responsible use of AI-based telemedicine in and for public health. However, despite the current proliferation of AI ethics frameworks, thus far, none have been developed for the design of AI-based telemedicine, especially for the adoption of AI-based telemedicine in and for public health. We aimed to fill this gap by mapping the most relevant AI ethics principles for AI-based telemedicine for public health and by showing the need to revise them via major ethical themes emerging from bioethics, medical ethics, and public health ethics, toward the definition of a unified set of 6 AI ethics principles for the implementation of AI-based telemedicine. (Am J Public Health. Published online ahead of print March 9, 2023:e1–e8. https://doi.org/10.2105/AJPH.2022.307225)
Many popular artificial intelligence (AI) ethics frameworks center the principle of autonomy as necessary to mitigate the harms that might result from the use of AI within society. These harms often disproportionately affect the most marginalized within society. In this paper, we argue that the principle of autonomy, as currently formalized in AI ethics, is itself flawed, as it expresses only a mainstream, mainly liberal notion of autonomy as rational self-determination, derived from the Western philosophical tradition. In particular, we claim that adherence to this principle, as currently formalized, not only fails to address many ways in which people’s autonomy can be violated, but also fails to grasp a broader range of AI-empowered harms profoundly tied to the legacy of colonization, which particularly affect the already marginalized and most vulnerable on a global scale. To counter this phenomenon, we advocate for a relational turn in AI ethics, starting from a relational rethinking of the AI ethics principle of autonomy, which we propose by drawing on theories of relational autonomy developed both in moral philosophy and in Ubuntu ethics.