In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? Indeed, a great number of authors have indicated that this fear is deeply entrenched. The most pessimistic call for a drastic scaling-back of, or a complete moratorium on, AI systems, while the optimists aim to show that the gap can nonetheless be bridged. Contrary to both camps, I argue against the prevailing assumption that there is a technology-based responsibility gap. I show how moral responsibility is a dynamic and flexible process, one that can effectively encompass emerging technological entities.
The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles published by a wide range of public and private organizations. What remains underexplored, however, is how AI developers can be practically assisted in anticipating, identifying, and addressing ethical issues in AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this paper, we propose that an ‘embedded ethics’ approach, in which ethicists and developers together address ethical issues via an iterative and continuous process from the outset of development, could be an effective means of integrating robust ethical considerations into the practical development of medical AI.
Recent medical and bioethics literature shows a growing concern for practitioners' emotional experience and the ethical environment of the workplace. Moral distress, in particular, is often said to result from the difficult decisions made and the troubling situations regularly encountered in healthcare contexts. It has been identified as a leading cause of professional dissatisfaction and burnout, which, in turn, contribute to inadequate attention and increased pain for patients. Given the natural desire to avoid these negative effects, most authors hold that systematic efforts should be made to drastically reduce moral distress, if not eliminate it altogether from the lives of vulnerable practitioners. Such efforts may be problematic, however, as moral distress is not yet adequately understood, nor do the leading accounts agree on how to conceptualize the experience. In this article, I make clear what a robust account of moral distress should be able to explain and how the most common notions in the existing literature leave significant explanatory gaps. I present several cases of interest and, through careful reflection on their distinguishing features, establish important desiderata for an explanatorily satisfying account. Because the leading accounts leave these fundamental demands unsatisfied, there remains a persisting need for a conception of moral distress that can capture and delimit the range of cases of interest.
Moral distress in healthcare has become an increasingly prevalent topic of discussion. Most authors characterize it as a negative phenomenon, while few have considered its potentially positive value. In this essay, I argue that moral distress can reveal and affirm some of our most important concerns as moral agents. Indeed, under some circumstances, the experience appears to be partly constitutive of an honorable character and can allow for crucial moral maturation. The potentially positive value, then, is twofold: moral distress carries both aretaic and instrumental value. Granted, this position is not without its caveats, but by making these clear, I provide a novel framework for policy recommendations regarding when, if ever, we should work to reduce moral distress.
Responsibility is among the most widespread buzzwords in the ethics of artificial intelligence (AI) and robotics. Yet the term often remains unsubstantiated when employed in these important technological domains. Notions like ‘responsible AI’ and ‘responsible robotics’ may sound appealing, for they seem to convey a sense of moral goodness or ethical approval, thereby inviting psychological associations with self-regulation, social acceptance, or political correctness. For AI and ethics to come together in truly harmonious ways, we will need to work toward a common appreciation of the term. In this commentary, I break down three varieties of the term and draw on insights from the analytic ethics literature to offer a robust understanding of moral responsibility in emerging technology. While I do not wish to accuse any parties of incorrect usage, my hope is that, together, researchers in AI and ethics can be better positioned to appreciate and to develop notions of responsibility for technological domains.