Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
The concept of meaningful work has recently received increased attention in philosophy and other disciplines. However, the impact of the increasing robotization of the workplace on meaningful work has received very little attention so far. Doing work that is meaningful leads to higher job satisfaction and increased worker well-being, and some argue for a right of access to meaningful work. In this paper, we therefore address the impact of robotization on meaningful work. We do so by identifying five key aspects of meaningful work: pursuing a purpose, social relationships, exercising skills and self-development, self-esteem and recognition, and autonomy. For each aspect, we analyze how the introduction of robots into the workplace may diminish or enhance the meaningfulness of work. We also identify a few ethical issues that emerge from our analysis. We conclude that robotization of the workplace can have both significant negative and positive effects on meaningful work. Our findings about ways in which robotization of the workplace can be a threat or opportunity for meaningful work can serve as the basis for ethical arguments for how to, and how not to, implement robots into workplaces.
Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.
Self-driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same "ethics settings." Next, the oft-made analogy with the trolley problem is examined. Then follows an assessment of recent empirical work on lay-people's attitudes about crash algorithms relevant to the ethical issue of crash optimization. Finally, the article discusses what traditional ethical theories such as utilitarianism, Kantianism, virtue ethics, and contractualism imply about how cars should handle crash scenarios. The aim of the article is to provide an overview of the existing literature on these topics and to assess how far the discussion has progressed.
This chapter looks into the possibility of genuine loving relationships with robots (mutual love). Our primary aim is to offer a framework for approaching the question of mutual love. But we also sketch a tentative answer. Our tentative answer is that whereas mutual love between humans and sex-robots is not in principle impossible, it is hard to achieve. Nevertheless, building robots capable of mutual love may help to address concerns raised by critics of human-robot sexual relationships. Our discussion below generates a "job description" that advanced sex-robots would need to live up to in order to be able to participate in relationships that can be recognized as mutual love.
In this paper, we discuss the ethics of automated driving. More specifically, we discuss responsible human-robot coordination within mixed traffic: i.e. traffic involving both automated cars and conventional human-driven cars. We do three main things. First, we explain key differences in robotic and human agency and expectation-forming mechanisms that are likely to give rise to compatibility-problems in mixed traffic, which may lead to crashes and accidents. Second, we identify three possible solution-strategies for achieving better human-robot coordination within mixed traffic. Third, we identify important ethical challenges raised by each of these three possible strategies for achieving optimized human-robot coordination in this domain. Among other things, we argue that we should not just explore ways of making robotic driving more like human driving. Rather, we ought also to take seriously potential ways (e.g. technological means) of making human driving more like robotic driving. Nor should we assume that complete automation is always the ideal to aim for; in some traffic-situations, the best results may be achieved through human-robot collaboration. Ultimately, our main aim in this paper is to argue that the new field of the ethics of automated driving needs to take seriously the ethics of mixed traffic and responsible human-robot coordination.
One of the topics that often comes up in ethical discussions of deep brain stimulation (DBS) is the question of what impact DBS has, or might have, on the patient's self. This is often understood as a question of whether DBS poses a threat to personal identity, which is typically understood as having to do with psychological and/or narrative continuity over time. In this article, we argue that the discussion of whether DBS is a threat to continuity over time is too narrow. There are other questions concerning DBS and the self that are overlooked in discussions exclusively focusing on psychological and/or narrative continuity. For example, it is also important to investigate whether DBS might sometimes have a positive (e.g., a rehabilitating) effect on the patient's self. To widen the discussion of DBS, so as to make it encompass a broader range of considerations that bear on DBS's impact on the self, we identify six features of the commonly used concept of a person's "true self." We apply these six features to the relation between DBS and the self. And we end with a brief discussion of the role DBS might play in treating otherwise treatment-refractory anorexia nervosa. This further highlights the importance of discussing both continuity over time and the notion of the true self.