Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
The concept of meaningful work has recently received increased attention in philosophy and other disciplines. However, the impact of the increasing robotization of the workplace on meaningful work has received very little attention so far. Doing work that is meaningful leads to higher job satisfaction and increased worker well-being, and some argue for a right of access to meaningful work. In this paper, we therefore address the impact of robotization on meaningful work. We do so by identifying five key aspects of meaningful work: pursuing a purpose, social relationships, exercising skills and self-development, self-esteem and recognition, and autonomy. For each aspect, we analyze how the introduction of robots into the workplace may diminish or enhance the meaningfulness of work. We also identify a few ethical issues that emerge from our analysis. We conclude that robotization of the workplace can have both significant negative and positive effects on meaningful work. Our findings about ways in which robotization of the workplace can be a threat or opportunity for meaningful work can serve as the basis for ethical arguments for how to, and how not to, implement robots into workplaces.
In this paper, we discuss the ethics of automated driving. More specifically, we discuss responsible human-robot coordination within mixed traffic: i.e. traffic involving both automated cars and conventional human-driven cars. We do three main things. First, we explain key differences in robotic and human agency and expectation-forming mechanisms that are likely to give rise to compatibility problems in mixed traffic, which may lead to crashes and accidents. Second, we identify three possible solution-strategies for achieving better human-robot coordination within mixed traffic. Third, we identify important ethical challenges raised by each of these three possible strategies for achieving optimized human-robot coordination in this domain. Among other things, we argue that we should not just explore ways of making robotic driving more like human driving. Rather, we ought also to take seriously potential ways (e.g. technological means) of making human driving more like robotic driving. Nor should we assume that complete automation is always the ideal to aim for; in some traffic-situations, the best results may be achieved through human-robot collaboration. Ultimately, our main aim in this paper is to argue that the new field of the ethics of automated driving needs to take seriously the ethics of mixed traffic and responsible human-robot coordination.
In this article, I will argue that there is a moral case for making speed alerts and speed limiters mandatory in all cars. These technologies are fairly intrusive. Nevertheless, my claim is that we should accept these measures in our cars to solve a major problem in road safety: speeding. In 2010, in Europe, more than 30,000 people were killed and 1.4 million were injured in road traffic, with speeding as a major cause. Current enforcement measures work to some extent but are clearly not sufficient. Intelligent Speed Adaptation (ISA) systems are highly effective additional measures to counter speeding. Advisory ISA warns drivers if they transgress the speed limits. Limiting ISA makes speeding impossible, and consequently this technology can prevent up to 50% of fatal accidents.1 Intelligent Speed Adaptation is indispensable for reducing the risks of car driving to a more acceptable level. Many philosophers uncritically refer to driving as an example of acceptable risk imposition.2 The benefits of car driving are considered to justify the risks involved, which are perceived as being relatively low. Car driving is regarded as a morally acceptable practice from which we all benefit. However, as I will argue below, this view is problematic even with regard to lawful car driving. Moreover, in appealing to car driving as an example of acceptable risk imposition, one fails to appreciate the fact that the practice involves massive transgressions of the rules. Pedestrians, cyclists, and lawful drivers have good reason to reject the risks involved in our actual car driving practice. No tacit consent to the risks of driving can be inferred from individuals' choices to walk, cycle, and drive.
This paper critically assesses John Danaher’s ‘ethical behaviourism’, a theory on how the moral status of robots should be determined. The basic idea of this theory is that a robot’s moral status is determined decisively on the basis of its observable behaviour. If it behaves sufficiently similarly to some entity that has moral status, such as a human or an animal, then we should ascribe the same moral status to the robot as we do to this human or animal. The paper argues against ethical behaviourism by making four main points. First, it is argued that the strongest version of ethical behaviourism understands the theory as relying on inferences to the best explanation when inferring moral status. Second, as a consequence, ethical behaviourism cannot stick with merely looking at the robot’s behaviour while remaining neutral with regard to the difficult question of which property grounds moral status. Third, behavioural evidence ought not to be the only evidence that plays a role in inferring a robot’s moral status; knowledge of the robot’s design process and of its designer’s intentions ought to be taken into account as well. Fourth, knowledge of a robot’s ontology, and of how that ontology relates to human biology, is often also epistemically relevant for inferring moral status. The paper closes with some concluding observations.
The development of new effective but expensive medical treatments leads to discussions about whether and how such treatments should be funded in solidarity-based healthcare systems. Solidarity is often seen as an elusive concept; it appears to be used to refer to different sets of concerns, and its interrelations with the concept of justice are not well understood. This paper provides a conceptual analysis of the concept of solidarity as it is used in discussions on the allocation of healthcare resources and the funding of expensive treatments. It contributes to the clarification of the concept of solidarity by identifying in the literature and discussing four uses of the concept: (1) assisting patients in need, (2) upholding the solidarity-based healthcare system, (3) willingness to contribute and (4) promoting equality. It distinguishes normative and descriptive uses of the concept and outlines the overlap and differences between solidarity and justice. Our analysis shows that the various uses of the concept of solidarity point to different, even conflicting, ethical stances on whether and how access to effective, expensive treatments should be provided. We conclude that the concept of solidarity has a role to play in discussions on the accessibility and funding of newly approved medical treatments. It requires, for instance, that healthcare policies promote and maintain both societal willingness to contribute to the care of others and the value of providing care to vulnerable patients through public funding.