Purpose of Review: To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of allowing a robotic system to unleash destructive force in warfare and take the attendant life-or-death decisions without any human intervention.

Recent Findings: A précis of current debates is provided, focusing on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. The main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control over weapons systems.

Summary: The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect for the laws of war, responsibility-ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks to global stability. These various concerns have been jointly taken to support the idea that all weapons systems, including autonomous ones, should remain under meaningful human control. The main approaches to MHC are described and briefly analyzed. Finally, it is emphasized that the idea of MHC bears significantly on the shared-control policies to be adopted in other ethically and legally sensitive application domains for robotics and artificial intelligence.

Keywords: Autonomous weapons systems · Roboethics · International humanitarian law · Human-robot shared control · Meaningful human control

This article is part of the Topical Collection on Roboethics.
In order to count as autonomous, a weapons system must perform the critical functions of target selection and engagement without any intervention by human operators. Human rights organizations, as well as a growing number of States, have been arguing for banning weapons systems satisfying this condition — usually referred to as autonomous weapons systems (AWS) in this account — and for maintaining meaningful human control (MHC) over all weapons systems. This twofold goal has been pursued by leveraging ethical and legal arguments, which spell out a variety of deontological or consequentialist reasons. Roughly speaking, deontological arguments support the conclusion that by deploying AWS one is likely, or even bound, to violate the moral and legal obligations of special sorts of agents (military commanders and operators) or the moral and legal rights of special sorts of patients (potential victims of AWS). Consequentialist arguments substantiate the conclusion that prohibiting AWS is expected to protect peace and security, thereby enhancing collective human welfare, more effectively than the incompatible choice of permitting their use. Contrary to a widespread view, this paper argues that deontological and consequentialist reasons can be coherently combined so as to provide mutually reinforcing ethical and legal grounds for banning AWS. To this end, a confluence model is set forth that resolves potential conflicts between the two approaches by prioritizing deontological arguments over consequentialist ones. Finally, it is maintained that the proposed confluence model bears significantly on the issue of what it is to exercise genuine MHC over existing and future AWS. Indeed, the confluence model allows full autonomy in the case of some anti-materiel defensive AWS, while requiring that autonomy be curbed in the case of both lethal AWS and future AWS that may seriously jeopardize peace and stability.
The notion of meaningful human control (MHC) has attracted overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a "differentiated" — but also "principled" and "prudential" — framework for MHC over weapons systems. The need for a differentiated approach — namely, one acknowledging that the extent of normatively required human control depends on the kind of weapons system used and the context of its use — is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) "fail-safe actor," contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) "accountability attractor," securing the legal conditions for responsibility ascriptions under international criminal law (ICL); and (3) "moral agency enactor," ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts are taken exclusively by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. The prudential character of our framework is expressed by means of a rule imposing, by default, the most stringent levels of human control on weapons targeting.
The default rule is motivated by epistemic uncertainties about the behavior of AWS. Designated exceptions to this rule are admitted only within the framework of an international agreement among states expressing the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements for those explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis both for national arms review policies and for binding international regulations on human control of weapons systems.