regulatory interface. As one of us has suggested previously,1 there are several possibilities for the creation of company structures that might provide functional and adaptive legal "housing" for advanced software, various types of artificial intelligence, and other programmatic systems and organizations, phenomena that we refer to here collectively as autonomous systems, for ease of reference. In particular, this prior work introduces the notion that an operating agreement or private entity constitution (such as a corporation's charter or a partnership's operating agreement) can adopt, as the acts of a legal entity, the state or actions of arbitrary physical systems. We call this the algorithm-agreement equivalence principle.2 Given this principle and the present capacities of existing forms of legal entities, companies of various kinds can serve as a mechanism through which autonomous systems might engage with the legal system. This paper considers the implications of this possibility from a comparative and international perspective. Our goal is to suggest how, under U.S., German, Swiss, and U.K. law, company law might furnish the functional and adaptive legal "housing" for an autonomous system. In turn, we aim to inform systems designers, regulators, and others who are interested in, encouraged by, or alarmed at the possibility that an autonomous system may "inhabit" a company and thereby gain some of the incidents of legal personality. We do not aim here to be normative. Instead, the paper lays out a template suggesting how existing laws might provide a potentially unexpected regulatory framework for autonomous systems, and explores some legal consequences of this possibility. We suggest that these considerations might spur others to consider the relevant provisions of their own national laws with a view to locating similar legal "spaces" that autonomous systems could "inhabit."
Introduction

Information technology has become a decisive element in modern warfare, in particular when armed forces of developed countries are involved. Modern weapon systems would not function without sophisticated computing power, and the planning and execution of military operations in general also rely heavily on information technology. In addition, armed forces, as well as police, border control, and civil protection organizations, increasingly rely on robotic systems with growing autonomous capacities. This poses tactical and strategic, but also ethical and legal, issues that are of particular relevance when procurement organizations are evaluating such systems for security applications.

To support the evaluation of such systems from an ethical perspective, this report presents an evaluation schema for the ethical use of autonomous robotic systems in security applications, which also considers legal aspects to some degree. The focus is on two types of applications: first, systems whose purpose is not to destroy objects or to harm people (e.g. rescue robots, surveillance systems), although weaponization cannot be excluded; second, systems that deliberately possess the capacity to harm people or destroy objects, including defensive and offensive as well as lethal and non-lethal systems. The cyber domain, where autonomous systems are also increasingly used (software agents, specific types of cyber weapons, etc.), has been excluded from this analysis.

The research that has resulted in this report outlines the most important evaluations and scientific publications contributing to the international debate on the regulation of autonomous systems in the security context, in particular in the case of so-called lethal autonomous weapons systems (LAWS). The goal of the research is twofold: first, it should support the procurement of security/defense systems, e.g. to avoid reputation risks or costly assessments for systems that are ethically problematic and entail political risks; second, the research should contribute to the international discussion on the use of autonomous systems in the security context (e.g.
This article proposes five arguments about major aspects of artificial intelligence and their implications for international law. The aspects are: automation, personhood, weapons systems, control, and standardisation. The arguments in aggregate convey an idea of where international law needs to be adapted in order to cope with the artificial intelligence revolution under way. The arguments also show the inspiration that may be drawn from existing international law for the governance of artificial intelligence.
This paper presents the findings of a study that used applied ethics to evaluate autonomous robotic systems in practice. Using a theoretical tool developed by a team of researchers in 2017, to which one of the authors contributed, we conducted a study of four existing autonomous robotic systems in July 2020. The methods used to carry out the study and the results are highlighted through the specific example of ANYmal, an autonomous robotic system that is one component of the CERBERUS team that won first place in DARPA's Subterranean Challenge Systems Competition in September 2021.