AI is here now, available to, and extending the powers of, anyone with access to digital technology and the Internet. But its consequences for our social order are not only not understood; they are barely even yet the subject of study. How can we guide the way technology is changing society? Since 2015, the IEEE has been developing principles for the ethical design of intelligent and autonomous systems.
This paper proposes a set of five ethical principles, together with seven high-level messages, as a basis for responsible robotics. The Principles of Robotics were drafted in 2010 and published online in 2011. Since then, the principles have influenced, and continue to influence, a number of initiatives in robot ethics, but have not,
Evolutionary robotics is heading towards fully embodied evolution in real time and real space. In this paper we introduce the Triangle of Life, a generic conceptual framework for such systems in which robots can actually reproduce. This framework can be instantiated with different hardware approaches and different reproduction mechanisms, but in all cases the system revolves around the conception of a new robot organism. The other components of the Triangle capture the principal stages of such a system; the Triangle as a whole serves as a guide for realizing this anticipated breakthrough and building systems where robot morphologies and controllers can evolve in real time and real space. After discussing this framework and the corresponding vision, we present a case study of the SYMBRION research project, which realized some fragments of such a system in modular robot hardware.
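The kind of life cycle such a framework implies can be sketched as a simple simulation loop. The stage names in the comments below (conception, infancy, mature life) and the toy genome, recombination, and fitness functions are illustrative assumptions for this sketch, not the paper's own formalism; a real system would evaluate physical robots rather than a numeric objective:

```python
import random

def random_genome(length=8):
    """A genome jointly encoding morphology and controller parameters (toy)."""
    return [random.uniform(-1.0, 1.0) for _ in range(length)]

def recombine(parent_a, parent_b):
    """Conception: a new organism's genome from two parents (uniform crossover)."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def fitness(genome):
    """Placeholder task performance; stands in for measuring the real robot."""
    return -sum(g * g for g in genome)  # toy objective: genes near zero

# Illustrative population of mature robot organisms.
population = [random_genome() for _ in range(6)]

for generation in range(20):
    # Mature life: pick two parents, biased toward higher task performance.
    parents = sorted(population, key=fitness, reverse=True)[:2]
    # Conception and birth: construct the child organism.
    child = recombine(parents[0], parents[1])
    # Infancy: the child adapts (a small mutation stands in for learning).
    child = [g + random.gauss(0.0, 0.05) for g in child]
    # The child replaces the least-fit member of the population.
    population.sort(key=fitness)
    population[0] = child

best = max(population, key=fitness)
```

The loop is open-ended rather than generational: reproduction is triggered whenever parents are available, which is the sense in which evolution here runs in real time rather than in discrete batches.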
This review paper sets out to explore how future complex engineered systems based upon the swarm intelligence paradigm could be assured for dependability. The paper introduces the new concept of 'swarm engineering': a fusion of dependable systems engineering and swarm intelligence. It reviews the disciplines and processes conventionally employed to assure the dependability of complex (and safety-critical) systems in the light of swarm intelligence research, and in so doing tries to map processes of analysis, design, and test for safety-critical systems against relevant research in swarm intelligence. A case study of a swarm robotic system is used to illustrate this mapping. The paper concludes that while some of the tools needed to assure a swarm for dependability exist, many do not, and hence much work needs to be done before dependable swarms become a reality.
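One of the difficulties the paper points to is that dependability requirements for a swarm concern emergent, swarm-level behavior rather than any individual robot. As a toy illustration of testing such a property, the sketch below simulates a one-dimensional aggregation swarm and checks a dependability-style requirement: that the swarm contracts well inside its initial envelope. Both the model and the pass criterion are assumptions for illustration, not the paper's case study:

```python
import random

def step(positions, gain=0.1, noise=0.02):
    """Each agent moves toward the swarm centroid, with sensor noise."""
    centroid = sum(positions) / len(positions)
    return [p + gain * (centroid - p) + random.uniform(-noise, noise)
            for p in positions]

def spread(positions):
    """Swarm spread: maximum distance of any agent from the centroid."""
    centroid = sum(positions) / len(positions)
    return max(abs(p - centroid) for p in positions)

random.seed(42)
swarm = [random.uniform(-10.0, 10.0) for _ in range(20)]
initial_spread = spread(swarm)

for _ in range(200):
    swarm = step(swarm)

final_spread = spread(swarm)
# Dependability-style check: the aggregation requirement is met if the
# swarm has contracted well inside its initial envelope.
aggregated = final_spread < 0.25 * initial_spread
```

Note that the check is statistical in spirit: sensor noise means the swarm never collapses to a point, so the requirement must be stated as a bound rather than exact convergence, which is typical of the testing problems the paper discusses.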
Pei Wang's paper "On Defining Artificial Intelligence" was published in a special issue of the Journal of Artificial General Intelligence (JAGI) in December 2019 (Wang, 2019). Wang has been at the forefront of AGI research for over two decades; his non-axiomatic approach to reasoning has stood as a singular example of what may lie beyond narrow AI, garnering interest from NASA and Cisco, among others. We consider his article one of the strongest attempts, since the beginning of the field, to address the long-standing lack of consensus on how to define the field and topic of artificial intelligence (AI). Wang's definition featured in the recent AGISI survey on defining intelligence (Monett and Lewis, 2018):

"The essence of intelligence is the principle of adapting to the environment while working with insufficient knowledge and resources. Accordingly, an intelligent system should rely on finite processing capacity, work in real time, be open to unexpected tasks, and learn from experience."

This working definition interprets "intelligence" as a form of "relative rationality" (Wang, 2008).

1. Most striking in these numbers is the glaring absence of female authors. A common reason female academics gave for declining our invitation to contribute was overcommitment. As a community, we may want to think of new, different ways of engaging the full spectrum of AI practitioners if we value inclusion as an essential constituent of healthy scientific growth. Self-determination and willingness to participate are also essential.

This is an open access article licensed under the Creative Commons BY-NC-ND License.
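The properties Wang's definition lists, finite processing capacity, real-time operation, openness to unexpected tasks, and learning from experience, suggest what "relative rationality" looks like operationally: an agent returns the best answer it has found within its resource budget, not the globally optimal one. The sketch below is our own minimal illustration of that idea; the function names and the toy task are assumptions, not Wang's system:

```python
import random
import time

def anytime_search(candidates, score, budget_seconds=0.05):
    """Relative rationality (illustrative): return the best option found
    before the deadline, not the globally optimal one."""
    deadline = time.monotonic() + budget_seconds
    best, best_score = None, float("-inf")
    examined = 0
    for option in candidates:
        if time.monotonic() >= deadline:
            break  # insufficient resources: stop and act on current knowledge
        s = score(option)
        if s > best_score:
            best, best_score = option, s
        examined += 1
    return best, examined

# An 'unexpected task' the agent was not tuned for: find a number close to 37.
options = (random.uniform(0.0, 100.0) for _ in range(10_000_000))
choice, seen = anytime_search(options, score=lambda x: -abs(x - 37.0))
```

The agent never sees most of the ten million candidates; its answer is rational *relative* to the knowledge and time it actually had, which is the sense of the definition quoted above.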
Concerns over the risks associated with advances in Artificial Intelligence have prompted calls for greater efforts toward robust and beneficial AI, including machine ethics. Recently, roboticists have responded by initiating the development of so-called ethical robots. These robots would, ideally, evaluate the consequences of their actions and morally justify their choices. This emerging field promises to develop extensively over the coming years. However, in this paper, we point out an inherent limitation of the emerging field of ethical robots. We show that building ethical robots also necessarily facilitates the construction of unethical robots. In three experiments, we show that it is remarkably easy to modify an ethical robot so that it behaves competitively, or even aggressively. The reason for this is that the specific AI required to make an ethical robot can always be exploited to make unethical robots. Hence, the development of ethical robots will not guarantee the responsible deployment of AI. While advocating for ethical robots, we conclude that preventing the misuse of robots is beyond the scope of engineering, and requires instead governance frameworks underpinned by legislation. Without this, the development of ethical robots will serve to increase the risks of robotic malpractice rather than diminish them.
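The paper's central mechanism, that the very components of an ethical robot can be inverted, can be illustrated in a few lines. Assume, hypothetically, an ethical layer that scores candidate actions by predicted harm to a nearby human and picks the least harmful; flipping a single comparison produces aggressive behavior from exactly the same machinery. The harm model and world state below are toy assumptions, not the paper's experimental setup:

```python
def predicted_harm(action, world):
    """Consequence engine (toy): predicted harm of a candidate action,
    modeled as how close the robot would end up to the human."""
    robot_pos = world["robot"] + action
    return max(0.0, 1.0 - abs(robot_pos - world["human"]))

def choose_action(actions, world, ethical=True):
    """The same components yield ethical or unethical behavior:
    only the optimization direction differs."""
    key = lambda a: predicted_harm(a, world)
    return min(actions, key=key) if ethical else max(actions, key=key)

world = {"robot": 0.0, "human": 0.6}
actions = [-0.5, 0.0, 0.5]

safe = choose_action(actions, world, ethical=True)         # steers away
aggressive = choose_action(actions, world, ethical=False)  # steers toward
```

The consequence engine, the expensive part to build, is identical in both cases; only the one-line selection rule changes, which is why the authors argue that engineering alone cannot prevent misuse.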