As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding ethical behavior in lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. These include: (1) an ethical governor capable of restricting robotic behavior to predefined social norms; (2) an ethical adaptor that draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; (3) models of robotic trust in humans and its dual, deception, drawing on psychological models from interdependence theory; and (4) an approach to the maintenance of dignity in human-robot relationships.
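To make the governor idea concrete, the following is a minimal Python sketch of a filter-style governor that suppresses candidate actions violating predefined constraints. It is an illustration of the general concept only, not the architecture described in the paper; the `Action` representation, constraint form, and all names are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Action:
    """A candidate behavior proposed by the robot's behavioral controller."""
    name: str
    utility: float  # task utility as assessed by the planner


# A constraint maps an action (and, in a fuller system, world state) to True
# if the action is permissible under that constraint.
Constraint = Callable[[Action], bool]


@dataclass
class EthicalGovernor:
    """Suppresses candidate actions that violate any predefined constraint."""
    constraints: List[Constraint] = field(default_factory=list)

    def permissible(self, action: Action) -> bool:
        return all(c(action) for c in self.constraints)

    def govern(self, candidates: List[Action]) -> List[Action]:
        # If no candidate is permissible, an empty list is returned and the
        # robot defaults to inaction rather than violating a constraint.
        return [a for a in candidates if self.permissible(a)]


# Example constraint: forbid any action tagged as entering a restricted area.
def no_restricted_area(action: Action) -> bool:
    return "restricted" not in action.name


governor = EthicalGovernor(constraints=[no_restricted_area])
candidates = [Action("move_to_goal", 0.9), Action("shortcut_via_restricted_zone", 1.2)]
for a in governor.govern(candidates):
    print(a.name)  # only "move_to_goal" survives governance
```

Note that the higher-utility action is discarded: the point of a governor in this style is that constraint satisfaction, not task utility, has the final say.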
We varied the ability of a robot to mitigate a participant's risk in a navigation guidance task and measured the effect this had on the participant's trust in the robot in a subsequent round of the task. A significant loss of trust was found after a single robot failure.
Deception is utilized by a variety of intelligent systems ranging from insects to human beings. It has been argued that the use of deception is an indicator of theory of mind [2] and of social intelligence [4]. We use interdependence theory and game theory to explore the phenomenon of deception from the perspective of robotics, and to develop an algorithm that allows an artificially intelligent system to determine whether deception is warranted in a social situation. Using techniques introduced in [1], we present an algorithm that bases a robot's deceptive action selection on its model of the individual it is attempting to deceive. Simulation and robot experiments that use these algorithms to investigate the nature of deception itself are discussed.
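To convey the flavor of this outcome-matrix reasoning, here is a minimal Python sketch of a deceiver that consults its model of the mark before deciding whether a false signal is worthwhile. The payoff values, the credulous partner model, and every function name are invented for illustration; this is not the algorithm of [1].

```python
from typing import Dict, Tuple

# Outcome matrix in the spirit of interdependence theory: a joint action pair
# (deceiver's actual choice, mark's response) -> (deceiver payoff, mark payoff).
OutcomeMatrix = Dict[Tuple[str, str], Tuple[float, float]]


def predict_response(mark_model: Dict[str, str], signal: str) -> str:
    """The deceiver's model of the mark: the response the mark is expected
    to choose after receiving a given signal."""
    return mark_model[signal]


def deception_warranted(outcomes: OutcomeMatrix,
                        mark_model: Dict[str, str],
                        actual_choice: str,
                        false_signal: str) -> bool:
    """Deceive only if, under the model of the mark, sending a false signal is
    predicted to yield a strictly better outcome for the deceiver than
    signalling truthfully."""
    honest = outcomes[(actual_choice, predict_response(mark_model, actual_choice))][0]
    deceptive = outcomes[(actual_choice, predict_response(mark_model, false_signal))][0]
    return deceptive > honest


# Hide-and-seek style example: the robot hides on the left; the mark searches
# wherever it believes the robot went. Payoffs are invented for illustration.
outcomes = {
    ("left", "search_left"):   (0.0, 1.0),  # found
    ("left", "search_right"):  (1.0, 0.0),  # escapes
    ("right", "search_left"):  (1.0, 0.0),
    ("right", "search_right"): (0.0, 1.0),
}
mark_model = {"left": "search_left", "right": "search_right"}  # a credulous mark

print(deception_warranted(outcomes, mark_model,
                          actual_choice="left", false_signal="right"))
# -> True: falsely signalling "right" is predicted to send the mark the wrong way
```

The key point the sketch tries to capture is that the decision hinges on the deceiver's model of the mark: with a different `mark_model` (for example, a mark that ignores signals), the same outcome matrix would make deception pointless.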
The COVID-19 pandemic will have a profound and long-lasting impact on the entire scientific endeavor. Scientists are already adapting their research programs to respond to changes in what is prioritized—and what is possible; educators are changing the way that the next generation of researchers is trained; and flagship conferences in many fields are being cancelled, postponed, or fundamentally transformed.
These broad-reaching changes have a particular impact on human-oriented domains such as human-robot interaction (HRI). Because in-person human-subject experiments can take a year or more to conduct, the research we will see published in the field in the immediate future may appear to be “business as usual,” with accounts of laboratory studies with large numbers of in-person participants. The research currently being performed, however, is of course a different story entirely. Studies that were under way when the current crisis began will be truncated, resulting either in work that cannot be published or in work whose true impact is difficult to accurately assess. HRI research performed in the coming years will be changed in more fundamental ways: the inability to perform—or expect future performance of—in-person human-subjects research, especially research involving tactile or multiparty interaction, will change both the dominant methodological techniques employed by HRI researchers and the very research questions that the field chooses to—and is able to—address.
These challenges demand that HRI researchers identify precisely how the field can maintain research quality and impact while the ability to conduct human-subject studies is severely impaired for an undetermined amount of time. A natural inclination may be simply to wait the crisis out in the hope of a speedy return to normalcy; however, in this article, we argue that the community can also take this opportunity to reevaluate and refocus how research in this field is conducted and how students are mentored in ways that will yield benefits for years to come after the current crisis has ended.