Modern human societies enforce social and moral norms through two types of sanctions with distinct historical origins. Informal sanctions (e.g., chiding a relative) have existed since the dawn of humanity, whereas formal sanctions (e.g., punishment by the state) emerged only over the last few thousand years, when laws began to divide norm violations into illegal and non-illegal ones. However, little research has investigated the psychological mechanisms underlying people's use of these two distinct sanctioning systems. We show for the first time that these different cultural histories have left detectable traces in people's moral judgments today. When considering formal sanctions, people are experts at discriminating among illegal violations of varying severity, an adaptation to the culturally recent introduction of authority and law. When considering light informal sanctions (e.g., disapproval), people are experts at discriminating among non-illegal violations of varying severity, which serves the regulation of the countless social norms of modern life. Most strikingly, when considering heavy informal sanctions (e.g., lashing out), people show equal discrimination expertise for illegal and non-illegal violations, likely reflecting the moral responses of ancient group living.
Establishing when, how, and why robots should be considered moral agents is key to advancing human-robot interaction. Whether a robot is considered a moral agent has significant implications for how researchers, designers, and users can, should, and do make sense of robots, and for whether robotic agency in turn triggers social and moral cognitive and behavioral processes in humans. Robotic moral agency also bears on how people should and do hold robots morally accountable, ascribe blame to them, develop trust in their actions, and determine when robots wield moral influence. In this workshop on Perspectives on Moral Agency in Human-Robot Interaction, we plan to bring together participants who are interested in or have studied a robot's moral agency and its impact on human behavior. We intend to provide a platform for interdisciplinary discussion of (1) which elements should be considered in determining a robot's moral agency, (2) how these elements can be measured, (3) how they can be realized computationally and applied to robotic systems, and (4) what societal impact is anticipated when moral agency is assigned to a robot. We encourage participation from diverse research fields, such as computer science, psychology, cognitive science, and philosophy, as well as from social groups marginalized in terms of gender, ethnicity, and culture.
In two studies, we evaluated trust in and the usefulness of automated versus manual parking, using an experimental paradigm and a survey of owners of vehicles with automated parking features. In Study 1, participants both manually parked a Tesla Model X and used its Autopark feature to complete perpendicular and parallel parking maneuvers. We examined differences in parking success and duration, intervention behavior, self-reported trust in and workload associated with the automation, and eye and head movements related to monitoring the automation. Participants reported higher trust in automated parallel parking than in automated perpendicular parking, and the Tesla's automated perpendicular parking proved less efficient than executing the maneuver manually. Study 2 investigated how frequently owners of vehicles with automated parking features used those features and probed why they chose not to use them. Owners reported usage patterns consistent with the empirical findings of Study 1, with higher usage rates for automated parallel parking. The results from both studies reveal that (1) automated parking is error-prone, (2) drivers nonetheless hold calibrated trust in the automated parking system, and (3) given the current state of the technology, the benefits of automated parallel parking surpass those of automated perpendicular parking.