Research in moral psychology has found that robots, more than humans, are expected to make utilitarian decisions. This expectation arises specifically when utilitarian action is contrasted with deontological inaction. In a series of eight experiments (total N = 3752), we compared judgments about robots' and humans' decisions in a rescue dilemma with no possibility of deontological inaction. A robot's decision to rescue an innocent victim of an accident was judged more positively than its decision to rescue two people culpable for the accident (Studies 1–2b). This pattern repeated in a large-scale web survey (Study 3, N ≈ 19,000) and reversed when all victims were equally culpable or innocent (Study 5). Differences in judgments about humans' and robots' decisions were largest for norm-violating decisions. In sum, robots are not always expected to make utilitarian decisions, and their decisions are judged differently from those of humans against other moral standards as well.
Our study explores the folk concept of personal identity in a developmental context. Two hundred and seventeen Czech children participated in an interview study based on a hypothetical scenario about a sudden change in their friend, someone they know, or an unspecified other person. The children judged, on a 7-point scale, to what extent particular changes (drawn from six categories of traits) would alter the identity core of their friend or the other person. We introduced both positive and negative versions of the changes. Our data suggest that children considered moral traits connected to interpersonal relationships crucial for preserving personal identity. Memory connected to personal experiences also scored highly. By contrast, a change in physical appearance appeared to have the least impact on personal identity. Negative changes turned out to have a significantly
Autonomous vehicles (henceforth AVs) are expected to significantly benefit our transportation systems in terms of safety, efficiency, and environmental impact. However, many technical, social, legal, and moral questions and challenges concerning AVs and their introduction to the mass market remain. One pressing moral issue concerns the choice between AV types that differ in their built-in algorithms for dealing with situations of unavoidable lethal collision. In this paper we present the results of our study of moral preferences with respect to three types of AVs: (1) selfish AVs that protect the lives of the passenger(s) over any number of bystanders; (2) altruistic AVs that minimize the number of casualties, even if this leads to the death of the passenger(s); and (3) conservative AVs that abstain from interfering in such situations, even if this leads to the death of a higher number of subjects or of the passenger(s). We furthermore differentiate between scenarios in which participants make their decisions privately or publicly, and for themselves or for their offspring. We disregard gender, age, health, biological species, and other characteristics of (potential) casualties that could affect respondents' preferences and decisions in our scenarios. Our study is based on a sample of 2769 mostly Czech volunteers (1799 women, 970 men; age IQR: 25–32). The data come from our web-based questionnaire, which was accessible from May 2017 to December 2017. We aim to answer two research questions: (1) whether the public visibility of an AV type choice makes this choice more altruistic, and (2) which type of situation is more problematic with regard to the altruistic choice: opting for society as a whole, for oneself, or for one's offspring. Our results show that respondents exhibit a clear preference for an altruistic, utilitarian strategy for AVs. This preference is reinforced if the AV signals its strategy to others.
The altruistic preference is strongest when people choose software for everybody else, weaker when they choose for themselves, and weakest when they choose for their own child. Based on these results we conclude that, in contrast to a private choice, a public choice is considerably more likely to pressure consumers into accepting a non-selfish solution, making it a reasonable and relatively cheap way to shift car owners and users towards greater altruism. Likewise, a hypothetical parliamentary vote on a single available program is less selfish when the vote is not held in secret.