The idea of separating a person's consciousness from the body and transferring it to another medium ("mind upload") is actively discussed in science, philosophy, and science fiction. Mind upload technologies are currently being developed by private companies in Silicon Valley, and similar technological developments have received significant funding in the EU. Mind upload has important existential and ethical implications, yet little is known about how ordinary people actually feel about it. The current paper provides a thorough moral psychological evaluation of the cognitive factors that explain people's feelings and reactions toward mind upload technology. Four studies (including a pilot) with a total of 952 participants showed that biological and cultural cognitive factors help determine how strongly people condemn mind upload. Both experimental laboratory manipulations and cross-sectional correlational online designs were employed. The results showed that people who value purity norms and have higher sexual disgust sensitivity are more inclined to condemn mind upload. Furthermore, people who are anxious about death and condemn suicidal acts were more accepting of mind upload. Finally, higher science fiction literacy and/or hobbyism strongly predicted approval of mind upload. Several possible confounding factors were ruled out, including personality, values, individual tendencies toward rationality, and theory of mind capacities. Possible idiosyncrasies in the stimulus materials (whether consciousness is uploaded to a computer, a chimpanzee, an artificial brain, or an android, and whether the person's body physically dies during the process) were also ruled out. The core findings inform ongoing philosophical discussions on how mind upload could (or should) be used in the future, and imply that mind upload is a much more salient topic for the general population than previously thought.
The role of emotional disgust and disgust sensitivity in moral judgment and decision-making has been debated intensively for over 20 years. Until very recently, there were two main evolutionary narratives for this rather puzzling association. One model suggests that it developed through some form of group selection mechanism, in which the internal norms of groups acted as pathogen safety mechanisms. Another model suggested that these mechanisms developed through hygiene norms piggybacking on pathogen disgust mechanisms. In this study we present another alternative, namely that this mechanism might have evolved through sexual disgust sensitivity. We note that although the role of disgust in moral judgment has recently been questioned, few studies have taken disgust sensitivity into account. We present data from a large sample (N = 1300) in which we used structural equation modeling to analyze the associations between the Three-Domain Disgust Scale and the 12 most commonly used moral dilemmas measuring utilitarian/deontological preferences. Our results indicate that of the three domains of disgust, only sexual disgust is associated with more deontological moral preferences. We also found that pathogen disgust was associated with more utilitarian preferences. Implications of the findings are discussed.
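The reported analysis used structural equation modeling; as a simplified stand-in, the sketch below computes zero-order correlations on simulated data whose effect directions follow the reported pattern (sexual disgust associated with more deontological preferences, pathogen disgust with more utilitarian ones). Subscale names follow the Three-Domain Disgust Scale, but all scores and effect sizes here are invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1300  # matches the reported sample size; the data are simulated

# Hypothetical standardized subscale scores of the Three-Domain Disgust Scale
pathogen = rng.normal(0, 1, n)
sexual = rng.normal(0, 1, n)
moral = rng.normal(0, 1, n)

# Simulated deontological-preference index, constructed to follow the
# reported directions: sexual disgust positive, pathogen disgust negative
deontological = 0.3 * sexual - 0.2 * pathogen + rng.normal(0, 1, n)

df = pd.DataFrame({"pathogen": pathogen, "sexual": sexual,
                   "moral": moral, "deontological": deontological})
print(df.corr()["deontological"].round(2))
```

A full replication would model the dilemmas and subscales as latent variables (e.g., with an SEM package) rather than as observed composite scores.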
Artificial intelligences (AIs) are widely used in tasks ranging from transportation and healthcare to the military. Many tasks carried out by autonomous AIs have consequences for human well-being, but it is still unclear how people would prefer them to act in ethically difficult situations. In six studies with data from two cultures (five quantitative experiments, n = 1569, and a qualitative anthropological field study, n = 30), we presented people with hypothetical situations in which a human or an advanced robot nurse is ordered to forcefully medicate an unwilling patient. We measured moral acceptance, perceived trust, and allocation of responsibility relating to the nurse's decision either to follow orders and forcefully medicate the patient, or to disregard orders to protect the patient's autonomy. Our participants were averse to robot nurses who forcefully medicated the patient, and preferred robot nurses who respected patient autonomy by disobeying orders. Under certain conditions, the decision to respect patient autonomy was more acceptable for robot nurses than for human nurses. Thus, our results suggest that people prefer robots that are capable of disobeying orders in favor of abstract moral principles such as valuing personal autonomy. These findings were relatively robust against manipulations of the nurse's perceived reputation and character, and of whether the patient lived or died afterwards. We also found that moral judgment is distinct from evaluations of trust and responsibility. In general, our participants did not trust robot nurses or hold them responsible for their actions; human nurses who forcefully medicated a patient, on the other hand, were morally condemned but also trusted. It seems that the moral psychology of robotics is a new and increasingly relevant subfield of moral psychology that requires extensive attention.
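The core between-subjects comparison in a design like this, the acceptability of the "disobey to protect autonomy" decision for a robot versus a human nurse, can be sketched with a simple two-sample test. The ratings below are simulated so that the robot condition scores higher, merely mirroring the reported direction; the scale, means, and group sizes are all hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical 7-point acceptability ratings for the decision to disobey
# orders and protect patient autonomy, simulated so that the robot-nurse
# condition is rated as more acceptable (the reported direction)
human_nurse = np.clip(rng.normal(4.5, 1.2, 200), 1, 7)
robot_nurse = np.clip(rng.normal(5.2, 1.2, 200), 1, 7)

t, p = stats.ttest_ind(robot_nurse, human_nurse)
print(f"t = {t:.2f}, p = {p:.4g}")
```

The actual experiments additionally manipulated reputation, character, and patient outcome, which would make this a factorial design (e.g., an ANOVA or regression with interaction terms) rather than a single t-test.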