Recent research suggests that attributions of aliveness and mental capacities to faces are influenced by social group membership. In this article, we investigated group-related biases in mind perception in participants from a Western and an Eastern culture, employing faces of varying ethnic groups. In Experiment 1, Caucasian faces that ranged on a continuum from real to artificial were evaluated by participants in the UK (in-group) and in India (out-group) on animacy, abilities to plan and to feel pain, and having a mind. Human features were attributed to a greater extent to faces that belonged to in-group members, whereas out-group faces had to appear more realistic in order to be perceived as human. When participants in India evaluated South Asian (in-group) and Caucasian (out-group) faces in Experiment 2, the results closely mirrored those of the first experiment. In both studies, ratings of out-group faces were significantly predicted by participants’ levels of ethnocultural empathy. The findings highlight the role of intergroup processes (i.e., in-group favoritism and out-group dehumanization) in the perception of human and mental qualities and point to ethnocultural empathy as an important factor in responses to out-groups.
Robots have the potential to transform our existing categorical distinctions between “property” and “persons.” Previous research has demonstrated that humans naturally anthropomorphize robots, and this tendency may be amplified when a robot is subjected to abuse. At the same time, robots give rise to hopes and fears about the future and our place in it. However, most available evidence on these mechanisms is either anecdotal or based on a small number of laboratory studies with limited ecological validity. The present work aims to bridge this gap by examining responses of participants (N = 160) to four popular online videos from a leading robotics company (Boston Dynamics) and one video of a more familiar vacuum-cleaning robot (Roomba). Our results suggest that unexpectedly human-like abilities may provide more potent cues to mind perception than appearance, whereas appearance may attract more compassion and protection. Exposure to advanced robots significantly influences attitudes toward future artificial intelligence. We discuss the need for more research examining groundbreaking robotics outside the laboratory.
Previous research has shown that when people read vignettes about the infliction of harm upon an entity appearing to have no more than a liminal mind, their attributions of mind to that entity increase. Here, we investigated whether the presence of a facial wound enhanced the perception of mental capacities (experience and agency) in response to images of robotic and human-like avatars, compared with unharmed avatars. The results revealed that harmed versions of both robotic and human-like avatars were imbued with mind to a higher degree, irrespective of the baseline level of mind attributed to their unharmed counterparts. Perceptions of the capacity for pain mediated attributions of experience, while both pain and empathy mediated attributions of abilities linked to agency. The findings suggest that harm, even when it appears to have been inflicted unintentionally, may augment mind perception for robotic as well as for nearly human entities, at least as long as it is perceived to elicit pain.
A robot's decision to harm a person is sometimes considered the ultimate proof that it has gained a human-like mind. Here, we contrasted predictions about the attribution of mental capacities derived from moral typecasting theory with the denial of agency described in the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent) and additionally varied the type of agent (robotic and human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges established beliefs about anthropomorphism in the domain of moral interactions.
According to moral typecasting theory, good- and evil-doers (agents) interact with the recipients of their actions (patients) in a moral dyad. When this dyad is completed, mind attribution towards intentionally harmed liminal minds is enhanced. However, from a dehumanisation perspective, malevolent actions may instead result in a denial of humanness. To contrast the two accounts, a visual vignette experiment (N = 253) depicted either malevolent or benevolent intentions towards robotic or human avatars. Additionally, we examined the role of harm salience by showing patients as either harmed or still unharmed. The results revealed significantly increased mind attribution towards visibly harmed patients, mediated by perceived pain and expressed empathy. Benevolent and malevolent intentions were evaluated as morally right or wrong, respectively, but their impact on the patient was diminished for the robotic avatar. Contrary to dehumanisation predictions, our manipulation of intentions failed to affect mind perception. Nonetheless, benevolent intentions reduced dehumanisation of the patients. Moreover, when pain and empathy were statistically controlled, the effect of intentions on mind perception was mediated by dehumanisation. These findings suggest that perceived intentions might be only indirectly tied to mind perception, and that their role may be better understood when additionally accounting for empathy and dehumanisation.