Artificial intelligence plays a crucial role in our daily lives. At the same time, it is often met with reluctance and distrust. Previous research demonstrated that visibly artificial faces are judged less trustworthy and remembered less accurately than natural faces. Current technology, however, enables the generation of artificial faces that are indistinguishable from natural ones. Accordingly, we tested whether natural faces that are merely labelled as artificial are also trusted less. In three experiments (N = 399), we observed that natural faces merely labelled as artificial were judged to be less trustworthy. This bias was robust: it did not depend on the degree of trustworthiness and attractiveness of the faces, nor could it be modulated by changing raters’ attitudes towards artificial intelligence. At the same time, we did not observe differences in recall performance. We conclude that understanding and changing social evaluations of artificial intelligence goes beyond eliminating physical differences between artificial and natural entities.
Does the gender of a voice assistant influence the perceived appropriateness of its responses to verbal sexual harassment? To answer this question, the perceived appropriateness of actual responses to sexual harassment directed at conversational systems was tested, manipulating the gender of the voices. The results show an effect of gender on perceived appropriateness: depending on the appropriateness category, male senders are perceived as more appropriate than female senders.
Artificial intelligence increasingly plays a crucial role in daily life. At the same time, it is often met with reluctance and distrust. Previous research demonstrated that visibly artificial faces are judged less trustworthy and remembered less accurately than natural faces. Current technology, however, enables the generation of artificial faces that are indistinguishable from natural ones. In five experiments (total N = 867), we tested whether natural faces that are merely labelled as artificial are also trusted less. A meta-analysis of all five experiments suggested that natural faces merely labelled as artificial were judged to be less trustworthy. This bias did not depend on the degree of trustworthiness and attractiveness of the faces (Experiments 1–3). It was not modulated by changing raters’ attitudes towards artificial intelligence (Experiments 2–3) or by information communicated by the faces (Experiment 4). We also did not observe differences in recall performance between faces labelled as artificial or natural (Experiment 3). When participants judged only one type of face (i.e., either labelled as artificial or natural), the difference in trustworthiness judgments was eliminated (Experiment 5), suggesting that the contrast between the natural and artificial categories within the same task promoted the labelling effect. We conclude that faces merely labelled as artificial are trusted less in situations that also include faces labelled as real. We propose that understanding and changing social evaluations of artificial intelligence goes beyond eliminating physical differences between artificial and natural entities.