Understanding people's perceptions of and inferences about social robots, and thus their responses toward them, constitutes one of the most pervasive research themes in the field of Human-Robot Interaction today. We augment and extend this line of work by investigating, for the first time, the proposition that one's implicit self-theory orientation (underlying beliefs about the malleability of self-attributes, such as one's intelligence) can influence one's perceptions of emerging social robots developed for everyday use. We show that those who view self-attributes as fixed (entity theorists) express greater robot anxiety than those who view self-attributes as malleable (incremental theorists). This result holds even when controlling for well-known covariate influences, including prior robot experience, media exposure to science fiction, technology commitment, and certain demographic factors. However, only marginal effects emerged for attitudinal and intentional robot acceptance. In addition, we show that incremental theorists respond more favorably to social robots than entity theorists do. Furthermore, we find evidence indicating that entity theorists exhibit more favorable responses to a social robot positioned as a servant. We conclude with a discussion of our findings.
Robots that are capable of outperforming human beings on mental and physical tasks provoke perceptions of threat. In this article we propose that implicit self-theory (core beliefs about the malleability of self-attributes, such as intelligence) is a determinant of why some people perceive such threats more strongly than others. We test this possibility in a novel experiment in which participants watched a video of an apparently autonomous intelligent robot defeating human quiz players in a general knowledge game. Following the video, participants received either social comparison feedback, improvement-oriented feedback, or no feedback, and were then given the opportunity to play against the robot. We show that those who adopt a malleable self-theory (incremental theorists) are more likely to play against the robot after imagining losing to it, and exhibit more favorable responses and weaker identity threat than entity theorists (those adopting a fixed self-theory). Moreover, entity theorists (vs. incremental theorists) perceive autonomous intelligent robots to be significantly more threatening, in terms of both realistic and identity threats. These findings offer novel theoretical and practical implications, in addition to enriching the HRI literature by demonstrating that implicit self-theory is an influential variable underpinning perceived threat.