In many everyday situations, the brain needs to engage not only in making decisions, but also in anticipating and predicting the behavior of others. In such contexts, gaze can be highly informative about others’ intentions, goals and upcoming decisions. Here, we investigated whether a humanoid robot’s gaze (mutual or averted) influences the way people strategically reason in a social decision-making context. Specifically, participants played a strategic game with the robot iCub while we measured their behavior and neural activity by means of electroencephalography (EEG). Participants were slower to respond when iCub established mutual gaze prior to their decision than when its gaze was averted. This slowing was associated with a higher decision threshold in a drift diffusion model and accompanied by more synchronized EEG alpha activity. In addition, we found that participants reasoned about the robot’s actions in both conditions. However, those who mostly experienced averted gaze were more likely to adopt a self-oriented strategy, and their neural activity showed higher sensitivity to outcomes. Altogether, these findings suggest that robot gaze acts as a strong social signal for humans, modulating response times, decision thresholds, neural synchronization, choice strategies, and sensitivity to outcomes. This has strong implications for all contexts involving human-robot interaction, from robotics to clinical applications.
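To illustrate why a higher decision threshold in the drift diffusion framework produces slower responses, here is a minimal simulation sketch in Python. All parameter values (drift rate, thresholds, non-decision time) are hypothetical placeholders chosen for illustration, not the fitted estimates from the study; the diffusion is approximated with a simple Euler–Maruyama step.

```python
import numpy as np

def simulate_ddm(drift, threshold, n_trials=2000, dt=0.001,
                 noise_sd=1.0, non_decision=0.3, seed=0):
    """Simulate first-passage times of a drift diffusion process.

    Evidence starts at 0 and accumulates with the given drift plus
    Gaussian noise until it crosses +threshold or -threshold; the
    crossing time plus a non-decision component is the response time.
    """
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            # Euler-Maruyama update of the diffusion process
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t + non_decision
    return rts

# Hypothetical mapping: a higher threshold stands in for the
# "mutual gaze" condition and yields slower mean response times.
rt_averted = simulate_ddm(drift=1.5, threshold=1.0)
rt_mutual = simulate_ddm(drift=1.5, threshold=1.4)
print(f"mean RT, averted gaze: {rt_averted.mean():.3f} s")
print(f"mean RT, mutual gaze:  {rt_mutual.mean():.3f} s")
```

Raising the threshold while holding the drift rate fixed lengthens the accumulation path, which is the signature of more cautious responding reported in the abstract.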
Trust is fundamental to building meaningful social interactions. With the advance of social robotics in collaborative settings, trust in Human-Robot Interaction (HRI) is gaining more and more scientific attention. Indeed, understanding how different factors may affect users’ trust toward robots is of utmost importance. In this study, we focused on two factors related to the robot’s behavior that could modulate trust. In a two-alternative forced-choice task where a virtual robot reacted to participants’ performance, we manipulated the human-likeness of the robot’s motion and the valence of the feedback it provided. To measure participants’ subjective level of trust, we used subjective ratings throughout the task as well as a post-task questionnaire that distinguishes the capacity and moral dimensions of trust. We expected the presence of feedback to improve trust toward the robot and human-likeness to strengthen this effect. Interestingly, we observed that participants trusted the robot equally in most conditions but distrusted it when it showed neither social feedback nor human-like behavior. In addition, subjective trust ratings correlated positively with the moral and capacity dimensions of trust only when the robot provided feedback during the task. These findings suggest that the presence and human-likeness of feedback behaviors positively modulate trust in HRI and thereby provide important insights for the development of non-verbal communicative behaviors in social robots.
Understanding how and when humans attribute intentionality to artificial agents is a key issue in the contemporary human and technological sciences. This paper addresses the question of whether adoption of the intentional stance can be modulated by exposure to a 3D animated robot character, and whether this depends on the human-likeness of the character's behavior. We report three experiments investigating how the appearance and behavioral features of a virtual character affect humans’ attribution of intentionality to artificial social agents. The results show that adoption of the intentional stance can be modulated depending on participants' expectations about the agent. This study draws attention to specific features of virtual agents and offers insights for further work in the field of virtual interaction.
Decision-making processes are involved in a large portion of humans’ everyday life and often occur in social situations. Although social decision-making has received increasing interest in the literature, the influence of communicative signals, such as social cues and feedback, on social decision-making is still poorly understood. In particular, the question of whether social signals exhibited by non-human agents influence decision-making has not yet been addressed. This question is of great importance today, in a new era in which artificial agents, such as robots or avatars, interact with humans on a daily basis. This study aimed to examine whether a robot’s non-verbal communicative behavior affects human decision-making and can be perceived as a social signal. To this end, we implemented a two-alternative choice task with a between-subjects design in which a robot acted as a game partner. We manipulated the robot’s cues before participants’ decisions and the robot’s feedback after the decisions. We found that manipulating the robot’s signals affected participants’ performance. In particular, participants were slower in the condition where cues were mostly invalid and the robot reacted positively to wins. We show that this effect could not be attributed to attentional mechanisms, feedback expectation, or cognitive control. Instead, our results suggest that the effect is due to a violation of expectations arising from the incongruence between the social signals. Our findings also indicate that these social expectations differ when they concern robot signals rather than human signals.
Recent studies suggest that people can interact with robots as social agents. However, it is still unclear what mental processes people rely on when interacting with robots. One core process in social cognition is the adoption of the intentional stance, a strategy that humans use to interpret the behavior of others with reference to mental states. In this work, we sought to examine how the adoption of the intentional stance may be modulated by the type of behavior exhibited by a virtual robot and the context in which people are exposed to it. We developed an interactive virtual task and used the InStance Test to measure the attribution of intentionality to the robot. Our results show that participants attributed more intentionality to the virtual robot after interacting with it, independently of the type of behavior. Leveraging data from a previous study, we also show that this increase is stronger than in a non-interactive, purely observational scenario. This study thus improves our understanding of how different contexts can affect the attribution of intentional stance and anthropomorphism in Human-Robot Interaction.