The ability to impute mental states to oneself or others, known as Theory of Mind (ToM), has been intrinsically linked to trust between humans. However, less is known about how a robot mimicking ToM affects users' trust and behaviour. We explore this through an online study in which we compare three robot personas in a cooperative maze navigation task: one neutral, one that explains its reasoning in technical terms, and one that mimics ToM. We show that ToM influences human decision-making behaviour and trust in a way that makes it better calibrated to the competencies of the robot. This is key for human-robot collaboration and the adoption of robotics moving forward.
This study implemented a Delphi Method, a systematic technique that relies on a panel of experts to achieve consensus, to evaluate which questionnaire items would be most relevant for developing a new Propensity to Trust scale. Two surveys were administered to academic lecturers, professors and Ph.D. candidates specialising in the fields of individual differences, human-robot interaction, or occupational psychology. Results from 28 experts produced 33 final questionnaire items that were deemed relevant for evaluating trust. We discuss the importance of content validity when implementing scales, and we emphasise the need for more thoroughly documented scale development processes in psychology. Furthermore, we propose that the Delphi approach could be utilised as an effective and economical method for achieving content validity while also providing greater transparency about a scale's creation.