Trust is one of the most important constructs for understanding human interactions with systems. As such, considerable research has examined factors such as individual differences in trust, environmental influences on trust, and even ‘contagion’ effects of trust. However, the literature at times presents results that are inconsistent with existing findings, with established definitions, and occasionally even with logic, with no clear path toward reconciliation. This manuscript highlights some of these inconsistencies within individual differences (personality and gender), task and environment, and system-wide trust, while also identifying outstanding questions. Although this review and critique does not encompass the totality of the literature, it provides representative examples of some of the issues facing the study of trust. For each section, we offer a series of research questions, yielding 17 proposed questions that need answering. Finally, we propose 3 questions that researchers should ask in every study, which may help mitigate some of these issues in the future.
Previous research has shown that the design of robots can affect the trust, liking, and empathy a user feels toward a robot, and that this empathy can directly shape users’ interactions with the system. Existing research has examined how empathy influences users’ willingness to, for example, put a robot in harm’s way or destroy it. However, these studies have relied on narrative-driven manipulations, which may have introduced experimental demands that influenced the results. We therefore use a human-likeness manipulation to evaluate how empathy-evoking design affects the use of robots in high-risk environments. Results indicate no significant difference in robot use between conditions, conflicting with previous research. More research is needed to understand when users are, and are not, willing to use a robot in a high-risk environment.
As autonomous systems become responsible for more complex decisions, it is crucial to consider how these systems will respond when they must make potentially controversial decisions without input from users. While previous literature has suggested that users prefer machinelike systems that act to promote the greater good, little research has examined how the humanlikeness of an agent influences perceptions of its moral decisions. We ran two online studies in which participants and an automated agent each made a decision in an adapted trolley problem. Our results conflicted with previous literature: they did not support the idea that humanlike agents are trusted in moral dilemmas in a manner analogous to humans. Our study did, however, support the importance of a shared moral view between users and systems for trust. Further investigation is necessary to clarify how humanlikeness and moral view interact to form impressions of trust in a system.
Social media is omnipresent in many people’s lives, and its popularity has made it a prime delivery method for misinformation. This problem is widely recognized, even by social media companies. The debate continues over the right approach to combating misinformation: some suggest removing it; others suggest labeling it. It is therefore important to understand how labeling misinformation may interact with individual differences to affect recall of what is and is not misinformation. Beyond factors such as confirmation bias, in-group versus out-group assignment, and other cognitive effects, individual differences may affect the likelihood of misremembering the veracity of information. In this study, we test the effects of differences in working memory and personality on recall of misinformation labels, with the aim of determining what effect, if any, these factors have on the utility of misinformation labels. Results indicate that individuals’ working memory, extroversion, and conscientiousness have no predictive value for their recall of whether information was labeled as false.