Given the rise of automation adoption across a variety of industries, it is essential that we understand how individuals perceive these systems. Previous studies have found that the failure of one component in a system leads to decreased trust across the whole system, but few, if any, studies have considered how users define the confines of a system. To address this gap, this study replicated and extended Mehta et al. (2019) by incorporating measures of trust, similarity, and functional relatedness between six human crew members and six automated components. We found that our functional relatedness and similarity measures predicted the magnitude of the trust difference between conditions, with a high degree of shared variance. Because we did not use a within-subjects design, future studies should further test whether measures approximating perceptions of ‘system’ (e.g., perceived functional relatedness and similarity) predict the size of contagion effects.
Trust is perhaps one of the most important constructs for understanding human interactions with systems. As such, there has been a considerable amount of research on factors such as individual differences in trust, environmental factors affecting trust, and even ‘contagion’ effects of trust. However, the literature at times presents results that are inconsistent with existing findings, definitions, and even logic, with no clear path toward reconciliation. This manuscript highlights some of these inconsistencies within individual differences (personality and gender), task and environment, and system-wide trust, and identifies outstanding questions. While this review and critique does not encompass the totality of the literature, it provides representative examples of some of the issues facing the study of trust. For each section, we offer a series of research questions, yielding 17 proposed questions that need answering. Finally, we propose three questions that researchers should ask in every study, which may help mitigate some of these issues in the future.
Previous research has shown that the design of a robot can affect the level of trust, liking, and empathy that a user feels towards it, and that this empathy can directly shape users’ interactions with the system. Existing research has examined how empathy can influence users’ willingness to, for example, put a robot in harm’s way or destroy it. However, these studies have relied on narrative-driven manipulations, which may introduce experimental demands that could have influenced the results. We therefore used a human-likeness manipulation to evaluate how design features that may evoke empathy affect the use of robots in high-risk environments. Results indicate no significant difference in robot use between conditions, which conflicts with previous research. More research is needed to understand when users are, and are not, willing to use a robot in a high-risk environment.
Social media is omnipresent in many lives, and its popularity has made it a prime delivery method for misinformation. This problem is widely recognized, even by social media companies. The debate continues over the right approach to combating misinformation: some suggest removing it, while others suggest labeling it. It is therefore important to understand how labeling misinformation may interact with individual differences to affect recall of what is and is not misinformation. In addition to factors such as confirmation bias, in-group versus out-group assignment, and other cognitive effects, there may be individual differences that affect the likelihood of misremembering the veracity of information. In this study, we test the effects of differences in working memory and personality on recall of misinformation labels, with the aim of determining what effect, if any, these factors have on the utility of such labels. Results indicate that individuals’ working memory, extroversion, and conscientiousness did not predict their recall of whether information was labeled as false.