As autonomous systems become responsible for increasingly complex decisions, it is crucial to consider how they will respond when they must make potentially controversial choices without input from users. Previous literature has suggested that users prefer machinelike systems that act to promote the greater good, but little research has examined how an agent's humanlikeness influences how its moral decisions are perceived. We ran two online studies in which participants and an automated agent each made a decision in an adapted trolley problem. Our results conflicted with previous literature: they did not support the idea that humanlike agents are trusted in moral dilemmas in a manner analogous to humans. Our study did, however, support the importance to trust of a shared moral view between users and systems. Further investigation is needed to clarify how humanlikeness and moral view interact to form impressions of trust in a system.
Social media is omnipresent in many people's lives, and its popularity has made it a prime delivery vehicle for misinformation. This problem is widely recognized, even by social media companies themselves. Debate continues over the right approach to combating misinformation: some advocate removing it, while others advocate labeling it. It is therefore important to understand how labeling misinformation may interact with individual differences to affect recall of what is and is not misinformation. Beyond factors such as confirmation bias, in-group versus out-group assignment, and other cognitive effects, individual differences may affect the likelihood of misremembering the veracity of information. This study tests the effects of differences in working memory and personality on recall of misinformation labels, with the aim of determining what effect, if any, these factors have on the utility of such labels. Results indicate that individuals' working memory, extroversion, and conscientiousness have no predictive value for their recall of whether information was labeled as false.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.