Forecasting advice from human advisors is often utilized more than advice from automation. There is little understanding of why this “algorithm aversion” occurs, or of the specific conditions that may exacerbate it. This paper first reviews literature from two fields—interpersonal advice and human–automation trust—that can inform our understanding of the underlying causes of the phenomenon. Then, an experiment is conducted to search for these underlying causes. We do not replicate the finding that human advice is generally utilized more than automated advice. However, after receiving bad advice, utilization of automated advice decreased significantly more than utilization of human advice. We also find that decision makers describe themselves as having much more in common with human than with automated advisors, despite there being no interpersonal relationship in our study. Results are discussed in relation to other findings from the forecasting and human–automation trust fields and provide a new perspective on what causes and exacerbates algorithm aversion.
Using the linguistic software Linguistic Inquiry and Word Count (LIWC), we analyzed transcripts of group discussions of whether the words “under God” should be in the Pledge of Allegiance. We hypothesized that members with an extreme opinion would use less complex language and more you pronouns than other members. Furthermore, we hypothesized that extreme members would have less influence when they used you pronouns or more complex language, consistent with the illusion of understanding. Extreme members were more confident and perceived themselves as more knowledgeable, but they did not use less complex language than other members. When extreme members did use complex language, they were less influential. Extreme members used more you pronouns, and their use of you pronouns reduced their influence in the group. Groups containing at least one extreme member had a much lower level of complexity in their discourse than groups without extreme members. Results are situated within research on integrative complexity, the illusion of understanding, and attitude extremity.
Because operating room (OR) management decisions with demonstrably optimal choices are nonetheless made under ubiquitous biases, such decisions are improved by decision-support systems. We reviewed experimental social-psychology studies to explore what an OR leader can do when working with stakeholders who lack interest in learning the OR management science but nonetheless express opinions about decisions. We considered shared information to include the rules of thumb (heuristics) that make intuitive sense and often seem "close enough" (e.g., staffing is planned based on the average workload). We considered unshared information to include the relevant mathematics (e.g., staffing calculations). Multiple studies have shown that group discussions focus more on shared than on unshared information. Quality decisions are more likely when all group participants share knowledge (e.g., have taken a course in OR management science). Several biases in OR management are caused by humans' limited ability to estimate the tails of probability distributions in their heads. Groups are more susceptible to analogous biases than are educated individuals. Because optimal solutions are not demonstrable unless groups share a common language, a knowledgeable individual can influence the group only when most group members have been educated. The appropriate model of decision-making is therefore autocratic, with information obtained from stakeholders. Although such decisions are of good quality, the leaders often are disliked and the decisions considered unjust. In conclusion, leaders will find the most success if they do not bring OR management operational decisions to groups, but instead act autocratically while obtaining necessary information in 1:1 conversations. The only known route for the leader making such decisions to be considered likable, and for the decisions to be considered fair, is for colleagues and subordinates to learn the management science.
This study investigates the effects of task demonstrability and of replacing a human advisor with a machine advisor. Outcome measures include advice utilization (trust), perception of advisors, and decision-maker emotions. Participants were randomly assigned to make a series of forecasts dealing with either humanitarian planning (low demonstrability) or management (high demonstrability). Participants received advice from a machine advisor only, from a human advisor only, or from an advisor who was replaced with the other type of advisor (human/machine) midway through the experiment. Decision-makers rated human advisors as more expert, more useful, and more similar to themselves. Perception effects were strongest when a human advisor was replaced by a machine. Decision-makers also experienced more negative emotions, showed lower reciprocity, and faulted their advisor more for mistakes when a human was replaced by a machine.