Social influence is the process by which individuals adapt their opinion, revise their beliefs, or change their behavior as a result of social interactions with other people. In our strongly interconnected society, social influence plays a prominent role in many self-organized phenomena such as herding in cultural markets, the spread of ideas and innovations, and the amplification of fears during epidemics. Yet, the mechanisms of opinion formation remain poorly understood, and existing physics-based models lack systematic empirical validation. Here, we report two controlled experiments showing how participants answering factual questions revise their initial judgments after being exposed to the opinion and confidence level of others. Based on the observation of 59 experimental subjects exposed to peer opinion for 15 different items, we draw an influence map that describes the strength of peer influence during interactions. A simple process model derived from our observations demonstrates how opinions in a group of interacting people can converge or split over repeated interactions. In particular, we identify two major attractors of opinion: (i) the expert effect, induced by the presence of a highly confident individual in the group, and (ii) the majority effect, caused by the presence of a critical mass of laypeople sharing similar opinions. Additional simulations reveal the existence of a tipping point at which one attractor will dominate over the other, driving collective opinion in a given direction. These findings have implications for understanding the mechanisms of public opinion formation and managing conflicting situations in which self-confident and better-informed minorities challenge the views of a large uninformed majority.
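The abstract's two attractors can be illustrated with a toy simulation. This is only a minimal sketch, not the authors' empirically derived influence map: it assumes a hypothetical rule in which each agent repeatedly moves toward the confidence-weighted group mean, so that either a single high-confidence individual (expert effect) or a cluster of agreeing laypeople (majority effect) pulls the group's consensus.

```python
# Hypothetical opinion-dynamics sketch (confidence-weighted averaging);
# the paper's actual model is fit to experimental data and differs in detail.
import numpy as np

def update_opinions(opinions, confidences, steps=50, rate=0.3):
    """Each agent moves a fraction `rate` toward the confidence-weighted mean."""
    opinions = np.asarray(opinions, dtype=float)
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                      # normalized influence weights
    for _ in range(steps):
        target = w @ opinions            # confidence-weighted group mean
        opinions = opinions + rate * (target - opinions)
    return opinions

# Expert effect: one highly confident agent drags consensus toward 10.
experts = update_opinions([10, 2, 3, 2], [9.0, 1.0, 1.0, 1.0])
# Majority effect: equal confidence, so the cluster near 2-3 dominates.
majority = update_opinions([10, 2, 3, 2], [1.0, 1.0, 1.0, 1.0])
```

Because the confidence-weighted mean is invariant under this update, all opinions converge to it; the two runs differ only in which attractor that mean sits near.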
How do people decide whether to try out novel options as opposed to tried-and-tested ones? We argue that they infer a novel option's reward from contextual information learned from functional relations and take uncertainty into account when making a decision. We propose a Bayesian optimization model to describe their learning and decision making. This model relies on similarity-based learning of functional relationships between features and rewards, and a choice rule that balances exploration and exploitation by combining predicted rewards and the uncertainty of these predictions. Our model makes two main predictions. First, decision makers who learn functional relationships will generalize based on the learned reward function, choosing novel options only if their predicted reward is high. Second, they will take uncertainty about the function into account, and prefer novel options that can reduce this uncertainty. We test these predictions in three preregistered experiments in which we examine participants' preferences for novel options using a feature-based multi-armed bandit task in which rewards are a noisy function of observable features. Our results reveal strong evidence for functional exploration and moderate evidence for uncertainty-guided exploration. However, whether or not participants chose a novel option also depended on their attention to it, as well as on how much they reflected on the value of the options. These results advance our understanding of people's reactions in the face of novelty.
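The choice rule described here, combining predicted reward with an uncertainty bonus, can be sketched as an upper-confidence-bound rule. The sketch below is a simplified stand-in (Bayesian linear regression instead of the similarity-based function learner the abstract describes), and the function names and parameters are illustrative assumptions, not the paper's implementation.

```python
# Illustrative UCB choice rule over a learned reward function.
# Assumption: a Bayesian linear regression substitutes for the paper's
# similarity-based (e.g., Gaussian process) function learner.
import numpy as np

def posterior(X, y, alpha=1.0, sigma=1.0):
    """Posterior mean and covariance over linear weights (known noise)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    S_inv = alpha * np.eye(X.shape[1]) + (X.T @ X) / sigma**2
    S = np.linalg.inv(S_inv)
    mu = S @ X.T @ y / sigma**2
    return mu, S

def ucb_choice(X_train, y_train, X_options, beta=1.0):
    """Pick the option maximizing predicted reward + beta * predictive std."""
    X_options = np.asarray(X_options, float)
    mu, S = posterior(X_train, y_train)
    means = X_options @ mu
    stds = np.sqrt(np.einsum('ij,jk,ik->i', X_options, S, X_options))
    return int(np.argmax(means + beta * stds))

# Rewards roughly double the feature value, so the extrapolated option
# (feature 4.0) has both a high predicted reward and high uncertainty.
choice = ucb_choice([[1], [2], [3]], [2, 4, 6], [[4.0], [0.1]])
```

A higher `beta` shifts behavior toward uncertainty-guided exploration; `beta=0` reduces the rule to pure exploitation of the learned function.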
Most choices people make are about "matters of taste" on which there is no universal, objective truth. Nevertheless, people can learn from the experiences of individuals with similar tastes who have already evaluated the available options, a potential harnessed by recommender systems. We mapped recommender system algorithms to models of human judgment and decision making about "matters of fact" and recast the latter as social learning strategies for "matters of taste." Using computer simulations on a large-scale, empirical dataset, we studied how people could leverage the experiences of others to make better decisions. Our simulation showed that experienced individuals can benefit from relying mostly on the opinions of seemingly similar people; inexperienced individuals, in contrast, cannot reliably estimate similarity and are better off picking the mainstream option despite differences in taste. Crucially, the level of experience beyond which people should switch to similarity-heavy strategies varies substantially across individuals and depends on (i) how mainstream (or alternative) an individual's tastes are and (ii) the level of dispersion in taste similarity with the other people in the group.
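The two strategy families contrasted here can be sketched on a small ratings matrix. This is a hedged illustration under assumed definitions, not the paper's algorithms: "mainstream" picks the option with the highest average rating, while the similarity-heavy strategy weights others' ratings by how closely they agree with one's own past ratings (using a simple negative-mean-absolute-difference similarity).

```python
# Toy versions of a "mainstream" and a similarity-weighted strategy.
# np.nan marks options a person has not yet rated. The similarity measure
# and weighting scheme here are illustrative assumptions.
import numpy as np

def mainstream_choice(ratings):
    """Pick the option with the highest average rating across everyone."""
    return int(np.nanargmax(np.nanmean(ratings, axis=0)))

def similarity_choice(ratings, own):
    """Weight others' ratings by taste similarity on co-rated options."""
    own = np.asarray(own, float)
    rated = ~np.isnan(own)
    sims = []
    for other in ratings:
        common = rated & ~np.isnan(other)
        # negative mean absolute difference as a crude similarity score
        sims.append(-np.mean(np.abs(own[common] - other[common])))
    w = np.exp(np.array(sims))           # positive weights, higher = more similar
    preds = np.nansum(w[:, None] * ratings, axis=0) / np.nansum(
        w[:, None] * ~np.isnan(ratings), axis=0)
    preds[rated] = -np.inf               # only consider still-unrated options
    return int(np.argmax(preds))
```

For an experienced rater the similarity weights sharpen around genuinely like-minded others; with few own ratings they are noisy, which is the abstract's case for defaulting to the mainstream option.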
A wide body of empirical research has revealed the descriptive shortcomings of expected value and expected utility models of risky decision making. In response, numerous models have been advanced to predict and explain people’s choices between gambles. Although some of these models have had a great impact in the behavioral, social, and management sciences, there is little consensus about which model offers the best account of choice behavior. In this paper, we conduct a large-scale comparison of 58 prominent models of risky choice, using 19 existing behavioral data sets involving more than 800 participants. This allows us to comprehensively evaluate models in terms of individual-level predictive performance across a range of different choice settings. We also identify the psychological mechanisms that lead to superior predictive performance and the properties of choice stimuli that favor certain types of models over others. Moreover, drawing on research on the wisdom of crowds, we argue that each of the existing models can be seen as an expert that provides unique forecasts of choices. Consistent with this claim, we find that crowds of risky choice models perform better than individual models and thus provide a performance bound for assessing the historical accumulation of knowledge in our field. Our results suggest that each model captures unique aspects of the decision process and that existing risky choice models offer complementary rather than competing accounts of behavior. We discuss the implications of our results for theories of risky decision making and the quantitative modeling of choice behavior. This paper was accepted by Yuval Rottenstreich, behavioral economics and decision analysis.
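The "crowd of models" idea can be sketched by averaging the choice probabilities of individual models. The two toy models below (an expected-value rule and a concave-utility rule, each with a logistic choice link) are illustrative stand-ins, not any of the 58 models compared in the paper; a gamble is a list of (probability, outcome) pairs.

```python
# Hypothetical sketch: average the predictions of simple risky-choice
# models. The actual model crowd in the paper aggregates 58 richer models.
import numpy as np

def expected_value_prob(gamble_a, gamble_b, temp=1.0):
    """P(choose A) from an expected-value rule via a logistic choice link."""
    ev = lambda g: sum(p * x for p, x in g)
    return 1 / (1 + np.exp(-(ev(gamble_a) - ev(gamble_b)) / temp))

def risk_averse_prob(gamble_a, gamble_b, rho=0.5, temp=1.0):
    """Same link, but with a concave power utility u(x) = x**rho for gains."""
    eu = lambda g: sum(p * (max(x, 0.0) ** rho) for p, x in g)
    return 1 / (1 + np.exp(-(eu(gamble_a) - eu(gamble_b)) / temp))

def crowd_prob(gamble_a, gamble_b):
    """Unweighted average of the individual models' predicted probabilities."""
    preds = [expected_value_prob(gamble_a, gamble_b),
             risk_averse_prob(gamble_a, gamble_b)]
    return float(np.mean(preds))

# A risky gamble vs. a sure thing: the two models disagree sharply,
# and the crowd hedges between them.
A = [(0.5, 100.0), (0.5, 0.0)]   # 50% chance of 100, else 0
B = [(1.0, 45.0)]                # 45 for sure
```

Averaging is the simplest aggregation rule; when models make partly independent errors, such an unweighted crowd tends to outpredict its average member.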