Vital scientific communications are frequently misinterpreted by the lay public as a result of motivated reasoning, where people misconstrue data to fit their political and psychological biases. In the case of climate change, some people have been found to systematically misinterpret climate data in ways that conflict with the intended message of climate scientists. While prior studies have attempted to reduce motivated reasoning through bipartisan communication networks, these networks have also been found to exacerbate bias. Popular theories hold that bipartisan networks amplify bias by exposing people to opposing beliefs. These theories are in tension with collective intelligence research, which shows that exchanging beliefs in social networks can facilitate social learning, thereby improving individual and group judgments. However, prior experiments in collective intelligence have relied almost exclusively on neutral questions that do not engage motivated reasoning. Using Amazon's Mechanical Turk, we conducted an online experiment to test how bipartisan social networks can influence subjects' interpretation of climate communications from NASA. Here, we show that exposure to opposing beliefs in structured bipartisan social networks substantially improved the accuracy of judgments among both conservatives and liberals, eliminating belief polarization. However, we also find that partisan priming can suppress social learning and preserve belief polarization: increasing the salience of partisanship during communication, whether through exposure to the logos of political parties or to the political identities of network peers, significantly reduced social learning.
Amidst widespread reports of digital influence operations during major elections, policymakers, scholars, and journalists have become increasingly interested in the political impact of social media 'bots.' Most recently, platform companies like Facebook and Twitter have been summoned to testify about bots as part of investigations into digitally enabled foreign manipulation during the 2016 US Presidential election. Facing mounting pressure from both the public and from legislators, these companies have been instructed to crack down on apparently malicious bot accounts. But as this article demonstrates, since the earliest writings on bots in the 1990s, there has been substantial confusion as to what a 'bot' is and what it does. We argue that multiple forms of ambiguity are responsible for much of the complexity underlying contemporary bot-related policy, and that before successful policy interventions can be formulated, a more comprehensive understanding of bots, especially how they are defined and measured, will be needed. In this article, we provide a history and typology of different types of bots, offer clear guidelines to better categorize political automation and unpack the impact that it can have on contemporary technology policy, and outline the main challenges and ambiguities that will face both researchers and legislators concerned with bots in the future.
Since the publication of "Complex Contagions and the Weakness of Long Ties" in 2007, complex contagions have been studied across an enormous variety of social domains. In reviewing this decade of research, we discuss recent advancements in applied studies of complex contagions, particularly in the domains of health, innovation diffusion, social media, and politics. We also discuss how these empirical studies have spurred complementary advancements in the theoretical modeling of contagions, which concern the effects of network topology on diffusion, as well as the effects of individual-level attributes and thresholds. In synthesizing these developments, we suggest three main directions for future research. The first concerns the study of how multiple contagions interact within the same network and across networks, in what may be called an ecology of contagions. The second concerns the study of how the structure of thresholds and their behavioral consequences can vary by individual and social context. The third concerns the roles of diversity and homophily in the dynamics of complex contagion, including both diversity of demographic profiles among local peers and the broader notion of structural diversity within a network. Throughout this discussion, we make an effort to highlight the theoretical and empirical opportunities that lie ahead.
The standard measure of distance in social networks – average shortest path length – assumes a model of “simple” contagion, in which people only need exposure to influence from one peer to adopt the contagion. However, many social phenomena are “complex” contagions, for which people need exposure to multiple peers before they adopt. Here, we show that the classical measure of path length fails to define network connectedness and node centrality for complex contagions. Centrality measures and seeding strategies based on the classical definition of path length frequently misidentify the network features that are most effective for spreading complex contagions. To address these issues, we derive measures of complex path length and complex centrality, which significantly improve the capacity to identify the network structures and central individuals best suited for spreading complex contagions. We validate our theory using empirical data on the spread of a microfinance program in 43 rural Indian villages.
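As an illustration of the distinction this abstract draws (a minimal sketch, not the paper's own code or measures), a deterministic threshold-contagion model shows why a network can be fully connected for simple contagions yet disconnected for complex ones. The `ring_lattice` generator and threshold values below are illustrative assumptions:

```python
from collections import defaultdict

def ring_lattice(n, k):
    """Ring lattice: each node links to its k nearest neighbors on each side."""
    adj = defaultdict(set)
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[i].add((i - d) % n)
    return adj

def spread(adj, seeds, threshold):
    """Deterministic threshold contagion: a node adopts once at least
    `threshold` of its neighbors have adopted. Returns the adopter set."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if node not in adopted and len(adj[node] & adopted) >= threshold:
                adopted.add(node)
                changed = True
    return adopted

seeds = {0, 1}                 # two adjacent seed nodes
cycle = ring_lattice(30, 1)    # one neighbor per side: short paths, no redundancy
lattice = ring_lattice(30, 2)  # two neighbors per side: overlapping ("wide") bridges

print(len(spread(cycle, seeds, threshold=1)))    # simple contagion reaches all 30
print(len(spread(cycle, seeds, threshold=2)))    # complex contagion stalls at the 2 seeds
print(len(spread(lattice, seeds, threshold=2)))  # wider bridges let it reach all 30
```

On the cycle, every node is a short walk from the seeds, so the classical shortest-path measure calls the network well connected; yet a threshold-2 contagion never leaves the seed cluster, because no outside node ever has two adopted neighbors. Widening the lattice restores spread, which is the intuition behind measuring distance by paths that carry enough reinforcing exposures rather than by single-exposure hops.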
Individuals vary widely in how they categorize novel and ambiguous phenomena. This individual variation has led influential theories in cognitive and social science to suggest that communication in large social groups introduces path dependence in category formation, which is expected to lead separate populations toward divergent cultural trajectories. Yet, anthropological data indicate that large, independent societies consistently arrive at highly similar category systems across a range of topics. How is it possible for diverse populations, consisting of individuals with significant variation in how they categorize the world, to independently construct similar category systems? Here, we investigate this puzzle experimentally by creating an online "Grouping Game" in which we observe how people in small and large populations collaboratively construct category systems for a continuum of ambiguous stimuli. We find that solitary individuals and small groups produce highly divergent category systems; however, across independent trials with unique participants, large populations consistently converge on highly similar category systems. A formal model of critical mass dynamics in social networks accurately predicts this process of scale-induced category convergence. Our findings show how large communication networks can filter lexical diversity among individuals to produce replicable society-level patterns, yielding unexpected implications for cultural evolution.