Chat apps such as WhatsApp, Telegram, and Signal are increasingly popular platforms for communication. Their sometimes-closed nature and encryption affordances present researchers, governments, and law enforcement with unique problems of access, traceability, and, ultimately, understanding. These same properties also make chat apps useful vectors for sowing disinformation. This research takes a multi-platform perspective, describing how chat apps can be used to disseminate mis- and disinformation by way of cascade logic: the means by which information in chat app ecologies is trafficked upstream (making its way from private conversations into the mainstream) as well as downstream (allowing information to withdraw from the public eye), providing space for distortion along the way. Cascade logic also describes how chat apps allow individuals to gradually withdraw and self-segregate into, or emerge out of, layered spaces of privacy and obfuscation. We present an interview-based study of chat apps in three countries (India, the United States, and Mexico), synthesizing unifying dimensions across cultures and contexts. We analyze data from in-depth conversations with 33 individuals who work to either produce or track political content on chat apps. These interviewees work for a wide array of organizations: political parties, governments, extremist groups, digital political consultancies, news entities, and civil society organizations. We reveal key insights into the tactics of producers of political content on chat apps and show how these platforms are particularly suitable for harnessing human connections, or leveraging communities of trust, to sow disinformation.
The popular encrypted messaging and chat app WhatsApp played a key role in the election of Brazilian President Jair Bolsonaro in 2018. The present study builds on this knowledge and shows how the app continued to be used in a governmental operation spreading false and misleading information, popularly known in Brazil as the Office of Hatred (OOH). Drawing on in-depth expert interviews with documentarians of the office's daily operations (researchers, journalists, and fact-checkers; N = 10), this study constructs a chronology of the OOH, tracing the events, actions, and actors associated with it. Specifically, the findings (a) document the rise of antipetismo and the disinformation campaigns attacking the Brazilian Workers' Party from 2012 until the election of Bolsonaro in 2018, (b) describe the emergence of the OOH on the heels of the election and subsequent radicalization in WhatsApp groups, (c) provide an overview of the types of disinformation spread on the app by the OOH, and (d) illustrate how the OOH operates by mapping key actors and places, communicative strategies, and audiences. These findings are discussed in light of the ramifications that government-sponsored disinformation might have in other antidemocratic polities marked by strongman populist leadership.
What does antisemitism look like in the context of political discussions on Twitter? In this article, we introduce the notion of platformed antisemitism. We first define it as a platform-agnostic concept, and then explore it through an exemplary case study of Twitter and its affordances by way of a mixed-methods analysis of discourse surrounding the 2018 US midterm election. Via qualitative textual analysis, we document how political discourse on Twitter is marred by antisemitic conspiracy theories that intersect with QAnon and Trump/MAGA support. Through quantitative content analysis of a sample of 99,062 tweets, we highlight a list of terms and hashtags most often associated with antisemitic speech on Twitter and show how specific affordances of the platform (quote-tweets, hashtags) amplify or diminish antisemitic speech. Via Lasso regression, we introduce an antisemitism classifier that can be used to refine future efforts to detect antisemitic speech.
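A Lasso-style text classifier of the kind this abstract describes can be sketched as an L1-penalized logistic regression over TF-IDF term features; the L1 penalty zeroes out uninformative terms, so the surviving nonzero weights correspond to the terms and hashtags most associated with the target class. This is a minimal illustrative sketch only: the toy texts, labels, and parameter values below are assumptions, not the authors' actual dataset or pipeline.

```python
# Illustrative sketch of an L1-penalized ("lasso") text classifier.
# The texts and labels are placeholder toy data, not the study's corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled tweets: 1 = flagged speech, 0 = not flagged (placeholders).
tweets = [
    "conspiracy hashtag attack example",
    "ordinary midterm election commentary",
    "another conspiracy attack post",
    "neutral discussion of the election",
]
labels = [1, 0, 1, 0]

# penalty="l1" with the liblinear solver gives lasso-style sparsity:
# most term weights shrink to exactly zero, leaving an interpretable
# short list of terms with nonzero coefficients.
clf = make_pipeline(
    TfidfVectorizer(lowercase=True),
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
)
clf.fit(tweets, labels)

# Predict a label for a new, unseen text.
pred = clf.predict(["conspiracy post about the election"])
```

In practice the regularization strength `C` would be tuned by cross-validation, and the nonzero coefficients read off via `clf.named_steps["logisticregression"].coef_` to surface the most predictive terms.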
Techniques designed to manipulate public opinion and undermine information ecosystems are rapidly evolving, while research lags behind technological innovation and strategic expertise. As a more sophisticated generation of information operations rapidly matures, the papers in this panel shed light on some of the blind spots of scholarly inquiry, making visible new thematic strategies, technical infrastructures, and both political and economic incentives. The first two papers examine the progression from general political propaganda geared toward influencing elections to highly issue-specific micro-propaganda. The first paper presents an analysis of antisemitic disinformation campaigns and harassment on Twitter during the 2018 US midterms and offers rich evidence from interviews with Jewish American opinion leaders about their impact. Drawing on data from Twitter's Election Integrity Initiative, the second paper examines the gender dimensions of foreign influence operations and how hostile state actors frame and discuss gender identity and politics. The third paper presents an analysis of the search engine optimization strategies that extremist YouTubers use in an attempt to game the algorithm and increase their visibility in the network. The fourth paper investigates the relationship between partisan bias in Google Search results and the electoral success of the political candidates associated with the search queries, finding that partisan bias in search results is a predictor of election outcomes. The fifth paper examines the emergence of a global political economy of manipulation and offers a grounded typology of the vendors, marketplaces, services, and products designed to turn a profit from swaying public opinion.