In the absence of clear, consistent guidelines about the COVID-19 pandemic in the United States, many people use social media to learn about the virus, public health directives, vaccine distribution, and other health information. As people individually sift through a flood of information online, they collectively curate a small set of accounts, known as crowdsourced elites, that receive disproportionate attention for their COVID-19 content. However, these elites are not all created equal: attention is unevenly distributed among them, and different demographic and ideological groups have crowdsourced their own elites. Using a mixed-methods approach with a panel of Twitter users in the United States over the first year of the COVID-19 pandemic, we identify COVID-19 crowdsourced elites. We distinguish sustained amplification from episodic amplification and demonstrate that crowdsourced elites vary across demographics with respect to race, geography, and political alignment. Specifically, we show that different subpopulations preferentially amplify elites that are demographically similar to them, and that they crowdsource different types of elite accounts, such as journalists, elected officials, and medical professionals, in different proportions. In light of this variation, we discuss the potential for using the disproportionate online voice of crowdsourced COVID-19 elites to equitably promote public health information and mitigate misinformation across networked publics.
Millions of conversations are generated every day on social media platforms. With limited attention, it is challenging for users to select which discussions they would like to participate in. Here we propose a new method for microblog conversation recommendation. While much prior work has focused on post-level recommendation, we exploit both the conversational context and user content and behavior preferences. We propose a statistical model that jointly captures: (1) topics for representing user interests and conversation content, and (2) discourse modes for describing user replying behavior and conversation dynamics. Experimental results on two Twitter datasets demonstrate that our system outperforms methods that only model content without considering discourse.
Debate and deliberation play essential roles in politics and government, but most models presume that debates are won mainly via superior style or agenda control. Ideally, however, debates would be won on the merits, as a function of which side has the stronger arguments. We propose a predictive model of debate that estimates the effects of linguistic features and the latent persuasive strengths of different topics, as well as the interactions between the two. Using a dataset of 118 Oxford-style debates, our model's combination of content (as latent topics) and style (as linguistic features) allows us to predict audience-adjudicated winners with 74% accuracy, significantly outperforming linguistic features alone (66%). Our model finds that winning sides employ stronger arguments, and allows us to identify the linguistic features associated with strong or weak arguments.
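The abstract above describes the model only at a high level, and the authors' actual estimation procedure is not reproduced here. As a rough, self-contained sketch of the general idea it names (predicting a debate's winner from content features, such as latent topic strengths, combined with style features, such as linguistic measures), the toy example below trains a logistic regression by stochastic gradient descent on invented feature vectors. The feature names, the synthetic data, and the winning rule are all illustrative assumptions, not the paper's model or dataset.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=300):
    """Fit a logistic regression with plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of the log-loss with respect to the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5 else 0

# Synthetic "debates": each row holds hypothetical per-debate differences
# between the two sides, e.g. [topic_strength_diff, hedging_diff, pronoun_diff].
random.seed(0)
X = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(40)]
# Invented ground truth: the side with stronger topics and style tends to win.
y = [1 if 2 * x[0] + x[1] - x[2] > 0 else 0 for x in X]

w, b = train_logreg(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the feature combination: content and style enter the same linear predictor, so their learned weights and interactions can be inspected, which parallels how the paper attributes predictive power to topics, linguistic features, and their interplay.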
Individuals acquire increasingly more of their political information from social media, and ever more of that online time is spent in interpersonal, peer-to-peer communication and conversation. Yet many of these conversations can be either acrimoniously unpleasant or pleasantly uninformative. Why do we seek out and engage in these interactions? Whom do people choose to argue with, and what brings them back for repeated exchanges? In short, why do people bother arguing online? We develop a model of argument engagement using a new dataset of Twitter conversations about President Trump. The model incorporates numerous user, tweet, and thread-level features to predict user participation in conversations with over 98% accuracy. We find that users are likely to argue across wide ideological divides, and are increasingly likely to engage with those who are different from themselves. In addition, we find that the emotional content of a tweet has important implications for user engagement, with negative and unpleasant tweets more likely to spark sustained participation. Although often negative, these extended discussions can bridge political differences and reduce information bubbles. This suggests a public appetite for engaging in prolonged political discussions that are more than just partisan potshots or trolling, and our results suggest a variety of strategies for extending and enriching these interactions.
As an integral component of public discourse, Twitter is among the main data sources for scholarship on online public discourse. However, there is much that scholars do not know about the basic mechanisms of public discourse on Twitter, including the prevalence of various modes of communication, the types of posts users make, the engagement those posts receive, or how these things vary with user demographics and across different topical events. This paper broadens our understanding of these aspects of public discourse. We focus on the first nine months of 2020, studying that period as a whole and giving particular attention to two monumentally important topics of that time: the Black Lives Matter movement and the COVID-19 pandemic. Leveraging a panel of 1.6 million Twitter accounts matched to U.S. voting records, we examine the demographics, activity, and engagement of 800,000 American adults who collectively posted nearly 300 million tweets during this time span. We find notable variation in user activity and engagement, in terms of modality (e.g., retweets vs. replies), demographic subgroup, and topical context. We further find that while Twitter can best be understood as a collection of interconnected publics, neither topical nor demographic variation perfectly encapsulates the "Twitter public." Rather, Twitter publics are fluid, contextual communities which form around salient topics and are informed by demographic identities. Together, this paper presents a disaggregated, multifaceted description of the demographics, activity, and engagement of American Twitter users in 2020.