Debate and deliberation play essential roles in politics and government, but most models presume that debates are won mainly via superior style or agenda control. Ideally, however, debates would be won on the merits, as a function of which side has the stronger arguments. We propose a predictive model of debate that estimates the effects of linguistic features and the latent persuasive strengths of different topics, as well as the interactions between the two. Using a dataset of 118 Oxford-style debates, our model's combination of content (as latent topics) and style (as linguistic features) allows us to predict audience-adjudicated winners with 74% accuracy, significantly outperforming linguistic features alone (66%). Our model finds that winning sides employ stronger arguments, and allows us to identify the linguistic features associated with strong or weak arguments.
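The combination of content and style features described above can be illustrated with a toy sketch: latent topic proportions (content) are concatenated with surface linguistic features (style) and their interactions, then fed to a classifier. This is not the paper's actual model; the data, features, and labels below are all illustrative stand-ins.

```python
# Toy sketch of combining latent topics (content) with linguistic
# features (style) to predict debate winners. Not the paper's model;
# all data and labels here are synthetic.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

debates = [
    "economy jobs growth taxes policy",
    "healthcare insurance coverage costs",
    "security defense military threats",
    "education schools teachers funding",
]
labels = np.array([1, 0, 1, 0])  # 1 = "for" side won (toy labels)

# Content: latent topic proportions from a small LDA model.
counts = CountVectorizer().fit_transform(debates)
topics = LatentDirichletAllocation(n_components=2,
                                   random_state=0).fit_transform(counts)

# Style: simple surface features (token count, type/token ratio) as
# stand-ins for richer linguistic features.
style = np.array([[len(d.split()),
                   len(set(d.split())) / len(d.split())] for d in debates])

# Interactions between content and style, via elementwise products.
X = np.hstack([topics, style, topics * style[:, :1]])
clf = LogisticRegression().fit(X, labels)
```

In the paper's framing, the interaction terms are what let the model learn that a given stylistic choice matters more for some topics than others.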
Online communication is often characterized as dominated by antagonism or groupthink, with little in the way of meaningful interaction or persuasion. This essay examines how one can detect and measure instances of more productive conversation online, considered through the lens of deliberative theory. It begins with an examination of traditional deliberative democracy, then explores how these concepts have been applied to online deliberation and taken up by those studying interpersonal conversation in social media more generally. These efforts to characterize and measure deliberative quality have produced a myriad of criteria, with elaborate checklists that are often as superficial as they are complex. This essay instead proposes targeting what is arguably the core deliberative process—a mutual consideration of conceptually interrelated ideas—in order to distinguish better deliberation from worse and to construct better conceptual structures. The essay finishes by discussing two computational models of argument quality and interdependence as templates for richer, scalable, nonpartisan measures of deliberative discussion online.
Figure 1: Transcript of the tenth debate of the 2020 U.S. Democratic primary election cycle, viewed with DEBATEVIS. The interactions graph (1) summarizes speaker behavior throughout the debate, including topics discussed and interactions with other speakers. The user can consult the annotated transcript (2) to read the full context of any statement in the debate. Rapid navigation through the transcript is possible by clicking on marks for the audience interactions, speaker interactions, and topic mentions automatically identified and visualized in the timeline (3). All three visualizations are connected through brushing and linking.
While there have been many efforts to monitor or predict Covid using digital traces such as social media, one of the most distinctive and diagnostically important symptoms of Covid -- anosmia, or loss of smell -- remains elusive due to the infrequency of discussions of smell online. It was recently hypothesized that an inadvertent indicator of this key symptom may be misplaced complaints in Amazon reviews that scented products such as candles have no smell. This paper presents a novel Bayesian vector autoregression model developed to test this hypothesis, finding that "no smell" reviews do indeed reflect changes in US Covid cases even when controlling for the seasonality of those reviews. A series of robustness checks suggests that this effect also appears in perfume reviews, but does not hold for the flu prior to Covid. These results suggest that inadvertent digital traces may be an important tool for tracking epidemics.
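The core idea of relating lagged review signals to case counts can be sketched in miniature. The snippet below is not the paper's Bayesian vector autoregression; it is a single-equation Bayesian linear regression (conjugate Gaussian prior, noise variance assumed known) on fully synthetic data, where simulated case counts are driven by a one-week-lagged "no smell" review share. A BVAR generalizes this to several jointly modeled time series with multiple lags.

```python
# Simplified stand-in for the paper's model: Bayesian linear regression
# of simulated case counts on a lagged "no smell" review share.
# All data are synthetic; sigma2 and tau2 are assumed values.
import numpy as np

rng = np.random.default_rng(0)
T, lag = 100, 7

# Toy "no smell" review share per day, and case counts one week later
# driven by that share (true intercept 50, true slope 400).
review_share = rng.uniform(0.0, 0.1, size=T)
lagged_share = review_share[:-lag]
cases = 50 + 400 * lagged_share + rng.normal(0, 5, size=T - lag)

# Design matrix: intercept + lagged review share.
X = np.column_stack([np.ones(T - lag), lagged_share])
y = cases

# Conjugate posterior mean under prior N(0, tau2 * I) with known
# noise variance sigma2: solve (X'X/s2 + I/t2) m = X'y/s2.
sigma2, tau2 = 25.0, 1e4
A = X.T @ X / sigma2 + np.eye(2) / tau2
post_mean = np.linalg.solve(A, X.T @ y / sigma2)
```

With a weak prior, the posterior mean recovers coefficients close to the true intercept and slope; tightening `tau2` shrinks them toward zero, which is the regularizing behavior Bayesian time-series models rely on when data are scarce.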
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article was cited and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.