In their article “Predicting Elections with Twitter: What 140 Characters Reveal About Political Sentiment,” Andranik Tumasjan, Timm O. Sprenger, Philipp G. Sandner, and Isabell M. Welpe (TSSW) claim that it is possible to predict election outcomes in Germany by examining the relative frequency of mentions of political parties in Twitter messages posted during the election campaign. In this response we show that the results of TSSW are contingent on arbitrary choices made by the authors. We demonstrate that, as yet, the relative frequency of mentions of German political parties in Twitter messages allows no prediction of election results.
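As a rough illustration of the approach under discussion (not TSSW's actual pipeline), the following Python sketch computes relative mention shares of parties from a list of tweet texts; the keyword lists, the simple substring matching, and the example tweets are assumptions made for illustration only.

```python
from collections import Counter

# Hypothetical keyword lists per party; TSSW's actual matching rules are not reproduced here.
PARTY_KEYWORDS = {
    "CDU/CSU": ["cdu", "csu"],
    "SPD": ["spd"],
    "FDP": ["fdp"],
    "Gruene": ["grüne", "gruene"],
    "Linke": ["linke"],
}

def relative_mention_shares(tweets):
    """Count tweets mentioning each party and return each party's share of all mentions."""
    counts = Counter()
    for tweet in tweets:
        text = tweet.lower()
        for party, keywords in PARTY_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                counts[party] += 1
    total = sum(counts.values())
    return {party: n / total for party, n in counts.items()} if total else {}

# Example with invented tweets; the resulting shares are what would be compared to vote shares.
print(relative_mention_shares([
    "Die SPD verliert an Boden.",
    "CDU und FDP planen eine Koalition.",
    "Die Grünen legen im Wahlkampf zu!",
]))
```

The point of the critique is precisely that such a share, however it is computed, does not track parties' actual vote shares in a stable way.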
The pervasive use of mobile information technologies brings new patterns of media usage, but also challenges to the measurement of media exposure. Researchers wishing, for example, to understand the nature of selective exposure on algorithmically driven platforms need to precisely attribute individuals' exposure to specific content. Prior research has used tracking data to show that survey-based self-reports of media exposure are critically unreliable. So far, however, little effort has been invested in assessing the specific biases of tracking methods themselves. Using data from a multi-method study, we show that the willingness to provide mobile tracking data is linked to systematic distortions in self-reports. Further inherent but unobservable sources of bias, along with potential solutions, are discussed.

The paper is structured as follows: We first review the existing state of research on the relation between self-reports and tracking data. Building on that literature, we discuss the theoretical and pragmatic limitations in the data collection process of various tracking methods, with a special focus on mobile devices. As that section will reveal, there are numerous potential sources of error at various stages of the data collection process, which warrant an investigation into biases in tracking data. We go on to show empirically that such biases exist, drawing on original data from a multi-method study comprising survey and tracking data. In order to establish the validity of data and method, we first replicate existing findings of biased self-assessments (RQ 1). Using the differences between participants who provided mobile and/or desktop tracking data, we then show a genuinely new type of bias, namely a differential bias in the self-reports of people willing to share mobile tracking data (RQ 2). Finally, we assess the impact of this bias through a simulation exercise (RQ 3), which builds on a realistic statistical model of perceived polarization to show how strongly tracking bias impacts results.

Literature Review: Self-report Bias, Direction and Sources

A growing list of study designs aims to bypass the insufficient reliability of self-reports by directly capturing trace data of digital media usage through various means (Revilla et al. 2017; Araujo et al. 2017; Vraga et al. 2016; Scharkow 2016). The results show rather consistently that there are strong systematic biases in self-reports across different devices, settings, and operationalizations. An early study that served to draw attention to these issues is Prior's (2009b) investigation of time spent on TV. By comparing survey-based self-assessments to Nielsen people meter data (which are generated by custom tracking devices on TVs; see Napoli 2003), he shows that individuals on average overestimate their TV usage by a factor of three, with younger respondents doing worse. Tapping into an earlier debate in political communication (Price & Zaller 1993), the paper suggests either using alternative methods for measuring exposure or instead focusing on deeper l...
Electronic petitions can serve as an influential mechanism for political participation. We present a study on the dynamics in the German e‐petition system which was introduced in late 2008. Drawing on a data set of signatures, we analyze four aspects: (a) the types of petitions found, (b) the temporal dynamics of petitions, (c) the types of users found, and (d) the intersection of different petitions' supporter populations. We present evidence that (a) the system is dominated by a very small number of high‐volume petitions and (b) these high‐volume petitions have a delayed boosting effect on the base activity in the petition system. We furthermore (c) present a typology of users, showing that although highly active “new lobbyists” and “hit‐and‐run activists” exist, one‐ or two‐time petitioners have the largest impact. Finally, it is indicated that (d) many of the high‐volume petitions share a significant part of their user base, hinting at a complex, topically motivated network of supporters. Through the application of methods from what has been called “Computational Social Sciences,” we illuminate a highly relevant field of political behavior online, while demonstrating the capability of data‐driven approaches in such novel domains.
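As a minimal sketch of how the overlap in supporter populations (point d) can be quantified, the snippet below uses the Jaccard coefficient over sets of signer IDs; both the measure and the IDs are assumptions chosen for illustration, not necessarily those used in the study.

```python
def supporter_overlap(signers_a, signers_b):
    """Jaccard coefficient: share of users who signed both petitions among all who signed either."""
    a, b = set(signers_a), set(signers_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical signer IDs for two high-volume petitions.
petition_x = {"u1", "u2", "u3", "u4"}
petition_y = {"u3", "u4", "u5"}
print(supporter_overlap(petition_x, petition_y))  # 0.4, i.e. a substantial shared user base
```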
In this article, we examine the relationship between metrics documenting politics-related Twitter activity and election results as well as trends in opinion polls. Various studies have proposed the possibility of inferring public opinion from digital trace data collected on Twitter, and even of predicting election results from aggregates of mentions of political actors. Yet a systematic validation of Twitter as an indicator of political support is lacking. In this article, building on social science methodology, we test the validity of the relationship between various Twitter-based metrics of public attention toward politics and election results and opinion polls. All indicators tested in this article suggest caution in attempts to infer public opinion or predict election results from Twitter messages. Across all tested metrics, indicators based on Twitter mentions of political parties differed strongly from parties' results in elections or opinion polls. This leads us to question the power of Twitter data to indicate levels of political support for political actors. Instead, Twitter appears to promise insights into the temporal dynamics of public attention toward politics.
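To make the kind of comparison described here concrete, the sketch below computes the mean absolute error between Twitter-based mention shares and official vote shares per party; the figures and the choice of error measure are invented for illustration and are not taken from the article.

```python
def mean_absolute_error(twitter_shares, vote_shares):
    """Average absolute gap, in percentage points, between Twitter-based and official shares."""
    return sum(abs(twitter_shares[p] - vote_shares[p]) for p in vote_shares) / len(vote_shares)

# Hypothetical shares in percent for five generic parties; not actual data from the article.
mention_share = {"A": 34.0, "B": 18.0, "C": 16.0, "D": 22.0, "E": 10.0}
vote_share    = {"A": 30.0, "B": 25.0, "C": 14.0, "D": 11.0, "E": 20.0}

print(f"MAE: {mean_absolute_error(mention_share, vote_share):.1f} percentage points")
```

A large gap on such measures is what motivates the article's caution about using Twitter mentions as a proxy for political support.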