Whereas bots that spread malware and unsolicited content disseminated antivaccine messages, Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination. Public Health Implications. Directly confronting vaccine skeptics enables bots to legitimize the vaccine debate. More research is needed to determine how best to combat bot-driven content.
Social media have been proposed as a data source for influenza surveillance because they have the potential to offer real-time access to millions of short, geographically localized messages containing information regarding personal well-being. However, the accuracy of social media surveillance systems declines with media attention, because media attention increases “chatter” – messages that are about influenza but do not pertain to an actual infection – masking signs of true influenza prevalence. This paper summarizes our recently developed influenza infection detection algorithm, which automatically distinguishes relevant tweets from other chatter, and describes our influenza surveillance system, which was actively deployed throughout the full 2012–2013 influenza season. Our objective was to analyze the performance of this system during the 2012–2013 influenza season, and to do so at multiple levels of geographic granularity, unlike past studies that focused on national or regional surveillance. Our system’s influenza prevalence estimates were strongly correlated with surveillance data from the Centers for Disease Control and Prevention for the United States (r = 0.93, p < 0.001) and with surveillance data from the New York City Department of Health and Mental Hygiene (r = 0.88, p < 0.001). Our system detected the weekly change in direction (increasing or decreasing) of influenza prevalence with 85% accuracy, a nearly twofold improvement over a simpler model, demonstrating the utility of explicitly distinguishing infection tweets from other chatter.
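The two evaluation metrics this abstract reports – Pearson correlation with official surveillance data and weekly direction-change accuracy – can be sketched as follows. The weekly values below are invented placeholders, not the study's data; only the metric definitions are illustrated.

```python
# Sketch of the two evaluation metrics described above, computed on
# synthetic weekly data. The real study compared Twitter-based prevalence
# estimates with CDC ILI data; these numbers are illustrative only.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / (varx * vary) ** 0.5

def direction_accuracy(est, truth):
    """Fraction of weeks where the estimate moves in the same direction
    (increasing or decreasing) as the reference surveillance series."""
    hits = sum(
        ((est[i] - est[i - 1]) > 0) == ((truth[i] - truth[i - 1]) > 0)
        for i in range(1, len(est))
    )
    return hits / (len(est) - 1)

cdc = [1.2, 1.5, 2.1, 2.8, 2.5, 2.0, 1.6]      # hypothetical weekly ILI %
twitter = [1.1, 1.6, 2.0, 2.9, 2.4, 2.1, 1.5]  # hypothetical system output

print(round(pearson_r(twitter, cdc), 3))
print(direction_accuracy(twitter, cdc))
```

A high correlation alone does not guarantee the estimates track week-to-week turning points, which is why the paper reports direction accuracy separately.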
Accurate disease forecasts are imperative when preparing for influenza epidemic outbreaks; nevertheless, these forecasts are often limited by the time required to collect new, accurate data. In this paper, we show that data from the microblogging community Twitter significantly improves influenza forecasting. Most prior influenza forecast models are tested against historical influenza-like illness (ILI) data from the U.S. Centers for Disease Control and Prevention (CDC). These data are released with a one-week lag and are often initially inaccurate until the CDC revises them weeks later. Because previous studies used the final, revised data in evaluation, they do not properly assess how well forecasting would perform in real time. Our experiments using ILI data available at the time of the forecast show that models incorporating data derived from Twitter can reduce forecasting error by 17–30% over a baseline that only uses historical data. For a given level of accuracy, using Twitter data produces forecasts that are two to four weeks ahead of baseline models. Additionally, we find that models using Twitter data are, on average, better predictors of influenza prevalence than are models using data from Google Flu Trends, the leading web data source.
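The evaluation pitfall this abstract identifies – scoring forecasts that were secretly fed final, revised data – can be made concrete with a toy persistence forecast. All values below are invented; the point is only that feeding the forecaster first-release values (what is actually available at forecast time) yields a larger, more honest error than feeding it the revised values.

```python
# Illustrative sketch of the evaluation issue described above: a persistence
# forecast ("next week = last reported week") looks better when fed the final
# revised ILI values than when it only sees the initially released values
# available at forecast time. All numbers are invented placeholders.

revised = [2.0, 2.4, 2.9, 3.5, 3.1, 2.6]  # final CDC ILI values (ground truth)
initial = [1.8, 2.1, 2.7, 3.8, 3.4, 2.4]  # first-release, later-revised values

def mae_persistence(inputs, truth):
    """One-week-ahead persistence forecast, scored against revised data."""
    errors = [abs(inputs[t - 1] - truth[t]) for t in range(1, len(truth))]
    return sum(errors) / len(errors)

hindsight_mae = mae_persistence(revised, revised)   # evaluation with hindsight
realistic_mae = mae_persistence(initial, revised)   # realistic evaluation

print(round(hindsight_mae, 3))
print(round(realistic_mae, 3))
```

The gap between the two error figures is exactly the optimism that evaluating on revised data introduces, and it is the gap that real-time signals such as Twitter data can help close.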
Fuzzy-trace theory assumes that decision-makers process qualitative “gist” representations and quantitative “verbatim” representations in parallel. We develop a lattice model of fuzzy-trace theory that explains both processes. Specifically, the model provides a novel formalization of how: 1) decision-makers encode multiple representations of options in parallel; 2) representations compete or combine so that choices often turn on the simplest representation of encoded gists; and 3) choices between representations are made based on positive vs. negative valences associated with social and moral principles stored in long-term memory (e.g., saving lives is good). The model integrates effects of individual differences in numeracy, metacognitive monitoring and editing, and sensation seeking. We conducted a systematic review of variations on framing effects and the Allais Paradox, both core phenomena of risky decision-making, and tested whether our model could predict observed choices: The model successfully predicted 82 of 88 (93%) pairs of studies (comparing gain to loss conditions) demonstrating 16 variations on these effects, theoretically critical manipulations that eliminate or exaggerate framing effects. When examining these conditions individually, the model successfully predicted 153 of 170 (90%) eligible studies. Parameters of the model varied in theoretically meaningful ways with differences in numeracy, metacognitive monitoring, and sensation seeking, accounting for risk preferences at the group level. New experiments show similar results at the individual level. The model is also shown to be scientifically parsimonious using standard measures. Relations to current theories, such as Cumulative Prospect Theory, and potential extensions are discussed.
Faculty diversity is a longstanding challenge in the US. However, we lack a quantitative and systemic understanding of how PhD scientists from underrepresented minority (URM) and well-represented (WR) racial/ethnic backgrounds compare in their career transitions into assistant professor positions. Between 1980 and 2013, the number of PhD graduates from URM backgrounds increased by a factor of 9.3, compared with a 2.6-fold increase in the number of PhD graduates from WR groups. However, the number of scientists from URM backgrounds hired as assistant professors in medical school basic science departments was not related to the number of potential candidates (R² = 0.12, p > 0.07), whereas there was a strong correlation between these two numbers for scientists from WR backgrounds (R² = 0.48, p < 0.0001). We built and validated a conceptual system dynamics model based on these data that explained 79% of the variance in the hiring of assistant professors and posited no hiring discrimination. Simulations show that, given current transition rates of scientists from URM backgrounds to faculty positions, faculty diversity would not increase significantly through the year 2080, even with exponential growth in the population of PhD graduates from URM backgrounds or significant increases in the number of faculty positions. Instead, the simulations showed that diversity increased as more postdoctoral candidates from URM backgrounds transitioned onto the market and were hired. DOI: http://dx.doi.org/10.7554/eLife.21393.001
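The correlation check at the heart of this abstract – regressing yearly assistant-professor hires on the size of the candidate pool and reporting R² – can be sketched in a few lines. The yearly counts below are invented placeholders, not the study's data; only the R² computation itself is illustrated.

```python
# Minimal sketch of the regression check described above: fit a least-squares
# line of yearly assistant-professor hires against the size of the candidate
# pool and report R^2. The data points are invented placeholders.

def r_squared(x, y):
    """Coefficient of determination for a simple least-squares fit y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

candidates = [100, 150, 200, 260, 330, 410]  # hypothetical yearly pool sizes
hires = [12, 17, 24, 30, 39, 48]             # hypothetical yearly hires

print(round(r_squared(candidates, hires), 3))
```

A high R², as in this synthetic series, is what the study observed for WR scientists; the study's finding was that the same regression for URM scientists explained almost none of the variance (R² = 0.12).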
Objectives. To adapt and extend an existing typology of vaccine misinformation to classify the major topics of discussion across the total vaccine discourse on Twitter. Methods. Using 1.8 million vaccine-relevant tweets compiled from 2014 to 2017, we adapted an existing typology to Twitter data, first in a manual content analysis and then using latent Dirichlet allocation (LDA) topic modeling to extract 100 topics from the data set. Results. Manual annotation identified 22% of the data set as antivaccine, of which safety concerns and conspiracies were the most common themes. Seventeen percent of content was identified as provaccine, split roughly equally among vaccine promotion, criticism of antivaccine beliefs, and discussion of vaccine safety and effectiveness. Of the 100 LDA topics, 48 contained provaccine sentiment and 28 contained antivaccine sentiment, with 9 containing both. Conclusions. Our updated typology successfully combines manual annotation with machine-learning methods to estimate the distribution of vaccine arguments, with greater detail on the most distinctive topics of discussion. With this information, communication efforts can be developed to better promote vaccines and avoid amplifying antivaccine rhetoric on Twitter.
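The annotate-and-tally step this abstract describes can be sketched with a toy classifier. The study used human annotators plus LDA topic modeling; the crude keyword rule, the keyword sets, and the example tweets below are all invented for illustration – only the counting of class proportions mirrors the reported workflow.

```python
# Toy sketch of the annotation-and-tally workflow described above: assign an
# invented stance label to a handful of invented tweets with a crude keyword
# rule, then report the share of each class. Illustrative only; the real
# study relied on human annotation and LDA topics, not keyword matching.

from collections import Counter

ANTI = {"toxic", "conspiracy", "injury"}       # hypothetical cue words
PRO = {"effective", "protects", "safe"}        # hypothetical cue words

def label(tweet):
    """Return a stance label for one tweet via naive keyword overlap."""
    words = set(tweet.lower().split())
    if words & ANTI:
        return "antivaccine"
    if words & PRO:
        return "provaccine"
    return "neutral"

tweets = [
    "vaccines are safe and protects kids",
    "another vaccine injury story",
    "flu season is here",
    "big pharma conspiracy again",
    "the shot was effective",
]

counts = Counter(label(t) for t in tweets)
for cls in ("antivaccine", "provaccine", "neutral"):
    print(cls, round(counts[cls] / len(tweets), 2))
```

Scaling proportions like these from an annotated sample to a 1.8-million-tweet corpus is what yields estimates such as the 22% antivaccine and 17% provaccine figures reported above.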
In February 2020, the World Health Organization announced an ‘infodemic’ – a deluge of both accurate and inaccurate health information – that accompanied the global pandemic of COVID-19 as a major challenge to effective health communication. We assessed content from the most active vaccine accounts on Twitter to understand how existing online communities contributed to the ‘infodemic’ during the early stages of the pandemic. While we expected vaccine opponents to share misleading information about COVID-19, we also found vaccine proponents were not immune to spreading less reliable claims. In both groups, the single largest topic of discussion consisted of narratives comparing COVID-19 to other diseases like seasonal influenza, often downplaying the severity of the novel coronavirus. When considering the scope of the ‘infodemic,’ researchers and health communicators must move beyond focusing on known bad actors and the most egregious types of misinformation to scrutinize the full spectrum of information – from both reliable and unreliable sources – that the public is likely to encounter online.