Introduction
Twitter is a mainstream news source for the American public, offering a valuable vehicle for learning how citizens make sense of pandemic health threats such as COVID-19. Masking as a risk mitigation measure became controversial in the US. The social amplification of risk framework offers insight into how a risk event interacts with psychological, social, institutional, and cultural communication processes to shape COVID-19 risk perception.
Methods
Qualitative content analysis was conducted on 7,024 mask tweets from 6,286 users, posted between January 24 and July 7, 2020, to identify how citizens expressed COVID-19 risk perception over time. Descriptive statistics were computed for the proportion of tweets containing (a) hyperlinks, (b) mentions, (c) hashtags, (d) questions, and (e) location information.
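As a minimal illustration (not the authors' pipeline), the sketch below shows how such per-feature proportions could be computed over a collection of tweet records; the field names, sample records, and pattern-matching rules are hypothetical assumptions.

    # Illustrative sketch: proportions of tweets with hyperlinks, mentions,
    # hashtags, questions, and location. Records and rules are hypothetical.
    tweets = [
        {"text": "Masks work, see https://example.org #covid19", "location": "NY"},
        {"text": "@CDCgov do masks really help?", "location": None},
    ]

    def has_hyperlink(t): return "http://" in t or "https://" in t
    def has_mention(t):   return any(w.startswith("@") for w in t.split())
    def has_hashtag(t):   return any(w.startswith("#") for w in t.split())
    def is_question(t):   return "?" in t

    n = len(tweets)
    print("hyperlink:", sum(has_hyperlink(t["text"]) for t in tweets) / n)
    print("mention:  ", sum(has_mention(t["text"]) for t in tweets) / n)
    print("hashtag:  ", sum(has_hashtag(t["text"]) for t in tweets) / n)
    print("question: ", sum(is_question(t["text"]) for t in tweets) / n)
    print("location: ", sum(t["location"] is not None for t in tweets) / n)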
Results
Six themes emerged regarding how mask tweets amplified and attenuated COVID-19 risk: (a) severity perceptions (18.0%) steadily increased across the 5 months; (b) mask effectiveness debates (10.7%) persisted; (c) who is at risk (26.4%) peaked in April and May 2020; (d) mask guidelines (15.6%) peaked on April 3, 2020, coinciding with the release of federal guidelines; (e) political legitimizing of COVID-19 risk (18.3%) steadily increased; and (f) mask behavior of others (31.6%) constituted the largest discussion category and increased over time. Overall, 45% of tweets contained a hyperlink, 40% contained mentions, 33% contained hashtags, and 16.5% were phrased as questions.
Conclusions
Users ascribed many meanings to mask wearing in the social media information environment, revealing that COVID-19 risk was expressed across a broader range than objective risk alone. The simultaneous amplification and attenuation of COVID-19 risk perception on social media complicates public health messaging about mask wearing.
Recurrent neural networks (RNNs) are an efficient way to train language models, and various RNN architectures have been proposed to improve performance. However, as networks scale up, the overfitting problem becomes more pressing. In this paper, we propose a framework, G2Basy, to speed up the training process and ease the overfitting problem. Instead of using predefined hyperparameters, we devise a gradient increasing and decreasing technique that simultaneously changes two parameters, the training batch size and the input dropout, by a user-defined step size. Together with a pretrained word embedding initialization procedure and the introduction of different optimizers at different learning rates, our framework speeds up the training process dramatically and improves performance compared with a benchmark model of the same scale. For the word embedding initialization, we propose the concept of "artificial features" to describe the characteristics of the obtained word embeddings. We experiment on two of the most commonly used corpora, the Penn Treebank and WikiText-2 datasets, and on both our framework outperforms the benchmark results and shows potential for further improvement. Furthermore, our framework yields better results on the larger and more complicated WikiText-2 corpus than on the Penn Treebank. Compared with other state-of-the-art results, we achieve comparable performance with networks hundreds of times smaller and within fewer training epochs.
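As a minimal sketch (not the authors' released implementation), the following illustrates the kind of coupled schedule the abstract describes, in which the batch size grows while the input dropout shrinks, each by a user-defined step; the step sizes, bounds, and per-epoch trigger here are illustrative assumptions.

    # Sketch of a coupled batch-size / input-dropout schedule in the spirit
    # of the technique described above. All numeric values are illustrative
    # assumptions, not the paper's settings.
    def g2basy_schedule(epoch, base_batch=20, base_dropout=0.5,
                        batch_step=10, dropout_step=0.05,
                        max_batch=80, min_dropout=0.2):
        # Grow the batch size and shrink the input dropout together,
        # one user-defined step per epoch, clipped to fixed bounds.
        batch = min(base_batch + epoch * batch_step, max_batch)
        dropout = max(base_dropout - epoch * dropout_step, min_dropout)
        return batch, dropout

    for epoch in range(6):
        batch, dropout = g2basy_schedule(epoch)
        print(f"epoch {epoch}: batch_size={batch}, input_dropout={dropout:.2f}")

In a real training loop, the returned batch size would drive the data loader and the dropout value would be written into the model's input dropout layer before each epoch begins.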