As of March 2021, the SARS-CoV-2 virus has been responsible for over 115 million cases of COVID-19 worldwide, resulting in over 2.5 million deaths. As the virus spread exponentially, so did its media coverage, resulting in a proliferation of conflicting information on social media platforms—a so-called “infodemic.” In this viewpoint, we survey past literature investigating the role of automated accounts, or “bots,” in spreading such misinformation, drawing connections to the COVID-19 pandemic. We also review strategies used by bots to spread (mis)information and examine the potential origins of bots. We conclude by conducting and presenting a secondary analysis of data sets of known bots in which we find that up to 66% of bots are discussing COVID-19. The proliferation of COVID-19 (mis)information by bots, coupled with human susceptibility to believing and sharing misinformation, may well impact the course of the pandemic.
Black Lives Matter (BLM) is a decentralized social movement protesting violence against Black individuals and communities, with a focus on police brutality.
The movement gained significant attention following the killings of Ahmaud Arbery, Breonna Taylor, and George Floyd in 2020. The #BlackLivesMatter social media hashtag has come to represent the grassroots movement, with similar hashtags counter-protesting the BLM movement, such as #AllLivesMatter and #BlueLivesMatter. We introduce a data set of 63.9 million tweets from 13.0 million users across over 100 countries that contain one of the following keywords: BlackLivesMatter, AllLivesMatter, and BlueLivesMatter. This data set contains all currently available tweets from the beginning of the BLM movement in 2013 through 2021. We summarize the data set and show temporal trends in use of both the BlackLivesMatter keyword and keywords associated with counter movements.
Additionally, for each keyword, we create and release a set of Latent Dirichlet Allocation (LDA) topics (i.e., automatically clustered groups of semantically co-occurring words) to aid researchers in identifying linguistic patterns across the three keywords.
Background & Aims
Previous studies have shown that nonsuicidal self-injury (NSSI) has addictive features, and an addiction model of NSSI has been considered. Addictive features have been associated with severity of NSSI and adverse psychological experiences. Yet, there is debate over the extent to which NSSI and substance use disorders (SUDs) are similar experientially.
Methods
To evaluate the extent to which people who self-injure experience NSSI like an addiction, we coded the posts of users of the subreddit r/selfharm (n = 500) for each of 11 DSM-5 SUD criteria adapted to NSSI.
Results
A majority (76.8%) of users endorsed at least two adapted SUD criteria in their posts, indicative of mild, moderate, or severe addiction. The most frequently endorsed criteria were urges or cravings (67.6%), escalating severity or tolerance (46.7%), and NSSI that is particularly hazardous. User-level addictive features positively predicted the number of methods used for NSSI, the number of psychiatric disorders, and particularly hazardous NSSI, but not suicidality. We also observed frequent use of language and concepts common in SUD recovery circles such as Alcoholics Anonymous.
Discussion & Conclusion
Our findings support previous work describing the addiction potential of NSSI and associating addictive features with clinical severity. These results suggest that NSSI and SUD may share experiential similarities, which has implications for the treatment of NSSI. We also contribute to a growing body of work that uses social media as a window into the subjective experiences of stigmatized populations.
As of December 2020, the SARS-CoV-2 virus has been responsible for over 78 million cases of COVID-19 worldwide, resulting in over 1.7 million deaths. In the United States in particular, protective measures against the COVID-19 pandemic have been hampered by political polarization and discrepancies among federal, state, and local policies. As a result, a huge amount of information surrounding COVID-19, some of it contradictory or blatantly false, has proliferated on social media. In this mixed scoping review, we survey the role of automated accounts, or “bots,” in spreading misinformation during past epidemics, natural disasters, and politically polarizing events through the lens of the COVID-19 pandemic. We also review strategies used by bots to spread (mis)information and machine learning methods for detecting bot activity. We conclude by conducting and presenting a secondary analysis of known bots, finding that up to 66% of bots are discussing COVID-19. The proliferation of COVID-19 (mis)information by bots, coupled with human susceptibility to believing and sharing misinformation, may well impact the course of the pandemic.
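The machine learning bot-detection methods reviewed above are typically supervised classifiers trained on per-account behavioral features. The sketch below is an illustration of that general approach only; the features, values, and labels are synthetic placeholders, not data or methods from the review itself.

```python
# Illustrative sketch of feature-based bot detection with a supervised
# classifier, assuming scikit-learn. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-account features:
# [tweets per day, followers/following ratio, fraction of tweets with URLs]
# Labels: 1 = bot, 0 = human.
X = np.array([
    [300.0, 0.01, 0.95],   # high-volume, link-heavy account
    [250.0, 0.05, 0.90],
    [5.0,   1.20, 0.10],   # typical human posting pattern
    [8.0,   0.80, 0.20],
])
y = np.array([1, 1, 0, 0])

# Fit a simple linear classifier on the labeled accounts.
clf = LogisticRegression().fit(X, y)

# Score an unseen account with bot-like posting behavior.
new_account = np.array([[280.0, 0.02, 0.85]])
print(clf.predict(new_account))
```

Real detectors (e.g., Botometer-style systems) use hundreds of features spanning content, network structure, and temporal activity, but the train-then-score structure is the same.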