Research on the coordinated manipulation of social media conversations has grown as social media's role in amplifying misinformation, hate, and polarization has come under greater scrutiny. We discuss how successful generalized coordination detection algorithms could reinforce existing power imbalances, such as those between marginalized groups and government agencies. We propose an alternative method of identifying manipulation, the detection of synchronized actions, which reduces this risk. We further consider how responsible coordination detection may be carried out by analyzing synchronized actions. We propose a synchronized action framework for detecting automated coordination by constructing and analyzing multi-view networks. We validate our framework on a large Twitter dataset surrounding the 2020 Reopen America conversation. We first discover three simple coordinated campaigns, then investigate synchronized actions among users discussing the protests that could be consistent with covert coordination. This task is far more complex than the examples evaluated in prior work, which demonstrates the need for our multi-view approach. Next, we identify a cluster of suspicious users and detail the activity of three members. These three users amplify protest messages using the same hashtags at very similar times, though each focuses on a different state. This analysis highlights the potential usefulness of coordination detection algorithms for investigating amplification, as well as the need to deploy such tools carefully and responsibly.
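To illustrate the core idea of synchronized-action detection, the sketch below links two users whenever they perform the same action (here, posting the same hashtag) within a short time window; running this per action type (hashtags, retweets, shared URLs) yields one view per action, which a multi-view analysis can then combine. The event data, the 60-second window, and all names are illustrative assumptions, not the paper's actual parameters.

```python
from collections import defaultdict
from itertools import combinations

# Toy action log: (user, action, timestamp in seconds).
# These events and the 60-second window are made up for illustration.
events = [
    ("u1", "#reopenVA", 100),
    ("u2", "#reopenVA", 130),
    ("u3", "#reopenPA", 135),
    ("u1", "#reopenPA", 140),
    ("u2", "#reopenPA", 5000),  # same action, but far apart in time
]

def synchronized_edges(events, window=60):
    """Link two users when they take the same action within `window` seconds.

    Returns a dict mapping user pairs to the count of synchronized actions,
    i.e. a weighted edge list for one view of the multi-view network.
    """
    by_action = defaultdict(list)
    for user, action, t in events:
        by_action[action].append((user, t))
    weights = defaultdict(int)
    for posts in by_action.values():
        for (u, tu), (v, tv) in combinations(posts, 2):
            if u != v and abs(tu - tv) <= window:
                weights[tuple(sorted((u, v)))] += 1
    return dict(weights)

edges = synchronized_edges(events)
# u1 and u2 posted #reopenVA 30s apart; u1 and u3 posted #reopenPA 5s apart.
# u2's #reopenPA post is outside the window, so it creates no edge.
```

Repeating this construction for other action types and intersecting or stacking the resulting edge sets is one way to realize the multi-view combination; dense clusters in the combined network are candidates for manual review rather than automatic verdicts.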
We analyze a Singapore-based COVID-19 Telegram group with more than 10,000 participants. First, we study the group's opinion over time, focusing on five dimensions: participation, sentiment, negative emotions, topics, and message types. We find that participation peaked when the Ministry of Health raised the disease alert level, but this engagement was not sustained. Second, we investigate the prevalence of, and reactions to, authority-identified misinformation in the group. We find that authority-identified misinformation is rare, and that participants affirm, deny, and question it. Third, we explore searching for user skepticism as a strategy for identifying misinformation, and in doing so find misinformation not previously identified by authorities.
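The skepticism-search strategy can be sketched as a simple lexical filter: scan messages for skeptical cues and flag the message they respond to as a misinformation candidate. The cue list, the example messages, and the assumption that a skeptical message replies to the one immediately preceding it are all simplifications for illustration; the study's actual method is not reproduced here.

```python
import re

# Illustrative skepticism cues; the lexicon is an assumption, not the
# study's actual query set.
SKEPTICISM_CUES = re.compile(
    r"\b(is this (real|true)|fake news|hoax|source\?)",
    re.IGNORECASE,
)

messages = [
    "Drinking hot water cures the virus, pass it on!",
    "Is this real? I saw it forwarded from another group.",
    "MOH raised the alert level to orange today.",
    "Fake news, please stop spreading this.",
]

# Simplification: treat each skeptical message as a reply to the message
# immediately before it, and flag that earlier message for review.
flagged = [
    messages[i - 1]
    for i, m in enumerate(messages)
    if i > 0 and SKEPTICISM_CUES.search(m)
]
```

Note that the heuristic is noisy: here it also flags the factual alert-level message because a skeptical reply happened to follow it, so flagged candidates still require manual review.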
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.