As the scourge of “fake news” continues to plague our information environment, attention has turned toward devising automated solutions for detecting problematic online content. But to build reliable algorithms for flagging “fake news,” we need to go beyond broad definitions of the concept and identify distinguishing features that are specific enough for machine learning. With this objective in mind, we conducted an explication of “fake news,” a concept that has ballooned to include more than simply false information, with partisans weaponizing it to cast aspersions on the veracity of claims made by their political opponents. We identify seven types of online content that travel under the label of “fake news” (false news, polarized content, satire, misreporting, commentary, persuasive information, and citizen journalism) and contrast them with “real news” by introducing a taxonomy of operational indicators in four domains (message, source, structure, and network) that together can help disambiguate the nature of online news content.
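To make the taxonomy concrete for a detection pipeline, here is a minimal sketch (ours, not drawn from the paper; every field name and example indicator is a hypothetical placeholder) of how indicators from the four domains might be grouped into feature sets that a classifier could consume.

```python
# A minimal sketch, assuming hypothetical indicator names: the four
# domains (message, source, structure, network) encoded as feature
# groups for a "fake news" classifier.
from dataclasses import dataclass, field

# The seven content types identified under the "fake news" label.
CONTENT_TYPES = [
    "false news", "polarized content", "satire", "misreporting",
    "commentary", "persuasive information", "citizen journalism",
]

@dataclass
class OperationalIndicators:
    message: dict = field(default_factory=dict)    # e.g., {"emotional_tone": 0.8}
    source: dict = field(default_factory=dict)     # e.g., {"outlet_credibility": 0.2}
    structure: dict = field(default_factory=dict)  # e.g., {"clickbait_headline": 1}
    network: dict = field(default_factory=dict)    # e.g., {"share_velocity": 350}

def to_feature_vector(item: OperationalIndicators) -> list:
    """Flatten all four domains into one numeric vector, with keys
    sorted so that feature order is stable across items."""
    vector = []
    for domain in (item.message, item.source, item.structure, item.network):
        vector.extend(float(domain[k]) for k in sorted(domain))
    return vector
```

The indicator names above are purely illustrative; the paper itself defines the operational indicators within each domain.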
False rumors on WhatsApp, the world’s largest messaging app, have led to mob lynchings in India and other countries. Doctored videos sent over the platform have elicited visceral responses among users, resulting in the deaths of innocent people. Would the responses have been as strong if the false news had circulated as text or audio? Is video modality the reason for such powerful effects? We explored this question by comparing reactions to three false stories prepared in text-only, audio-only, or video format among rural and urban users in India. Our findings reveal that video is processed more superficially, and that users therefore believe it more readily and share it with others. Aside from advancing our theoretical understanding of modality effects in the context of mobile media, our findings hold practical implications for the design of modality-based flagging of fake news and for literacy campaigns that inoculate users against misinformation.
Given the scale of user-generated content online, the use of artificial intelligence (AI) to flag problematic posts is inevitable, yet users do not trust such automated content moderation. We explore whether (a) involving human moderators in the curation process and (b) affording “interactive transparency,” wherein users themselves participate in curation, can promote appropriate reliance on AI. We test this through a 3 (Source: AI, Human, Both) × 3 (Transparency: No Transparency, Transparency-Only, Interactive Transparency) × 2 (Classification Decision: Flagged, Not Flagged) between-subjects online experiment (N = 676) involving the classification of hate speech and suicidal ideation. We found that users trust AI for content moderation as much as they trust humans, but that this trust depends on which heuristic is triggered when they are told AI is the source of moderation. We also found that allowing users to provide feedback to the algorithm enhances trust by increasing their sense of agency.
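For readers less familiar with factorial notation, the snippet below (illustrative only, not study code) enumerates the 18 between-subjects cells implied by the 3 × 3 × 2 design.

```python
# Illustrative only, not study code: enumerating the 18 cells of the
# 3 (Source) x 3 (Transparency) x 2 (Classification Decision) design.
from itertools import product

SOURCES = ["AI", "Human", "Both"]
TRANSPARENCY = ["No Transparency", "Transparency-Only", "Interactive Transparency"]
DECISIONS = ["Flagged", "Not Flagged"]

cells = list(product(SOURCES, TRANSPARENCY, DECISIONS))
assert len(cells) == 3 * 3 * 2  # 18 experimental conditions

for source, transparency, decision in cells:
    print(f"Source={source:5s} | {transparency:24s} | {decision}")
```

In a between-subjects design, each participant is randomly assigned to exactly one of these cells.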