Disinformation on social media—commonly called “fake news”—has become a major concern around the world, and many fact-checking initiatives have been launched in response. However, if the presentation format of fact-checked results is not persuasive, fact-checking may not be effective. For instance, Facebook tested the idea of flagging dubious articles in 2017 but concluded that it was ineffective and removed the feature. We conducted three experiments with social media users to investigate two different approaches to implementing a fake news flag: one designed to be most effective when processed by automatic cognition (System 1) and the other designed to be most effective when processed by deliberate cognition (System 2). Both interventions were effective, and an intervention that combined both approaches was about twice as effective. Awareness training on the meaning of the flags increased the effectiveness of the System 2 intervention but not the System 1 intervention. Believability influenced the extent to which users would engage with the article (e.g., read, like, comment, and share). Our results suggest that both theoretical routes can be used, separately or together, in the presentation of fact-checking results to reduce the influence of fake news on social media users.
Objective: The objective was to understand how people respond to COVID-19 screening chatbots. Materials and Methods: We conducted an online experiment with 371 participants who viewed a COVID-19 screening session between a hotline agent (chatbot or human) and a user with mild or severe symptoms. Results: The primary factor driving user response to screening hotlines (human or chatbot) is perceptions of the agent’s ability. When ability is the same, users view chatbots no differently or more positively than human agents. The primary factor driving perceptions of ability is the user’s trust in the hotline provider, with a slight negative bias against chatbots’ ability. Asians perceived higher ability and benevolence than Whites. Conclusion: Ensuring that COVID-19 screening chatbots provide high-quality service is critical but not sufficient for widespread adoption. The key is to emphasize the chatbot’s ability and assure users that it delivers the same quality as human agents.
We investigate whether the news presentation format affects the believability of a news story and the engagement level of social media users. Specifically, we test to see if highlighting the source delivering the story can nudge the users to think more critically about the truthfulness of the story that they see, and for obscure sources, whether source ratings can affect how the users evaluate the truthfulness. We also test whether the believability can influence the users' engagement level for the presented news post (e.g., read, like, comment, and share). We find that such changes in the news presentation format indeed have significant impacts on how social media users perceive and act on news items.
Controlling digital piracy has remained a top priority for manufacturers of information goods, as well as for many governments around the world. Among the many forms taken by digital piracy, we focus on an increasingly common one, namely online piracy, which is facilitated by torrent sites and cyberlockers that bring together consumers of pirated content and its suppliers. Motivated by recent empirical literature that makes a clear distinction between anti-piracy efforts that restrict the supply of pirated goods (supply-side enforcement) and ones that penalize illegal consumption (demand-side enforcement), we develop a simple economic model and discover some fundamental differences between these two types in terms of their impacts on innovation and welfare. All in all, supply-side enforcement turns out to be the “longer arm”: it has a more desirable economic impact in the long run. Our results have clear implications for manufacturers, consumers, and policymakers.