Misinformation is one of the defining issues of our time, and many organisations have emerged to verify information and assess the credibility of news sources. However, it remains unclear how these assessments relate in terms of coverage, overlap and agreement. In this paper we compare the assessments produced by different organisations in order to measure their overlap and agreement on news sources. Using credibility as a unifying notion, we map each assessment onto a common scale. We then compare credibility assessments at two levels (source level and document level), using data published by various organisations, including fact-checkers, to see which sources each assesses, how much their coverage overlaps, and how often their verdicts agree. Our results show that the overlap between the different origins is generally quite low: different experts and tools evaluate largely disjoint sets of sources, even when fact-checking is included. Regarding agreement, we find that some origins agree on their verdicts more than others.
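As a minimal sketch of the kind of analysis this abstract describes (not the paper's actual pipeline), the snippet below maps heterogeneous credibility labels onto a unified scale and measures pairwise overlap and agreement between two origins; the label-to-score mapping, the tolerance threshold, and the origins shown are all illustrative assumptions.

```python
# Illustrative mapping of textual credibility labels onto a unified [-1, 1] scale.
UNIFIED_SCALE = {
    "false": -1.0, "mostly-false": -0.5, "mixed": 0.0,
    "mostly-true": 0.5, "true": 1.0,
}

def overlap(a: dict, b: dict) -> float:
    """Jaccard overlap between the sets of sources two origins assess."""
    shared = a.keys() & b.keys()
    union = a.keys() | b.keys()
    return len(shared) / len(union) if union else 0.0

def agreement(a: dict, b: dict, tol: float = 0.5) -> float:
    """Fraction of co-assessed sources whose unified scores differ by at most `tol`."""
    shared = a.keys() & b.keys()
    if not shared:
        return 0.0
    return sum(abs(a[s] - b[s]) <= tol for s in shared) / len(shared)

# Hypothetical assessments from two origins, already mapped onto the unified scale.
origin_a = {"example.com": UNIFIED_SCALE["true"], "fakenews.net": UNIFIED_SCALE["false"]}
origin_b = {"example.com": UNIFIED_SCALE["mostly-true"], "other.org": UNIFIED_SCALE["mixed"]}

print(overlap(origin_a, origin_b))    # 0.33... (1 shared source out of 3 in total)
print(agreement(origin_a, origin_b))  # 1.0 (scores on the shared source are within tol)
```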
This paper summarises work in which we combined semantic web technologies with deep learning systems to obtain state-of-the-art explainable misinformation detection. We proposed a conceptual and computational model that describes a wide range of misinformation detection systems in terms of credibility and reviews. We described how Credibility Reviews (CRs) can be used to build networks of distributed bots that collaborate on misinformation detection, and we evaluated this approach by building a prototype based on publicly available datasets and deep learning models.
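To make the Credibility Review concept concrete, here is a minimal sketch of a CR record and a confidence-weighted aggregation over CRs from multiple bots; the field names and the aggregation rule are assumptions for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CredibilityReview:
    item_reviewed: str   # URL or identifier of the reviewed item
    rating: float        # credibility on a unified scale, e.g. [-1, 1]
    confidence: float    # reviewer's confidence in its own rating, in [0, 1]
    reviewer: str        # the bot or service that produced this review
    based_on: list = field(default_factory=list)  # upstream CRs this one builds on

def aggregate(reviews: list) -> float:
    """Confidence-weighted average rating over a set of CRs (illustrative rule)."""
    total = sum(r.confidence for r in reviews)
    if total == 0:
        return 0.0
    return sum(r.rating * r.confidence for r in reviews) / total

# Hypothetical CRs from two collaborating bots about the same article.
crs = [
    CredibilityReview("http://example.com/article", rating=-0.8, confidence=0.9, reviewer="stance-bot"),
    CredibilityReview("http://example.com/article", rating=-0.2, confidence=0.4, reviewer="source-bot"),
]
print(aggregate(crs))  # closer to -0.8, since stance-bot is more confident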
Containing the spread of misinformation on social media has been acknowledged as a major socio-technical challenge in recent years. Despite advances, there is an evident need for practical and timely solutions that communicate verified (mis)information to social media users. We introduce a multi-agent approach to bridge Twitter users with fact-checked information: a social bot that nudges users who share verified misinformation, and a conversational agent that checks whether a reputable fact-check is available and explains existing assessments in natural language. Both agents share the same requirements of evoking trust and being perceived by Twitter users as an opportunity to build their media literacy. To this end, two preliminary human-centred studies are presented, the first seeking an adequate identity for the bot and the second examining user preferences for credibility indicators when explaining the assessment of misinformation. The results indicate what this design research should pursue to create agents that are consistent in their presentation, friendly, engaging, and credible.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.