Fact-checking verifies a multitude of claims and remains a promising way to fight fake news. The spread of rumors, hoaxes, and conspiracy theories online is most evident in times of crisis, when fake news ramps up across platforms and heightens fear and confusion among the population, as seen during the COVID-19 pandemic. This article explores fact-checking initiatives in Latin America, using an original Markov-based computational method to cluster topics in tweets and to trace their diffusion across datasets. Drawing on a mixture of quantitative and qualitative methods, including time-series analysis, network analysis, and in-depth close reading, the article traces COVID-related false information across the region and asks whether a common pattern of behavior holds across countries. We rely on the open Twitter application programming interface (API) to gather data from the public accounts of the six major fact-checking agencies in Latin America: Argentina (Chequeado), Brazil (Agência Lupa), Chile (Mala Espina Check), Colombia (Colombia Check, from Consejo de Redacción), Mexico (El Sabueso, from Animal Político), and Venezuela (Efecto Cocuyo). In total, these profiles account for 102,379 tweets collected between January and July 2020. Our study offers insights into the dynamics of online information dissemination beyond the national level and demonstrates how politics intertwined with the health crisis in this period. Our method can cluster topics in a period of information overabundance, as we fight not only a pandemic but also an infodemic, highlighting opportunities to understand and slow the spread of false information.
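The abstract mentions an "original Markov-based computational method" for clustering tweet topics without giving details. As a hedged illustration only, the sketch below implements the classic Markov Cluster (MCL) algorithm, a well-known Markov-based graph-clustering technique, on a toy tweet-similarity graph. The adjacency matrix, the parameter values, and the assumption that the authors' method resembles MCL are all illustrative and not taken from the paper.

```python
import numpy as np

def normalize(m):
    # Column-normalize so each column is a probability distribution.
    return m / m.sum(axis=0)

def mcl(adjacency, expansion=2, inflation=2.0, iterations=50):
    """Minimal Markov Cluster (MCL) iteration on a similarity graph.

    Alternates expansion (random-walk spreading) with inflation
    (boosting strong flows, suppressing weak ones) until the flow
    matrix stabilizes; clusters are read off the attractor rows.
    """
    m = normalize(adjacency + np.eye(len(adjacency)))  # add self-loops
    for _ in range(iterations):
        m = np.linalg.matrix_power(m, expansion)  # expansion step
        m = normalize(m ** inflation)             # inflation step
    clusters = {}
    for row in m:
        members = frozenset(np.flatnonzero(row > 1e-6))
        if members:                               # skip empty rows
            clusters[members] = True              # dict keys dedupe
    return list(clusters)

# Toy similarity graph: two dense groups of "tweets" (0-2 and 3-5)
# joined by a single weak bridge edge (2-3).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

print(mcl(A))  # two clusters: {0, 1, 2} and {3, 4, 5}
```

In this toy setting the bridge edge carries too little flow to survive inflation, so the two dense groups separate into distinct clusters, which is the intuition behind Markov-based topic clustering on tweet graphs.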
In recent years, news media have been greatly disrupted by technologically driven approaches to the creation, production, and distribution of news products and services. Artificial intelligence (AI) has emerged from the realm of science fiction and become a very real tool that can help society address many issues, including the challenges faced by the news industry. Computing has become ubiquitous, demonstrating the range of approaches that AI makes possible. We analyzed the news industry's AI adoption across the seven subfields of AI: (i) machine learning; (ii) computer vision (CV); (iii) speech recognition; (iv) natural language processing (NLP); (v) planning, scheduling, and optimization; (vi) expert systems; and (vii) robotics. Our findings suggest that three subfields are more developed in the news media: machine learning; computer vision; and planning, scheduling, and optimization. The other areas have not been fully deployed in the journalistic field. Most AI news projects rely on funds from tech companies such as Google, which limits AI's potential to a small number of players in the news industry. We conclude by providing examples of how these subfields are being developed in journalism and present an agenda for future research.