After the 2016 US presidential election, the term ‘fake news’ became synonymous with disinformation and a catch-all term for the problems that social networks were bringing to communication. Four years later, dozens of empirical studies have attempted to describe and analyse an issue that, despite still being in the process of definition, has been identified by Interpol as one of the key COVID-19 cyberthreats, is considered a threat to democracy by many states and supranational institutions and, as a consequence, is subject to regulation or even criminalization. These legislative and criminal policy interventions form part of the first stage in the construction of a moral panic that may lead to the restriction of freedom of expression and information. By analysing empirical research that attempts to measure the extent of the issue and its impact, the present article aims to provide critical reflection on the process of constructing fake news as a threat. Through a systematic review of the literature, we observe, firstly, that the concept of fake news used in empirical research is limited and should be refocused, because it has not been constructed according to scientific criteria and can fail to include relevant elements and actors, such as governments and traditional media. Secondly, the article analyses what is known scientifically about the extent, consumption and impact of fake news and argues that it is problematic to establish causal relationships between the issue and the effects it has been said to produce. This conclusion calls for further research and for reconsidering the position of fake news as a threat, as well as the resulting regulation and criminalization.
This article analyses Spanish media treatment of a particular type of immigrant: the unaccompanied foreign minor ('MENA' in Spanish). The media play an important role in creating and disseminating ideas and images amongst the general public, thereby promoting the articulation of sets of meanings called discourses. The main goal of this research is to identify the discursive approaches that have been constructed around the term “MENA” in the main Spanish daily newspapers. To this end, we gathered and analysed all the news reports on migrant minors published between 1 January 2017 and 31 October 2019 by the digital editions of the four most widely read newspapers in Spain (La Vanguardia, El País, El Mundo and ABC). This analysis was performed using text mining techniques (an important field in data science) such as term frequency, inverse document frequency and correlation networks between words. Our results show that the term “MENA” evokes a criminalising, moralistic, welfare-dependent discourse articulated from an adult-centric, nationalist perspective. The study also found that the conservative press uses the acronym more frequently than the left-wing media. However, no significant discursive differences were observed between the conservative and progressive press in terms of the language used, which often had negative connotations that stigmatised the young people concerned.
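To illustrate the kind of text mining the abstract names, the sketch below computes term frequency–inverse document frequency (tf-idf) by hand on a hypothetical three-document mini-corpus; the corpus, terms and function are illustrative assumptions, not the study's actual data or code, which analysed thousands of news pieces.

```python
import math

# Hypothetical mini-corpus standing in for the newspaper articles.
docs = [
    "mena menor calle policia",
    "mena centro acogida menor",
    "gobierno politica migratoria",
]

def tf_idf(term, doc, corpus):
    # Term frequency: relative frequency of the term in this document.
    words = doc.split()
    tf = words.count(term) / len(words)
    # Inverse document frequency: penalises terms common across the corpus.
    df = sum(1 for d in corpus if term in d.split())
    idf = math.log(len(corpus) / df)
    return tf * idf

# "mena" occurs in 2 of 3 documents, so it scores lower than the
# corpus-rare "gobierno" despite similar within-document frequency.
print(tf_idf("mena", docs[0], docs) < tf_idf("gobierno", docs[2], docs))
```

High tf-idf terms are those that characterise one newspaper's coverage relative to the rest, which is what makes the measure useful for contrasting discursive profiles.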
On 28 February 2022, shortly after the Russian invasion of Ukraine, Twitter announced the expansion of its labelling policy for “Russia state-affiliated media” in order to address disinformation favouring the Russian government. While this ‘soft’ approach does not involve the removal of content, it raises issues for freedom of expression and information. This article addresses the consequences of this labelling policy for the reach and impact of accounts labelled “Russia state-affiliated media” during the war in Ukraine. Using an iterative detection method, a total of 90 accounts of both media outlets and individual journalists carrying this label were identified. The analysis of these accounts’ information and timelines, together with a comparison of the impact of their tweets before and after 28 February using an ARIMA model, reveals that this policy, despite its limited scope, led to a significant reduction in the impact of the sampled tweets. These results provide empirical evidence to guide critical reflection on this content moderation policy.
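The core of the before/after comparison can be sketched as an interrupted time series with a step change at the policy date. The snippet below is a deliberately simplified stand-in: the numbers are invented daily engagement counts, and a full ARIMA model (as used in the study) would additionally account for trend and autocorrelation rather than just comparing pre- and post-intervention means.

```python
# Hypothetical daily engagement counts for labelled accounts;
# the first five days precede the labelling policy, the rest follow it.
engagement = [120, 118, 125, 130, 122,
              80, 75, 82, 78, 70]
cutoff = 5  # index of the policy change (28 February)

before = engagement[:cutoff]
after = engagement[cutoff:]

# Naive step-effect estimate: difference in mean engagement.
step_effect = sum(after) / len(after) - sum(before) / len(before)
print(step_effect)  # negative value = drop in impact after labelling
```

An ARIMA intervention analysis replaces the raw mean difference with the coefficient of a step dummy fitted alongside autoregressive and moving-average terms, so that a pre-existing downward trend is not mistaken for a policy effect.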
In recent decades, many sectors of society have been digitized, and much of our lives has moved to cyberspace, especially in terms of entertainment. Users meet, relate, and cooperate in the new public space that is the internet, forming digital communities. Video games play a leading role in the formation of such communities. However, these communities also exhibit antisocial behaviors, ranging from disruptive actions to harassment and hate speech. Such behaviors, encompassed under the umbrella term toxicity, are a major concern both for users and for those in charge of moderating these spaces. This article focuses on toxicity in League of Legends, one of today’s leading online video games. A total of 328 matches were reviewed using a two-judge coding system to study the prevalence of these problematic behaviors. We find that 70% of matches were affected by disruptive behavior; nevertheless, only 10.9% of the analyzed matches were affected exclusively by outright harmful behavior. In our view, the results have relevant implications for content moderation policy, which are also addressed in this paper.
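When two judges independently code the same matches, their agreement is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. The abstract does not state which agreement statistic the study used, so the following is an illustrative sketch with invented labels (1 = match coded as containing disruptive behavior), not the paper's actual data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of matches on which the two judges agree.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each judge's label marginals.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical codings of ten matches by the two judges.
judge_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
judge_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(judge_a, judge_b), 2))  # prints 0.58
```

Values near 1 indicate near-perfect agreement and values near 0 indicate chance-level agreement; coding schemes are typically refined until kappa is acceptably high before prevalence figures such as the 70% reported here are computed.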
Disinformation has been described as a threat to political discourse and public health. Although this presumption is questionable, instruments such as criminal law and soft law have been utilised to tackle the phenomenon. Recently, technological solutions aiming to detect and remove false information, among other illicit content, have also been developed. These artificial intelligence (AI) tools have been criticised as incapable of understanding the context in which content is shared on social media, thus causing the removal of posts that are protected by freedom of expression. In this short contribution, however, we argue that further problems arise, mostly in relation to the concepts that developers use to programme these systems. The Twitter policy on state-affiliated media labelling is a good example of how a social media platform can use AI to act against accounts on the basis of a questionable definition of disinformation.
Movements such as #MeToo have shown how an online trend can become a vehicle for collectively sharing personal experiences of sexual victimisation that often remain unreported to the criminal justice system. These social media trends offer new opportunities to social scientists who investigate complex phenomena that, despite existing since time immemorial, remain taboo and difficult to access. They also bring technical difficulties, such as the challenge of identifying reports of victimisation, and new questions about the characteristics of the events, the role that victimisation testimonies play and the capacity to detect such testimonies by analysing their characteristics. To address these issues, we collected 91,501 tweets posted under the hashtag #MeTooInceste between 20 and 27 January 2021. A model fitted using Latent Dirichlet Allocation detected 1,688 tweets disclosing experiences of child sexual abuse, with an accuracy of 91.3% [±3%] and a recall of 93.1% [±5%]. We then performed Conjunctive Analysis of Case Configurations on the tweets identified as disclosures of victimisation and found that long tweets posted by users with small accounts, without a URL or picture, were more likely to be disclosures of child sexual abuse. We discuss the possibilities that these trends and techniques offer for research and practice.
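Conjunctive Analysis of Case Configurations (CACC) groups cases that share the same configuration of categorical attributes and examines the outcome rate within each configuration, rather than estimating the marginal effect of each attribute in isolation. The sketch below applies that logic to invented tweet features mirroring those named in the abstract (long tweet, small account, URL, picture); the data and feature coding are hypothetical, not the study's.

```python
from collections import defaultdict

# Hypothetical cases: (long_tweet, small_account, has_url, has_picture)
# paired with the outcome (1 = coded as a disclosure of abuse).
tweets = [
    ((1, 1, 0, 0), 1), ((1, 1, 0, 0), 1), ((1, 1, 0, 0), 0),
    ((0, 0, 1, 1), 0), ((0, 0, 1, 1), 0), ((1, 0, 1, 0), 1),
]

# CACC: pool cases sharing the same attribute configuration.
configs = defaultdict(list)
for features, outcome in tweets:
    configs[features].append(outcome)

# Report the disclosure rate observed within each configuration.
for cfg, outcomes in sorted(configs.items()):
    rate = sum(outcomes) / len(outcomes)
    print(cfg, f"n={len(outcomes)}", f"disclosure_rate={rate:.2f}")
```

Configurations with high outcome rates (here, long tweets from small accounts without a URL or picture) are the "dominant profiles" CACC is designed to surface, which is how the study links tweet characteristics to the likelihood of disclosure.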
The emergence of algorithmic tools and artificial intelligence, and their use in criminal justice, has raised an important theoretical and political debate. This article unpacks and synthesizes the debate on the role of causality in the scientific method in order to analyze predictive decision support systems, their practical value and their epistemic problems. As a result of this discussion, it is argued that the measured use of theory- and causation-based algorithms is preferable to correlational (i.e., causally opaque) algorithms as support tools in the penal system. At the same time, the use of the latter can be supported when it is critically accompanied by abductive reasoning. Finally, the arguments put forward in this article suggest that the field of criminology needs a deeper epistemological understanding of the scientific value of data-driven tools in order to sustain a serious debate on their use.