In recent years, there has been growing multidisciplinary interest in alternative educational approaches, such as serious games, aimed at enhancing thinking skills and media literacy. In this context, the objective of this study is to present the design and development of an educational web application for learning the steps required to detect bogus content, following established fact-checking procedures. The game presents news articles, which players must characterize as fake or real. While working toward the correct decision, players can use tools and practices for gathering relevant information about the clues that frame a news story (title, date, author, source, embedded images). After presenting the interface design and development process, this paper reports the results of a randomized online field study (n = 111), which provides some preliminary evidence. Specifically, the results indicate that the game can raise awareness, teach about authentication tools, and highlight the importance of patterns that provide evidence of an article's authenticity. Additionally, a thorough discussion was conducted within a media class (n = 35) to receive useful feedback on the offered utilities and their usability. The findings suggest that educational games may be a promising vehicle to inoculate the public against misinformation.
The chapter investigates content authentication strategies and their use in media practice. Remarkable research progress has been made on media veracity methods and algorithms, however, without providing correspondingly straightforward tools to users involved in real-world applications. Hence, there is an urgent need for further supporting content verification by exploiting all the available methods in properly integrated online environments, forming a Media Authentication Network. On-demand training (and feedback) on these technologies is considered of major importance, enabling users to collaborate with media and forgery experts towards the adoption, refinement, and widespread dissemination of best practices. Better comprehension of the involved tools and algorithms would propel their broad exploitation in practice, yielding valuable feedback for further improvements. Thus, a continuously updated online repository, containing documented examples, learning resources, and media veracity tools, could be adaptively accommodated, better supporting various user and application needs.
Photos have been used as evidentiary material in news reporting almost since the beginning of journalism. Against the backdrop of today's misinformation crisis, manipulated or tampered pictures are very common in news articles. The current paper investigates people's ability to distinguish real from fake images. The presented data derive from two studies. First, an online cross-sectional survey (N = 120) was conducted to analyze ordinary human skills in recognizing forgery attacks. The target was to evaluate individuals' perception in identifying manipulated visual content, and thereby to investigate the feasibility of “crowdsourced validation”. This term refers to the process of gathering fact-checking feedback from multiple users, who collaborate towards assembling pieces of evidence on an event. Second, given that contemporary veracity solutions combine journalistic principles with technology developments, an experiment in two phases was employed: a) A repeated measures experiment was conducted to quantify the abilities of Media and Image Experts (N = 5 + 5) in detecting tampering artifacts. In this case, image verification algorithms were put at the core of the analysis procedure to examine their impact on the authenticity assessment task. b) Apart from conducting interview sessions with the selected experts and properly guiding them in using the tools, a second experiment was also deployed on a larger scale through an online survey (N = 301), aiming at validating some of the initial findings. The primary intent of the deployed analyses and their combined interpretation was to evaluate image forensic services, offered as real-world tools, regarding their comprehension and utilization by ordinary people involved in the everyday battle against misinformation. The outcomes confirmed the suspicion that only a few subjects had prior knowledge of the implicated algorithmic solutions.
Although these assistive tools often lead to controversial or even contradictory conclusions, the experimental treatment, which paired the tools with systematic training in their proper use, boosted the participants' performance. Overall, the research findings indicate that the success rates of detections relying exclusively on human observation cannot be disregarded. Hence, the ultimate challenge for the “verification industry” should be to balance forensic automation with human expertise, aiming at defending the audience against the propagation of inaccurate information.
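To illustrate the kind of algorithmic aid such image forensic services build on, the sketch below shows a deliberately naive copy-move detector: it flags pairs of identical pixel blocks inside an image, since duplicated regions can indicate that content was cloned to hide or fabricate detail. This is a hypothetical, simplified stand-in for the (unnamed) verification algorithms the study evaluates; production tools match robust features rather than exact pixel values.

```python
# Naive copy-move forgery cue: find exactly repeated pixel blocks.
# Hypothetical illustration only — not the study's actual algorithms.
from collections import defaultdict

def find_duplicate_blocks(pixels, block=4):
    """Return pairs of top-left (row, col) coordinates whose block x block
    regions are pixel-identical; clusters of matches in a natural photo
    suggest copy-move edits."""
    height, width = len(pixels), len(pixels[0])
    seen = defaultdict(list)   # block contents -> positions already seen
    matches = []
    for y in range(height - block + 1):
        for x in range(width - block + 1):
            key = tuple(tuple(row[x:x + block]) for row in pixels[y:y + block])
            for prev in seen[key]:
                matches.append((prev, (y, x)))
            seen[key].append((y, x))
    return matches
```

Exact matching breaks as soon as the cloned region is recompressed or retouched, which is one reason the study finds that real-world tools can produce ambiguous results and require trained interpretation.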
Media authentication relies on the detection of inconsistencies that may indicate malicious editing in audio and video files. Traditionally, authentication processes are performed by forensics professionals using dedicated tools. There is rich research on the automation of this procedure, but the results do not yet support fully automated tools. In the current approach, a computer-supported toolbox is presented, providing online functionality that assists technically inexperienced users (journalists or the public) in visually investigating the consistency of audio streams. Several algorithms based on previous research have been incorporated into the backend of the proposed system, including a novel CNN model that performs Signal-to-Reverberation-Ratio (SRR) estimation with a mean square error of 2.9%. The user can access the web application online through a web browser. After providing an audio/video file or a YouTube link, the application returns as output a set of interactive visualizations that allow the user to investigate the authenticity of the file. The visualizations are generated from the outcomes of Digital Signal Processing and Machine Learning models. The files are stored in a database, along with their analysis results and annotations. Following a crowdsourcing methodology, users can contribute by annotating files from the dataset concerning their authenticity. The evaluation version of the web application is publicly available online.
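One of the simplest signal-consistency cues such a backend can visualize is per-frame energy: an abrupt jump in short-term RMS level, or in room characteristics such as reverberation, can hint at a splice point. The sketch below is a minimal, hypothetical illustration of this idea in plain Python; it is not the study's CNN/SRR model, merely the kind of frame-level feature on which such visualizations are built.

```python
# Minimal sketch: short-term RMS energy per frame of an audio signal.
# Hypothetical illustration — not the toolbox's actual DSP/ML pipeline.
import math

def frame_rms(samples, frame_len=1024, hop=512):
    """Return RMS energy for each analysis frame.

    Plotting this sequence over time gives a coarse consistency view:
    abrupt, unexplained level jumps can hint at splice points.
    """
    rms = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        rms.append(math.sqrt(sum(x * x for x in frame) / frame_len))
    return rms
```

In a real deployment this kind of feature would be computed server-side after decoding the uploaded file or YouTube stream, and rendered as one of the interactive visualizations described above.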
This study focuses on the way(s) that the economic and the pandemic crises were covered by media outlets and aims to examine whether journalists’ own feelings and experiences of covering both these traumatic events were depicted in their news articles. Drawing on Semetko and Valkenburg’s (2000) set of five generic frames, this study focuses on Greece, a country that has been severely hit by both crises, and brings together theories about journalism as emotional labour that defy the prevailing notion of the distant and neutral observer. Moving one step further, this study argues that journalists convey their sources’ emotions, but in some cases they also reveal their own feelings through their news articles. Findings suggest that apart from the already documented frames (i.e., attribution of responsibility, conflict, human interest, economic consequences, and morality), journalists used the trauma frame, a notion we use to refer to news articles that essentially reflect and reveal journalists’ own emotions. This finding refutes the traditional understanding of quality journalistic discourse as one stripped of emotional expression and opens new pathways for research.