This is the accepted version of the paper. This version of the publication may differ from the final published version.

Introduction

Although, in the US, the internet has now overtaken newspapers as a source of news (Purcell et al, 2010), the traditional newspaper and broadcast providers remain responsible for the bulk of news consumed online. Nearly all of the top 25 most-viewed news websites in the US are either established news brands or aggregator sites that take most of their content from existing news providers (Pew, 2011). The hopes and fears of early analysts (for example: Negroponte, 1995; and Sunstein, 2002) that the internet's potential to democratize publishing would lead to the eclipse of traditional mass-media news organizations have not been realized. Although there has been a huge increase in blogs and other forms of independent online publishing, very few are viewed by a mass audience (Hindman, 2008), and they are almost exclusively dependent on newspapers and broadcast networks for the stories they discuss (Pew, 2010a).

However, the established media face a number of challenges in relation to their internet sites, and personalization, the subject of this paper, is both a cause and a response. The challenges arise, in large part, from the consumption patterns of the online audience and from the economics of advertising, which provides the primary means of support for online news publications. Because the online audience is relatively promiscuous (Pew, 2010b), and only stays on individual websites for a short time, building loyalty has been difficult for news websites. In addition, the ability to track users as they move around the web means that advertisers can identify and target their desired upmarket audience without necessarily having to advertise on premium news websites.
As a result, premium publishers have been losing advertising sales [1]; furthermore, as online advertising becomes more sophisticated, [2] the publishers' margins are being squeezed by the companies that collect user and behavioural data to target advertising, and by those that host online advertising delivery platforms.

[This is a preprint of an article whose final and definitive form has been published in Journalism Studies © 2012 Taylor & Francis; Journalism Studies is available online at informaworld: http://www.tandfonline.com/doi/abs/10.1080/1461670X.2012.664341]

Personalization has emerged as an increasingly popular strategy for news publishers, who hope that it can increase their sites' 'stickiness' and allow them to capture data about users, thus reducing their dependence on the external suppliers of such information. [3] Recent examples include The Washington Post's Trove, a site that "aggregates news and enables users to personalize their news stream based on their interests" (Lavrusik, 2011), and The New York Times-backed News.me, which "uses artificial intelligence to…learn what [people] like to read… [and] provides articles and links…of interest" (Wortham, 2011). This paper addres...
Fake or misleading multimedia content and its distribution through social networks such as Twitter constitute an increasingly important and challenging problem, especially in the context of emergencies and critical situations. In this paper, the aim is to explore the challenges involved in applying a computational verification framework to automatically classify tweets with unreliable media content as fake or real. We created a data corpus of tweets around big events, focusing on those linking to images (fake or real) whose reliability could be verified by independent online sources. Extracting content and user features for each tweet, we explored fake-prediction accuracy using each set of features separately and in combination. We considered three approaches for evaluating the performance of the classifier, ranging from standard cross-validation, to independent groups of tweets, to cross-event training. The results included an accuracy of 81% for tweet features and 75% for user features in the case of cross-validation. When using different events for training and testing, the accuracy is much lower (up to 58%), demonstrating that generalization of the predictor is a very challenging issue.
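The evaluation setup described in this abstract can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic data; the specific feature names, the classifier choice, and the two-event split are assumptions for demonstration, not the paper's actual configuration.

```python
# Sketch of feature-based fake-tweet classification, contrasting standard
# cross-validation with cross-event evaluation. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy matrix: rows are tweets, columns are illustrative content + user
# features (e.g. word count, has_url, mentions, follower count, account age).
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)       # 1 = fake, 0 = real (synthetic labels)
event = rng.integers(0, 2, size=200)   # which event each tweet belongs to

clf = RandomForestClassifier(random_state=0)

# Standard cross-validation: tweets from the same event can appear in both
# training and test folds, which tends to give optimistic accuracy.
cv_acc = cross_val_score(clf, X, y, cv=5).mean()

# Cross-event evaluation: train on one event, test on the other. This is
# the setting that exposes the generalization gap reported in the abstract.
train, test = event == 0, event == 1
clf.fit(X[train], y[train])
cross_event_acc = clf.score(X[test], y[test])
print(f"cross-validation: {cv_acc:.2f}, cross-event: {cross_event_acc:.2f}")
```

With real labelled data, the cross-validation score would be the optimistic figure and the cross-event score the more honest estimate of how the predictor handles unseen events.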
This is the submitted version of the paper. This version of the publication may differ from the final published version.
The financial crisis that began in autumn 2008 has attracted considerable attention in regard to the role of the media. This article examines both the audience and the content of the coverage of the crisis on the BBC News website, the largest online news provider in the UK. It demonstrates that online news was a significant part of the overall media coverage of the crisis. Online consumption patterns are very different from those of other media, but the claim that online audiences are ‘dumbed down’, or that they were not provided with a sophisticated range of information and analysis, is critically examined. The study also questions whether the content of news coverage was as negative as has been suggested. The research is based on unique access to the BBC News web server logs, which allow researchers to track audiences not only for the online site as a whole but also for individual stories, and to match that to content analysis. It makes an important contribution to providing evidence-based research to examine the competing claims that have been made about the role of the business media in the financial crisis.
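The server-log method described here, counting audiences per individual story and keying those counts to content-analysis codes, can be sketched in miniature. The log format and story-identifier convention below are illustrative assumptions, not the BBC's actual schema.

```python
# Sketch: aggregate per-story page views from common-log-style lines, so the
# counts can later be joined to content-analysis codes on the story ID.
from collections import Counter

log_lines = [
    '1.2.3.4 - - [15/Sep/2008:10:01:00 +0000] "GET /news/business/story-1001 HTTP/1.1" 200 5120',
    '5.6.7.8 - - [15/Sep/2008:10:02:00 +0000] "GET /news/business/story-1001 HTTP/1.1" 200 5120',
    '9.9.9.9 - - [15/Sep/2008:10:03:00 +0000] "GET /news/politics/story-2002 HTTP/1.1" 200 4096',
]

views = Counter()
for line in log_lines:
    path = line.split('"')[1].split()[1]   # URL path of the request
    if "/story-" in path:                  # count only story pages
        views[path] += 1                   # one page view per request line

# Per-story view counts can then be joined to content-analysis variables
# (e.g. tone of coverage) keyed on the same story identifier.
print(views.most_common())
```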
Social media is now used as an information source in many different contexts. For professional journalists, the use of social media for news production creates new challenges for the verification process. This article describes the development and evaluation of the ‘Truthmeter’ – a tool that automatically scores the journalistic credibility of social media contributors in order to inform overall credibility assessments. The Truthmeter was evaluated using a three-stage process that used both qualitative and quantitative methods, consisting of (1) obtaining a ground truth, (2) building a description of existing practices and (3) calibration, modification and testing. As a result of the evaluation process, which could be generalized and applied in other contexts, the Truthmeter produced credibility scores that were closely aligned with those of trainee journalists. Substantively, the evaluation also highlighted the importance of ‘relational’ credibility assessments, where credibility may be attributed based on networked connections to other credible contributors.
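The idea of combining a contributor's own account signals with a 'relational' component, credibility attributed via connections to other credible contributors, can be sketched as below. The signals, weights, and scoring formula are illustrative assumptions in the spirit of the Truthmeter, not its actual model.

```python
# Sketch of a contributor credibility score with a relational component.
# All weights and signals are hypothetical, chosen for illustration only.
from dataclasses import dataclass, field

@dataclass
class Contributor:
    followers: int
    verified: bool
    # credibility scores of contributors this account is connected to
    follows_credible: list = field(default_factory=list)

def credibility(c: Contributor) -> float:
    """Return an illustrative credibility score in [0, 1]."""
    base = min(c.followers / 10_000, 1.0) * 0.5      # audience size, capped
    base += 0.2 if c.verified else 0.0               # platform verification
    if c.follows_credible:
        # relational component: average credibility of connected contributors
        base += 0.3 * (sum(c.follows_credible) / len(c.follows_credible))
    return min(base, 1.0)

journalist = Contributor(followers=25_000, verified=True,
                         follows_credible=[0.9, 0.8])
print(round(credibility(journalist), 3))
```

The relational term is what distinguishes this from a purely account-level score: an account with modest follower counts can still score well if it is networked to contributors already judged credible, matching the evaluation finding reported above.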