Search engines are the primary gateways of information. Yet, they do not take into account the credibility of search results. There is a growing concern that YouTube, the second largest search engine and the most popular video-sharing platform, has been promoting and recommending misinformative content for certain search topics. In this study, we audit YouTube to verify those claims. Our audit experiments investigate whether personalization (based on age, gender, geolocation, or watch history) contributes to amplifying misinformation. After shortlisting five popular topics known to contain misinformative content and compiling associated search queries representing them, we conduct two sets of audits: Search audits and Watch audits. Our audits resulted in a dataset of more than 56K videos compiled to link stance (whether promoting misinformation or not) with the personalization attribute audited. Our videos correspond to three major YouTube components: search results, Up-Next, and Top 5 recommendations. We find that demographics, such as gender, age, and geolocation, do not have a significant effect on amplifying misinformation in returned search results for users with brand-new accounts. On the other hand, once a user develops a watch history, these attributes do affect the extent of misinformation recommended to them. Further analyses reveal a filter bubble effect, both in the Top 5 and Up-Next recommendations, for all topics except vaccine controversies: for these topics, watching videos that promote misinformation leads to more misinformative video recommendations. In conclusion, YouTube still has a long way to go to mitigate misinformation on its platform.
In the past half-decade, Amazon Mechanical Turk has radically changed the way many scholars do research. The availability of a massive, distributed, anonymous crowd of individuals willing to perform general human-intelligence micro-tasks for micro-payments is a valuable resource for researchers and practitioners. This paper addresses the challenges of obtaining quality annotations for subjective, judgment-oriented tasks of varying difficulty. We design and conduct a large, controlled experiment (N=68,000) to measure the efficacy of selected strategies for obtaining high-quality data annotations from non-experts. Our results point to the advantages of person-oriented strategies over process-oriented strategies. Specifically, we find that screening workers for requisite cognitive aptitudes and providing training in qualitative coding techniques are quite effective, significantly outperforming control and baseline conditions. Interestingly, such strategies can improve coder annotation accuracy above and beyond common benchmark strategies such as Bayesian Truth Serum (BTS).
Crowdfunding sites like Kickstarter, where entrepreneurs and artists look to the internet for funding, have quickly risen to prominence. However, we know very little about the factors driving the "crowd" to take projects to their funding goal. In this paper we explore the factors which lead to successfully funding a crowdfunding project. We study a corpus of 45K crowdfunded projects, analyzing 9M phrases and 59 other variables commonly present on crowdfunding sites. The language used in the project has surprising predictive power, accounting for 58.56% of the variance around successful funding. A closer look at the phrases shows they exhibit general persuasion principles. For example, the phrase "also receive two" reflects the principle of Reciprocity and is one of the top predictors of successful funding. We conclude this paper by announcing the release of the predictive phrases along with the control variables as a public dataset, hoping that our work can enable new features on crowdfunding sites, such as tools to help both backers and project creators make the best use of their time and money.
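The idea of phrase features "accounting for variance" in funding outcomes can be illustrated with a minimal ordinary-least-squares sketch. The phrases (other than "also receive two", which the abstract names), the toy project matrix, and the outcomes below are all invented for illustration; the paper's actual 9M-phrase model is not reproduced here.

```python
import numpy as np

# Toy illustration: do binary phrase-presence features explain funding outcomes?
# Phrases (except the first) and all data are invented for this sketch.
phrases = ["also receive two", "hypothetical phrase A", "hypothetical phrase B"]

# Rows: projects; columns: does the project description contain the phrase?
X = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 1],
    [1, 1, 0],
], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)  # 1 = reached funding goal

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((len(X), 1)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: share of variance in outcomes accounted for by the phrase features.
r2 = 1 - np.sum((y - A @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 on toy data: {r2:.2f}")  # → R^2 on toy data: 0.67
```

The paper's reported 58.56% figure is the analogue of this R² computed at scale; a real replication would also need the 59 control variables and regularization to cope with 9M sparse features.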
The anti-vaccination movement threatens public health by reducing the likelihood of disease eradication. With social media’s purported role in disseminating anti-vaccine information, it is imperative to understand the drivers of attitudes among participants involved in the vaccination debate on a communication channel critical to the movement: Twitter. Using four years of longitudinal data capturing vaccine discussions on Twitter, we identify users who persistently hold pro- and anti-vaccination attitudes, and those who newly adopt anti-vaccination attitudes. After gathering each user’s entire Twitter timeline, totaling over 3 million tweets, we explore differences in the individual narratives across the user cohorts. We find that those with long-term anti-vaccination attitudes manifest conspiratorial thinking, mistrust in government, and are resolute and in-group focused in language. New adoptees appear to be predisposed to form anti-vaccination attitudes via similar government distrust and general paranoia, but are more social and less certain than their long-term counterparts. We discuss how this apparent predisposition can interact with social media-fueled events to bring newcomers into the anti-vaccination movement. Given the strong base of conspiratorial thinking underlying anti-vaccination attitudes, we conclude by highlighting the need for alternatives to traditional methods of using authoritative sources such as the government when correcting misleading vaccination claims.
Online communities can promote illness recovery and improve well-being in the cases of many kinds of illnesses. However, for a challenging mental health condition like anorexia, social media harbor both recovery communities and those that encourage dangerous behaviors. The effectiveness of such platforms in promoting recovery despite housing both communities is underexplored. Our work begins to fill this gap by developing a statistical framework using survival analysis and situating our results within the cognitive behavioral theory of anorexia. This model identifies content and participation measures that predict the likelihood of recovery. From our dataset of over 68M posts and 10K users that self-identify with anorexia, we find that recovery on Tumblr is protracted: only half of the population is estimated to exhibit signs of recovery after four years. We discuss the effectiveness of social media in improving well-being around anorexia, a unique health challenge, and emergent questions from this line of work.
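The survival-analysis framing above, where "half of the population recovers after four years" is a statement about a survival curve crossing 0.5, can be sketched with a minimal Kaplan-Meier estimator. The cohort data below is invented for illustration; the paper's actual framework and dataset are not reproduced here.

```python
# Minimal Kaplan-Meier sketch: probability of *not yet* showing recovery
# signs as a function of time. Events here are "first sign of recovery";
# users who leave the platform without recovering are censored.

def kaplan_meier(observations):
    """observations: list of (time, event_flag) tuples, where event_flag is
    True if recovery was observed at that time and False if censored.
    Returns [(time, survival_probability)] at each event time."""
    observations = sorted(observations)
    n_at_risk = len(observations)
    survival = 1.0
    curve = []
    i = 0
    while i < len(observations):
        t = observations[i][0]
        events = censored = 0
        while i < len(observations) and observations[i][0] == t:
            if observations[i][1]:
                events += 1
            else:
                censored += 1
            i += 1
        if events:
            survival *= 1 - events / n_at_risk
            curve.append((t, survival))
        n_at_risk -= events + censored
    return curve

# Toy cohort of 6 users: months until recovery signs (True) or censoring (False).
cohort = [(6, True), (12, False), (18, True), (24, True), (36, False), (48, True)]
print(kaplan_meier(cohort))
```

A real analysis would run this (or a fitted survival model) over the full user population and report when the estimated curve drops below 0.5, which is the form of the four-year finding in the abstract.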
How do journalists mark quoted content as certain or uncertain, and how do readers interpret these signals? Predicates such as thinks, claims, and admits offer a range of options for framing quoted content according to the author's own perceptions of its credibility. We gather a new dataset of direct and indirect quotes from Twitter, and obtain annotations of the perceived certainty of the quoted statements. We then compare the ability of linguistic and extra-linguistic features to predict readers' assessment of the certainty of quoted content. We see that readers are indeed influenced by such framing devices, and we find no evidence that they consider other factors, such as the source, journalist, or the content itself. In addition, we examine the impact of specific framing devices on perceptions of credibility.