Search engines are the primary gateways to information. Yet they do not take into account the credibility of search results. There is a growing concern that YouTube, the second-largest search engine and the most popular video-sharing platform, has been promoting and recommending misinformative content for certain search topics. In this study, we audit YouTube to verify those claims. Our audit experiments investigate whether personalization (based on age, gender, geolocation, or watch history) contributes to amplifying misinformation. After shortlisting five popular topics known to contain misinformative content and compiling search queries representing them, we conduct two sets of audits: Search and Watch misinformative audits. Our audits resulted in a dataset of more than 56K videos, compiled to link stance (whether a video promotes misinformation or not) with the personalization attribute audited. The videos correspond to three major YouTube components: search results, Up-Next recommendations, and Top 5 recommendations. We find that demographics such as gender, age, and geolocation do not have a significant effect on amplifying misinformation in the search results returned to users with brand-new accounts. On the other hand, once a user develops a watch history, these attributes do affect the extent of misinformation recommended to them. Further analyses reveal a filter bubble effect in both the Top 5 and Up-Next recommendations for all topics except vaccine controversies: watching videos that promote misinformation leads to more misinformative video recommendations. In conclusion, YouTube still has a long way to go to mitigate misinformation on its platform.
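To make the aggregation step concrete, here is a minimal sketch of how stance annotations from such a sock-puppet audit could be summarized per personalization attribute. The file name, column names, and the -1/0/1 stance coding are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch: aggregating audit results by personalization attribute.
# Assumed schema (not from the paper): one row per collected video, with
# columns "component" (search/Up-Next/Top 5), "topic", "attribute_value"
# (e.g., a gender or geolocation condition), and an annotated "stance"
# coded as -1 = debunking, 0 = neutral, +1 = promoting misinformation.
import pandas as pd

videos = pd.read_csv("audit_videos.csv")

misinfo_score = (
    videos
    .groupby(["component", "topic", "attribute_value"])["stance"]
    .mean()                     # average stance per audit condition
    .rename("mean_stance")
    .reset_index()
)
print(misinfo_score.head())
```

Comparing `mean_stance` across attribute values (e.g., brand-new accounts vs. accounts with a misinformative watch history) would surface the amplification and filter bubble effects the abstract describes.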
There is a growing concern that e-commerce platforms are amplifying vaccine misinformation. To investigate, we conduct two sets of algorithmic audits for vaccine misinformation on the search and recommendation algorithms of Amazon, the world's leading e-retailer. First, we systematically audit the search results returned for vaccine-related search queries without logging into the platform (unpersonalized audits). We find that 10.47% of search results promote misinformative health products. We also observe ranking bias, with Amazon ranking misinformative search results higher than debunking search results. Next, we analyze the effects of personalization due to account history, where history is built progressively by performing various real-world user actions, such as clicking on a product. We find evidence of a filter bubble effect in Amazon's recommendations: accounts performing actions on misinformative products are presented with more misinformation than accounts performing actions on neutral or debunking products. Interestingly, once a user clicks on a misinformative product, homepage recommendations become more contaminated than when the user merely shows an intention to buy that product.
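The two headline measurements, the share of misinformative search results and the ranking comparison between misinformative and debunking results, could be computed along these lines. The file and field names ("annotation", "rank") are assumptions for illustration, not the study's actual data format.

```python
# Illustrative sketch of the two search-audit measurements described above.
# Assumed schema: one row per search result, with an "annotation" label
# (promoting / neutral / debunking) and the result's "rank" on the page.
import pandas as pd

results = pd.read_csv("amazon_search_results.csv")

# Share of search results annotated as promoting misinformation.
share_misinfo = (results["annotation"] == "promoting").mean()
print(f"misinformative share: {share_misinfo:.2%}")  # study reports 10.47%

# A lower mean rank for "promoting" than for "debunking" results would
# indicate ranking bias toward misinformation.
mean_rank = results.groupby("annotation")["rank"].mean()
print(mean_rank[["promoting", "debunking"]])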
Transparency in moderation practices is crucial to the success of an online community. To meet growing demands for transparency and accountability, several academics came together and proposed the Santa Clara Principles on Transparency and Accountability in Content Moderation (SCP). In 2018, Reddit, home to uniquely moderated communities called subreddits, announced in its transparency report that the company was aligning its content moderation practices with the SCP. But do the moderators of subreddit communities follow these guidelines too? In this paper, we answer this question by employing a mixed-methods approach on public moderation logs collected from 204 subreddits over a period of five months, containing more than 0.5M instances of removals by both human moderators and AutoModerator. Our results reveal a lack of transparency in moderation practices. We find that while subreddits often rely on AutoModerator to sanction newcomers' posts based on karma requirements and to moderate uncivil content based on automated keyword lists, users are neither notified of these sanctions, nor are these practices formally stated in any of the subreddits' rules. We also interviewed 13 Reddit moderators to hear their views on different facets of transparency and to determine why a lack of transparency is a widespread phenomenon. The interviews reveal that moderators' stances on transparency are divided, that there is no standardized process for appealing content removals, and that Reddit's app and platform design often impede moderators' ability to be transparent in their moderation practices.
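As an illustration of how such public logs could be summarized, the sketch below computes the share of removals performed by AutoModerator versus human moderators. The file name and schema are assumptions; "removelink" and "removecomment" are standard Reddit mod-log action names.

```python
# Minimal sketch of summarizing public moderation logs, assuming a table
# with one row per logged action and a "moderator" field holding the
# acting account's name ("AutoModerator" is Reddit's built-in bot).
import pandas as pd

log = pd.read_csv("modlogs.csv")  # the study analyzed ~0.5M removal records

removals = log[log["action"].isin(["removelink", "removecomment"])]
by_actor = (
    removals["moderator"]
    .eq("AutoModerator")
    .map({True: "AutoModerator", False: "human moderator"})
    .value_counts(normalize=True)
)
print(by_actor)  # share of removals performed by the bot vs. humans
```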
Increasing demands for fact-checking have led to a growing interest in developing systems and tools to automate the fact-checking process. However, such systems are limited in practice because their design often does not take into account how fact-checking is done in the real world, and they ignore the insights and needs of the various stakeholder groups core to the fact-checking process. This paper unpacks the fact-checking process by revealing the infrastructures, both human and technological, that support and shape fact-checking work. We interviewed 26 participants belonging to 16 fact-checking teams and organizations, with representation from four continents. Through these interviews, we describe the human infrastructure of fact-checking by identifying and presenting in depth the roles of six primary stakeholder groups: 1) Editors, 2) External fact-checkers, 3) In-house fact-checkers, 4) Investigators and researchers, 5) Social media managers, and 6) Advocators. Our findings highlight that the fact-checking process is a collaborative effort among these stakeholder groups and the associated technological and informational infrastructures. By making these infrastructures visible, we reveal how fact-checking has evolved to include both short-term, claims-centric and long-term, advocacy-centric work. Our work also identifies key social and technical needs and challenges faced by each stakeholder group. Based on our findings, we suggest that improving the quality of fact-checking requires systematic changes in its civic, informational, and technological contexts.