Algorithms are said to affect social realities, often in unseen ways. This article explores conscious, instrumental interactions with algorithms as a window into the complexities and extent of algorithmic power. Through a thematic analysis of online discussions among Instagram influencers, I observed that influencers' pursuit of influence resembles a game constructed around "rules" encoded in algorithms. Within the "visibility game," influencers' interpretations of Instagram's algorithmic architecture (and the "game" more broadly) act as a lens through which to view and mechanize the rules of the game. Illustrating this point, this article describes two prominent interpretations, which combine information influencers glean about Instagram's algorithms with preexisting discourses within influencer communities on authenticity and entrepreneurship. This article shows how directing inquiries toward the visibility game makes apparent the interdependency among users, algorithms, and platform owners, and demonstrates how algorithms structure, but do not unilaterally determine, user behavior.
Transparency can empower users to make informed choices about how they use an algorithmic decision-making system and to judge its potential consequences. However, transparency is often conceptualized in terms of the outcomes it is intended to bring about, not the specific mechanisms for achieving those outcomes. We conducted an online experiment focusing on how different ways of explaining Facebook's News Feed algorithm might affect participants' beliefs and judgments about the News Feed. We found that all explanations made participants more aware of how the system works and helped them determine whether the system is biased and whether they can control what they see. The explanations were less effective in helping participants evaluate the correctness of the system's output and form opinions about how sensible and consistent its behavior is. Based on these results, we present implications for the design of transparency mechanisms in algorithmic decision-making systems.
Political campaigns increasingly rely on Facebook for reaching their constituents, particularly through ad targeting. Facebook's business model is premised on a promise to connect advertisers with the "right" users: those likely to click, download, engage, or purchase. The company pursues this promise in part by algorithmically inferring users' interests from their data and providing advertisers with a means of targeting users by their inferred interests. In this study, we explore for whom this interest classification system works, in order to build on conversations in critical data studies about the ways such systems produce knowledge about the world rooted in power structures. We critically analyze the classification system from a variety of empirical vantage points (user data; Facebook documentation, training, and patents; and Facebook's tools for advertisers) and through theoretical concepts from a variety of domains. In this analysis, we focus on the ways the classification system shapes possibilities for political representation and voice, particularly for people of color, women, and LGBTQ+ people. We argue that this "big data"-driven classification system should be read as political: it articulates a stance not only on what issues are or are not important in the U.S. public sphere, but also on who is considered a significant enough public to be adequately accounted for.
During the onset of the COVID-19 pandemic, various officials flagged the critical threat of false information. In this study, we explore how three major social media platforms (Facebook, Twitter, and YouTube) responded to this "infodemic" during the early stages of the pandemic via emergent fact-checking policies and practices, and consider what this means for ensuring a well-informed public. We accomplish this through a thematic analysis of documents published by the three platforms that address fact-checking, particularly those that focus on COVID-19. In addition to examining what the platforms said they did, we examined what they actually did in practice via a retrospective case study drawing on secondary data about the viral conspiracy video Plandemic. We demonstrate that the platforms focused their energies primarily on the visibility of COVID-19 mis/disinformation on their sites via (often vaguely described) policies and practices rife with subjectivity. Moreover, the platforms communicated the expectation that users should ultimately be the ones to hash out what they believe is true. We argue that this approach does not necessarily serve the goal of ensuring a well-informed public, as has historically been the goal of fact-checking, and does little to address the underlying conditions and structures that permit the circulation and amplification of false information online.
A number of issues have emerged related to how platforms moderate and mitigate "harm." Although platforms have recently developed more explicit policies regarding what constitutes "hate speech" and "harmful content," they often appear to use subjective judgments of harm that pertain specifically to spectacular, physical violence, yet harm takes many shapes and complex forms. The politics of defining "harm" and "violence" within these platforms are complex and dynamic, and reflect entrenched histories of how control over these definitions extends to people's perceptions of them. Via a critical discourse analysis of policy documents from three major platforms (Facebook, Twitter, and YouTube), we argue that platforms' narrow definitions of harm and violence are not just insufficient but result in these platforms engaging in a form of symbolic violence. Moreover, the platforms position harm as a floating signifier, imposing conceptions not just of what violence is and how it manifests, but of whom it impacts. Rather than changing the mechanisms of their design that enable harm, the platforms reconfigure intentionality and causality to try to stop users from being "harmful," which, ironically, perpetuates harm. We offer a number of suggestions, namely a restorative justice-focused approach, for addressing platform harm.
The growing ubiquity of algorithms in everyday life has prompted cross-disciplinary interest in what people know about algorithms. The purpose of this article is to build on this growing literature by highlighting a particular way of knowing algorithms that is evident in past work but not yet clearly explicated. Specifically, I conceptualize practical knowledge of algorithms to capture knowledge located at the intersection of practice and discourse. Rather than knowing that an algorithm is or does X, Y, or Z, practical knowledge entails knowing how to accomplish X, Y, or Z within algorithmically mediated spaces, as guided by the discursive features of one's social world. I conceptualize practical knowledge in conversation with past work on algorithmic knowledge and theories of knowing, and ground it empirically in a case study of a leftist online community known as "BreadTube."