There is considerable and intrinsic, though often hidden, power in technology settings, including those set by online platforms. These hidden levers of control, embedded within default settings, shape users’ overall experience of platforms and technology, especially with regard to privacy and security. This paper examines the embedded assumptions and implications of technology and technical design for society. To this end, this study addresses the role and power of social media platforms in developing and applying privacy and security policies and norms for their users. The privacy and security choices made by social media platforms affect billions of users worldwide. Further, this study considers how platforms’ public-facing rhetoric aligns with, or diverges from, the actual implementation of privacy policies and of privacy and security user settings. The implications of this research may have a profound impact on the governance, policy, and regulation of platforms.
Privacy settings are a critical space of research. Settings are uniquely positioned at the intersection of users, digital platforms, and regulation: unlike backend privacy infrastructure and code, they offer a visible privacy architecture, and unlike terms-of-service and privacy policy documents, which present only all-or-nothing options, they give users an opportunity to interact with privacy choices. This paper examines the structural power relations and hierarchies inherent within privacy settings. We address the conference theme of decolonizing the internet through a comprehensive analysis of privacy controls, a critical site of power for the “new colonising forces in the form of multinational tech giants who are re-fashioning the world in their own image” (#AoIR2022 CFP). This paper applies a theoretical framework from science and technology studies (STS) to analyze the affordances of social media platforms’ privacy settings. Further, we apply Ian Bogost’s theory of procedural rhetoric to examine how platforms practice “the art of using processes persuasively” (Bogost, 2007). We conduct a comparative study of privacy settings across the most popular social media platforms: Facebook, YouTube, WhatsApp, WeChat, Instagram, TikTok, Snapchat, Pinterest, Twitter, and Reddit. The purpose of this qualitative analysis is to examine how privacy is presented to users. How does each platform define privacy? Where does it locate different kinds of privacy settings? What kinds of privacy choices are offered? How do these choices differ? How a platform designs its choice architecture for privacy shapes a user’s understanding of what privacy is and means.
In 2006, Alaska Senator Ted Stevens became a laughingstock and enduring meme for arguing during legislative deliberations that the Internet could be understood as “a series of tubes” and “not a big truck” (Belson, 2006). The unintended humor of his analogies was ridiculed as evidence that this older lawmaker was too out of touch with modern communications technology to govern it effectively. Yet the episode itself can be understood as evidence of a larger truth, one that both exculpates Stevens somewhat and underlines a broader challenge for internet governance: namely, that nearly all internet laws and regulations necessarily rely on imperfect metaphor and analogy to keep them in accordance with pre-digital law and constitutional principles, and that even lawmakers and judges with considerable expertise in the field must also rely upon such figurative language. Furthermore, because rhetorical comparisons are fundamentally interpretive, rather than indexical reflections of the things they describe, their use in internet governance amplifies the risk that the prevailing laws and regulations will benefit some users over others, and some uses over others. The internet, in other words, is like a series of analogies. In this article, we catalog many of these analogies and metaphors, document their use in internet governance and policy, and critically investigate how the choice of comparative rhetoric to render the internet knowable introduces hidden bias into the governance process, benefiting some stakeholders over others.