The recent proliferation of wearable self-tracking devices intended to regulate and measure the body has brought with it questions of controlling, accessing, and interpreting personal data. Given a socio-technical context in which individuals are no longer the most authoritative source on data about themselves, wearable self-tracking technologies reflect the simultaneous commodification and knowledge-making that occurs between data and bodies. In this article, we look specifically at wearable self-tracking devices in order to set up an analytical comparison with a key historical predecessor, the weight scale. By taking two distinct cases of self-tracking – wearables and the weight scale – we can situate current discourses of big data within a historical framing of self-measurement and human subjectivity. While the advertising promises of both the weight scale and the wearable device emphasize self-knowledge and control through external measurement, the use of wearable data by multiple agents and institutions results in a lack of control over data by the user. In the production of self-knowledge, the wearable device also makes the user known to others, in a range of ways that can be both skewed and inaccurate. We look at the tensions surrounding these devices for questions of agency, practices of the body, and the use of wearable data by courtrooms and data science to enforce particular kinds of social and individual discipline.
‘@AP: Breaking: Two Explosions in the White House and Barack Obama is injured’. So read a tweet sent from a hacked Associated Press Twitter account, @AP, which affected financial markets, wiping out $136.5 billion of the Standard & Poor’s 500 Index’s value. While the speed of the Associated Press hack-crash event and the proprietary nature of the algorithms involved make it difficult to make causal claims about the relationship between social media and trading algorithms, we argue that the event helps us to critically examine the volatile connections between social media, financial markets, and third parties offering human and algorithmic analysis. By analyzing the commentaries on this event, we highlight two particular currents: one formed by computational processes that mine and analyze Twitter data, and the other by financial algorithms that make automated trades and steer the stock market. We build on the sociology of finance together with media theory, focusing on the work of Christian Marazzi, Gabriel Tarde, and Tony Sampson, to analyze the relationship between social media and financial markets. We argue that Twitter and social media are becoming more powerful forces, not just because they connect people or generate new modes of participation, but because they are connecting human communicative spaces to automated computational spaces in ways that are affectively contagious and highly volatile.
In this paper, I use The New York Times’ debate titled “Can predictive policing be ethical and effective?” to examine what are seen as the key operations of predictive policing and what impacts they might have on our current culture and society. The debate is substantially focused on the ethics and effectiveness of the computational aspects of predictive policing, including the use of data and algorithms to predict individual behaviour or to identify hot spots where crimes might happen. The debate illustrates both the benefits and the problems of using these techniques, and takes a strong stance in favor of human control and governance over predictive policing. Cultural techniques are used in the paper as a framework to discuss human agency and to further elaborate how predictive policing is based on operations that have ethical, epistemological, and social consequences.
This article investigates the public confessions of a small group of ex-Facebook employees, investors, and founders who express regret at having helped to build the social media platform. Prompted by Facebook’s role in the 2016 United States elections and pointing to the platform’s unintended consequences, the confessions are more than formal admissions of sins. They speak of Facebook’s capacity to damage democratic decision-making and “exploit human psychology,” suggesting that individual users, children in particular, should disconnect. Rather than expressions of truth, this emerging form of corporate abdication constructs dystopian narratives that have the power to shape our future visions of social platforms and give rise to new utopias. As such, and marking a stark break with decades of technological utopianism, the confessions are an emergent form of Silicon Valley dystopianism.
The needs and desires to disconnect, detox, and log out have been turned into commodities, finding expression in detox camps, self-help books, and “offline”-branded apparel. Disconnection studies have challenged the power of commodified disconnective practices to create real social change. In this article, we build on the notion of affective attunement to explore how disconnection commodities provide differential ways for individuals to respond to the challenges of connectivity, and how they can form larger patterns of resistance that cannot be dismissed as futile. We examine the ambiguity of disconnection commodities through three examples: smartwatch kill-switch and stealth-mode features, detox floatation tank therapy, and make-up lines. Our approach turns the perspective from the ends to the means of disconnection. We argue that these commodities do not offer hard breaks, but they do let users attune to connectivity.
In October 2012 a group of non-governmental organizations formed the Campaign to Stop Killer Robots. The aim of this campaign was to preemptively ban fully autonomous weapons capable of selecting and engaging targets without human intervention. The campaign gained momentum swiftly, leading to a range of legal and political discussions and decisions. In this article, we use the framework of cultural techniques to analyze the different operational processes, tactics, and ethics underlying the debates surrounding the development of autonomous weapon systems. Reading the materials of the Campaign to Stop Killer Robots and focusing on current robotic research in the military context, we argue that, instead of demonizing Killer Robots as such, we need to understand the tools, processes, and operating procedures that create, support, and validate these objects. The framework of cultural techniques helps us to analyze how autonomous technologies draw distinctions between life and death, human and machine, culture and technology, and what it means to be in control of these systems in the 21st century.