Self-tracking devices point to a future in which individuals will be more involved in the management of their health and will generate data that will benefit clinical decision making and research. They have thus attracted enthusiasm from medical and public health professionals as key players in the move toward participatory and personalized healthcare. Critics, however, have begun to articulate a number of broader societal and ethical concerns regarding self-tracking, foregrounding its disciplining and disempowering effects. This paper has two aims: first, to analyze some of the key promises and concerns that inform this polarized debate. I argue that far from being solely about health outcomes, this debate is very much about fundamental values that are at stake in the move toward personalized healthcare, namely, the values of autonomy, solidarity, and authenticity. The second aim is to provide a framework within which an alternative approach to self-tracking for health can be developed. I suggest that a practice-based approach, which studies how values are enacted in specific practices, can open the way for a new set of theoretical questions. In the last part of the paper, I sketch out how this can work by describing various enactments of autonomy, solidarity, and authenticity among self-trackers in the Quantified Self community. These examples show that shifting attention to practices can render visible alternative and sometimes unexpected enactments of values. Insofar as these may challenge both the promises and concerns in the debate on self-tracking for health, they can lay the groundwork for new conceptual interventions in future research.
This article foregrounds the ways in which members of the Quantified Self ascribe value and meaning to the data they generate in self-tracking practices. We argue that the widespread idea that what draws self-trackers to numerical data is its perceived power of truth and objectivity—a so-called “data fetishism”—is limiting. Using an ethnographic approach, we describe three ways in which self-trackers attribute meaning to their data-gathering practices that escape this data-fetishist critique: self-tracking as a practice of mindfulness, as a means of resistance against social norms, and as a communicative and narrative aid. In light of this active engagement with data, we suggest that it makes more sense to view these practitioners as “quantifying selves.” We also suggest that such fine-grained accounts of the appeal that data can have, beyond its allure of objectivity, are necessary if we are to achieve a fuller understanding of Big Data culture.
Since the outbreak of COVID-19, governments have turned their attention to digital contact tracing. In many countries, public debate has focused on the risks this technology poses to privacy, with advocates and experts sounding alarm bells about surveillance and mission creep reminiscent of the post-9/11 era. Yet, when Apple and Google launched their contact tracing API in April 2020, some of the world's leading privacy experts applauded this initiative for its privacy-preserving technical specifications. In an interesting twist, the tech giants came to be portrayed as greater champions of privacy than some democratic governments. This article proposes to view the Apple/Google API in terms of a broader phenomenon whereby tech corporations are encroaching into ever new spheres of social life. From this perspective, the (legitimate) advantage these actors have accrued in the sphere of the production of digital goods provides them with (illegitimate) access to the spheres of health and medicine, and, more worrisomely, to the sphere of politics. These sphere transgressions raise numerous risks that are not captured by the focus on privacy harms: a crowding out of essential spherical expertise, new dependencies on corporate actors for the delivery of essential public goods, the shaping of (global) public policy by non-representative private actors, and ultimately, the accumulation of decision-making power across multiple spheres. While privacy is certainly an important value, its centrality in the debate on digital contact tracing may blind us to these broader societal harms and unwittingly pave the way for ever more sphere transgressions.
Consumer-oriented mobile technologies offer new ways of capturing multidimensional health data, and are increasingly seen as facilitators of medical research. This has opened the way for large consumer tech companies, like Apple, Google, Amazon and Facebook, to enter the space of health research, offering new methods for collecting, storing and analyzing health data. While these developments are often portrayed as 'disrupting' research in beneficial ways, they also raise many ethical issues. These can be organized into three clusters: questions concerning the quality of research; privacy/informed consent; and new power asymmetries based on access to data and control over technological infrastructures. I argue that this last cluster, insofar as it may affect future research agendas, deserves more critical attention.
In recent years, all major consumer technology corporations have moved into the domain of health research. This 'Googlization of health research' ('GHR') raises the question of how the common good will be served in this research. As critical data scholars contend, such phenomena must be situated within the political economy of digital capitalism in order to foreground the question of public interest and the common good. Here, trends like GHR are framed within a double, incommensurable logic, where private gain and economic value are pitted against public good and societal value. While helpful for highlighting the exploitative potential of digital capitalism, this framing is limiting, insofar as it acknowledges only one conception of the common good. This article uses the analytical framework of modes of justification developed by Boltanski and Thévenot to identify a plurality of orders of worth and conceptualizations of the common good at work in GHR: not just the 'civic' (doing good for society) and 'market' (enhancing wealth creation) orders, but also an 'industrial' (increasing efficiency), a 'project' (innovation and experimentation), and what I call a 'vitalist' (proliferating life) order. Using promotional material of GHR initiatives and preliminary interviews with participants in GHR projects, I ask what moral orientations guide different actors in GHR. Engaging seriously with these different conceptions of the common good is paramount: first, in order to critically evaluate them and explicate what is at stake in the move towards GHR, and ultimately, in order to develop viable governance solutions that ensure strong 'civic' components.
Mobile applications are increasingly regarded as important tools for an integrated strategy of infection containment in post-lockdown societies around the globe. This paper discusses a number of questions that should be addressed when assessing the ethical challenges of mobile applications for digital contact tracing of COVID-19: Which safeguards should be designed into the technology? Who should access the data? What is a legitimate role for "Big Tech" companies in the development and implementation of these systems? How should cultural and behavioural issues be accounted for in the design of these apps? Should use of these apps be compulsory? What do transparency and ethical oversight mean in this context? We demonstrate that responses to these questions are complex and contingent and argue that if digital contact tracing is used, then it should be clear that this is on a trial basis and its use should be subject to independent monitoring and evaluation.
In the early months of 2020, the deadly Covid-19 disease spread rapidly around the world. In response, national and regional governments implemented a range of emergency lockdown measures, curtailing citizens’ movements and greatly limiting economic activity. More recently, as restrictions begin to be loosened or lifted entirely, the use of so-called contact tracing apps has figured prominently in many jurisdictions’ plans to reopen society. Critics have questioned the utility of such technologies on a number of fronts, both practical and ethical. However, little has been said about the ways in which the normative design choices of app developers, and the products that result therefrom, might contribute to ethical reflection and wider political debate. Drawing from scholarship in critical design and human–computer interaction, this paper examines the development of a QR code-based tracking app called Zwaai (‘Wave’ in Dutch), where its designers explicitly positioned the app as an alternative to the predominant Bluetooth and GPS-based approaches. Through analyzing these designers’ choices, this paper argues that QR code infrastructures can work to surface a set of ethical–political seams, two of which are discussed here—responsibilization and networked (im)permanence—that more ‘seamless’ protocols like Bluetooth actively aim to bypass, and which may go otherwise unnoticed by existing ethical frameworks.