This paper explores and rehabilitates the value of decisional privacy as a conceptual tool, complementary to informational privacy, for critiquing personalized choice architectures employed by self-tracking technologies. Self-tracking technologies are promoted and used as a means to self-improvement. Based on large aggregates of personal data and the data of other users, self-tracking technologies offer personalized feedback that nudges the user into behavioral change. The real-time personalization of choice architectures requires continuous surveillance and is a very powerful technique, recently termed "hypernudging." While users celebrate the increased personalization of their coaching devices, "hypernudging" technologies raise concerns about manipulation. This paper addresses that intuition by claiming that decisional privacy is at stake. It thus counters the trend to focus solely on informational privacy when evaluating information and communication technologies. It proposes that decisional privacy and informational privacy are often part of a mutually reinforcing dynamic. Hypernudging is used as a key example to illustrate that the two dimensions should not be treated separately. Hypernudging self-tracking technologies compromise autonomy because they violate informational and decisional privacy. In order to effectively judge whether technologies that use hypernudges empower users, we need both privacy dimensions as conceptual tools.
This paper critically engages with new self-tracking technologies. In particular, it focuses on a conceptual tension between the idea that disclosing personal information increases one's autonomy and the idea that informational privacy is a condition for autonomous personhood. I argue that while self-tracking may sometimes prove to be an adequate method to shed light on particular aspects of oneself and can be used to strengthen one's autonomy, self-tracking technologies often cancel out these benefits by exposing too much about oneself to an unspecified audience, thus undermining the informational privacy boundaries necessary for living an autonomous life.
This research statement presents a roadmap for the ethical evaluation of contact tracing apps. Assuming the possible development of an effective and secure contact tracing app, this roadmap explores three ethical concerns (privacy, data monopolists, and coercion) based on three scenarios. The first scenario envisions and critically evaluates an app that is built on the conceptualization of privacy as anonymity and a mere individual right rather than a social value. The second scenario sketches and critically discusses an app that adequately addresses privacy concerns but is facilitated by data monopolists such as Google and Apple. The final scenario discusses the coerced installation and use of a privacy-friendly, independently developed contact tracing app. The main worry is coercion through societal exclusion and limited societal participation. The statement concludes with three suggestions for designing an ethical contact tracing app and a research agenda.
Mobile applications for digital contact tracing have been developed and introduced around the world in response to the COVID-19 pandemic. Proposed as a tool to support 'traditional' forms of contact-tracing carried out to monitor contagion, these apps have triggered an intense debate with respect to their legal and ethical permissibility, social desirability and general feasibility. Based on a large-scale study including qualitative data from 349 interviews conducted in nine European countries