The proliferation of Internet of Things (IoT) devices for consumer "smart" homes raises concerns about user privacy. We present a survey method based on the Contextual Integrity (CI) privacy framework that can quickly and efficiently discover privacy norms at scale. We apply the method to discover privacy norms in the smart home context, surveying 1,731 American adults on Amazon Mechanical Turk. For $2,800 and in less than six hours, we measured the acceptability of 3,840 information flows representing a combinatorial space of smart home devices sending consumer information to first- and third-party recipients under various conditions. Our results provide actionable recommendations for IoT device manufacturers, including design best practices and instructions for adopting our method for further research.

CCS Concepts: • Security and privacy → Human and societal aspects of security and privacy; Privacy protections; • Human-centered computing → Empirical studies in ubiquitous and mobile computing.

In this paper, we present a general, scalable survey method for discovering consumer privacy norms based on the Contextual Integrity (CI) privacy framework [44] (Section 3). CI is a well-established theory that defines privacy norms as the generally accepted appropriateness of specific information exchanges, or "information flows," in specific contexts. Information flows and their associated contexts can be described using five parameters: sender, recipient, subject, attribute, and transmission principle. This precise formulation makes it possible to thoroughly investigate the combinatorial space of contextual information flows and associated privacy norms with an automated, large-scale survey on a crowdsourcing platform. Our use of CI also ensures that the method is repeatable, both for the same types of devices over time and for entirely new classes of devices. The method we develop is effective for discovering privacy norms in general.
In this paper, we focus on applying the method to discover smart home privacy norms. We conducted a survey of 1,731 adults from the United States on the Amazon Mechanical Turk (MTurk) platform. The survey cost $2,800 and allowed us to query the acceptability of 3,840 information flows involving smart home devices in less than six hours and to identify the associated privacy norms (Section 4). Our results provide insightful observations and actionable recommendations for IoT device manufacturers, regulators, and consumer advocates (Section 5).

Device manufacturers can use our survey method to perform their own research on how consumers might view the use of data that their products collect. We designed the method to be easy to customize with new information flows and contexts, allowing manufacturers to discover privacy norms relevant to specific products, including ones we have not studied in this paper. The results will indicate whether existing or proposed devices may violate established privacy norms, providing an opportunity to preempt negative user feedback, public relati...
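The five-parameter CI formulation lends itself to exhaustive enumeration: taking the Cartesian product of candidate values for each parameter yields the combinatorial space of information flows to survey. A minimal sketch of that generation step (the parameter values below are illustrative placeholders, not the paper's actual survey vocabulary):

```python
from itertools import product

# Illustrative CI parameter values; the paper's survey uses its own vocabulary.
senders = ["sleep monitor", "security camera", "door lock"]
attributes = ["sleep habits", "video footage", "lock status"]
recipients = ["its manufacturer", "an internet service provider", "the government"]
principles = [
    "if the information is anonymized",
    "if the owner has given consent",
    None,  # flow stated without a transmission principle
]

def flows(senders, attributes, recipients, principles):
    """Enumerate candidate survey statements over the combinatorial space.

    The subject is fixed as the device's owner in this sketch; a full
    survey would vary it as a fifth parameter.
    """
    for s, a, r, p in product(senders, attributes, recipients, principles):
        clause = f" {p}" if p else ""
        yield f"A {s} records its owner's {a} and sends it to {r}{clause}."

all_flows = list(flows(senders, attributes, recipients, principles))
print(len(all_flows))  # 3 * 3 * 3 * 3 = 81 candidate survey questions
```

Each generated statement becomes one acceptability question; scaling the value lists is what produces spaces like the 3,840 flows surveyed here.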
Privacy expectations during disasters differ significantly from nonemergency situations. This paper explores the actual privacy practices of popular disaster apps, highlighting location information flows. Our empirical study compares content analysis of privacy policies and government agency policies, structured by the contextual integrity framework, with static and dynamic app analysis documenting the personal data sent by 15 apps. We identify substantive gaps between regulation and guidance, privacy policies, and information flows, resulting from ambiguities and exploitation of exemptions. Results also indicate gaps between governance and practice, including the following: (a) Many apps ignore self‐defined policies; (b) while some policies state they “might” access location data under certain conditions, those conditions are not met as 12 apps included in our study capture location immediately upon initial launch under default settings; and (c) not all third‐party data recipients are identified in policy, including instances that violate expectations of trusted third parties.
Designing programmable privacy logic frameworks that correspond to social, ethical, and legal norms has been a fundamentally hard problem. Contextual integrity (CI) (Nissenbaum, 2010) offers a model for conceptualizing privacy that is able to bridge technical design with ethical, legal, and policy approaches. While CI is capable of capturing the various components of contextual privacy in theory, it is challenging to discover and formally express these norms in operational terms. In the following, we propose a crowdsourcing method for the automated discovery of contextual norms. To evaluate the effectiveness and scalability of our approach, we conducted an extensive survey on Amazon's Mechanical Turk (AMT) with more than 450 participants and 1,400 questions. The paper has three main takeaways: First, we demonstrate the ability to generate survey questions corresponding to privacy norms within any context. Second, we show that crowdsourcing enables the discovery of norms from these questions with strong majoritarian consensus among users. Finally, we demonstrate how the norms thus discovered can be encoded into a formal logic to automatically verify their consistency.
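The last takeaway, encoding discovered norms so that their consistency can be checked automatically, can be illustrated with a toy sketch. It is not the paper's formal logic; it simply assumes each norm is a (flow, verdict) pair and flags flows that crowdworkers judged both acceptable and unacceptable:

```python
# Toy consistency check over crowdsourced norm verdicts.
# A flow is a (sender, attribute, recipient, transmission_principle) tuple.

def find_contradictions(judgments):
    """Return flows that received conflicting acceptability verdicts."""
    seen = {}
    conflicts = []
    for flow, verdict in judgments:
        if flow in seen and seen[flow] != verdict:
            conflicts.append(flow)
        seen[flow] = verdict
    return conflicts

norms = [
    (("doctor", "medical record", "insurer", "with consent"), "acceptable"),
    (("doctor", "medical record", "insurer", None), "unacceptable"),
]

# Consistent: the two norms differ in their transmission principle,
# so they describe different flows and do not contradict each other.
print(find_contradictions(norms))  # []
```

A real encoding would also capture entailments between transmission principles (e.g., a flow acceptable without consent should remain acceptable with consent), which is where a formal logic earns its keep over this pairwise check.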
This paper presents a case for the adoption of an information-centric architecture for a global disaster management system. Drawing from a case study of the 2010/2011 Queensland floods, we describe the challenges in providing every participant with relevant and actionable information. We use various examples to argue for a more flexible information dissemination framework which is designed from the ground up to minimise the effort needed to fix the unexpected and unavoidable information acquisition, quality, and dissemination challenges posed by any real disaster.
We present a method for analyzing privacy policies using the framework of contextual integrity (CI). This method allows for the systematized detection of issues with privacy policy statements that hinder readers’ ability to understand and evaluate company data collection practices. These issues include missing contextual details, vague language, and overwhelming possible interpretations of described information transfers. We demonstrate this method in two different settings. First, we compare versions of Facebook’s privacy policy from before and after the Cambridge Analytica scandal. Our analysis indicates that the updated policy still contains fundamental ambiguities that limit readers’ comprehension of Facebook’s data collection practices. Second, we successfully crowdsourced CI annotations of 48 excerpts of privacy policies from 17 companies with 141 crowdworkers. This indicates that regular users are able to reliably identify contextual information in privacy policy statements and that crowdsourcing can help scale our CI analysis method to a larger number of privacy policy statements.
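Crowdsourced CI annotation amounts to tagging a policy statement with the five flow parameters and recording which ones the statement leaves unspecified; the gaps themselves are findings (missing contextual details). A hypothetical sketch of that data model, not the authors' actual annotation schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CIAnnotation:
    """CI parameters extracted from one privacy policy statement.

    None marks a parameter the statement leaves unspecified, which
    is itself a finding: a missing contextual detail.
    """
    sender: Optional[str] = None
    recipient: Optional[str] = None
    subject: Optional[str] = None
    attribute: Optional[str] = None
    transmission_principle: Optional[str] = None

    def missing(self) -> List[str]:
        return [name for name, value in vars(self).items() if value is None]

# Annotating: "We may share your information with our partners."
ann = CIAnnotation(sender="we", recipient="our partners",
                   subject="you", attribute="your information")
print(ann.missing())  # ['transmission_principle']
```

Aggregating `missing()` across many annotated excerpts gives a simple measure of how often a policy omits, say, the conditions under which data is shared.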