2014
DOI: 10.1017/cbo9781107590205
Privacy, Big Data, and the Public Good

Abstract: Massive amounts of data on human beings can now be analyzed. Pragmatic purposes abound, including selling goods and services, winning political campaigns, and identifying possible terrorists. Yet 'big data' can also be harnessed to serve the public good: scientists can use big data to do research that improves the lives of human beings, improves government services, and reduces taxpayer costs. In order to achieve this goal, researchers must have access to this data - raising important privacy questions. What a…

Cited by 110 publications
(35 citation statements)
References 23 publications
“…These, of course, are limited examples, but they represent the tip of an iceberg, which is a fundamental contradiction between anonymization tools and big data. As Barocas and Nissenbaum (2014) duly point out, comprehensiveness of databases and robust inference techniques available to data collectors drastically widen the space of privacy violation not covered by anonymization techniques. In the absence of a market-wide adoption for the new types of data analytics such as sMPC, mere obfuscation of collected private data hardly addresses IoT privacy issues of inferred data (Durante 2017).…”
Section: Anonymization and Data Markets
confidence: 99%
“…At the state level, the coverage is mostly between 10 and 25%, except for states in the Midwest where the coverage is mostly in the lower range of less than 10%. The mean and standard deviation of the national FD-to-MRTS coverage rate is 0.164 (i.e., 16.4%) and 0.041, respectively. Excluding the earlier period when new platforms were introduced in this industry (July 2012 through February 2014), the coverage consistently hovered around 18% (a mean of 0.184 and a standard deviation of 0.007).…”
Section: Displays a Visualization of Suppression Rates and Coverage R…
confidence: 99%
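The coverage-rate statistics quoted above (a national mean of 0.184 and standard deviation of 0.007 over the stable period) can be reproduced mechanically. A minimal sketch, using hypothetical monthly figures since the actual FD and MRTS series are not reproduced here:

```python
# Minimal sketch of a "coverage rate" computation: third-party (FD)
# sales divided by the official benchmark (MRTS) estimate, month by
# month, then summarized with a mean and standard deviation.
# The sales figures below are illustrative, not the actual data.
from statistics import mean, stdev

# Hypothetical monthly sales, in millions of dollars.
fd_sales = [41.0, 45.5, 43.2, 47.8, 44.1, 46.3]
mrts_sales = [230.0, 245.0, 240.0, 250.0, 238.0, 249.0]

# Monthly FD-to-MRTS coverage rates.
coverage = [fd / mrts for fd, mrts in zip(fd_sales, mrts_sales)]

print(f"mean coverage: {mean(coverage):.3f}")   # ~0.184 with these inputs
print(f"std dev:       {stdev(coverage):.3f}")
```

With these illustrative inputs the mean lands near 18%, matching the range the quoted passage reports for the post-2014 period; the real computation would use the full monthly FD and MRTS series.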
“…Understanding the uncertain and imprecise nature of these data-especially with respect to representativeness-is critically important when considering their use with official statistics. In fact, uncertain veracity is a primary characteristic of what is termed secondary, found, organic, or nonprobability sample data in survey research [2,13,14,16,17]. Third-party electronic payment data are collected for purposes not related to producing official statistics yet contain relevant information for measuring the retail trade economy.…”
Section: Introduction
confidence: 99%
“…This may relate to bodily practices, for example unobserved presence in personal spaces, or to information generated based on individuals' digital traces (see e.g. Lane et al. 2014; Beresford and Stajano 2003).…”
Section: Privacy and Security
confidence: 99%