2014
DOI: 10.1007/978-3-319-11212-1_8
What’s the Gist? Privacy-Preserving Aggregation of User Profiles

Abstract: Over the past few years, online service providers have started gathering increasing amounts of personal information to build user profiles and monetize them with advertisers and data brokers. Users have little control over what information is processed and are often left with an all-or-nothing decision between receiving free services or refusing to be profiled. This paper explores an alternative approach where users only disclose an aggregate model (the "gist") of their data. We aim to preserve data utility and …

Cited by 21 publications (13 citation statements)
References 27 publications
“…Locasto et al. [25] propose privacy-preserving data aggregation using Bloom filters, which, while constituting a one-way data structure, are vulnerable to simple guessing attacks. Secure distributed data aggregation is also discussed in [5,7]. While aggregation can help compute statistics, it only identifies the most prolific attack sources and yields global models.…”
Section: Related Work (mentioning, confidence: 99%)
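The guessing attack on Bloom filters mentioned in the statement above can be illustrated with a minimal sketch: anyone holding the filter can test membership, so an attacker who can enumerate a small candidate space (here hypothetical private IPv4 addresses) simply probes every candidate. The `BloomFilter` class, parameters, and addresses below are illustrative assumptions, not taken from the cited works.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: m bits, k hash functions derived from SHA-256."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _hashes(self, item):
        # Derive k indices by salting the item with the hash-function index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for idx in self._hashes(item):
            self.bits[idx] = True

    def maybe_contains(self, item):
        # True if the item may have been inserted (false positives possible).
        return all(self.bits[idx] for idx in self._hashes(item))

# A supposedly "one-way" filter of private items.
bf = BloomFilter()
for item in ["10.0.0.5", "192.168.1.77"]:
    bf.add(item)

# Guessing attack: enumerate a small candidate space and probe membership.
candidates = [f"10.0.0.{i}" for i in range(256)]
recovered = [c for c in candidates if bf.maybe_contains(c)]
```

Hashing is one-way, but it does not hide items drawn from a low-entropy domain: the attacker recovers `"10.0.0.5"` from only 256 probes.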
“…Privacy-preserving techniques in the context of data analytics have a long history. Some recent papers propose new approaches, which allow users to protect their privacy by selling aggregates of their data [25], [26]. The more classical framework of ε-differential privacy [27], [28] assumes that data are perturbed after an analysis has been conducted on unmodified inputs.…”
Section: Related Work (mentioning, confidence: 99%)
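The ε-differential privacy framework referenced above is commonly instantiated with the Laplace mechanism: run the analysis on the unmodified inputs, then perturb the result with noise scaled to the query's sensitivity. A minimal sketch for a counting query (all function names and data are illustrative assumptions):

```python
import random

def laplace_noise(scale):
    # Laplace(0, scale) is the difference of two i.i.d. Exponential draws
    # with mean `scale`.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one user
    # changes the count by at most 1, so Laplace noise with scale
    # 1/epsilon yields epsilon-differential privacy.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [17, 23, 35, 16, 41, 29]
noisy = dp_count(ages, lambda a: a >= 18, epsilon=1.0)
```

Smaller ε means larger noise and stronger privacy; the analyst sees only the perturbed count, never the raw records' contribution.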
“…In general, we are interested in scenarios where providers need to train models based on aggregate statistics gathered from many data sources, and our goal is to do so without disclosing fine-grained information about single sources. In theory, we could turn to existing cryptographic protocols for privacy-friendly aggregation: using homomorphic encryption or secret sharing, untrusted aggregators can collect encrypted readings but only decrypt the sum [9,13,15,33,47,61]. However, these tools require each data source to perform a number of cryptographic operations, and transmit a number of ciphertexts, linear in the size of their input, which makes them impractical when sources contribute large streams.…”
Section: Introduction (mentioning, confidence: 99%)
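The secret-sharing flavor of privacy-friendly aggregation described above can be sketched with additive shares: each source splits its private reading into random shares that sum to the reading, sends one share to each aggregator, and the aggregators can jointly reconstruct only the total, never an individual value. This is a minimal illustration of the general technique, not the protocol of any specific cited paper; the modulus, readings, and party counts are assumptions.

```python
import random

MOD = 2**61 - 1  # prime modulus; all arithmetic is done mod MOD

def share(value, n):
    # Split `value` into n additive shares that sum to `value` mod MOD.
    # Any n-1 shares are uniformly random and reveal nothing about `value`.
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Each of three users holds one private reading.
readings = [12, 7, 30]
n_aggregators = 3

# User i sends one share to each aggregator; each aggregator only ever
# sees a sum of uniformly random values.
per_aggregator = [0] * n_aggregators
for r in readings:
    for j, s in enumerate(share(r, n_aggregators)):
        per_aggregator[j] = (per_aggregator[j] + s) % MOD

# Combining the aggregators' partial sums reveals only the total.
total = sum(per_aggregator) % MOD  # equals sum(readings) = 49
```

Note the cost the quoted statement objects to: each source generates and transmits shares linear in the size of its input, which is exactly what becomes impractical for large streams.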