2017
DOI: 10.3390/e19120656
Context-Aware Generative Adversarial Privacy

Abstract: Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent…
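
For orientation, the privacy-utility tradeoff described above is commonly cast as a constrained minimax game between a privatizer and an adversary. The following formulation is a sketch consistent with the generative adversarial privacy (GAP) setting; the symbols g (privatizer), h (adversary), ℓ (the adversary's inference loss), d (a distortion measure), and D (a distortion budget) are notational assumptions, not taken verbatim from the paper:

```latex
% Sketch of the GAP constrained minimax game (notation assumed):
% the privatizer g releases \hat{X} = g(X) to hide a private variable Y,
% while the adversary h tries to infer Y from the released data.
\max_{g} \; \min_{h} \; \mathbb{E}\!\left[ \ell\!\left( h(g(X)),\, Y \right) \right]
\quad \text{subject to} \quad
\mathbb{E}\!\left[ d\!\left( g(X),\, X \right) \right] \le D
```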

Cited by 131 publications (123 citation statements)
References 82 publications

“…Differential privacy has been widely used to quantify privacy issues related to databases [2]. More recently, generative adversarial privacy has been proposed [3]. In both cases, if users are present in multiple databases, knowledge of alignment is required to fully apply these frameworks.…”
Section: The Database Deanonymization Problem (mentioning)
confidence: 99%
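
For context, the differential-privacy guarantee invoked in [2] is typically obtained by adding noise calibrated to a query's sensitivity. Below is a minimal Python sketch of the standard Laplace mechanism; the function name and the example counting query are illustrative, not drawn from the cited works:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# A counting query has sensitivity 1: adding or removing one record
# changes the count by at most 1.
records = [17, 23, 42, 8, 15]
noisy_count = laplace_mechanism(len(records), sensitivity=1.0, epsilon=0.5)
print(f"noisy count: {noisy_count:.2f}")
```
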
“…Then, they approximately solve the optimization problem using the cluster centroids. Huang et al [58] proposed to use generative adversarial networks to approximately solve the game-theoretic optimization problems. However, these approximate solutions do not have formal guarantees on the utility loss of the public data.…”
Section: Defenses (mentioning)
confidence: 99%
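
To make concrete how a GAN-style procedure can approximate such a game-theoretic optimization [58], here is a minimal alternating-gradient sketch in PyTorch. The toy data, network architectures, and the fixed distortion weight are all assumptions made for illustration; the weight plays the role of the distortion budget in a penalized form:

```python
import torch
import torch.nn as nn

# Toy data: public features X and a binary private attribute Y (assumed setup).
torch.manual_seed(0)
X = torch.randn(256, 8)
Y = (X[:, 0] > 0).float().unsqueeze(1)

privatizer = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
adversary = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

opt_p = torch.optim.Adam(privatizer.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
distortion_weight = 1.0  # assumed tradeoff weight, standing in for the budget

for step in range(500):
    # Adversary step: minimize its loss at inferring Y from the privatized data.
    x_hat = privatizer(X).detach()
    loss_a = bce(adversary(x_hat), Y)
    opt_a.zero_grad()
    loss_a.backward()
    opt_a.step()

    # Privatizer step: maximize the adversary's loss while penalizing distortion.
    x_hat = privatizer(X)
    loss_p = -bce(adversary(x_hat), Y) + distortion_weight * ((x_hat - X) ** 2).mean()
    opt_p.zero_grad()
    loss_p.backward()
    opt_p.step()
```
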
“…removing text from images [7]. An optimal privacy mechanism can be formulated as a game between two players, a privatizer and an adversary, solved with an iterative minimax algorithm [11]. Moreover, the service provider can share a feature extractor trained on an initial training set, which the user re-trains on their own data and sends back to the service provider [24,30].…”
Section: Related Work (mentioning)
confidence: 99%