2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2019.00018
DP-CGAN: Differentially Private Synthetic Data and Label Generation

Abstract: Generative Adversarial Networks (GANs) are one of the well-known models to generate synthetic data including images, especially for research communities that cannot use original sensitive datasets because they are not publicly accessible. One of the main challenges in this area is to preserve the privacy of individuals who participate in the training of the GAN models. To address this challenge, we introduce a Differentially Private Conditional GAN (DP-CGAN) training framework based on a new clipping and pertu…
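The "clipping and perturbation" mechanism the abstract refers to follows the general DP-SGD pattern: clip each example's gradient to a fixed norm, average, and add calibrated Gaussian noise before the optimizer step. Below is a minimal NumPy sketch of that pattern under illustrative parameters (`clip_norm`, `noise_multiplier` are assumed names, not taken from the paper); it is a generic DP-SGD-style sanitizer, not DP-CGAN's exact procedure.

```python
import numpy as np

def dp_sanitize_gradients(per_example_grads, clip_norm=1.0,
                          noise_multiplier=1.1, rng=None):
    """DP-SGD-style gradient sanitization (illustrative sketch).

    1. Clip each per-example gradient to L2 norm <= clip_norm.
    2. Average the clipped gradients.
    3. Add Gaussian noise scaled by noise_multiplier * clip_norm / batch_size.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
```

In a conditional GAN only the discriminator touches real (sensitive) data, so sanitizing its gradients this way suffices; the generator inherits privacy via post-processing.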

Cited by 141 publications (125 citation statements). References 42 publications (83 reference statements).
“…Reliable measurements need to be developed in the future to assure complete anonymity of the source individuals given the released AGs. In particular, we will investigate whether the differential privacy framework is performant in the context of large population genomics datasets [50,51].…”
Section: Discussion
confidence: 99%
“…However, the model has to sacrifice a lot in terms of sample quality and diversity of synthetic images, making them not particularly useful for practical applications. For the specific task of generating class specific images, [23] introduces a differentially private extension to conditional GAN [24]. This can improve the downstream utility for a limited set of classification tasks.…”
Section: Private GANs
confidence: 99%
“…Our results in Theorem 4.1 theoretically show that the attack is possible in general. During the learning of the adversarial discriminator, injecting predefined noise is known to be effective to defend such attacks [41]. Meanwhile, users could quit or frequently opt-out the federated communication when the privacy budget (quantified by noise and Differential Privacy metric [7]) is low.…”
Section: Privacy Risks From Malicious Fade Users
confidence: 99%