2020
DOI: 10.1609/aaai.v34i07.6930
Adversarial Learning of Privacy-Preserving and Task-Oriented Representations

Abstract: Data privacy has emerged as an important issue as data-driven deep learning has become an essential component of modern machine learning systems. For instance, machine learning systems face a potential privacy risk from model inversion attacks, whose goal is to reconstruct the input data from the latent representation of a deep network. Our work aims at learning a privacy-preserving and task-oriented representation to defend against such model inversion attacks. Specifically, we propose an adversar…
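The adversarial idea in the abstract can be illustrated with a deliberately tiny, linear stand-in (all names, dimensions, and hyperparameters here are our own, not the paper's): an encoder is trained so that a task head still predicts the label from the latent code, while a simulated attacker's decoder, refit at every step, reconstructs the input from that code as poorly as possible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the task target is just the first input feature.
d, k, n = 8, 4, 512                 # input dim, latent dim, sample count
X = rng.normal(size=(n, d))
y = X[:, 0].copy()

E = rng.normal(size=(k, d)) * 0.1   # linear "encoder" (stand-in for a deep net)
lam, lr = 0.5, 1e-3                 # privacy weight and encoder learning rate

for _ in range(200):
    Z = X @ E.T                                   # latent codes z = E x
    D, *_ = np.linalg.lstsq(Z, X, rcond=None)     # attacker's best linear decoder
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)     # task head on the latents
    task_res = Z @ w - y                          # (n,)  task residuals
    rec_res = X - Z @ D                           # (n, d) reconstruction residuals
    g_task = 2.0 / n * np.outer(w, task_res @ X)  # d(task MSE)/dE
    g_rec = -2.0 / n * D @ rec_res.T @ X          # d(reconstruction MSE)/dE
    # Encoder minimizes the task loss while MAXIMIZING reconstruction error.
    E -= lr * (g_task - lam * g_rec)

# Evaluate: refit the task head and the attacker on the final encoder.
Z = X @ E.T
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
D, *_ = np.linalg.lstsq(Z, X, rcond=None)
task_mse = float(np.mean((Z @ w - y) ** 2))
rec_mse = float(np.mean(np.sum((X - Z @ D) ** 2, axis=1)))
print(f"task MSE: {task_mse:.3f}  attacker reconstruction MSE: {rec_mse:.3f}")
```

This is only a sketch of the min-max objective; the paper itself works with deep encoders and learned adversarial reconstructors rather than closed-form linear fits.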

Cited by 38 publications (37 citation statements)
References 24 publications
“…Overall, the results of this study show that reconstruction of the original images used for training of CNN models is possible, but is accompanied by some amount of blurring. This finding is also in line with the results from model inversion attack studies in computer vision [ 8 ]. The results of this study also suggest that the reconstructions from the U-Net are better than those for the SegNet.…”
Section: Discussion (supporting)
confidence: 90%
“…The inversion attack scenario used in this work was first described for computer vision applications [ 7 , 8 ] and is based on the following assumptions that hold true for most deep learning models that are publicly available and follow an encoder-decoder architecture: (1) the attacker can access the latent space representation of arbitrary input images; (2) the attacker knows the encoder’s architecture that is used to generate the latent space information of the images.…”
Section: Materials and Methods (mentioning)
confidence: 99%
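The two assumptions quoted above can be sketched concretely (all names and dimensions here are hypothetical, with a fixed linear map standing in for the victim's deep encoder): the attacker queries latent codes for inputs of her own choosing, fits a least-squares decoder on those (latent, input) pairs, and then inverts the latent code of a private input she never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical victim encoder: a fixed linear map standing in for the encoder
# half of a public encoder-decoder model. Per the quoted assumptions, the
# attacker knows this architecture and can query latent codes z = E @ x.
d, k = 16, 8                        # input and latent dimensions
E = rng.normal(size=(k, d))

def encode(x):
    """Latent representation the attacker can observe for any input."""
    return E @ x

# Attack: feed attacker-chosen inputs through the encoder, collect
# (latent, input) pairs, and fit an inversion map D minimizing ||x - z D||^2.
X = rng.normal(size=(1000, d))
Z = X @ E.T
D, *_ = np.linalg.lstsq(Z, X, rcond=None)

# Reconstruct a held-out "private" input from its latent code alone.
x_priv = rng.normal(size=d)
x_hat = encode(x_priv) @ D
err = np.linalg.norm(x_priv - x_hat) / np.linalg.norm(x_priv)
print(f"relative reconstruction error: {err:.3f}")
```

Because the latent space is smaller than the input (k < d), the inversion is lossy, which matches the blurring the citing study reports for reconstructions from CNN features.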