ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9053379
Learning Semi-Supervised Anonymized Representations by Mutual Information

Abstract: This paper addresses the problem of removing a given piece of private information from a set of data (here images), while still allowing other utilities on the processed data. This is achieved by concurrently training a GAN-like discriminator and an autoencoder. The optimization of the resulting structure involves a novel surrogate of the misclassification probability of the information to be removed. Several examples are given, demonstrating that a good level of privacy can be obtained on images at the cost of the introdu…
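The paper's actual method (an autoencoder and a GAN-like discriminator trained on images with a surrogate misclassification loss) is more elaborate than anything shown here; as a rough, illustrative sketch of the underlying minimax idea only, the toy NumPy example below trains a linear "encoder" to keep a useful feature while an adversarial logistic classifier tries to recover a private bit. All names, the toy data, and the learning rates are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the private bit s is the sign of x[0]; the useful signal is x[1].
# Goal: an encoding z = X @ W that preserves x[1] (utility) but hides s (privacy).
n = 200
X = rng.normal(size=(n, 2))
s = (X[:, 0] > 0).astype(float)         # private attribute to remove

W = rng.normal(scale=0.1, size=(2, 2))  # "encoder" parameters
a = np.zeros(2)                         # adversary: logistic regression on z

def sigmoid(t):
    # clip logits to avoid overflow warnings in exp
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))

lr, lam = 0.1, 1.0
for _ in range(300):
    Z = X @ W
    # Adversary step: fit s from z (one gradient step on the log-loss).
    p = sigmoid(Z @ a)
    a -= lr * Z.T @ (p - s) / n
    # Encoder step: minimize reconstruction error on the useful feature
    # while *maximizing* the adversary's log-loss (the minimax objective).
    p = sigmoid(Z @ a)
    grad_util = 2 * X.T @ (Z[:, 1] - X[:, 1])[:, None] @ np.array([[0.0, 1.0]]) / n
    grad_priv = X.T @ np.outer(p - s, a) / n
    W -= lr * (grad_util - lam * grad_priv)

Z = X @ W
acc = np.mean((sigmoid(Z @ a) > 0.5) == (s > 0.5))
print(round(acc, 2))  # adversary's accuracy on the hidden private bit
```

The weighting factor `lam` plays the role of the privacy/utility trade-off knob: larger values push the encoding further from the private attribute at a greater cost to reconstruction quality.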

Cited by 5 publications (10 citation statements); References 9 publications.
“…Hence, empirical, task-dependent privacy checks are used to provide a holistic measure of privacy. Recent work leverages adversarial networks to sanitize input images and adopts similar DL-based discriminators for privacy examination (Edwards and Storkey 2016; Raval, Machanavajjhala, and Cox 2017; Pittaluga, Koppal, and Chakrabarti 2019; Chen, Konrad, and Ishwar 2018; Wu et al. 2018; Tseng and Wu 2020; Feutry, Piantanida, and Duhamel 2020; Maximov, Elezi, and Leal-Taixé 2020; Xiong et al. 2019). These works employ adversarial training to jointly optimize both privacy and utility objectives.…”
Section: Related Work
confidence: 99%
“…These so-called "privacy-preserving GANs" (PP-GANs) can sanitize images of human faces such that only their facial expressions are preserved while other identifying information is replaced (Chen, Konrad, and Ishwar 2018). Other examples include: removing location-relevant information from vehicular camera data (Xiong et al. 2019), obfuscating the identity of the person who produced a handwriting sample (Feutry, Piantanida, and Duhamel 2020), and removing barcodes from images (Raval, Machanavajjhala, and Cox 2017). Given the expertise required to train such models, one expects that users will need to acquire a privacy-preservation tool from a third party or outsource GAN training, so proper privacy evaluation is paramount.…”
Section: Introduction
confidence: 99%
“…In this context, anonymization arises as a tool to mitigate the risks of obtaining and massively processing personal data [4]. We propose GAN-based anonymization [5] of private health data, so that a seedbed obtained from the training data not only captures information from the original data but also generates new data with similar behaviour.…”
Section: Introduction
confidence: 99%