2020
DOI: 10.1016/j.jpdc.2019.09.010

Variational approach for privacy funnel optimization on continuous data

Cited by 5 publications (4 citation statements)
References 7 publications
“…Most of the state-of-the-art techniques for fairness penalty computations depend on mutual information-related measures [31,28,26]. These information-theoretic methods achieve fairness at the expense of data quality and utility.…”
Section: Fairness Objective
confidence: 99%
“…Privacy-preserving deep learning [21,4,8] involves learning representations that incorporate features from the data relevant to the given task and ignore sensitive information (such as the identity of a person). The authors in [23] propose a simple variational approach for privacy-preserving representation learning. In contrast to existing privacy preservation works, the objective of the RCRMR-LD problem setting is to achieve class-level forgetting, i.e., if a class is declared as private/restricted, then all information about this class should be removed from the model trained on it, without affecting its ability to identify the remaining classes.…”
Section: Related Work
confidence: 99%
“…For this reason, approaches that take advantage of the scalability of deep learning have emerged. For instance, in [9] they learn the representations through adversarial learning, while in the privacy-preserving variational autoencoder (PPVAE) [17] and the unsupervised version of the variational fair autoencoder (VFAE) [27] they learn such representations with variational inference.…”
Section: Privacy
confidence: 99%
“…Finally, the resulting approaches for privacy and fairness can be implemented with little modification to common algorithms for representation learning like the variational autoencoder (VAE) [15], the β-VAE [16], the variational information bottleneck (VIB) [17], or the nonlinear information bottleneck [18]. Therefore, it facilitates the incorporation of private and fair representations in current applications (see the supplementary material B for a guide on how to modify these algorithms).…”
Section: Introduction
confidence: 99%
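The last statement above notes that such privacy and fairness objectives can be implemented with little modification to standard representation-learning algorithms like the VAE or β-VAE. As a minimal sketch only (not the cited papers' exact methods, and using hypothetical function names), the common pattern is a reconstruction term plus a weighted KL term, where the weight `beta` trades utility against compression of the representation:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(x, x_recon, mu, logvar, beta=1.0):
    """Mean-squared reconstruction error plus a beta-weighted KL penalty.

    beta = 1 recovers the standard VAE bound; beta > 1 compresses the
    representation harder, the knob that privacy/fairness variants adjust.
    """
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    return np.mean(recon + beta * gaussian_kl(mu, logvar))
```

Swapping the KL weight for a penalty on information about a sensitive attribute (as in the mutual-information-based fairness penalties quoted earlier) changes only this loss, leaving the encoder/decoder architecture untouched.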