Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3461702.3462533

On the Privacy Risks of Model Explanations

Cited by 67 publications (99 citation statements)
References 10 publications
“…Providing adequate privacy preservation for EI applications is an area with open research challenges. To preserve the privacy of clients' data, different enabling technologies are utilised, with or without cryptographic techniques, perturbation techniques, and anonymisation techniques [227,255]. On the one hand, these technologies and techniques provide a means to better safeguard clients' data; on the other hand, they struggle to maintain the effectiveness (accuracy, in the case of classification problems) of the AI model [7,65,105].…”
Section: Privacy-Preservation in Edge-AI
confidence: 99%
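The perturbation techniques this excerpt mentions are often instantiated as calibrated noise addition. Below is a minimal illustrative sketch, assuming Laplace noise with a hypothetical per-feature sensitivity; the function name, parameter defaults, and data are placeholders, not taken from the cited works.

```python
import numpy as np

def perturb_features(X, epsilon=1.0, sensitivity=1.0, rng=None):
    """Add Laplace noise to each feature before release.

    epsilon:     privacy budget; smaller values mean more noise and
                 stronger privacy, at the cost of model accuracy.
    sensitivity: assumed L1 sensitivity per feature (a placeholder here;
                 in practice it must be derived from the data's range).
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return X + rng.laplace(loc=0.0, scale=scale, size=X.shape)

# Shrinking epsilon strengthens privacy but degrades the features the
# model trains on -- the accuracy trade-off the excerpt describes.
X = np.random.default_rng(0).random((100, 5))
X_private = perturb_features(X, epsilon=0.5)
```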
“…The goal of building explainability is to provide a global understanding of how the FL server makes decisions and how those decisions impact each client's interests, in order to build trust between the two parties. However, research on explainability in FL must be framed within the context of privacy preservation so as not to conflict with the primary goal of FL [48].…”
Section: Trust Building Through Explainability
confidence: 99%
“…In addition to classification and generative models, MIAs have been investigated in various domains, including embedding models [91], regression models [28], sequence-to-sequence models [34,93], image segmentation [31,86], transfer learning [11,58,125], algorithmic fairness [10], model explanations [88,89], adversarial machine learning [95,96], and graph neural networks [30,74]. Song and Raghunathan [91] investigate membership risks in embedding models and demonstrate that a simple threshold attack can achieve a 30% improvement in attack accuracy over random guessing on Word2Vec [67], FastText [7], GloVe [76], LSTM [35], and Transformer [106] models.…”
Section: Membership Inference Attacks On Different Domains
confidence: 99%
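The "simple threshold attack" credited to Song and Raghunathan can be illustrated, in hedged form, with a loss-thresholding variant: predict "member" when the model's per-example loss falls below a threshold, since models typically fit training members better than unseen points. The calibration helper below is an idealised assumption; real attacks approximate the member/non-member loss distributions with shadow models.

```python
import numpy as np

def threshold_mia(losses, threshold):
    """Predict membership: True where the per-example loss is below the
    threshold (training members are usually fit better than non-members)."""
    return losses < threshold

def calibrate_threshold(member_losses, nonmember_losses):
    """Idealised calibration: sweep candidate thresholds and keep the one
    that best separates known member/non-member losses. In a real attack
    these distributions would come from shadow models, not ground truth."""
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses), dtype=bool),
                             np.zeros(len(nonmember_losses), dtype=bool)])
    best_t, best_acc = None, 0.0
    for t in losses:
        acc = np.mean((losses < t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy example: members cluster at low loss, non-members at higher loss.
rng = np.random.default_rng(0)
member_losses = rng.normal(0.1, 0.05, 1000).clip(min=0)
nonmember_losses = rng.normal(0.5, 0.2, 1000).clip(min=0)
t, acc = calibrate_threshold(member_losses, nonmember_losses)
```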
“…Liew and Takahashi [58] show that, when transfer learning is applied to face recognition, an adversary can successfully mount an MIA against the teacher model even with access only to the student models. Shokri et al. [88,89] show that releasing transparency reports for ML models can create membership privacy risks. Song et al. [95,96] find that adversarially trained models are more vulnerable to MIA.…”
Section: Membership Inference Attacks On Different Domains
confidence: 99%
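Shokri et al.'s finding that transparency reports leak membership can be sketched with a simple statistic over released gradient-based explanations: the variance of a point's feature attributions can differ between training members and non-members. The direction of the test and the threshold below are illustrative assumptions, not values or rules from the paper.

```python
import numpy as np

def explanation_mia(attributions, threshold):
    """Predict membership from released explanation vectors by thresholding
    the variance of each point's feature attributions. Models are typically
    'settled' on training points, so explanation statistics can separate
    members from unseen points (the signal Shokri et al. exploit); the
    low-variance-implies-member direction here is an assumption."""
    variances = np.var(attributions, axis=1)  # one variance per example
    return variances < threshold              # True = predicted member

# attributions: (n_examples, n_features) array of, e.g., input gradients
attributions = np.random.default_rng(1).normal(size=(10, 20))
predicted_members = explanation_mia(attributions, threshold=0.9)
```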